
Microsoft products for the Cloud OS

Prerequisites (June 2015⇒):

Welcome to technologies trend tracking for 2015⇒2019 !!! v0.7
5G: 2015⇒2019 5G Technologies for the New Era of Wireless Internet of the 2020’s and 2030’s
Networked Society—WTF ??? v0.5
Microsoft Cloud state-of-the-art v0.7
• Service/telco for Networked Society
• Cloud for Networked Society
• Chrome for Networked Society
• Windows for Networked Society

Core information:

Part of: Microsoft Cloud OS vision, delivery and ecosystem rollout

1. The Microsoft way
2. Microsoft Cloud OS vision
3. Microsoft Cloud OS delivery and ecosystem rollout
4. Microsoft products for the Cloud OS

4.1 Windows Server 2012 R2 & System Center 2012 R2
4.2 Unlock Insights from any Data – SQL Server 2014
4.3 Unlock Insights from any Data / Big Data – Microsoft SQL Server Parallel Data Warehouse (PDW) and Windows Azure HDInsights
4.4 Empower people-centric IT – Microsoft Virtual Desktop Infrastructure (VDI)
4.5 Microsoft talking about Cloud OS and private clouds: starting with Ray Ozzie in November, 2009 (separate post)

4.5.1 Tiny excerpts from official executive and/or corporate communications
4.5.2 More official communications in details from executives and/or corporate

4.1 Windows Server 2012 R2 & System Center 2012 R2 [MPNUK YouTube channel, Nov 18, 2013]

Hosting technical training overview.

Windows Server 2012 R2: 0:00
Server Virtualization: 4:40
Storage: 11:07
Networking: 17:37
Server Management and Automation: 23:14
Web and Application Platform: 27:05

System Center 2012 R2: 31:14
Infrastructure Provisioning: 36:15
Infrastructure Monitoring: 42:48
Automation and Self-service: 45:30
Application Performance Monitoring: 48:50
IT Service Management: 51:05

More information is in the What’s New in 2012 R2 [Windows Server 2012 R2, System Center 2012 R2] series of “In the Cloud” articles by Brad Anderson:

Over the last three weeks, Microsoft has made an exciting series of announcements about its next wave of products, including Windows Server 2012 R2, System Center 2012 R2, SQL Server 2014, Visual Studio 2013, Windows Intune and several new Windows Azure services. The preview bits are now available, and the customer feedback has been incredible!

The most common reaction I have heard from our customers and partners is that they cannot believe how much innovation has been packed into these releases – especially in such a short period of time. There is a truly amazing amount of new value in these releases and, with this in mind, we want to help jump-start your understanding of the key scenarios that we are enabling.

As I’ve discussed this new wave of products with customers, partners, and press, I’ve heard the same question over and over: “How exactly did Microsoft build and deliver so much in such a short period of time?” My answer is that we have modified our own internal processes in a very specific way: We build for the cloud first.

A cloud-first design principle manifests itself in every aspect of development; it means that at every step we architect and design for the scale, security and simplicity of a high-scale cloud service. As a part of this cloud-first approach, we assembled a ‘Scenario Focus Team’ that identified the key user scenarios we needed to support – this meant that our engineers knew exactly what needed to be built at every stage of development, thus there was no time wasted debating what happened next. We knew our customers, we knew our scenarios, and that allowed all of the groups and stakeholders to work quickly and efficiently.

The cloud-first design approach also means that we build and deploy these products within our own cloud services first and then deliver them to our customers and partners. This enables us to first prove-out and battle-harden new capabilities at cloud scale, and then deliver them for enterprise use. The Windows Azure Pack is a great example of this: In Azure we built high-density web hosting where we could literally host 5,000 web servers on a single Windows Server instance. We exhaustively battle-hardened that feature, and now you can run it in your datacenters.

At Microsoft we operate more than 200 cloud services, many of which are servicing 100’s of millions of users every day. By architecting everything to deliver for that kind of scale, we are sure to meet the needs of enterprise anywhere and in any industry.

Our cloud-first approach was unique for another reason: It was the first time we had common/unified planning across Windows Client, Windows Server, System Center, Windows Azure, and Windows Intune. I know that may sound crazy, but it’s true – this is a first. We spent months planning and prioritizing the end-to-end scenarios together, with the goal of identifying and enabling all the dependencies and integration required for an effort this broad. Next we aligned on a common schedule with common engineering milestones.

The results have been fantastic. Last week, within 24 hours, we were able to release the preview bits of Windows Client 8.1, Windows Server 2012 R2, System Center 2012 R2, and SQL Server 2014.

By working together throughout the planning and build process, we established a common completion and Release to Manufacturing date, as well as a General Availability date. Because of these shared plans and development milestones, by the time we started the actual coding, the various teams were well aware of each dependency and the time to build the scenarios was much shorter.

The bottom-line impact of this Cloud-first approach is simple:  Better value, faster.

This wave of products demonstrates that the changes we’ve made internally allow us to deliver more end-to-end scenarios out of the box, and each of those scenarios is delivered at a higher quality.  This cloud-first approach also helps us deliver the Cloud OS vision that drives the STB business strategy.

The story behind the technologies that support the Cloud OS vision is an important part of how we enable customers to embrace cloud computing concepts.  Over the next eight weeks, we’ll examine in great detail the three core pillars (see the table below) that support and inspire these R2 products:  Empower People-centric IT, Transform the Datacenter, and Enable Modern Business Apps.  The program managers who defined these scenarios and worked within each pillar throughout the product development process have authored in-depth overviews of these pillars and their specific scenarios, and we’ll release those on a weekly basis.

Pillar: Empower People-centric IT
Scenarios: People-centric IT (PCIT) empowers each person you support to work virtually anywhere on PCs and devices of their choice, while providing IT with an easy, consistent, and secure way to manage it all. Microsoft’s approach helps IT offer a consistent self-service experience for people, their PCs, and their devices while ensuring security. You can manage all your client devices in a single tool while reducing costs and simplifying management.

Pillar: Transform the Datacenter
Scenarios: Transforming the datacenter means driving your business with the power of a hybrid cloud infrastructure. Our goal is to help you leverage your investments, skills and people by providing a consistent datacenter and public cloud services platform, as well as products and technologies that work across your datacenter, and service provider clouds.

Pillar: Enable Modern Business Apps
Scenarios: Modern business apps live and move wherever you want, and Microsoft offers the tools and resources that deliver industry-leading performance, high availability, and security. This means boosting the impact of both new and existing applications, and easily extending applications with new capabilities – including deploying across multiple devices.

The story behind these pillars and these products is an important part of our vision for the future of corporate computing and the modern datacenter. In the following post, David B. Cross, the Partner Director of Test and Operations for Windows Server, shares some of the insights the Windows Server & System Center team has applied during every stage of our planning, build, and deployment of this awesome new wave of products.

People want access to information and applications on the devices of their choice. IT needs to keep data protected without breaking the budget. Learn how the Microsoft People-centric IT vision helps businesses address their consumerization of IT challenges. Learn More: http://www.microsoft.com/en-us/server-cloud/cloud-os/pcit.aspx
Hear from Dell and Accenture how Microsoft Windows Server 2012 R2 and System Center 2012 R2 enable a more flexible workstyle and people-centric IT through virtual desktop infrastructure (VDI). Find solutions and services from partners that span the entire stack of Microsoft Cloud OS products and technologies: http://www.microsoft.com/en-us/server-cloud/audience/partner.aspx#fbid=631zRfiT0WJ


The modern workforce isn’t just better connected and more mobile than ever before, it’s also more discerning (and demanding) about the hardware and software used on the job. While company leaders around the world are celebrating the increased productivity and accessibility of their workforce, the exponential increase in devices and platforms that the workforce wants to use can stretch a company’s infrastructure (and IT department!) to its limit.

If your IT team is grappling with the impact and sheer magnitude of this trend, let me reiterate a fact I’ve noted several times before on this blog: The “Bring Your Own Device” (BYOD) trend is here to stay.

Building products that address this need is a major facet of the first design pillar I noted last week: People-centric IT (PCIT).

In today’s post (and in each one that follows in this series), this overview of the architecture and critical components of the PCIT pillar will be followed by a “Next Steps” section at the bottom. The “Next Steps” will include a list of new posts (each one written specifically for that day’s topic) developed by our Windows Server & System Center engineers. Every week, these engineering blogs will provide deep technical detail on the various components discussed in this main post. Today, these blogs will systematically examine and discuss the technology used to power our PCIT solution.

The PCIT solution detailed below enables IT Professionals to set access policies to corporate applications and data based on three incredibly important criteria:

  1. The identity of the user
  2. The user’s specific device
  3. The network the user is working from

What’s required here is a single management solution that enables specific features where control is necessary and appropriate, and that also provides what I call “governance,” or light control when less administration is necessary. This means a single pane of glass for managing PCs and devices. Far too often I meet with companies that have two separate solutions running side-by-side – one to manage every PC, and a second to manage devices. Not only is this more expensive and more complex, it creates two disjointed experiences for end users and a big headache for the IT pros responsible for managing them.

In today’s post, Paul Mayfield, the Partner Program Manager for the System Center Configuration Manager/Windows Intune team, discusses how everything that Microsoft has built with this solution is focused on letting IT teams use the same System Center Configuration Manager they already have in place for managing their PCs and extend that management power to devices. This means double the management capabilities from within the same familiar console. This philosophy can be extended even further by using Windows Intune to manage devices where they live – i.e. cloud-based management for cloud-based devices. Cloud-based management is especially important for user-owned devices that need regular updates.

This is an incredible solution, and the benefit and ease of use for you, the consumer, is monumental.

People want access to corporate applications from anywhere, on whatever device they choose—laptop, smartphone, tablet, or PC. IT departments are challenged to provide consistent, rich experiences across all these device types, with access to native, web, and remote applications or desktops. In this video we take a look at how IT can enable people to choose their devices, reduce costs and complexity, as well as maintain security and compliance by protecting data and having comprehensive settings management across platforms.

In today’s post, we tackle a common question I get from customers: “Why move to the cloud right now?” Recently, however, this question has changed a bit to, “What should I move to the cloud first?”

An important thing to keep in mind with either of these questions is that every organization has their own unique journey to the cloud. There are a lot of different workloads that run on Windows Server, and the reality is that these various workloads are moving to the cloud at very different rates. Web servers, e-mail and collaboration are examples of workloads moving to the cloud very quickly. I believe that management, and the management of smart devices, will be one of the next workloads to make that move to the cloud – and, when the time comes, that move will happen fast.

Using a SaaS solution is a move to the cloud, and taking this approach is a game changer because of its ability to deliver an incredible amount of value and agility without an IT pro needing to manage any of the required infrastructure.

Cloud-based device management is a particularly interesting development because it allows IT pros to manage this rapidly growing population of smart, cloud-connected devices, and manage them “where they live.” Today’s smart phones and tablets were built to consume cloud services, and this is one of the reasons why I believe that a cloud-based management solution for them is so natural. As you contemplate your organization’s move to the cloud, I suggest that managing all of your smart devices from the cloud should be one of your top priorities.

I want to be clear, however, about the nature of this kind of management: We believe that there should be one consistent management experience across PCs and devices.

Achieving this single management experience was a major focus of these 2012 R2 releases, and I am incredibly proud to say we have successfully engineered products which do exactly that. The R2 releases deliver this consistent end-user experience through something we call the “Company Portal.” The Company Portal is already deployed here at Microsoft, and it is what we are currently using to upgrade our entire workforce to Windows 8.1. I’ve personally used it to upgrade my desktop, laptop, and Surface – and the process could not have been easier.

In this week’s post, Paul Mayfield, the Partner Program Manager for System Center Configuration Manager/Windows Intune, and his team return to discuss in deep technical detail some of the specific scenarios our PCIT [“People Centric IT”] team has enabled (cloud-based management, Company Portal, etc.).

Cloud computing is bringing new opportunities and new challenges to IT. Learn how Microsoft can help transform your datacenter to take advantage of the vast possibilities of the cloud while leveraging your existing resources. Learn more: http://www.microsoft.com/en-us/server-cloud/cloud-os/modern-data-center.aspx
  • Part 4, July 24, 2013: Enabling Open Source Software


There are a lot of great surprises in these new R2 releases – things that are going to make a big impact in a majority of IT departments around the world. Over the next four weeks, the 2012 R2 series will cover the 2nd pillar of this release: Transform the Datacenter. In these four posts (starting today) we’ll cover many of the investments we have made that better enable IT pros to transform their datacenter via a move to a cloud-computing model.

This discussion will outline the ambitious scale of the functionality and capability within the 2012 R2 products. As with any conversation about the cloud, however, there are key elements to consider as you read. Particularly, I believe it’s important in all these discussions – whether online or in person – to remember that cloud computing is a computing model, not a location. All too often when someone hears the term “cloud computing” they automatically think of a public cloud environment. Another important point to consider is that cloud computing is much more than just virtualization – it is something that involves change: Change in the tools you use (automation and management), change in processes, and a change in how your entire organization uses and consumes its IT infrastructure.

Microsoft is extremely unique in this perspective, and it is leading the industry with its investments to deliver consistency across private, hosted and public clouds. Over the course of these next four posts, we will cover our innovations in the infrastructure (storage, network, compute), in both on-premise and hybrid scenarios, support for open source, cloud service provider & tenant experience, and much, much more.

As I noted above, it simply makes logical sense that running the Microsoft workloads in the Microsoft Clouds will deliver the best overall solution. But what about Linux? And how well does Microsoft virtualize and manage non-Windows platforms, in particular Linux?  Today we’ll address these exact questions.

Our vision regarding other operating platforms is simple: Microsoft is committed to being your cloud partner. This means end-to-end support that is versatile, flexible, and interoperable for any industry, in any environment, with any guest OS. This vision ensures we remain realistic – we know that users are going to build applications on open source operating systems, so we have built a powerful set of tools for hosting and managing them.

A great deal of the responsibility to deliver the capabilities that enable the Microsoft Clouds (private, hosted, Azure) to effectively host Linux and the associated open source applications falls heavily on the shoulders of the Windows Server and System Center team. In today’s post Erin Chapple, a Partner Group Program Manager in the Windows Server & System Center team, will detail how building the R2 wave with an open source environment in mind has led to a suite of products that are more adaptable and more powerful than ever.

As always in this series, check out the “Next Steps” at the bottom of this post for links to a variety of engineering content with hyper-technical overviews of the concepts examined in this post.

Back during the planning phase of 2012 R2, we carefully considered where to focus our investments for this release wave, and we chose to concentrate our efforts on enabling Service Providers to build out a highly-available, highly-scalable IaaS infrastructure on cost-effective hardware. With the innovations we have driven in storage, networking, and compute, we believe Service Providers can now build-out an IaaS platform that enables them to deliver VMs at 50% of the cost of competitors. I repeat: 50%. The bulk of the savings comes from our storage innovations and the low costs of our licenses.

At the core of our investments in 2012 R2 is the belief that customers are going to be using multiple clouds, and they want those clouds to be consistent.

Consistency across clouds is key to enabling the flexibility and frictionless movement of applications across these clouds, and, if this consistency exists, applications can be developed once and then hosted in any clouds. This means consistency for the developer. If clouds are consistent with the same management and operations tools easily used to operate these applications, that means consistency for the IT Pro.

It really all comes down to the friction-free movement of applications and VMs across clouds. Microsoft is very unique in this regard; we are the only cloud vendor investing and innovating in public, private and hosted clouds – with a promise of consistency (and no lock-in!) across all of them.

We are taking what we learn from our innovations in Windows Azure and delivering them through Windows Server, System Center and the Windows Azure Pack for you to use in your data center. This enables us to do rapid innovation in the public cloud, battle harden the innovations, and then deliver them to you to deploy. This is one of the ways in which we have been able to quicken our cadence and deliver the kind of value you see in these R2 releases. You’ll be able to see a number of areas where we are driving consistency across clouds in today’s post.

And speaking of today’s post – this IaaS topic will be published in two parts, with the second half appearing tomorrow morning.

In this first half of our two-part overview of the 2012 R2’s IaaS capabilities, Erin Chapple, a Partner Group Program Manager in the Windows Server & System Center team, examines the amazing infrastructure innovations delivered by Windows Server 2012 R2, System Center 2012 R2, and the new features in the Windows Azure Pack.

As always in this series, check out the “Next Steps” at the bottom of this post for links to a wide range of engineering content with deep, technical overviews of the concepts examined in this post.  Also, if you haven’t started your own evaluation of the 2012 R2 previews, visit the TechNet Evaluation Center and take a test drive today!

I recently had an opportunity to speak with a number of leaders from the former VMware User Group (VMUG), and it was an incredibly educational experience. I say “former” because many of the VMUG user group chapters are updating their focus/charter and are renaming themselves the Virtual Technology User Group (VTUG). This change is a direct result of how they see market share and industry momentum moving to solutions like the consistent clouds developed by Microsoft.

In a recent follow up conversation with these leaders, I asked them to describe some common topics they hear discussed in their meetings. One of the leaders commented that the community is saying something really specific: “If you want to have job security and a high paying job for the next 10 years, you better be on your way to becoming an expert in the Microsoft clouds. That is where this industry is going.” 

When I look at what is delivered in these R2 releases, the innovation is just staggering. This industry-leading innovation – the types of technical advances that VTUG groups are confidently betting on – is really exciting.

With this innovation in mind, in today’s post I want to discuss some of the work we are doing around the user experience for the teams creating the services that are offered, and I want to examine the experience that can be offered to the consumer of the cloud (i.e. the tenants). While we were developing R2, we spent a lot of time ensuring that we truly understood exactly who would be using our solutions. We exhaustively researched their needs, their motivations, and how various IT users and IT teams relate to each other. This process was incredibly important because these individuals and teams all have very different needs – and we were committed to supporting all of them.

The R2 wave of products has been built with this understanding.  The IT teams actually building and operating a cloud(s) have very different needs than individuals who are consuming the cloud (tenants).  The experience for the infrastructure teams will focus on just that – the infrastructure; the experience for the tenants will focus on the applications/services and their seamless operation and maintenance.

In yesterday’s post we focused heavily on the innovations in these R2 releases in the infrastructure – storage, network, and compute – and, in this post, Erin Chapple, a Partner Group Program Manager in the Windows Server & System Center team, will provide an in-depth look at Service Provider and Tenant experience and innovations with Windows Server 2012 R2, System Center 2012 R2, and the new features in Windows Azure Pack.

As always in this series, check out the “Next Steps” at the bottom of this post for links to a variety of engineering content with hyper-technical overviews of the concepts examined in this post.  Also, if you haven’t started your own evaluation of the 2012 R2 previews, visit the TechNet Evaluation Center and take a test drive today!

Today, people want to work anywhere, on any device and have access to all the resources they need to do their job. How do you enable your users to be productive on the device of their choice, yet retain control of information and meet compliance requirements? In this video we take a look at how the Microsoft access and information protection solutions allow you to enable your users to be productive, provide them with a single identity to access all resources, and protect your data. Learn more: http://www.microsoft.com/aip

In the 13+ years since the original Active Directory product launched with Windows 2000, it has grown to become the default identity management and access-control solution for over 95% of organizations around the world.  But, as organizations move to the cloud, their identity and access control also need to move to the cloud. As companies rely more and more on SaaS-based applications, as the range of cloud-connected devices being used to access corporate assets continues to grow, and as more hosted and public cloud capacity is used, companies must expand their identity solutions to the cloud.

Simply put, hybrid identity management is foundational for enterprise computing going forward.

With this in mind, we set out to build a solution in advance of these requirements to put our customers and partners at a competitive advantage.

To build this solution, we started with our “Cloud first” design principle. To meet the needs of enterprises working in the cloud, we built a solution that took the power and proven capabilities of Active Directory and combined it with the flexibility and scalability of Windows Azure. The outcome is the predictably named Windows Azure Active Directory.

By cloud optimizing Active Directory, enterprises can stretch their identity and access management to the cloud and better manage, govern, and ensure compliance throughout every corner of their organization, as well as across all their utilized resources.

This can take the form of seemingly simple processes (albeit very complex behind the scenes) like single sign-on which is a massive time and energy saver for a workforce that uses multiple devices and multiple applications per person.  It can also enable the scenario where a user’s customized and personalized experience can follow them from device to device regardless of when and where they’re working. Activities like these are simply impossible without a scalable, cloud-based identity management system.

If anyone doubts how serious and enterprise-ready Windows Azure AD already is, consider these facts:

  • Since we released Windows Azure AD, we’ve had over 265 billion authentications.
  • Every two minutes Windows Azure AD services over 1,000,000 authentication requests for users and devices around the world (that’s about 9,000 requests per second).
  • There are currently more than 420,000 unique domains uploaded and now represented inside of Azure Active Directory.

Windows Azure AD is battle tested, battle hardened, and many other verbs preceded by the word “battle.”

But, perhaps even more importantly, Windows Azure AD is something Microsoft has bet its own business on: Both Office 365 (the fastest growing product in Microsoft history) and Windows Intune authenticate every user and device with Windows Azure AD.

In this post, Vijay Tewari (Principal Program Manager for Windows Server & System Center), Alex Simons (Director of Program Management for Active Directory), Sam Devasahayam (Principal Program Management Lead for Windows Azure AD), and Mark Wahl (Principal Program Manager for Active Directory) take a look at one of R2’s most innovative features, Hybrid Identity Management.

As always in this series, check out the “Next Steps” at the bottom of this post for links to a wide range of engineering content with deep, technical overviews of the concepts examined in this post.

One of the key elements in delivering hybrid cloud is networking. Learn how software-defined networking helps make hybrid real. Learn more: http://www.microsoft.com/en-us/server-cloud/solutions/software-defined-networking.aspx
[the so-called Application Centric Infrastructure (ACI)] Microsoft and Cisco will deliver unique customer value through new integrated networking solutions that will combine software-enabled flexibility with hardware-enabled scale/performance. These solutions will keep apps and workloads front and center and have the network adapt to their needs. Learn more by visiting: http://www.cisco.com/web/learning/le21/onlineevts/acim/index.html

One of the foundational requirements we called out in the 2012 R2 vision document was our promise to help you transform the datacenter. A core part of delivering on that promise is enabling Hybrid IT.

By focusing on Hybrid IT we were specifically calling out the fact that almost every customer we interacted with during our planning process believed that in the future they would be using capacity from multiple clouds. That may take the form of multiple private clouds an organization had stood up, or utilizing cloud capacity from a service provider [i.e. managed cloud] or a public cloud like Azure, or using SaaS solutions running from the public cloud.

We assumed Hybrid IT would really be the norm going forward, so we challenged ourselves to really understand and simplify the challenges associated with configuring and operating in a multi-cloud environment. Certainly one of the biggest challenges of operating in a hybrid cloud environment is the network – everything from setting up the secure connection between clouds, to ensuring you could use your IP addresses (BYOIP) in the hosted and public clouds you chose to use.

The setup, configuration and operation of a hybrid IT environment is, by its very nature, incredibly complex – and we have poured hundreds of thousands of hours into the development of R2 to solve this industry-wide problem.

With the R2 wave of products – specifically Windows Server 2012 R2 and System Center 2012 R2 – enterprises can now benefit from the highly-available and secure connection that enables the friction-free movement of VMs across those clouds. If you want or need to move a VM or application between clouds, the transition is seamless and the data is secure while it moves.

The functionality and scalability of our support for hybrid IT deployments has not been easy to build, and each feature has been methodically tested and refined in our own datacenters. For example, consider that within Azure there are over 50,000 network changes every day, and every single one of them is fully automated. If even 1/10 of 1% of those changes had to be done manually, it would require a small army of people working constantly to implement and then troubleshoot the human errors. With R2, the success of processes like these, and our learnings from Azure, come in the box.

Whether you’re a service provider or working in the IT department of an enterprise (which, in a sense, is like being a service provider to your company’s workforce), these hybrid networking features are going to remove a wide range of manual tasks, and allow you to focus on scaling, expanding and improving your infrastructure.

In this post, Vijay Tewari (Principal Program Manager for Windows Server & System Center) and Bala Rajagopalan (Principal Program Manager for Windows Server & System Center) provide a detailed overview of 2012 R2’s hybrid networking features, as well as solutions for common scenarios like enabling customers to create extended networks spanning clouds, and enabling access to virtualized networks.

Don’t forget to take a look at the “Next Steps” section at the bottom of this post, and check back tomorrow for the second half of this week’s hybrid IT content which will examine the topic of Disaster Recovery.

As business becomes more dependent on technology, business continuity becomes increasingly vital for IT. Learn how Microsoft is making it easier to build out business continuity plans. Learn more: http://www.microsoft.com/en-us/server-cloud/solutions/business-continuity.aspx

With Windows Server 2012 R2, with Hyper-V Replica, and with System Center 2012 R2 we have delivered a DR solution for the masses.

This DR solution is a perfect example of how the cloud changes everything.

Windows Azure offers a global, highly available cloud platform, and with an application architecture that takes full advantage of those HA capabilities you can build an app on Azure that will be available anytime and anywhere.  This kind of functionality is why we made the decision to build the control plane, or administrative console, for our DR solution on Azure. The control plane and all the meta-data required to perform a test, planned, or unplanned recovery will always be available.  This means you don’t have to make the huge investments that have been required in the past to build a highly-available platform to host your DR solution – Azure automatically provides this.

(Let me make a plug here that you should be looking to Azure for all the new applications you are going to build – and we’ll start covering this specific topic in next week’s R2 post.)

With this R2 wave of products, organizations of all sizes and maturity, anywhere in the world, can now benefit from a simple and cost-effective DR solution.

There’s also another thing that I am really proud of here: Like most organizations, we regularly benchmark ourselves against our competition.   We use a variety of metrics, like: ‘Are we easier to deploy and operate?’ and ‘Are we delivering more value and doing it at a lower price?’  Measurements like these have provided a really clear answer: Our competitors are not even in the same ballpark when it comes to DR.

During the development of R2, I watched a side-by-side comparison of what was required to set up DR for 500 VMs with our solution compared to a competitive offering, and the contrast was staggering. The difference in simplicity and the total amount of time required to set everything up was dramatic.  In a DR scenario, one interesting unit of measurement is total mouse clicks. It’s easy to get carried away with counting clicks (hey, we’re engineers after all!), but, in the side-by-side comparison, the difference was tens of mouse clicks compared to hundreds. It is literally a difference of minutes vs. days.

You can read some additional perspectives I’ve shared on DR here.

In yesterday’s post we looked at the new hybrid networking functionality in R2 (if you haven’t seen it yet, it is a must-read), and in this post Vijay Tewari (Principal Program Manager for Windows Server & System Center) goes deep into the architecture of this DR solution, as well this solution’s deployment and operating principles.

As always in this 2012 R2 series, check out the “Next Steps” at the bottom of this post for links to a variety of engineering content with hyper-technical overviews of the concepts examined in this post.

A revolution is taking place, impacting the speed at which Business Apps need to be built, and the jaw-dropping capabilities they need to deliver. Ignoring these trends isn’t an option and yet you have no time to hit the reset button. Learn how to deliver revolutionary benefits in an evolutionary way. Learn More: http://www.microsoft.com/en-us/server-cloud/cloud-os/modern-business-apps.aspx
Hear from Accenture and Hostway how Microsoft Windows Azure enables the development and deployment of modern business applications faster and more cost effectively through cloud computing. Find solutions and services from partners that span the entire stack of Microsoft Cloud OS products and technologies: http://www.microsoft.com/en-us/server-cloud/audience/partner.aspx#fbid=631zRfiT0WJ


Knowing how applications are built for the cloud, as well as the cloud infrastructures where these apps operate, is something every IT Pro needs in order to be a voice in the meetings that will define an organization’s cloud strategy. IT pros are also going to need to know how their team fits in this cloud-centric model, as well as how to proactively drive these discussions.

These R2 posts will get you what you need, and this “Enable Modern Business Apps” pillar will be particularly helpful.

Throughout the posts in this series we have spoken about the importance of consistency across private, hosted and public clouds, and we’ve examined how Microsoft is unique in its vision and execution of delivering consistent clouds. The Windows Azure Pack is a wonderful example of Microsoft innovating in the public cloud and then bringing the benefits of that innovation to your datacenter.

The Windows Azure Pack is – literally speaking – a set of capabilities that we have battle-hardened and proven in our public cloud. These capabilities are now made available for you to enhance your cloud and deliver the “consistency across clouds” that we believe is so important.

A major benefit of the Windows Azure Pack is the ability to build an application once and then deploy and operate it in any Microsoft Cloud: private, hosted or public.

This kind of flexibility means that you can build an application, initially deploy it in your private cloud, and then, if you want to move that app to a Service Provider or Azure in the future, you can do it without having to modify the application. Making tasks like this simple is a major part of our promise around cloud consistency, and it is something only Microsoft (not VMware, not AWS) can deliver.

This ability to migrate an app between these environments means that your apps and your data are never locked in to a single cloud. This allows you to easily adjust as your organization’s needs, regulatory requirements, or any operational conditions change.

A big part of this consistency and connection is the Windows Azure Service Bus which will be a major focus of today’s post.

The Windows Azure Service Bus has been a big part of Windows Azure since 2010. I don’t want to overstate this, but Service Bus has been battle-hardened in Azure for more than 3 years, and now we are delivering it to you to run in your datacenters. To give you a quick idea of how critical Service Bus is for Microsoft, consider this: Service Bus is used in all the billing for Windows Azure, and it is responsible for gathering and posting all the scoring and achievement data to the Halo 4 leaderboards (now that is really, really important – just ask my sons!). It goes without saying that the people in charge of Azure billing and the hardcore gamers are not going to tolerate any latency or downtime getting to their data.

With today’s topic, take the time to really appreciate the app development and app platform functionality in this R2 wave. I think you’ll be really excited about how you can plug into this process and lead your organization.

This post, written by Bradley Bartz (Principal Program Manager from Windows Azure) and Ziv Rafalovich (Senior Program Manager in Windows Azure), will get deep into these new features and the amazing scenarios that the Windows Azure Pack and Windows Azure Service Bus enable. As always in this 2012 R2 series, check out the “Next Steps” at the bottom of this post for links to additional information about the topics covered here.

A major promise underlying all of the 2012 R2 products is really simple: Consistency.

Consistency in the user experiences, consistency for IT professionals, consistency for developers and consistency across clouds. A major part of delivering this consistency is the Windows Azure Pack (WAP). Last week we discussed how Service Bus enables connections across clouds, and in this post we’ll examine more of the PaaS capabilities built and tested in Azure data centers and now offered for Windows Server. With the WAP, Windows Server 2012 R2, and System Center, IT pros can make their data centers even more scalable, flexible, and secure.

Throughout the development of this R2 wave, we looked closely at what organizations needed and wanted from the cloud. A major piece of feedback was the desire to build an app once and then have that app live in any data center or cloud. For the first time this kind of functionality is now available. Whether your app is in a private, public, or hosted cloud, the developers and IT Professionals in your organization will have consistency across clouds.

One of the elements that I’m sure will be especially popular is the flexibility and portability of this PaaS. I’ve had countless customers comment that they love the idea of PaaS, but don’t want to be locked-in or restricted to only running it in specific data centers. Now, our customers and partners can build a PaaS app and run it anywhere. This is huge! Over the last two years the market has really begun to grasp what PaaS has to offer, and now the benefits (auto-scale, agility, flexibility, etc.) are easily accessible and consistent across the private, hosted and public clouds Microsoft delivers.

This post will spend a lot of time talking about Web Sites for Windows Azure and how this high density web site hosting delivers a level of power, functionality, and consistency that is genuinely next-gen.

Microsoft is literally the only company offering these kinds of capabilities across clouds – and I am proud to say that we are the only ones with a sustained track record of enterprise-grade execution.

With the features added by the WAP [Windows Azure Pack], organizations can now take advantage of PaaS without being locked into a cloud. This is, at its core, the embodiment of Microsoft’s commitment to make consistency across clouds a workable, viable reality.

This is genuinely PaaS for the modern web.

Today’s post was written by Bradley Bartz, a Principal Program Manager from Windows Azure. For more information about the technology discussed here, or to see demos of these features in action, check out the “Next Steps” at the bottom of this post.

More information: in the Success with Hybrid Cloud series blog posts [Brad Anderson, Nov 12, Nov 14, Nov 20, Dec 2, Dec 5, and 21 upcoming blog posts] which “will examine the building/deployment/operation of Hybrid Clouds, how they are used in various industries, how they manage and deliver different workloads, and the technical details of their operation.”


4.2 Unlock Insights from any Data – SQL Server 2014:

With growing demand for data, you need database scale with minimal cost increases. Learn how SQL Server 2014 provides speed and scalability with in-memory technologies to support your key data workloads, including OLTP, data warehousing, and BI. Learn more: http://www.microsoft.com/sqlserver2014
Hosting technical training overview.

Microsoft SQL Server 2014 CTP2 was announced by Quentin Clark during the Microsoft SQL PASS 2013 keynote.  This second public CTP is essentially feature complete and enables you to try and test all of the capabilities of the full SQL Server 2014 release. Below you will find an overview of SQL Server 2014 as well as key new capabilities added in CTP2:

SQL Server 2014 helps organizations by delivering:

  • Mission Critical Performance across all database workloads with In-Memory for online transaction processing (OLTP), data warehousing and business intelligence built-in as well as greater scale and availability
  • Platform for Hybrid Cloud enabling organizations to more easily build, deploy and manage database solutions that span on-premises and cloud
  • Faster Insights from Any Data with a complete BI solution using familiar tools like Excel

Thank you to those who have already downloaded SQL Server 2014 CTP1 and started seeing firsthand the performance gains that in-memory capabilities deliver, along with better high availability with AlwaysOn enhancements.  CTP2 introduces additional mission critical capabilities with further enhancements to the in-memory technologies along with new hybrid cloud capabilities.

What’s new in SQL Server 2014 CTP2?

New Mission Critical Capabilities and Enhancements

  • Enhanced In-Memory OLTP, including new tools which will help you identify and migrate the tables and stored procedures that will benefit most from In-Memory OLTP, as well as greater T-SQL compatibility and new indexes which enable more customers to take advantage of our solution.
  • High Availability for In-Memory OLTP Databases:  AlwaysOn Availability Groups are supported for In-Memory OLTP, giving you in-memory performance gains with high availability.
  • IO Resource Governance, enabling customers to more effectively manage IO across multiple databases and/or classes of databases to provide more predictable IO for your most critical workloads.  Customers today can already manage CPU and memory.
  • Improved resiliency with Windows Server 2012 R2 by taking advantage of Cluster Shared Volumes (CSVs).  CSVs provide improved fault detection and recovery in the case of downtime.
  • Delayed Durability, providing the option for increased transaction throughput and lower latency for OLTP applications where performance and latency needs outweigh the need for 100% durability (a minimal sketch of this and of IO Resource Governance follows this list).
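
To make two of the options above concrete, here is a minimal T-SQL sketch of Delayed Durability and IO Resource Governance in SQL Server 2014. The database, table, and resource pool names are illustrative, not taken from the announcement.

```sql
-- Delayed Durability: allow delayed durable commits at the database level,
-- then opt in per transaction (illustrative names).
ALTER DATABASE SalesDB SET DELAYED_DURABILITY = ALLOWED;
GO

BEGIN TRANSACTION;
    UPDATE dbo.SessionState
    SET    LastAccess = SYSUTCDATETIME()
    WHERE  SessionId = 42;
-- The commit returns before the log block is hardened to disk, trading a small
-- durability window for lower latency and higher throughput.
COMMIT TRANSACTION WITH (DELAYED_DURABILITY = ON);
GO

-- IO Resource Governance: cap physical IO per volume for a Resource Governor pool,
-- alongside the CPU and memory controls that already existed.
CREATE RESOURCE POOL ReportingPool
    WITH (MIN_IOPS_PER_VOLUME = 50, MAX_IOPS_PER_VOLUME = 500);
ALTER RESOURCE GOVERNOR RECONFIGURE;
```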

New Hybrid Cloud Capabilities and Enhancements

By enabling the above in-memory performance capabilities for your SQL Server instances running in Windows Azure Virtual Machines, you will see significant transaction and query performance gains.  In addition there are new capabilities listed below that will allow you to unlock new hybrid scenarios for SQL Server.

  • Managed Backup to Windows Azure, enabling you to back up on-premises SQL Server databases to Windows Azure storage directly in SSMS.  Managed Backup also optimizes backup policy based on usage, an advantage over the manual Backup to Windows Azure.
  • Encrypted Backup, offering customers the ability to encrypt both on-premises backups and backups to Windows Azure for enhanced security.
  • Enhanced disaster recovery to Windows Azure with a simplified UI, enabling customers to more easily add Windows Azure Virtual Machines as AlwaysOn secondaries in SQL Server Management Studio for a more cost-effective data protection and disaster recovery solution.  Customers may also use the secondaries in Windows Azure to scale out and offload reporting and backups.
  • SQL Server Data Files in Windows Azure – new capability to store large databases (>16TB) in Windows Azure and the ability to stream the database as a backend for SQL Server applications running on-premises or in the cloud (see the sketch after this list).
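
A minimal T-SQL sketch of the Encrypted Backup to Windows Azure and SQL Server Data Files in Windows Azure capabilities listed above. The storage account, container, credential, certificate, and database names are all illustrative, and an encryption certificate plus database master key are assumed to already exist in the master database.

```sql
-- Encrypted, compressed backup directly to Windows Azure blob storage (SQL Server 2014).
CREATE CREDENTIAL AzureBackupCredential
    WITH IDENTITY = 'mystorageaccount',              -- storage account name
         SECRET   = '<storage account access key>';
GO

BACKUP DATABASE SalesDB
TO URL = 'https://mystorageaccount.blob.core.windows.net/backups/SalesDB.bak'
WITH CREDENTIAL = 'AzureBackupCredential',
     COMPRESSION,
     ENCRYPTION (ALGORITHM = AES_256, SERVER CERTIFICATE = BackupEncryptionCert);
GO

-- SQL Server Data Files in Windows Azure: database files live in blob storage while the
-- instance runs on-premises or in a Windows Azure VM. The credential name is the container
-- URL and the secret is a Shared Access Signature for that container.
CREATE CREDENTIAL [https://mystorageaccount.blob.core.windows.net/data]
    WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
         SECRET   = '<SAS token for the container>';
GO

CREATE DATABASE HybridDB
ON  (NAME = HybridDB_data,
     FILENAME = 'https://mystorageaccount.blob.core.windows.net/data/HybridDB_data.mdf')
LOG ON (NAME = HybridDB_log,
        FILENAME = 'https://mystorageaccount.blob.core.windows.net/data/HybridDB_log.ldf');
```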

Learn more and download SQL Server 2014 CTP2

SQL Server 2014 helps address key business challenges of ever-growing data volumes, the need to transact and process data faster, the scalability and efficiency of cloud computing and an ever-growing hunger for business insights.   With SQL Server 2014 you can now unlock real-time insights with mission critical and cloud performance and take advantage of one of the most comprehensive BI solutions in the marketplace today.

Many customers are already realizing the significant benefits of the new in-memory technologies in SQL Server 2014, including Edgenet, Bwin, SBI Liquidity, TPP and Ferranti.  Stay tuned for an upcoming blog highlighting the impact in-memory technologies had on each of their businesses.

Learn more about SQL Server 2014 and download the datasheet and whitepapers here.  Also if you would like to learn more about SQL Server In-Memory best practices, check out this SQL Server 2014 in-memory blog series compilation. There is also a SQL Server 2014 hybrid cloud scenarios blog compilation for learning best practices.

Also, if you haven’t already, download SQL Server 2014 CTP2 and see how much faster your SQL Server applications run!  The CTP2 image is also available on Windows Azure, so you can easily develop and test the new features of SQL Server 2014.

To ensure that its customers received timely, accurate product data, Edgenet decided to enhance its online selling guide with In-Memory OLTP in Microsoft SQL Server 2014.

At the SQL PASS conference last November, we announced the In-memory OLTP (project code-named Hekaton) database technology built into the next release of SQL Server. Microsoft’s technical fellow Dave Campbell’s blog provides a broad overview of the motivation and design principles behind the project.

In a nutshell – In-memory OLTP is a new database engine optimized for memory resident data and OLTP workloads. In-memory OLTP is fully integrated into SQL Server – not a separate system. To take advantage of In-memory OLTP, a user defines a heavily accessed table as memory optimized. In-memory OLTP tables are fully transactional, durable and accessed using T-SQL in the same way as regular SQL Server tables. A query can reference both In-memory OLTP tables and regular tables, and a transaction can update data in both types of tables. Expensive T-SQL stored procedures that reference only In-memory OLTP tables can be natively compiled into machine code for further performance improvements. The engine is designed for extremely high session concurrency for OLTP type of transactions driven from a highly scaled-out mid-tier. To achieve this it uses latch-free data structures and a new optimistic, multi-version concurrency control technique. The end result is a selective and incremental migration into In-memory OLTP to provide predictable sub-millisecond low latency and high throughput with linear scaling for DB transactions. The actual performance gain depends on many factors but we have typically seen 5X-20X in customer workloads.
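
To make the description above concrete, here is a minimal T-SQL sketch (all database and object names are illustrative) of a durable memory-optimized table and a natively compiled stored procedure as they appear in the SQL Server 2014 CTPs:

```sql
-- A memory-optimized filegroup must exist before memory-optimized tables can be created.
ALTER DATABASE SalesDB ADD FILEGROUP SalesDB_mod CONTAINS MEMORY_OPTIMIZED_DATA;
ALTER DATABASE SalesDB ADD FILE
    (NAME = 'SalesDB_mod', FILENAME = 'C:\Data\SalesDB_mod')
    TO FILEGROUP SalesDB_mod;
GO

-- A fully transactional, durable, memory-optimized table accessed with ordinary T-SQL.
CREATE TABLE dbo.ShoppingCart
(
    CartId     INT       NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    UserId     INT       NOT NULL,
    CreatedUtc DATETIME2 NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
GO

-- A natively compiled stored procedure: because it references only memory-optimized
-- tables, it is compiled to machine code for further performance gains.
CREATE PROCEDURE dbo.usp_AddCart @CartId INT, @UserId INT
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    INSERT INTO dbo.ShoppingCart (CartId, UserId, CreatedUtc)
    VALUES (@CartId, @UserId, SYSUTCDATETIME());
END;
```

Regular T-SQL continues to work against such a table, which is what allows the selective, incremental migration described in the paragraph above.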

In the SQL Server product group, many years ago we started the investment of reinventing the architecture of the RDBMS engine to leverage modern hardware trends. This resulted in PowerPivot and In-memory ColumnStore Index in SQL2012, and In-memory OLTP is the new addition for OLTP workloads we are introducing for SQL2014 together with the updatable clustered ColumnStore index and (SSD) bufferpool extension. It has been a long and complex process to build this next generation relational engine, especially with our explicit decision of seamlessly integrating it into the existing SQL Server instead of releasing a separate product – in the belief that it provides the best customer value and onboarding experience.
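
The two companion features mentioned above also surface as ordinary DDL. A minimal sketch, with illustrative table, path, and size values:

```sql
-- Updatable clustered columnstore index (new in SQL Server 2014).
CREATE CLUSTERED COLUMNSTORE INDEX cci_FactSales ON dbo.FactSales;
GO

-- Buffer pool extension onto an SSD volume (new in SQL Server 2014).
ALTER SERVER CONFIGURATION
SET BUFFER POOL EXTENSION ON
    (FILENAME = 'S:\SSDCACHE\BufferPoolExtension.bpe', SIZE = 64 GB);
```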

Now that we are releasing SQL2014 CTP1 as a public preview, it’s a great opportunity for you to get hands-on experience with this new technology, and we are eager to get your feedback and improve the product. In addition to BOL (Books Online) content, we will roll out a series of technical blogs on In-memory OLTP to help you understand and leverage this preview release effectively.

In the upcoming series of blogs, you will see the following in-depth topics on In-memory OLTP:

  • Getting started – to walk through a simple sample database application using In-memory OLTP so that you can start experimenting with the public CTP release.
  • Architecture – to understand at a high level how In-memory OLTP is designed and built into SQL Server, and how the different concepts like memory optimized tables, native compilation of SPs and query inter-op fit together under the hood.
  • Customer experiences so far – we have had many TAP customer engagements over the past two years, and their feedback helped shape the product; we would like to share with you some of the learnings and customer experiences, such as typical application patterns and performance results.
  • Hardware guidance – it is apparent that memory size is a factor, but since most of the applications require full durability, In-memory OLTP still requires log and checkpointing IO, and with the much higher transactional throughput, it can actually put even higher demand on the IO subsystem as a result. We will also cover how Windows Azure VMs can be used with In-memory OLTP.
  • Application migration – how to get started with migrating to or building a new application with In-memory OLTP. You will see multiple blog posts covering the AMR tool, Table and SP migrations and pointers on how to work around some unsupported data types and T-SQL surface area, as well as the transactional model used. We will highlight the unique approach to SQL Server integration, which supports a partial database migration.
  • Managing In-memory OLTP – this will cover the DBA considerations, and you will see multiple posts ranging from the tooling support (SSMS) to more advanced topics such as how memory and storage are managed.
  • Limitations and what’s coming – explain what limitations exist in CTP1 and new capabilities expected to be coming in CTP2 and RTM, so that you can plan your roadmap with clarity.

In addition, we will also have blog coverage of what’s new with In-memory ColumnStore and an introduction to the bufferpool extension.

SQL2014 CTP1 is available for download here or you can read the complete blog series here:

bwin is the largest regulated online gaming company in the world, and their success depends on positive customer experiences. They had recently upgraded some of their systems to SQL Server 2012, gaining significant in-memory benefit using xVelocity Column Store. Here, bwin takes their systems one step further by using the technology preview of SQL Server 2014 In-memory OLTP (formerly known as Project “Hekaton”). Prior to using In-memory OLTP, their online gaming systems were handling about 15,000 requests per second. Using In-memory OLTP, the fastest tests so far have scaled to 250,000 transactions per second.

Recently I posted a video about how the SQL Server Community was looking into emerging trends in BI and Database technologies – one of the key technologies mentioned in that video was in-memory.

Many Microsoft customers have been using in-memory technologies as part of SQL Server since 2010 including xVelocity Analytics, xVelocity Column Store and Power Pivot, something we recently covered in a blog post following the ‘vaporware’ outburst from Oracle SVP of Communications, Bob Evans. Looking forward, Ted Kummert recently announced project codenamed “Hekaton,” available in the next major release of SQL Server. “Hekaton” will provide a full in-memory transactional engine, and is currently in private technology preview with a small set of customers. This technology will provide breakthrough performance gains of up to 50 times.

For those who are keen to get a first view of customers using the technology, below is the video of online gaming company bwin using “Hekaton”.

Bwin is the largest regulated online gaming company in the world, and their success depends on positive customer experiences. They had recently upgraded some of their systems to SQL Server 2012 – a story you can read here. Bwin had already gained significant in-memory benefit using xVelocity Column Store, for example – a large report that used to take 17 minutes to render now takes only three seconds.

Given the benefits they had seen with in-memory technologies, they were keen to trial the technology preview of “Hekaton”. Prior to using “Hekaton”, their online gaming systems were handling about 15,000 requests per second, a huge number for most companies. However, bwin needed to be agile and stay ahead of the competition, and so they wanted access to the latest technology for speed.

Using “Hekaton”, bwin were hoping they could at least double the number of transactions. They were ‘pretty amazed’ to see that the fastest tests so far have scaled to 250,000 transactions per second.

So how fast is “Hekaton” – just ask Rick Kutschera, the Database Engineering Manager at bwin – in his words it’s ‘Wicked Fast’! However, this is not the only point that Rick highlights; he goes on to mention that “Hekaton” integrates seamlessly into the SQL Server engine, so if you know SQL Server, you know “Hekaton”.

— David Hobbs-Mallyon, Senior Product Marketing Manager

Quentin Clark
Corporate Vice President, Data Platform Group

This morning, during my keynote at the Professional Association of SQL Server (PASS) Summit 2013, I discussed how customers are pushing the boundaries of what’s possible for businesses today using the advanced technologies in our data platform. It was my pleasure to announce the second Community Technology Preview (CTP2) of SQL Server 2014 which features breakthrough performance with In-Memory OLTP and simplified backup and disaster recovery in Windows Azure.

Pushing the boundaries

We are pushing the boundaries of our data platform with breakthrough performance, cloud capabilities and the pace of delivery to our customers. Last year at PASS Summit, we announced our In-Memory OLTP project “Hekaton” and since then released SQL Server 2012 Parallel Data Warehouse and public previews of Windows Azure HDInsight and Power BI for Office 365. Today we have SQL Server 2014 CTP2, our public and production-ready release shipping a mere 18 months after SQL Server 2012. 

Our drive to push the boundaries comes from recognizing that the world around data is changing.

  • Our customers are demanding more from their data – higher levels of availability as their businesses scale and globalize, major advancements in performance to align to the more real-time nature of business, and more flexibility to keep up with the pace of their innovation. So we provide in-memory, cloud-scale, and hybrid solutions. 
  • Our customers are storing and collecting more data – machine signals, devices, services, and data from even outside their organizations. So we invest in scaling the database and in a Hadoop-based solution. 
  • Our customers are seeking the value of new insights for their business. So we offer them self-service BI in Office 365 delivering powerful analytics through a ubiquitous product and empowering users with new, more accessible ways of gaining insights.

In-memory in the box for breakthrough performance

A few weeks ago, one of our competitors announced plans to build an in-memory column store into their database product some day in the future. We shipped similar technology two years ago in SQL Server 2012, and have continued to advance that technology in SQL Server 2012 Parallel Data Warehouse and now with SQL Server 2014. In addition to our in-memory columnar support in SQL Server 2014, we are also pushing the boundaries of performance with in-memory online transaction processing (OLTP). A year ago we announced project “Hekaton,” and today we have customers realizing performance gains of up to 30x. This work, combined with our early investments in Analysis Services and Excel, means Microsoft is delivering the most complete in-memory capabilities for all data workloads – analytics, data warehousing and OLTP. 

We do this to allow our customers to make breakthroughs for their businesses. SQL Server is enabling them to rethink how they can accelerate and exceed the speed of their business.


  • TPP is a clinical software provider managing more than 30 million patient records – half the patients in England – and serving 200,000 active registered users from the UK’s National Health Service. Their systems handle 640 million transactions per day, peaking at 34,700 transactions per second. They tested a next-generation version of their software with the SQL Server 2014 in-memory capabilities, which enabled their application to run seven times faster than before – all of this done and running in half a day. 
  • Ferranti provides solutions for the energy market worldwide, collecting massive amounts of data using smart metering. With our in-memory technology they can now process a continuous data flow from up to 200 million measurement channels, making the system fully capable of meeting the demands of smart meter technology.
  • SBI Liquidity Market in Japan provides online services for foreign currency trading. By adopting SQL Server 2014, the company has increased throughput from 35,000 to 200,000 transactions per second. They now have a trading platform that is ready to take on the global marketplace.

A closer look into In-memory OLTP

Previously, I wrote about the journey of the in-memory OLTP project Hekaton, where a group of SQL Server database engineers collaborated with Microsoft Research. Changes in the ratios between CPU performance, IO latencies and bandwidth, cache and memory sizes as well as innovations in networking and storage were changing assumptions and design for the next generation of data processing products. This gave us the opening to push the boundaries of what we could engineer without the constraints that existed when relational databases were first built many years ago. 

Challenging those assumptions, we engineered for dramatically improved latencies and throughput for so-called “hot” transactional tables in the database. Lock-free, row-versioning data structures and the compilation of T-SQL and queries into native code, combined with programming semantics consistent with SQL Server, mean our customers can apply the performance benefits of extreme transaction processing without application rewrites or the adoption of entirely new products. 
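
As a rough illustration of what compiling T-SQL into native code looks like to a developer (a sketch only, reusing the hypothetical memory-optimized table from the earlier example; all names are assumptions), a natively compiled stored procedure is declared with NATIVE_COMPILATION and an ATOMIC block, and is then called like any other procedure:

import pyodbc

# dbo.GameRequests_InMem is the hypothetical memory-optimized table from the earlier sketch.
NATIVE_PROC = """
CREATE PROCEDURE dbo.usp_InsertGameRequest
    @RequestId BIGINT, @PlayerId INT, @Payload NVARCHAR(1000)
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    -- The body is compiled to machine code when the procedure is created and runs
    -- against lock-free, row-versioned in-memory structures.
    INSERT INTO dbo.GameRequests_InMem (RequestId, PlayerId, Payload)
    VALUES (@RequestId, @PlayerId, @Payload);
END
"""

conn = pyodbc.connect(
    "DRIVER={SQL Server Native Client 11.0};SERVER=localhost;"
    "DATABASE=GamingDB;Trusted_Connection=yes",
    autocommit=True,
)
cur = conn.cursor()
cur.execute(NATIVE_PROC)

# Calling it looks exactly like calling any other stored procedure.
cur.execute(
    "EXEC dbo.usp_InsertGameRequest @RequestId = ?, @PlayerId = ?, @Payload = ?",
    (1, 42, "place bet"),
)
conn.close()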


The continuous data platform

Windows Azure fulfills new scenarios for our customers – transcending what is on-premises or in the cloud. Microsoft is providing a continuous platform from our traditional products that are run on-premises to our cloud offerings. 

With SQL Server 2014, we are bringing the cloud into the box. We are delivering high availability and disaster recovery on Windows Azure built right into the database. This enables customers to benefit from our global datacenters: AlwaysOn Availability Groups that span on-premises and Windows Azure Virtual Machines, database backups directly into Windows Azure storage, and even the ability to store and run database files directly in Windows Azure storage. That last scenario really does something interesting – now you can have an infinitely-sized hard drive with incredible disaster recovery properties with all the great local latency and performance of the on-premises database server. 

We’re not just providing easy backup in SQL Server 2014; today we announced that backup to Windows Azure will be available for all our currently supported SQL Server releases. Together, the backup to Windows Azure capabilities in SQL Server 2014 and via the standalone tool offer customers a single, cost-effective backup strategy for secure off-site storage with encryption and compression across all supported versions of SQL Server.
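
A hedged sketch of what backup to Windows Azure looks like in practice (the storage account, container, access key and database name below are placeholders, not values from the announcement): a credential holds the storage account key, and BACKUP DATABASE ... TO URL sends the compressed backup straight to blob storage.

import pyodbc

# Placeholders: storage account, container, access key and database name are illustrative.
conn = pyodbc.connect(
    "DRIVER={SQL Server Native Client 11.0};SERVER=localhost;"
    "DATABASE=master;Trusted_Connection=yes",
    autocommit=True,
)
cur = conn.cursor()

# A credential holding the Windows Azure storage account name and access key.
cur.execute(
    "CREATE CREDENTIAL AzureBackupCred "
    "WITH IDENTITY = 'mystorageaccount', SECRET = '<storage-access-key>'"
)

# Back the database up straight to blob storage; compression keeps transfer and storage costs down.
cur.execute(
    "BACKUP DATABASE GamingDB "
    "TO URL = 'https://mystorageaccount.blob.core.windows.net/backups/GamingDB.bak' "
    "WITH CREDENTIAL = 'AzureBackupCred', COMPRESSION"
)
conn.close()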

By having a complete and continuous data platform we strive to empower billions of people to get value from their data. It’s why I am so excited to announce the availability of SQL Server 2014 CTP2, hot on the heels of the fastest-adopted release in SQL Server’s history, SQL Server 2012. Today, more businesses solve their data processing needs with SQL Server than any other database. It’s about empowering the world to push the boundaries.


4.3 Unlock Insights from any Data / Big Data – Microsoft SQL Server Parallel Data Warehouse (PDW) and Windows Azure HDInsights:

Data is being generated faster than ever before, so what can it do for your business? Learn how to unlock insights on any data by empowering people with BI and big data tools to go from raw data to business insights faster and easier. Learn more: http://www.microsoft.com/datainsights
With the abundance of information available today, BI shouldn’t be confined to analysts or IT. Learn how to empower all with analytics through familiar Office tools, and how to manage all your data needs with a powerful and scalable data platform. Learn more: http://www.microsoft.com/BI
With data volumes exploding by 10x every five years, and much of this growth coming from new data types, data warehousing is at a tipping point. Learn how to evolve your data warehouse infrastructure to support variety, volume, and velocity of data. Learn more: http://www.microsoft.com/datawarehousing
Hear from HP, Dell and Hortonworks how Microsoft SQL Server Parallel Data Warehouse and Windows Azure HDInsights can unlock data insights and respond to business opportunities through big data analytics. Find solutions and services from partners that span the entire stack of Microsoft Cloud OS products and technologies: http://www.microsoft.com/en-us/server-cloud/audience/partner.aspx#fbid=631zRfiT0WJ
The idea that big data will transform businesses and the world is indisputable, but are there enough resources to fully embrace this opportunity? Join Quentin Clark, Microsoft Corporate Vice President, who will share Microsoft’s bold goal to consumerize big data — simplifying the data science process and providing easy access to data with everyday tools. This keynote is sponsored by Microsoft
Quentin Clark discusses the ever-changing big data market and how Microsoft is meeting its demands.

Announcing Windows Azure HDInsight: Where big data meets the cloud [The Official Microsoft Blog, Oct 28, 2013]

This post is from Quentin Clark, Corporate Vice President of the Data Platform Group at Microsoft.

I am pleased to announce that Windows Azure HDInsight – our cloud-based distribution of Hadoop – is now generally available on Windows Azure. The GA of HDInsight is an important milestone for Microsoft, as it’s part of our broader strategy to bring big data to a billion people.

On Tuesday at Strata + Hadoop World 2013, I will discuss the opportunity of big data in my keynote, “Can Big Data Reach One Billion People?” Microsoft’s perspective is that embracing the new value of data will lead to a major transformation as significant as when line of business applications matured to the point where they touched everyone inside an organization. But how do we realize this transformation? It happens when big data finds its way to everyone in business – when anyone with a question that can be answered by data gets their answer. The impact of this is beyond just making businesses smarter and more efficient. It’s about changing how business works through both people and data-driven insights. Data will drive the kinds of changes that, for example, allow personalization to become truly prevalent. People will drive change by gaining insights into what impacts their business, enabling them to change the kinds of partnerships and products they offer.

Our goal to empower everyone with insights is the reason why Microsoft is investing, not just in technology like Hadoop, but the whole circuit required to get value from big data. Our customers are demanding more from the data they have – not just higher availability, global scale and longer histories of their business data, but that their data works with business in real time and can be leveraged in a flexible way to help them innovate. And they are collecting more signals – from machines and devices and sources outside their organizations.

Some of the biggest changes to businesses driven by big data are created by the ability to reason over data previously thought unmanageable, as well as data that comes from adjacent industries. Think about the use of equipment data to do better operational cost and maintenance management, or a loan company using shipping data as part of the loan evaluation. All of this data needs all forms of analytics and the ability to reach the people making decisions. Organizations that complete this circuit, thereby creating the capability to listen to what the data can tell them, will accelerate.

Bringing Hadoop to the enterprise

Hadoop is a cornerstone of how we will realize value from big data. That’s why we’ve engineered HDInsight as 100 percent Apache Hadoop offered as an Azure cloud service. The service has been in public production preview for a number of months now – the reception has been tremendous and we are excited to bring it to full GA status in Azure. 

Microsoft recognizes Hadoop as a standard and is investing to ensure that it’s an integral part of our enterprise offerings. We have invested through real contributions across the project – not just to make Hadoop work great on Windows, but even in projects like Tez, Stinger and Hive. We have put in thousands of engineering hours and tens of thousands of lines of code. We have been doing this in partnership with Hortonworks, who will make HDP (Hortonworks Data Platform) 2.0 for Windows Server generally available next month, giving the world access to a supported Apache-pure Hadoop v2 distribution for Windows Server. Working with Hortonworks, we will support Hadoop v2 in a future update to HDInsight.

Windows Azure HDInsight combines the best of Hadoop open source technology with the security, elasticity and manageability that enterprises require. We have built it to integrate with Excel and Power BI – our business intelligence offering that is part of Office 365 – allowing people to easily connect to data through HDInsight, then refine it and do business analytics in a turnkey fashion. For the developer, HDInsight also supports a choice of languages: .NET, Java and more.
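
For a feel of that developer surface, here is a hypothetical sketch (not an official sample; the cluster name, gateway credentials and storage paths are placeholder assumptions) of submitting a Hive query to an HDInsight cluster over the WebHCat (Templeton) REST gateway from Python:

import requests

# Placeholders: cluster name, gateway credentials and storage paths are assumptions.
CLUSTER = "https://mycluster.azurehdinsight.net/templeton/v1"
AUTH = ("admin", "<cluster-password>")

# Submit a Hive job; WebHCat returns a job id immediately rather than blocking.
resp = requests.post(
    CLUSTER + "/hive",
    auth=AUTH,
    params={"user.name": "admin"},
    data={
        "execute": "SELECT clientid, COUNT(*) FROM weblogs GROUP BY clientid",
        "statusdir": "wasb:///example/output",  # job output and exit status land here
    },
)
resp.raise_for_status()
job_id = resp.json()["id"]

# Poll the job until Hadoop reports it complete, then read results from the status directory.
status = requests.get(
    CLUSTER + "/jobs/" + job_id, auth=AUTH, params={"user.name": "admin"}
).json()
print(job_id, status.get("status"))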

We have key customers currently using HDInsight, including:

  • The City of Barcelona uses Windows Azure HDInsight to pull in data about traffic patterns, garbage collection, city festivals, social media buzz and more to make critical decisions about public transportation, security and overall spending.
  • A team of computer scientists at Virginia Tech developed an on-demand, cloud-computing model using the Windows Azure HDInsight Service, enabling easier, more cost-effective access to DNA sequencing tools and resources.
  • Christian Hansen, a developer of natural ingredients for several industries, collects electronic data from a variety of sources, including automated lab equipment, sensors and databases. With HDInsight in place, they are able to collect and process data from trials 100 times faster than before.

End-to-end solutions for big data

These kinds of uses of Hadoop are examples of how big data is changing what’s possible. Our Hadoop-based solution HDInsight is a building block – one important piece of the end-to-end solutions required to get value from data.

All this comes together in solutions where people can use Excel to pull data directly from a range of sources, including SQL Server (the most widely-deployed database product), HDInsight, external Hadoop clusters and publicly available datasets. They can then use our business intelligence tools in Power BI to refine that data, visualize it and just ask it questions. We believe that by putting widely accessible and easy-to-deploy tools in everyone’s hands, we are helping big data reach a billion people. 

I am looking forward to tomorrow. The Hadoop community is pushing what’s possible, and we could not be happier that we made the commitment to contribute to it in meaningful ways.

Quentin Clark, Microsoft, at Big Data NYC 2013 with John Furrier and Dave Vellante

“We’re here because we’re super committed to Hadoop,” Clark said, explaining that Microsoft is dedicated to helping its customers embrace the benefits Big Data can provide them with. “Hadoop is the cornerstone of Big Data but not the entire infrastructure,” he added. Microsoft is focusing on adding security and tool integration, with thousands of hours of development put into Hadoop, to make it ready for the enterprise. “There’s a foundational piece where customers are starting,” which they can build upon, and Microsoft focuses on helping them embrace Hadoop as part of the IT giant’s business goals.

Asked to compare the adoption of traditional Microsoft products with the company’s Hadoop products, Clark said, “a big part of our effort was to get to that enterprise expectations.” Security and tools integration, and getting Hadoop to work on Windows, are part of that effort. Microsoft aims to help people “have a conversation and dialogue with the data. We make sure we funnel all the data to help them get the BI and analytics” they need.

Commenting on Microsoft’s statement of bringing Big Data to its one billion Office users, Vellante asked if the company’s strategy was to put the power of Big Data into Excel. Clark explained it was about putting Big Data in the Office suite, going on to say that there are already more than a billion people who are passively using Big Data. Microsoft focuses on those actively using it.

Clark mentions Microsoft has focused on the sports arena, helping major sports leagues use Big Data to power fantasy teams. “We actually have some models, use some data sets. I have a fantasy team that I’m doing pretty well with, partly because of my ability to really have a conversation with the data. On the business side, it’s transformational. Our ability to gain insight in real time and interact is very different using these tools,” Clark stated.

Why not build its own Hadoop distro?

Asked why Microsoft decided not to have its own Hadoop distribution, Clark explained that “primarily our focus has been in improving the Apache core, make Hadoop work on Windows and work great. Our partnership with Hortonworks just made sense. They are able to continue to push and have that cross platform capability, we are able to offer our customers a solution.”

Explaining that there were great discrepancies in how different companies in the same industries made use of the benefits of Big Data, he advised our viewers to “look at what the big companies are doing” in embracing the data, and to look at what they are achieving with it.

As far as the future of the Big Data industry is concerned, Clark stated: “There’s a consistent meme of how is this embraced by business for results. Sometimes with the evolution of technology, everyone is exploring what it’s capable of.” Now there’s a focus shift of the industry towards what greater purpose it leads to, what businesses can accomplish.

@thecube

#BigDataNYC


4.4 Empower people-centric IT – Microsoft Virtual Desktop Infrastructure (VDI):

Microsoft Virtual Desktop Infrastructure (VDI) enables IT to deliver desktops and applications that employees can access from anywhere, on both personal and corporate devices. Centralizing and controlling applications and data through a virtual desktop enables your people to get their work done on the devices they choose while helping maintain compliance. Learn more: http://www.microsoft.com/msvdi
With dramatic growth in the number of mobile users and personal devices at work, and mounting pressure to comply with governmental regulations, IT organizations are increasingly turning to Microsoft Virtual Desktop Infrastructure (VDI) solutions. This session will provide an overview of Microsoft’s VDI solutions and will drill into some of the new, exciting capabilities that Windows Server 2012 R2 offers for VDI solutions.

In October, we announced Windows Server 2012 R2, which delivers several exciting improvements for VDI solutions. Among the benefits, Windows Server 2012 R2 reduces the cost per seat for VDI and enhances your end users’ experience. The following are just some of the features and benefits of Windows Server 2012 R2 for VDI:

  • Online data deduplication on actively running VMs reduces storage capacity requirements by up to 90% on persistent desktops.
  • Tiered storage spaces manage your tiers of storage (fast SSDs vs. slower HDDs) intelligently so that the most frequently accessed data blocks are automatically moved onto faster-tier drives. Likewise, older or seldom-accessed files are moved onto the cheaper and slower SAS drives.
  • The Microsoft Remote Desktop App provides easy access from a variety of devices and platforms, including Windows, Windows RT, iOS, Mac OS X and Android. This is good news for your end users and your mobility/BYOD strategy!
  • Your user experience is also enhanced due to improvements on several fronts including RemoteFX, DirectX 11.1 support, RemoteApp, quick reconnect, session shadowing, dynamic monitor and resolution changes.

If your VDI solutions run on Dell servers or if you are looking at deploying new VDI infrastructure, we are excited to let you know about the work we have been doing in partnership with Dell around VDI. Dell recently updated their Desktop Virtualization Solution (DVS) for Windows Server to support Windows Server 2012 R2, and DVS now delivers all of the benefits mentioned above. Dell is also delivering additional enhancements into Dell DVS for Windows Server so it will also support:

  • Windows 8.1 with touch screen devices and new Intel Haswell processors
  • Unified Communication with Lync 2013, via an endpoint plug-in that enables P2P audio and video. (Dell Wyse has certified selected Windows thin clients to this effect, such as the D90 and Z90.)
  • Virtualized shared graphics on NVidia GRID K1/K2 and AMD FirePro cards using Microsoft RemoteFX technology
  • Affordable persistent desktops
  • Highly-secure and dual/quad core Dell Wyse thin clients, for a true end-to-end capability, even when using high-end server graphics cards or running UC on Lync 2013
  • Optional Dell vWorkspace software, also supporting Windows Server 2012 R2, that brings scalability to tens of thousands of seats, advanced VM provisioning, IOPS efficiency to reduce storage requirement and improve performance, diagnostics and monitoring, flexible resource assignments, support for multi-tenancy and more.
  • Availability in more than 30 countries

Depending on where you stand in the VDI deployment cycle in your organization, Dell DVS for Windows Server is already supported today on multiple Dell PowerEdge server platforms:

  • The T110 for a pilot/POC up to 10 seats
  • The VRTX for implementation in a remote or branch office of up to about 500 users
  • The R720 for a traditional enterprise-like, flexible and scalable implementation to several thousand seats. It supports flexible deployments such as application virtualization, RDSH, pooled and persistent VMs.

This week, Microsoft and Dell will present a technology showcase at Dell World in Austin (TX), USA. If you happen to be at the show, you will be able to see for yourself how well Windows Server 2012 R2 and Windows 8.1 integrate into Dell DVS. We will show:

  • The single management console of Windows Server 2012 installed on a Dell PowerEdge VRTX, demonstrating how easy it can be for an IT administrator to manage VDI workloads based on Hyper-V in a remote or branch office environment
  • How users can chat, talk, share, meet, transfer files and conduct video conferencing within virtualized desktops set up for unified communication
  • That you can watch HD multimedia and 3D graphics files on multiple virtual desktops sharing a graphic card installed remotely in a server
  • How affordable it is to run persistent desktops with DVS and Windows Server 2012 R2

We are excited about the work that we are doing with Dell around VDI and hope you have a chance to come visit our joint VDI showcase in Austin. We will be located in the middle of the Dell booth in the show expo hall. Also, we will show a VDI demo as part of the Microsoft Cloud OS breakout session at noon on Thursday (December 12th) in room 9AB. Finally, we will show a longer VDI demo in the show expo theater (next to the Microsoft booth) at 10am on Friday morning (December 13th). We are looking forward to seeing you there.

With the Microsoft Remote Desktop app, you can connect to a remote PC and your work resources from almost anywhere. Experience the power of Windows with RemoteFX in a Remote Desktop client designed to help you get your work done wherever you are.

Post from Brad Anderson,
Corporate Vice President of Windows Server & System Center at Microsoft.

As of yesterday afternoon, the Microsoft Remote Desktop App is available in the Android, iOS, and Mac stores (see screen shots below). There was a time, in the very recent past, when many thought something like this would never happen.

If your company has users who work on iPads, Android, and Windows RT devices, you also likely have a strategy (or at least a point of view) for how you will deliver Windows applications to those devices. With the Remote Desktop App and the 2012 R2 platforms made available earlier today, you now have a great solution from Microsoft to deliver Windows applications to your users across all the devices they are using.

As I have written about before, one of the things I am actively encouraging organizations to do is to step back and look at their strategy for delivering applications and protecting data across all of their devices. Today, most enterprises are using different tools for enabling users on PCs, and then they deploy another tool for enabling users on their tablets and smart phones. This kind of overhead and the associated costs are unnecessary – but, even more important (or maybe I should say worse), is that your end-users therefore have different and fragmented experiences as they transition across their various devices. A big part of an IT team’s job must be to radically simplify the experience end users have in accomplishing their work – and users are doing that work across all their devices.

I keep bolding “all” here because I am really trying to make a point: Let’s stop thinking about PCs and devices in a fragmented way. What we are trying to accomplish is pretty straightforward: Enable users to access the apps and data they need to be productive in a way that can ensure the corporate assets are secure. Notice that nowhere in that sentence did I mention devices. We should stop talking about PC Lifecycle management, Mobile Device Management and Mobile Application Management – and instead focus our conversation on how we are enabling users. We need a user-enablement Magic Quadrant!

OK – stepping off my soapbox. Smile

Delivering Windows applications in a server-computing model, through solutions like Remote Desktop Services, is a key requirement in your strategy for application access management. But keep in mind that this is only one of many ways applications can be delivered – and we should consider and account for all of them.

For example, you also have to consider Win32 apps running in a distributed model, modern Windows apps, iOS native apps (side-loaded and deep-linked), Android native apps (side-loaded and deep-linked), SaaS applications, and web applications.

Things have really changed from just 5 years ago when we really only had to worry about Windows apps being delivered to Windows devices.

As you are rethinking your application access strategy, you need solutions that enable you to intelligently manage all these applications types across all the devices your workforce will use.

You should also consider that the Remote Desktop Apps released yesterday are proof of Microsoft’s commitment to enabling you to have a single solution to manage all the devices your users will use.

Microsoft describes itself as a “devices and services company.” Let me provide a little more insight into this.

Devices: We will do everything we can to earn your business on Windows devices.

Services: We will light up those Windows devices with the cloud services that we build, and these cloud services will also light up all (there’s that bold again) your other devices.

The funny thing about cloud services is that they want every device possible to connect to them – we are working to make sure the cloud services that we are building for the enterprise will bring value to all (again!) the devices your users will want to use – whether those are Windows, iOS, or Android.

The RDP clients that we released into the stores yesterday are not v1 apps. Back in June, we acquired IP assets from an organization in Austria (HLW Software Development GmbH) that had been building and delivering RDP clients for a number of years. In fact, there were more than 1 million downloads of their RDP clients from the Apple and Android stores. The team has done an incredible job using them as a base for development of our Remote Desktop App, creating a very simple and compelling experience on iOS, Mac OS X and Android. You should definitely give them a try!

Also: Did I mention they are free?

To start using the Microsoft Remote Desktop App for any of these platforms, simply follow these links:

Setup: Windows 8.1 Pro running on a slow netbook (BenQ Joybook Lite U101 with an Atom N270), an HTC One X running Android 4.2.2, and an HTC Flyer running Android 3.2.1. How to: http://android-er.blogspot.com/2013/10/basic-setup-for-microsoft-remote.html
