Microsoft products for the Cloud OS
Part of: Microsoft Cloud OS vision, delivery and ecosystem rollout
1. The Microsoft way
4.1 Windows Server 2012 R2 & System Center 2012 R2 [MPNUK YouTube channel, Nov 18, 2013]
Windows Server 2012 R2: 0:00
Server Virtualization: 4:40
Storage: 11:07
Networking: 17:37
Server Management and Automation: 23:14
Web and Application Platform: 27:05
System Center 2012 R2: 31:14
Infrastructure Provisioning: 36:15
Infrastructure Monitoring: 42:48
Automation and Self-service: 45:30
Application Performance Monitoring: 48:50
IT Service Management: 51:05
More information is in the What’s New in 2012 R2 [Windows Server 2012 R2, System Center 2012 R2] series of “In the Cloud” articles by Brad Anderson:
- Part 1, July 3, 2013: Beginning and Ending with Customer-specific Scenarios
Over the last three weeks, Microsoft has made an exciting series of announcements about its next wave of products, including Windows Server 2012 R2, System Center 2012 R2, SQL Server 2014, Visual Studio 2013, Windows Intune and several new Windows Azure services. The preview bits are now available, and the customer feedback has been incredible!
The most common reaction I have heard from our customers and partners is that they cannot believe how much innovation has been packed into these releases – especially in such a short period of time. There is a truly amazing amount of new value in these releases and, with this in mind, we want to help jump-start your understanding of the key scenarios that we are enabling.
As I’ve discussed this new wave of products with customers, partners, and press, I’ve heard the same question over and over: “How exactly did Microsoft build and deliver so much in such a short period of time?” My answer is that we have modified our own internal processes in a very specific way: We build for the cloud first.
A cloud-first design principle manifests itself in every aspect of development; it means that at every step we architect and design for the scale, security and simplicity of a high-scale cloud service. As a part of this cloud-first approach, we assembled a ‘Scenario Focus Team’ that identified the key user scenarios we needed to support – this meant that our engineers knew exactly what needed to be built at every stage of development, thus there was no time wasted debating what happened next. We knew our customers, we knew our scenarios, and that allowed all of the groups and stakeholders to work quickly and efficiently.
The cloud-first design approach also means that we build and deploy these products within our own cloud services first and then deliver them to our customers and partners. This enables us to first prove-out and battle-harden new capabilities at cloud scale, and then deliver them for enterprise use. The Windows Azure Pack is a great example of this: In Azure we built high-density web hosting where we could literally host 5,000 web servers on a single Windows Server instance. We exhaustively battle-hardened that feature, and now you can run it in your datacenters.
At Microsoft we operate more than 200 cloud services, many of which serve hundreds of millions of users every day. By architecting everything to deliver at that kind of scale, we are sure to meet the needs of enterprises anywhere and in any industry.
Our cloud-first approach was unique for another reason: It was the first time we had common/unified planning across Windows Client, Windows Server, System Center, Windows Azure, and Windows Intune. I know that may sound crazy, but it’s true – this is a first. We spent months planning and prioritizing the end-to-end scenarios together, with the goal of identifying and enabling all the dependencies and integration required for an effort this broad. Next we aligned on a common schedule with common engineering milestones.
The results have been fantastic. Last week, within 24 hours, we were able to release the preview bits of Windows Client 8.1, Windows Server 2012 R2, System Center 2012 R2, and SQL Server 2014.
By working together throughout the planning and build process, we established a common completion and Release to Manufacturing date, as well as a General Availability date. Because of these shared plans and development milestones, by the time we started the actual coding, the various teams were well aware of each dependency and the time to build the scenarios was much shorter.
The bottom-line impact of this Cloud-first approach is simple: Better value, faster.
This wave of products demonstrates that the changes we’ve made internally allow us to deliver more end-to-end scenarios out of the box, and each of those scenarios is delivered at a higher quality. This cloud-first approach also helps us deliver the Cloud OS vision that drives the STB business strategy.
The story behind the technologies that support the Cloud OS vision is an important part of how we enable customers to embrace cloud computing concepts. Over the next eight weeks, we’ll examine in great detail the three core pillars (see the table below) that support and inspire these R2 products: Empower People-centric IT, Transform the Datacenter, and Enable Modern Business Apps. The program managers who defined these scenarios and worked within each pillar throughout the product development process have authored in-depth overviews of these pillars and their specific scenarios, and we’ll release those on a weekly basis.
Pillar: Empower People-centric IT
Scenarios: People-centric IT (PCIT) empowers each person you support to work virtually anywhere on PCs and devices of their choice, while providing IT with an easy, consistent, and secure way to manage it all. Microsoft’s approach helps IT offer a consistent self-service experience for people, their PCs, and their devices while ensuring security. You can manage all your client devices in a single tool while reducing costs and simplifying management.
Pillar: Transform the Datacenter
Scenarios: Transforming the datacenter means driving your business with the power of a hybrid cloud infrastructure. Our goal is to help you leverage your investments, skills and people by providing a consistent datacenter and public cloud services platform, as well as products and technologies that work across your datacenter and service provider clouds.
Pillar: Enable Modern Business Apps
Scenarios: Modern business apps live and move wherever you want, and Microsoft offers the tools and resources that deliver industry-leading performance, high availability, and security. This means boosting the impact of both new and existing applications, and easily extending applications with new capabilities – including deploying across multiple devices.
The story behind these pillars and these products is an important part of our vision for the future of corporate computing and the modern datacenter. In the following post, David B. Cross, the Partner Director of Test and Operations for Windows Server, shares some of the insights the Windows Server & System Center team have applied during every stage of our planning, build, and deployment of this awesome new wave of products.
- Empowering People-centric IT [MSCloudOS YouTube channel, Oct 30, 2013]
- People-centric IT — Dell and Accenture [MSCloudOS YouTube channel, Nov 1, 2013]
- Part 2, July 10, 2013: Making Device Users Productive and Protecting Corporate Information
The modern workforce isn’t just better connected and more mobile than ever before, it’s also more discerning (and demanding) about the hardware and software used on the job. While company leaders around the world are celebrating the increased productivity and accessibility of their workforce, the exponential increase in devices and platforms that the workforce wants to use can stretch a company’s infrastructure (and IT department!) to its limit.
If your IT team is grappling with the impact and sheer magnitude of this trend, let me reiterate a fact I’ve noted several times before on this blog: The “Bring Your Own Device” (BYOD) trend is here to stay.
Building products that address this need is a major facet of the first design pillar I noted last week: People-centric IT (PCIT).
In today’s post (and in each one that follows in this series), this overview of the architecture and critical components of the PCIT pillar will be followed by a “Next Steps” section at the bottom. The “Next Steps” will include a list of new posts (each one written specifically for that day’s topic) developed by our Windows Server & System Center engineers. Every week, these engineering blogs will provide deep technical detail on the various components discussed in this main post. Today, these blogs will systematically examine and discuss the technology used to power our PCIT solution.
…
The PCIT solution detailed below enables IT Professionals to set access policies to corporate applications and data based on three incredibly important criteria:
- The identity of the user
- The user’s specific device
- The network the user is working from
…
What’s required here is a single management solution that enables specific features where control is necessary and appropriate, and that also provides what I call “governance,” or light control when less administration is necessary. This means a single pane of glass for managing PCs and devices. Far too often I meet with companies that have two separate solutions running side-by-side – one for every PC, and the second to manage devices. Not only is this more expensive and more complex, it creates two disjointed experiences for end users and a big headache for the IT pros responsible for managing them.
In today’s post, Paul Mayfield, the Partner Program Manager for the System Center Configuration Manager/Windows Intune team, discusses how everything that Microsoft has built with this solution is focused on enabling IT teams to use the same System Center Configuration Manager that they already have in place managing their PCs, and now extend this management power to devices. This means double the management capabilities from within the same familiar console. This philosophy can be extended even further by using Windows Intune to manage devices where they live – i.e. cloud-based management for cloud-based devices. Cloud-based management is especially important for user-owned devices that need regular updates.
This is an incredible solution, and the benefit and ease of use for you, the consumer, is monumental.
- User and Device Management [MSCloudOS YouTube channel, Nov 1, 2013]
- Part 3, July 17, 2013: People-centric IT in Action – End-to-end Scenarios Across Products
In today’s post, we tackle a common question I get from customers: “Why move to the cloud right now?” Recently, however, this question has changed a bit to, “What should I move to the cloud first?”
An important thing to keep in mind with either of these questions is that every organization has its own unique journey to the cloud. There are a lot of different workloads that run on Windows Server, and the reality is that these various workloads are moving to the cloud at very different rates. Web servers, e-mail and collaboration are examples of workloads moving to the cloud very quickly. I believe that management, and the management of smart devices, will be one of the next workloads to make that move to the cloud – and, when the time comes, that move will happen fast.
Using a SaaS solution is a move to the cloud, and taking this approach is a game changer because of its ability to deliver an incredible amount of value and agility without an IT pro needing to manage any of the required infrastructure.
Cloud-based device management is a particularly interesting development because it allows IT pros to manage this rapidly growing population of smart, cloud-connected devices, and manage them “where they live.” Today’s smart phones and tablets were built to consume cloud services, and this is one of the reasons why I believe that a cloud-based management solution for them is so natural. As you contemplate your organization’s move to the cloud, I suggest that managing all of your smart devices from the cloud should be one of your top priorities.
I want to be clear, however, about the nature of this kind of management: We believe that there should be one consistent management experience across PCs and devices.
Achieving this single management experience was a major focus of these 2012 R2 releases, and I am incredibly proud to say we have successfully engineered products which do exactly that. The R2 releases deliver this consistent end-user experience through something we call the “Company Portal.” The Company Portal is already deployed here at Microsoft, and it is what we are currently using to upgrade our entire workforce to Windows 8.1. I’ve personally used it to upgrade my desktop, laptop, and Surface – and the process could not have been easier.
In this week’s post, Paul Mayfield, the Partner Program Manager for System Center Configuration Manager/Windows Intune, and his team return to discuss in deep technical detail some of the specific scenarios our PCIT [“People Centric IT”] team has enabled (cloud-based management, Company Portal, etc.).
- Transform the Datacenter [MSCloudOS YouTube channel, Oct 30, 2013]
- Transform the Datacenter Partner Video — Hostway and Cisco [MSCloudOS YouTube channel, Oct 30, 2013]
- Part 4, July 24, 2013: Enabling Open Source Software
…
There are a lot of great surprises in these new R2 releases – things that are going to make a big impact in a majority of IT departments around the world. Over the next four weeks, the 2012 R2 series will cover the 2nd pillar of this release: Transform the Datacenter. In these four posts (starting today) we’ll cover many of the investments we have made that better enable IT pros to transform their datacenter via a move to a cloud-computing model.
This discussion will outline the ambitious scale of the functionality and capability within the 2012 R2 products. As with any conversation about the cloud, however, there are key elements to consider as you read. Particularly, I believe it’s important in all these discussions – whether online or in person – to remember that cloud computing is a computing model, not a location. All too often when someone hears the term “cloud computing” they automatically think of a public cloud environment. Another important point to consider is that cloud computing is much more than just virtualization – it is something that involves change: Change in the tools you use (automation and management), change in processes, and a change in how your entire organization uses and consumes its IT infrastructure.
Microsoft is extremely unique in this perspective, and it is leading the industry with its investments to deliver consistency across private, hosted and public clouds. Over the course of these next four posts, we will cover our innovations in the infrastructure (storage, network, compute), in both on-premise and hybrid scenarios, support for open source, cloud service provider & tenant experience, and much, much more.
As I noted above, it simply makes logical sense that running the Microsoft workloads in the Microsoft Clouds will deliver the best overall solution. But what about Linux? And how well does Microsoft virtualize and manage non-Windows platforms, in particular Linux? Today we’ll address these exact questions.
Our vision regarding other operating platforms is simple: Microsoft is committed to being your cloud partner. This means end-to-end support that is versatile, flexible, and interoperable for any industry, in any environment, with any guest OS. This vision ensures we remain realistic – we know that users are going to build applications on open source operating systems, so we have built a powerful set of tools for hosting and managing them.
A great deal of the responsibility to deliver the capabilities that enable the Microsoft Clouds (private, hosted, Azure) to effectively host Linux and the associated open source applications falls heavily on the shoulders of the Windows Server and System Center team. In today’s post Erin Chapple, a Partner Group Program Manager in the Windows Server & System Center team, will detail how building the R2 wave with an open source environment in mind has led to a suite of products that are more adaptable and more powerful than ever.
As always in this series, check out the “Next Steps” at the bottom of this post for links to a variety of engineering content with hyper-technical overviews of the concepts examined in this post.
- Part 5.1, July 31, 2013: IaaS Innovations
…
Back during the planning phase of 2012 R2, we carefully considered where to focus our investments for this release wave, and we chose to concentrate our efforts on enabling Service Providers to build out a highly-available, highly-scalable IaaS infrastructure on cost-effective hardware. With the innovations we have driven in storage, networking, and compute, we believe Service Providers can now build-out an IaaS platform that enables them to deliver VMs at 50% of the cost of competitors. I repeat: 50%. The bulk of the savings comes from our storage innovations and the low costs of our licenses.
…
At the core of our investments in 2012 R2 is the belief that customers are going to be using multiple clouds, and they want those clouds to be consistent.
Consistency across clouds is key to enabling the flexibility and frictionless movement of applications across these clouds, and, if this consistency exists, applications can be developed once and then hosted in any cloud. This means consistency for the developer. If clouds are consistent, with the same management and operations tools easily used to operate these applications, that means consistency for the IT Pro.
It really all comes down to the friction-free movement of applications and VMs across clouds. Microsoft is very unique in this regard; we are the only cloud vendor investing and innovating in public, private and hosted clouds – with a promise of consistency (and no lock-in!) across all of them.
We are taking what we learn from our innovations in Windows Azure and delivering them through Windows Server, System Center and the Windows Azure Pack for you to use in your data center. This enables us to do rapid innovation in the public cloud, battle harden the innovations, and then deliver them to you to deploy. This is one of the ways in which we have been able to quicken our cadence and deliver the kind of value you see in these R2 releases. You’ll be able to see a number of areas where we are driving consistency across clouds in today’s post.
And speaking of today’s post – this IaaS topic will be published in two parts, with the second half appearing tomorrow morning.
In this first half of our two-part overview of the 2012 R2’s IaaS capabilities, Erin Chapple, a Partner Group Program Manager in the Windows Server & System Center team, examines the amazing infrastructure innovations delivered by Windows Server 2012 R2, System Center 2012 R2, and the new features in the Windows Azure Pack.
As always in this series, check out the “Next Steps” at the bottom of this post for links to a wide range of engineering content with deep, technical overviews of the concepts examined in this post. Also, if you haven’t started your own evaluation of the 2012 R2 previews, visit the TechNet Evaluation Center and take a test drive today!
- Part 5.2, Aug 1, 2013: Service Provider & Tenant IaaS Experience
I recently had an opportunity to speak with a number of leaders from the former VMware User Group (VMUG), and it was an incredibly educational experience. I say “former” because many of the VMUG user group chapters are updating their focus/charter and are renaming themselves the Virtual Technology User Group (VTUG). This change is a direct result of how they see market share and industry momentum moving to solutions like the consistent clouds developed by Microsoft.
In a recent follow up conversation with these leaders, I asked them to describe some common topics they hear discussed in their meetings. One of the leaders commented that the community is saying something really specific: “If you want to have job security and a high paying job for the next 10 years, you better be on your way to becoming an expert in the Microsoft clouds. That is where this industry is going.”
When I look at what is delivered in these R2 releases, the innovation is just staggering. This industry-leading innovation – the types of technical advances that VTUG groups are confidently betting on – is really exciting.
With this innovation in mind, in today’s post I want to discuss some of the work we are doing around the user experience for the teams creating the services that are offered, and I want to examine the experience that can be offered to the consumer of the cloud (i.e. the tenants). While we were developing R2, we spent a lot of time ensuring that we truly understood exactly who would be using our solutions. We exhaustively researched their needs, their motivations, and how various IT users and IT teams relate to each other. This process was incredibly important because these individuals and teams all have very different needs – and we were committed to supporting all of them.
The R2 wave of products has been built with this understanding. The IT teams actually building and operating a cloud (or clouds) have very different needs than the individuals who are consuming the cloud (tenants). The experience for the infrastructure teams will focus on just that – the infrastructure; the experience for the tenants will focus on the applications/services and their seamless operation and maintenance.
In yesterday’s post we focused heavily on the innovations in these R2 releases in the infrastructure – storage, network, and compute – and, in this post, Erin Chapple, a Partner Group Program Manager in the Windows Server & System Center team, will provide an in-depth look at Service Provider and Tenant experience and innovations with Windows Server 2012 R2, System Center 2012 R2, and the new features in Windows Azure Pack.
As always in this series, check out the “Next Steps” at the bottom of this post for links to a variety of engineering content with hyper-technical overviews of the concepts examined in this post. Also, if you haven’t started your own evaluation of the 2012 R2 previews, visit the TechNet Evaluation Center and take a test drive today!
- Access and Information Protection [MSCloudOS YouTube channel, Nov 1, 2013]
- Part 6, Aug 9, 2013: Identity Management for Hybrid IT
…
In the 13+ years since the original Active Directory product launched with Windows 2000, it has grown to become the default identity management and access-control solution for over 95% of organizations around the world. But, as organizations move to the cloud, their identity and access control also need to move to the cloud. As companies rely more and more on SaaS-based applications, as the range of cloud-connected devices being used to access corporate assets continues to grow, and as more hosted and public cloud capacity is used, companies must expand their identity solutions to the cloud.
Simply put, hybrid identity management is foundational for enterprise computing going forward.
With this in mind, we set out to build a solution in advance of these requirements to put our customers and partners at a competitive advantage.
To build this solution, we started with our cloud-first design principle. To meet the needs of enterprises working in the cloud, we built a solution that took the power and proven capabilities of Active Directory and combined it with the flexibility and scalability of Windows Azure. The outcome is the predictably named Windows Azure Active Directory.
By cloud optimizing Active Directory, enterprises can stretch their identity and access management to the cloud and better manage, govern, and ensure compliance throughout every corner of their organization, as well as across all their utilized resources.
This can take the form of seemingly simple processes (albeit very complex behind the scenes) like single sign-on, which is a massive time and energy saver for a workforce that uses multiple devices and multiple applications per person. It can also enable the scenario where a user’s customized and personalized experience follows them from device to device, regardless of when and where they’re working. Activities like these are simply impossible without a scalable, cloud-based identity management system.
If anyone doubts how serious and enterprise-ready Windows Azure AD already is, consider these facts:
- Since we released Windows Azure AD, we’ve had over 265 billion authentications.
- Every two minutes Windows Azure AD services over 1,000,000 authentication requests for users and devices around the world (that’s about 9,000 requests per second).
- There are currently more than 420,000 unique domains uploaded and now represented inside of Azure Active Directory.
Windows Azure AD is battle tested, battle hardened, and many other verbs preceded by the word “battle.”
But, perhaps even more importantly, Windows Azure AD is something Microsoft has bet its own business on: Both Office 365 (the fastest growing product in Microsoft history) and Windows Intune authenticate every user and device with Windows Azure AD.
In this post, Vijay Tewari (Principal Program Manager for Windows Server & System Center), Alex Simons (Director of Program Management for Active Directory), Sam Devasahayam (Principal Program Management Lead for Windows Azure AD), and Mark Wahl (Principal Program Manager for Active Directory) take a look at one of R2’s most innovative features, Hybrid Identity Management.
As always in this series, check out the “Next Steps” at the bottom of this post for links to a wide range of engineering content with deep, technical overviews of the concepts examined in this post.
- Extend Your Datacenter with Software-Defined Networking [MSCloudOS YouTube channel, Oct 30, 2013]
- Microsoft-Cisco to Deliver Application-Centric Networking [MSCloudOS YouTube channel, Nov 22, 2013]
- Part 7.1, Aug 14, 2013: Hybrid Networking
One of the foundational requirements we called out in the 2012 R2 vision document was our promise to help you transform the datacenter. A core part of delivering on that promise is enabling Hybrid IT.
By focusing on Hybrid IT we were specifically calling out the fact that almost every customer we interacted with during our planning process believed that in the future they would be using capacity from multiple clouds. That may take the form of multiple private clouds an organization has stood up, utilizing cloud capacity from a service provider [i.e. managed cloud] or a public cloud like Azure, or using SaaS solutions running from the public cloud.
We assumed Hybrid IT would really be the norm going forward, so we challenged ourselves to really understand and simplify the challenges associated with configuring and operating in a multi-cloud environment. Certainly one of the biggest challenges associated with operating in a hybrid cloud environment is associated with the network – everything from setting up the secure connection between clouds, to ensuring you could use your IP addresses (BYOIP) in the hosted and public clouds you chose to use.
The setup, configuration and operation of a hybrid IT environment is, by its very nature, incredibly complex – and we have poured hundreds of thousands of hours into the development of R2 to solve this industry-wide problem.
With the R2 wave of products – specifically Windows Server 2012 R2 and System Center 2012 R2 – enterprises can now benefit from the highly-available and secure connection that enables the friction-free movement of VMs across those clouds. If you want or need to move a VM or application between clouds, the transition is seamless and the data is secure while it moves.
The functionality and scalability of our support for hybrid IT deployments has not been easy to build, and each feature has been methodically tested and refined in our own datacenters. For example, consider that within Azure there are over 50,000 network changes every day, and every single one of them is fully automated. If even 1/10 of 1% of those changes had to be done manually, it would require a small army of people working constantly to implement and then troubleshoot the human errors. With R2, the success of processes like these, and our learnings from Azure, come in the box.
Whether you’re a service provider or working in the IT department of an enterprise (which, in a sense, is like being a service provider to your company’s workforce), these hybrid networking features are going to remove a wide range of manual tasks, and allow you to focus on scaling, expanding and improving your infrastructure.
In this post, Vijay Tewari (Principal Program Manager for Windows Server & System Center) and Bala Rajagopalan (Principal Program Manager for Windows Server & System Center) provide a detailed overview of 2012 R2’s hybrid networking features, as well as solutions for common scenarios like enabling customers to create extended networks spanning clouds, and enabling access to virtualized networks.
Don’t forget to take a look at the “Next Steps” section at the bottom of this post, and check back tomorrow for the second half of this week’s hybrid IT content which will examine the topic of Disaster Recovery.
- Ensuring Business Continuity [MSCloudOS YouTube channel, Oct 30, 2013]
- Part 7.2, Aug 14, 2013: Cloud-integrated Disaster Recovery
…
With Windows Server 2012 R2, with Hyper-V Replica, and with System Center 2012 R2 we have delivered a DR solution for the masses.
This DR solution is a perfect example of how the cloud changes everything.
Windows Azure offers a global, highly available cloud platform, and with an application architecture that takes full advantage of its HA capabilities you can build an app on Azure that will be available anytime and anywhere. This kind of functionality is why we made the decision to build the control plane, or administrative console, for our DR solution on Azure. The control plane and all the metadata required to perform a test, planned, or unplanned recovery will always be available. This means you don’t have to make the huge investments that have been required in the past to build a highly-available platform to host your DR solution – Azure automatically provides this.
(Let me make a plug here that you should be looking to Azure for all the new applications you are going to build – and we’ll start covering this specific topic in next week’s R2 post.)
With this R2 wave of products, organizations of all sizes and maturity, anywhere in the world, can now benefit from a simple and cost-effective DR solution.
There’s another thing that I am really proud of here: Like most organizations, we regularly benchmark ourselves against our competition. We use a variety of metrics, like: ‘Are we easier to deploy and operate?’ and ‘Are we delivering more value and doing it at a lower price?’ Measurements like these have provided a really clear answer: Our competitors are not even in the same ballpark when it comes to DR.
During the development of R2, I watched a side-by-side comparison of what was required to set up DR for 500 VMs with our solution compared to a competitive offering, and the contrast was staggering. The difference in simplicity and the total amount of time required to set everything up was dramatic. In a DR scenario, one interesting unit of measurement is total mouse clicks. It’s easy to get carried away with counting clicks (hey, we’re engineers after all!), but, in the side-by-side comparison, the difference was tens of mouse clicks compared to hundreds. It is literally a difference of minutes vs. days.
You can read some additional perspectives I’ve shared on DR here.
In yesterday’s post we looked at the new hybrid networking functionality in R2 (if you haven’t seen it yet, it is a must-read), and in this post Vijay Tewari (Principal Program Manager for Windows Server & System Center) goes deep into the architecture of this DR solution, as well as this solution’s deployment and operating principles.
As always in this 2012 R2 series, check out the “Next Steps” at the bottom of this post for links to a variety of engineering content with hyper-technical overviews of the concepts examined in this post.
- Enable Modern Business Applications [MSCloudOS YouTube channel, Nov 1, 2013]
- Enable Modern Apps — Accenture, Avanade and Hostway [MSCloudOS YouTube channel, Nov 1, 2013]
- Part 8, Aug 21, 2013: Enabling Modern Apps with the Windows Azure Pack
…
Knowing how applications are built for the cloud, as well as the cloud infrastructures where these apps operate, is something every IT Pro needs in order to be a voice in the meetings that will define an organization’s cloud strategy. IT pros are also going to need to know how their team fits in this cloud-centric model, as well as how to proactively drive these discussions.
These R2 posts will get you what you need, and this “Enable Modern Business Apps” pillar will be particularly helpful.
Throughout the posts in this series we have spoken about the importance of consistency across private, hosted and public clouds, and we’ve examined how Microsoft is unique in its vision and execution of delivering consistent clouds. The Windows Azure Pack is a wonderful example of Microsoft innovating in the public cloud and then bringing the benefits of that innovation to your datacenter.
The Windows Azure Pack is – literally speaking – a set of capabilities that we have battle-hardened and proven in our public cloud. These capabilities are now made available for you to enhance your cloud and ensure the “consistency across clouds” that we believe is so important.
A major benefit of the Windows Azure Pack is the ability to build an application once and then deploy and operate it in any Microsoft Cloud – private, hosted or public.
This kind of flexibility means that you can build an application, initially deploy it in your private cloud, and then, if you want to move that app to a Service Provider or Azure in the future, you can do it without having to modify the application. Making tasks like this simple is a major part of our promise around cloud consistency, and it is something only Microsoft (not VMware, not AWS) can deliver.
This ability to migrate an app between these environments means that your apps and your data are never locked in to a single cloud. This allows you to easily adjust as your organization’s needs, regulatory requirements, or any operational conditions change.
A big part of this consistency and connection is the Windows Azure Service Bus which will be a major focus of today’s post.
The Windows Azure Service Bus has been a big part of Windows Azure since 2010. I don’t want to overstate this, but Service Bus has been battle-hardened in Azure for more than 3 years, and now we are delivering it to you to run in your datacenters. To give you a quick idea of how critical Service Bus is for Microsoft, consider this: Service Bus is used in all the billing for Windows Azure, and it is responsible for gathering and posting all the scoring and achievement data to the Halo 4 leaderboards (now that is really, really important – just ask my sons!). It goes without saying that the people in charge of Azure billing and the hardcore gamers are not going to tolerate any latency or downtime getting to their data.
With today’s topic, take the time to really appreciate the app development and app platform functionality in this R2 wave. I think you’ll be really excited about how you can plug into this process and lead your organization.
This post, written by Bradley Bartz (Principal Program Manager from Windows Azure) and Ziv Rafalovich (Senior Program Manager in Windows Azure), will get deep into these new features and the amazing scenarios that the Windows Azure Pack and Windows Azure Service Bus enable. As always in this 2012 R2 series, check out the “Next Steps” at the bottom of this post for links to additional information about the topics covered in this post.
- Part 9, Aug 28, 2013: PaaS for the Modern Web
A major promise underlying all of the 2012 R2 products is really simple: Consistency.
Consistency in the user experiences, consistency for IT professionals, consistency for developers and consistency across clouds. A major part of delivering this consistency is the Windows Azure Pack (WAP). Last week we discussed how Service Bus enables connections across clouds, and in this post we’ll examine more of the PaaS capabilities built and tested in Azure data centers and now offered for Windows Server. With the WAP, Windows Server 2012 R2, and System Center, IT pros can make their data centers even more scalable, flexible, and secure.
Throughout the development of this R2 wave, we looked closely at what organizations needed and wanted from the cloud. A major piece of feedback was the desire to build an app once and then have that app live in any data center or cloud. For the first time this kind of functionality is now available. Whether your app is in a private, public, or hosted cloud, the developers and IT Professionals in your organization will have consistency across clouds.
One of the elements that I’m sure will be especially popular is the flexibility and portability of this PaaS. I’ve had countless customers comment that they love the idea of PaaS, but don’t want to be locked-in or restricted to only running it in specific data centers. Now, our customers and partners can build a PaaS app and run it anywhere. This is huge! Over the last two years the market has really begun to grasp what PaaS has to offer, and now the benefits (auto-scale, agility, flexibility, etc.) are easily accessible and consistent across the private, hosted and public clouds Microsoft delivers.
This post will spend a lot of time talking about Web Sites for Windows Azure and how this high density web site hosting delivers a level of power, functionality, and consistency that is genuinely next-gen.
Microsoft is literally the only company offering these kinds of capabilities across clouds – and I am proud to say that we are the only ones with a sustained track record of enterprise-grade execution.
With the features added by the WAP [Windows Azure Pack], organizations can now take advantage of PaaS without being locked into a cloud. This is, at its core, the embodiment of Microsoft’s commitment to make consistency across clouds a workable, viable reality.
This is genuinely PaaS for the modern web.
Today’s post was written by Bradley Bartz, a Principal Program Manager from Windows Azure. For more information about the technology discussed here, or to see demos of these features in action, check out the “Next Steps” at the bottom of this post.
More information: in the Success with Hybrid Cloud series blog posts [Brad Anderson, Nov 12, Nov 14, Nov 20, Dec 2, Dec 5, and 21 upcoming blog posts] which “will examine the building/deployment/operation of Hybrid Clouds, how they are used in various industries, how they manage and deliver different workloads, and the technical details of their operation.”
4.2 Unlock Insights from any Data – SQL Server 2014:
- Breakthrough Data Platform Performance with SQL Server 2014 [MSCloudOS YouTube channel, Nov 1, 2013]
- SQL Server 2014 [MPNUK YouTube channel, Nov 18, 2013]
- SQL Server 2014 CTP 2 Now Available [SQL Server Blog, Oct 17, 2013]
Microsoft SQL Server 2014 CTP2 was announced by Quentin Clark during the Microsoft SQL PASS 2013 keynote. This second public CTP is essentially feature complete and enables you to try and test all of the capabilities of the full SQL Server 2014 release. Below you will find an overview of SQL Server 2014 as well as key new capabilities added in CTP2:
SQL Server 2014 helps organizations by delivering:
- Mission Critical Performance across all database workloads, with In-Memory for online transaction processing (OLTP), data warehousing and business intelligence built in, as well as greater scale and availability
- Platform for Hybrid Cloud enabling organizations to more easily build, deploy and manage database solutions that span on-premises and cloud
- Faster Insights from Any Data with a complete BI solution using familiar tools like Excel
Thank you to those who have already downloaded SQL Server 2014 CTP1 and started seeing firsthand the performance gains that in-memory capabilities deliver, along with better high availability from the AlwaysOn enhancements. CTP2 introduces additional mission critical capabilities with further enhancements to the in-memory technologies along with new hybrid cloud capabilities.
What’s new in SQL Server 2014 CTP2?
New Mission Critical Capabilities and Enhancements
- Enhanced In-Memory OLTP, including new tools that will help you identify and migrate the tables and stored procedures that will benefit most from In-Memory OLTP, as well as greater T-SQL compatibility and new indexes that enable more customers to take advantage of our solution.
- High Availability for In-Memory OLTP Databases: AlwaysOn Availability Groups are supported for In-Memory OLTP, giving you in-memory performance gains with high availability.
- IO Resource Governance, enabling customers to more effectively manage IO across multiple databases and/or classes of databases to provide more predictable IO for your most critical workloads. (Customers can already manage CPU and memory today.)
- Improved resiliency with Windows Server 2012 R2 by taking advantage of Cluster Shared Volumes (CSVs). CSVs provide improved fault detection and recovery in the case of downtime.
- Delayed Durability, providing the option of increased transaction throughput and lower latency for OLTP applications where performance and latency needs outweigh the need for 100% durability (see the T-SQL sketch after this list).
New Hybrid Cloud Capabilities and Enhancements
By enabling the above in-memory performance capabilities for your SQL Server instances running in Windows Azure Virtual Machines, you will see significant transaction and query performance gains. In addition there are new capabilities listed below that will allow you to unlock new hybrid scenarios for SQL Server.
- Managed Backup to Windows Azure, enabling you to back up on-premises SQL Server databases to Windows Azure storage directly in SSMS. Managed Backup also optimizes backup policy based on usage, an advantage over the manual Backup to Windows Azure.
- Encrypted Backup, offering customers the ability to encrypt both on-premises backups and backups to Windows Azure for enhanced security (a sketch follows this list).
- Enhanced disaster recovery to Windows Azure with simplified UI, enabling customers to more easily add Windows Azure Virtual Machines as AlwaysOn secondaries in SQL Server Management Studio for a more cost-effective data protection and disaster recovery solution. Customers may also use the secondaries in Windows Azure to scale and offload reporting and backups.
- SQL Server Data Files in Windows Azure – New capability to store large databases (>16TB) in Windows Azure and the ability to stream the database as a backend for SQL Server applications running on-premises or in the cloud.
Learn more and download SQL Server 2014 CTP2
SQL Server 2014 helps address key business challenges of ever-growing data volumes, the need to transact and process data faster, the scalability and efficiency of cloud computing, and an ever-growing hunger for business insights. With SQL Server 2014 you can now unlock real-time insights with mission critical and cloud performance and take advantage of one of the most comprehensive BI solutions in the marketplace today.
Many customers are already realizing the significant benefits of the new in-memory technologies in SQL Server 2014, including Edgenet, Bwin, SBI Liquidity, TPP and Ferranti. Stay tuned for an upcoming blog highlighting the impact in-memory had on each of their businesses.
Learn more about SQL Server 2014 and download the datasheet and whitepapers here. Also if you would like to learn more about SQL Server In-Memory best practices, check out this SQL Server 2014 in-memory blog series compilation. There is also a SQL Server 2014 hybrid cloud scenarios blog compilation for learning best practices.
Also, if you haven’t already, download SQL Server 2014 CTP2 and see how much faster your SQL Server applications run! The CTP2 image is also available on Windows Azure, so you can easily develop and test the new features of SQL Server 2014.
- Edgenet Gain Real-Time Access to Retail Product Data with In-Memory Technology [MSCloudOS YouTube channel, June 3, 2013]
- SQL Server 2014 In-Memory Technologies: Blog Series Introduction [SQL Server Blog, June 26, 2013]
At the SQL PASS conference last November, we announced the In-memory OLTP (project code-named Hekaton) database technology built into the next release of SQL Server. Microsoft Technical Fellow Dave Campbell’s blog provides a broad overview of the motivation and design principles behind the project.
In a nutshell – In-memory OLTP is a new database engine optimized for memory resident data and OLTP workloads. In-memory OLTP is fully integrated into SQL Server – not a separate system. To take advantage of In-memory OLTP, a user defines a heavily accessed table as memory optimized. In-memory OLTP tables are fully transactional, durable and accessed using T-SQL in the same way as regular SQL Server tables. A query can reference both In-memory OLTP tables and regular tables, and a transaction can update data in both types of tables. Expensive T-SQL stored procedures that reference only In-memory OLTP tables can be natively compiled into machine code for further performance improvements. The engine is designed for extremely high session concurrency for OLTP type of transactions driven from a highly scaled-out mid-tier. To achieve this it uses latch-free data structures and a new optimistic, multi-version concurrency control technique. The end result is a selective and incremental migration into In-memory OLTP to provide predictable sub-millisecond low latency and high throughput with linear scaling for DB transactions. The actual performance gain depends on many factors but we have typically seen 5X-20X in customer workloads.
In the SQL Server product group, we started investing many years ago in reinventing the architecture of the RDBMS engine to leverage modern hardware trends. This resulted in PowerPivot and the In-memory ColumnStore Index in SQL2012, and In-memory OLTP is the new addition for OLTP workloads we are introducing for SQL2014, together with the updatable clustered ColumnStore index and the (SSD) bufferpool extension. It has been a long and complex process to build this next generation relational engine, especially with our explicit decision to seamlessly integrate it into the existing SQL Server instead of releasing a separate product – in the belief that this provides the best customer value and onboarding experience.
Now that we are releasing SQL2014 CTP1 as a public preview, it’s a great opportunity for you to get hands-on experience with this new technology, and we are eager to get your feedback and improve the product. In addition to BOL (Books Online) content, we will roll out a series of technical blogs on In-memory OLTP to help you understand and leverage this preview release effectively.
In the upcoming series of blogs, you will see the following in-depth topics on In-memory OLTP:
- Getting started – to walk through a simple sample database application using In-memory OLTP so that you can start experimenting with the public CTP release.
- Architecture – to understand at a high level how In-memory OLTP is designed and built into SQL Server, and how the different concepts like memory optimized tables, native compilation of SPs and query inter-op fit together under the hood.
- Customer experiences so far – we have had many TAP customer engagements over the past two years, and their feedback helped to shape the product; we would like to share with you some of the learnings and customer experiences, such as typical application patterns and performance results.
- Hardware guidance – it is apparent that memory size is a factor, but since most applications require full durability, In-memory OLTP still requires log and checkpointing IO, and with the much higher transactional throughput it can actually put even higher demand on the IO subsystem as a result. We will also cover how Windows Azure VMs can be used with In-memory OLTP.
- Application migration – how to get started with migrating to or building a new application with In-memory OLTP. You will see multiple blog posts covering the AMR tool, Table and SP migrations and pointers on how to work around some unsupported data types and T-SQL surface area, as well as the transactional model used. We will highlight the unique approach to SQL Server integration which supports a partial database migration.
- Managing In-memory OLTP – this will cover the DBA considerations, and you will see multiple posts ranging from the tooling support (SSMS) to more advanced topics such as how memory and storage are managed.
- Limitations and what’s coming – explain what limitations exist in CTP1 and new capabilities expected to be coming in CTP2 and RTM, so that you can plan your roadmap with clarity.
In addition – we will also have blog coverage on what’s new with In-memory ColumnStore and introduction to bufferpool extension.
SQL2014 CTP1 is available for download here, or you can read the complete blog series here:
- Getting Started with SQL Server 2014 In-Memory OLTP
- In-Memory OLTP: Q & A Myths and Realities
- Architectural Overview of SQL Server 2014’s In-Memory OLTP Technology
- SQL Server 2014 In-Memory OLTP bwin Migration and Production Experience
- Hardware Considerations for In-Memory OLTP in SQL Server 2014
- How In-Memory Optimized Database Technology is Integrated into SQL Server 2014
- SQL Server 2014 In-Memory OLTP App Migration Scenario Leveraging the Integrated Approach
- Improved Application Availability During Online Operations in SQL Server 2014
- Solving Session Management Database Bottlenecks with In-Memory OLTP
- New AMR Tool: Simplifying the Migration to In-Memory OLTP
- In-Memory OLTP Common Design Pattern – High Data Input Rate/Shock Absorber
- In-Memory OLTP Programmability: Concurrency and Transaction Isolation for Memory-Optimized Tables
- Concurrency Control in the In-Memory OLTP Engine
- Troubleshooting Common Performance Problems with Memory-Optimized Hash Indexes
- In-Memory OLTP: How Durability is Achieved for Memory-Optimized Tables
- Bwin wins with SQL Server 2014 [MSCloudOS YouTube channel, June 25, 2013]
- How Fast is Project Codenamed “Hekaton” – It’s ‘Wicked Fast’! [SQL Server Blog, Dec 11, 2012]
Recently I posted a video about how the SQL Server Community was looking into emerging trends in BI and Database technologies – one of the key technologies mentioned in that video was in-memory.
Many Microsoft customers have been using in-memory technologies as part of SQL Server since 2010, including xVelocity Analytics, xVelocity Column Store and Power Pivot – something we recently covered in a blog post following the ‘vaporware’ outburst from Oracle SVP of Communications, Bob Evans. Looking forward, Ted Kummert recently announced project codenamed “Hekaton,” which will be available in the next major release of SQL Server. “Hekaton” will provide a full in-memory transactional engine, and is currently in private technology preview with a small set of customers. This technology will provide breakthrough performance gains of up to 50 times.
For those who are keen to get a first view of customers using the technology, below is the video of online gaming company bwin using “Hekaton”.
Bwin is the largest regulated online gaming company in the world, and their success depends on positive customer experiences. They had recently upgraded some of their systems to SQL Server 2012 – a story you can read here. Bwin had already gained significant in-memory benefit using xVelocity Column Store, for example – a large report that used to take 17 minutes to render now takes only three seconds.
Given the benefits they had seen with in-memory technologies, they were keen to trial the technology preview of “Hekaton”. Prior to using “Hekaton”, their online gaming systems were handling about 15,000 requests per second, a huge number for most companies. However, bwin needed to be agile and stay ahead of the competition, so they wanted access to the speed of the latest technology.
Using “Hekaton”, bwin hoped to at least double the number of transactions. They were ‘pretty amazed’ to see that the fastest tests so far have scaled to 250,000 transactions per second.
So how fast is “Hekaton”? Just ask Rick Kutschera, the Database Engineering Manager at bwin – in his words it’s ‘Wicked Fast’! This is not the only point that Rick highlights, however; he goes on to mention that “Hekaton” integrates seamlessly into the SQL Server engine, so if you know SQL Server, you know “Hekaton”.
— David Hobbs-Mallyon, Senior Product Marketing Manager
- SQL Server 2014: Pushing the Boundaries of In-Memory Performance [SQL Server Blog, Oct 16, 2013]
Quentin Clark
Corporate Vice President, Data Platform Group
This morning, during my keynote at the Professional Association of SQL Server (PASS) Summit 2013, I discussed how customers are pushing the boundaries of what’s possible for businesses today using the advanced technologies in our data platform. It was my pleasure to announce the second Community Technology Preview (CTP2) of SQL Server 2014 which features breakthrough performance with In-Memory OLTP and simplified backup and disaster recovery in Windows Azure.
Pushing the boundaries
We are pushing the boundaries of our data platform with breakthrough performance, cloud capabilities and the pace of delivery to our customers. Last year at PASS Summit, we announced our In-Memory OLTP project “Hekaton” and since then released SQL Server 2012 Parallel Data Warehouse and public previews of Windows Azure HDInsight and Power BI for Office 365. Today we have SQL Server 2014 CTP2, our public and production-ready release shipping a mere 18 months after SQL Server 2012.
Our drive to push the boundaries comes from recognizing that the world around data is changing.
- Our customers are demanding more from their data – higher levels of availability as their businesses scale and globalize, major advancements in performance to align to the more real-time nature of business, and more flexibility to keep up with the pace of their innovation. So we provide in-memory, cloud-scale, and hybrid solutions.
- Our customers are storing and collecting more data – machine signals, devices, services and data from even outside their organizations. So we invest in scaling the database and a Hadoop-based solution.
- Our customers are seeking the value of new insights for their business. So we offer them self-service BI in Office 365 delivering powerful analytics through a ubiquitous product and empowering users with new, more accessible ways of gaining insights.
In-memory in the box for breakthrough performance
A few weeks ago, one of our competitors announced plans to build an in-memory column store into their database product some day in the future. We shipped similar technology two years ago in SQL Server 2012, and have continued to advance that technology in SQL Server 2012 Parallel Data Warehouse and now with SQL Server 2014. In addition to our in-memory columnar support in SQL Server 2014, we are also pushing the boundaries of performance with in-memory online transaction processing (OLTP). A year ago we announced project “Hekaton,” and today we have customers realizing performance gains of up to 30x. This work, combined with our early investments in Analysis Services and Excel, means Microsoft is delivering the most complete in-memory capabilities for all data workloads – analytics, data warehousing and OLTP.
We do this to allow our customers to make breakthroughs for their businesses. SQL Server is enabling them to rethink how they can accelerate and exceed the speed of their business.
- TPP is a clinical software provider managing more than 30 million patient records – half the patients in England – including 200,000 active registered users from the UK’s National Health Service. Their systems handle 640 million transactions per day, peaking at 34,700 transactions per second. They tested a next-generation version of their software with the SQL Server 2014 in-memory capabilities, which has enabled their application to run seven times faster than before – all of this done and running in half a day.
- Ferranti provides solutions for the energy market worldwide, collecting massive amounts of data using smart metering. With our in-memory technology they can now process a continuous data flow from up to 200 million measurement channels, making the system fully capable of meeting the demands of smart meter technology.
- SBI Liquidity Market in Japan provides online services for foreign currency trading. By adopting SQL Server 2014, the company has increased throughput from 35,000 to 200,000 transactions per second. They now have a trading platform that is ready to take on the global marketplace.
A closer look into In-memory OLTP
Previously, I wrote about the journey of the in-memory OLTP project Hekaton, in which a group of SQL Server database engineers collaborated with Microsoft Research. Changes in the ratios between CPU performance, IO latencies and bandwidth, and cache and memory sizes, as well as innovations in networking and storage, were changing the assumptions and design for the next generation of data processing products. This gave us the opening to push the boundaries of what we could engineer without the constraints that existed when relational databases were first built many years ago.
Challenging those assumptions, we engineered for dramatically changed latencies and throughput for so-called “hot” transactional tables in the database. Lock-free, row-versioned data structures and the compilation of T-SQL stored procedures and queries into native code, combined with programming semantics consistent with SQL Server, mean our customers can apply the performance benefits of extreme transaction processing without application rewrites or the adoption of entirely new products.
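[To make “without application rewrites” concrete, here is a minimal sketch of the In-Memory OLTP surface area in SQL Server 2014 CTP2; the database, table and procedure names are hypothetical placeholders, and a real deployment would size BUCKET_COUNT to the expected row count:]

```sql
-- In-Memory OLTP needs a memory-optimized filegroup in the database;
-- the FILENAME here points to a directory, not a file
ALTER DATABASE SalesDb ADD FILEGROUP SalesDb_mod CONTAINS MEMORY_OPTIMIZED_DATA;
ALTER DATABASE SalesDb
    ADD FILE (NAME = 'SalesDb_mod', FILENAME = 'C:\Data\SalesDb_mod')
    TO FILEGROUP SalesDb_mod;
GO

-- A "hot" transactional table held entirely in memory: rows are versioned
-- instead of locked, and SCHEMA_AND_DATA keeps the contents fully durable
CREATE TABLE dbo.ShoppingCart (
    CartId     INT       NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    UserId     INT       NOT NULL,
    CreatedUtc DATETIME2 NOT NULL
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
GO

-- Ordinary T-SQL semantics, but compiled to native machine code at CREATE time
CREATE PROCEDURE dbo.usp_AddCart @CartId INT, @UserId INT
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    INSERT INTO dbo.ShoppingCart (CartId, UserId, CreatedUtc)
    VALUES (@CartId, @UserId, SYSUTCDATETIME());
END;
```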
The continuous data platform
Windows Azure fulfills new scenarios for our customers – transcending what is on-premises or in the cloud. Microsoft is providing a continuous platform from our traditional products that are run on-premises to our cloud offerings.
With SQL Server 2014, we are bringing the cloud into the box. We are delivering high availability and disaster recovery on Windows Azure built right into the database. This enables customers to benefit from our global datacenters: AlwaysOn Availability Groups that span on-premises and Windows Azure Virtual Machines, database backups directly into Windows Azure storage, and even the ability to store and run database files directly in Windows Azure storage. That last scenario really does something interesting – now you can have an infinitely-sized hard drive with incredible disaster recovery properties with all the great local latency and performance of the on-premises database server.
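[Two of these hybrid scenarios can be sketched in a few lines of T-SQL. The statements below are illustrative only – the storage account, container, database and availability group names are hypothetical placeholders:]

```sql
-- Scenario 1: database files living directly in Windows Azure blob storage.
-- A credential named after the container URL holds a Shared Access Signature.
CREATE CREDENTIAL [https://myaccount.blob.core.windows.net/datafiles]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
     SECRET   = '<sas-token-for-the-container>';

-- The engine runs on-premises; the .mdf/.ldf are blobs in Azure storage
CREATE DATABASE HybridDb
ON (NAME = HybridDb_data,
    FILENAME = 'https://myaccount.blob.core.windows.net/datafiles/HybridDb_data.mdf')
LOG ON (NAME = HybridDb_log,
        FILENAME = 'https://myaccount.blob.core.windows.net/datafiles/HybridDb_log.ldf');

-- Scenario 2: stretching an AlwaysOn Availability Group to a Windows Azure VM
-- as an asynchronous disaster-recovery replica
ALTER AVAILABILITY GROUP [SalesAG]
ADD REPLICA ON N'AZUREVM1'
WITH (ENDPOINT_URL      = N'TCP://azurevm1.cloudapp.net:5022',
      AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
      FAILOVER_MODE     = MANUAL);
```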
We’re not just providing easy backup in SQL Server 2014; today we announced that backup to Windows Azure will be available for all our currently supported SQL Server releases. Together, the backup to Windows Azure capabilities in SQL Server 2014 and via the standalone tool offer customers a single, cost-effective backup strategy for secure off-site storage with encryption and compression across all supported versions of SQL Server.
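[The backup-to-Azure flow is similarly compact. A hedged sketch – account, container and database names are hypothetical, and SQL Server 2014 can additionally encrypt the backup with a WITH ENCRYPTION clause bound to a server certificate:]

```sql
-- Credential holding the storage account name and its access key
CREATE CREDENTIAL AzureBackupCred
WITH IDENTITY = 'mystorageaccount',            -- storage account name
     SECRET   = '<storage-account-access-key>';

-- Compressed backup written directly to a blob container, off-site by default
BACKUP DATABASE SalesDb
TO URL = 'https://mystorageaccount.blob.core.windows.net/backups/SalesDb_20131016.bak'
WITH CREDENTIAL = 'AzureBackupCred',
     COMPRESSION,
     STATS = 10;
```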
By having a complete and continuous data platform we strive to empower billions of people to get value from their data. It’s why I am so excited to announce the availability of SQL Server 2014 CTP2, hot on the heels of the fastest-adopted release in SQL Server’s history, SQL Server 2012. Today, more businesses solve their data processing needs with SQL Server than any other database. It’s about empowering the world to push the boundaries.
4.3 Unlock Insights from any Data / Big Data – Microsoft SQL Server Parallel Data Warehouse (PDW) and Windows Azure HDInsight:
- Unlock Insights on Any Data [MSCloudOS YouTube channel, Nov 1, 2013]
- Enabling Familiar, Powerful Business Intelligence [MSCloudOS YouTube channel, Nov 1, 2013]
- The Modern Data Warehouse [MSCloudOS YouTube channel, Nov 8, 2013]
- PDW: The high performance SQL Server Data Warehouse solution [MSCloudOS YouTube channel, Oct 25, 2013]
SQL Server is one of the most used and loved data warehouse platforms today, but did you know there is a specialized version of SQL Server, the SQL Server Parallel Data Warehouse Appliance, built specifically for high-scale, high-performance analytics needs? Watch this video to learn about SQL Server PDW and see why you might want to evolve your SQL Server data warehouse to PDW so you can experience the next level of scale and performance. Learn More: http://www.microsoft.com/en-us/sqlserver/solutions-technologies/data-warehousing/upgrade-to-pdw.aspx [A hedged T-SQL sketch of the kind of Hadoop-reaching query PDW enables follows this list.]
- Data Insights Partner Video — HP, Dell and Hortonworks [MSCloudOS YouTube channel, Oct 30, 2013]
- Quentin Clark “Can Big Data Reach One Billion People?” [OreillyMedia YouTube channel, Oct 29, 2013]
- Microsoft: HDInsight and Hadoop [OreillyMedia YouTube channel, Oct 29, 2013]
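[To make the “any data” idea tangible at the query level: PDW’s PolyBase feature exposes Hadoop files as external tables that plain T-SQL can query and join. The sketch below is only an illustration – it uses the external-table syntax as it later shipped in SQL Server 2016 (the PDW appliance syntax of the time differed slightly), and every object name, column and cluster address is a hypothetical placeholder:]

```sql
-- Register a Hadoop cluster as an external data source (hypothetical address)
CREATE EXTERNAL DATA SOURCE HadoopCluster
WITH (TYPE = HADOOP, LOCATION = 'hdfs://hadoop-head-node:8020');

-- Describe how the files are laid out
CREATE EXTERNAL FILE FORMAT TabDelimited
WITH (FORMAT_TYPE = DELIMITEDTEXT,
      FORMAT_OPTIONS (FIELD_TERMINATOR = '\t'));

-- Expose a directory of HDFS files as a relational table
CREATE EXTERNAL TABLE dbo.WebClicks (
    ClickTime DATETIME2     NOT NULL,
    Url       NVARCHAR(400) NOT NULL,
    UserId    INT           NOT NULL
)
WITH (LOCATION = '/weblogs/clicks/',
      DATA_SOURCE = HadoopCluster,
      FILE_FORMAT = TabDelimited);

-- Join Hadoop data with warehouse tables in ordinary T-SQL
SELECT TOP 10 u.Region, COUNT(*) AS Clicks
FROM dbo.WebClicks AS c
JOIN dbo.Users     AS u ON u.UserId = c.UserId
GROUP BY u.Region
ORDER BY Clicks DESC;
```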
Announcing Windows Azure HDInsight: Where big data meets the cloud [The Official Microsoft Blog, Oct 28, 2013]
This post is from Quentin Clark, Corporate Vice President of the Data Platform Group at Microsoft
I am pleased to announce that Windows Azure HDInsight – our cloud-based distribution of Hadoop – is now generally available on Windows Azure. The GA of HDInsight is an important milestone for Microsoft, as it’s part of our broader strategy to bring big data to a billion people.
On Tuesday at Strata + Hadoop World 2013, I will discuss the opportunity of big data in my keynote, “Can Big Data Reach One Billion People?” Microsoft’s perspective is that embracing the new value of data will lead to a major transformation as significant as when line of business applications matured to the point where they touched everyone inside an organization. But how do we realize this transformation? It happens when big data finds its way to everyone in business – when anyone with a question that can be answered by data, gets their answer. The impact of this is beyond just making businesses smarter and more efficient. It’s about changing how business works through both people and data-driven insights. Data will drive the kinds of changes that, for example, allow personalization to become truly prevalent. People will drive change by gaining insights into what impacts their business, enabling them to change the kinds of partnerships and products they offer.
Our goal to empower everyone with insights is the reason why Microsoft is investing not just in technology like Hadoop, but in the whole circuit required to get value from big data. Our customers are demanding more from the data they have – not just higher availability, global scale and longer histories of their business data, but also that their data works with the business in real time and can be leveraged in a flexible way to help them innovate. And they are collecting more signals – from machines and devices and sources outside their organizations.
Some of the biggest changes to businesses driven by big data are created by the ability to reason over data previously thought unmanageable, as well as data that comes from adjacent industries. Think about the use of equipment data to do better operational cost and maintenance management, or a loan company using shipping data as part of the loan evaluation. All of this data needs all forms of analytics and the ability to reach the people making decisions. Organizations that complete this circuit, thereby creating the capability to listen to what the data can tell them, will accelerate.
Bringing Hadoop to the enterprise
Hadoop is a cornerstone of how we will realize value from big data. That’s why we’ve engineered HDInsight as 100 percent Apache Hadoop offered as an Azure cloud service. The service has been in public production preview for a number of months now – the reception has been tremendous and we are excited to bring it to full GA status in Azure.
Microsoft recognizes Hadoop as a standard and is investing to ensure that it’s an integral part of our enterprise offerings. We have invested through real contributions across the project – not just to make Hadoop work great on Windows, but even in projects like Tez, Stinger and Hive. We have put in thousands of engineering hours and tens of thousands of lines of code. We have been doing this in partnership with Hortonworks, who will make HDP (Hortonworks Data Platform) 2.0 for Windows Server generally available next month, giving the world access to a supported Apache-pure Hadoop v2 distribution for Windows Server. Working with Hortonworks, we will support Hadoop v2 in a future update to HDInsight.
Windows Azure HDInsight combines the best of Hadoop open source technology with the security, elasticity and manageability that enterprises require. We have built it to integrate with Excel and Power BI – our business intelligence offering that is part of Office 365 – allowing people to easily connect to data through HDInsight, then refine it and do business analytics in a turnkey fashion. For the developer, HDInsight also supports a choice of languages: .NET, Java and more.
We have key customers currently using HDInsight, including:
- The City of Barcelona uses Windows Azure HDInsight to pull in data about traffic patterns, garbage collection, city festivals, social media buzz and more to make critical decisions about public transportation, security and overall spending.
- A team of computer scientists at Virginia Tech developed an on-demand, cloud-computing model using the Windows Azure HDInsight Service, enabling easier, more cost-effective access to DNA sequencing tools and resources.
- Christian Hansen, a developer of natural ingredients for several industries, collects electronic data from a variety of sources, including automated lab equipment, sensors and databases. With HDInsight in place, they are able to collect and process data from trials 100 times faster than before.
End-to-end solutions for big data
These kinds of uses of Hadoop are examples of how big data is changing what’s possible. Our Hadoop-based solution HDInsight is a building block – one important piece of the end-to-end solutions required to get value from data.
All this comes together in solutions where people can use Excel to pull data directly from a range of sources, including SQL Server (the most widely-deployed database product), HDInsight, external Hadoop clusters and publicly available datasets. They can then use our business intelligence tools in Power BI to refine that data, visualize it and just ask it questions. We believe that by putting widely accessible and easy-to-deploy tools in everyone’s hands, we are helping big data reach a billion people.
I am looking forward to tomorrow. The Hadoop community is pushing what’s possible, and we could not be happier that we made the commitment to contribute to it in meaningful ways.
- Quentin Clark – BigDataNYC 2013 – theCUBE [SiliconANGLE YouTube channel, Oct 30, 2013]
Quentin Clark, Microsoft, at Big Data NYC 2013 with John Furrier and Dave Vellante
“We’re here because we’re super committed to Hadoop,” Clark said, explaining that Microsoft is dedicated to helping its customers embrace the benefits Big Data can provide them with. “Hadoop is the cornerstone of Big Data but not the entire infrastructure,” he added. Microsoft is focusing on adding security and tool integration, with thousands of hours of development put into Hadoop, to make it ready for the enterprise. “There’s a foundational piece where customers are starting,” which they can build upon, and Microsoft focuses on helping them embrace Hadoop as part of the IT giant’s business goals.
Asked to compare the adoption of traditional Microsoft products with the company’s Hadoop products, Clark said, “a big part of our effort was to get to those enterprise expectations.” Security and tools integration, getting Hadoop to work on Windows, is part of that effort. Microsoft aims to help people “have a conversation and dialogue with the data. We make sure we funnel all the data to help them get the BI and analytics” they need.
Commenting on Microsoft’s statement of bringing Big Data to its one billion Office users, Vellante asked if the company’s strategy was to put the power of Big Data into Excel. Clark explained it was about putting Big Data in the Office suite, going on to explain that there are already more than a billion people passively using Big Data. Microsoft focuses on those actively using it.
Clark mentioned Microsoft has focused on the sports arena, helping major sports leagues use Big Data to power fantasy teams. “We actually have some models, use some data sets. I have a fantasy team that I’m doing pretty well with, partly because of my ability to really have a conversation with the data. On the business side, it’s transformational. Our ability to gain insight in real time and interact is very different using these tools,” Clark stated.
Why not build its own Hadoop distro?
Asked why Microsoft decided not to have its own Hadoop distribution, Clark explained that “primarily our focus has been in improving the Apache core, make Hadoop work on Windows and work great. Our partnership with Hortonworks just made sense. They are able to continue to push and have that cross platform capability, we are able to offer our customers a solution.”
Explaining that there are great discrepancies in how different companies in the same industries make use of the benefits of Big Data, he advised viewers to “look at what the big companies are doing” in embracing the data, and to look at what they are achieving with it.
As far as the future of the Big Data industry is concerned, Clark stated: “There’s a consistent meme of how is this embraced by business for results. Sometimes with the evolution of technology, everyone is exploring what it’s capable of.” Now the industry’s focus is shifting towards the greater purpose it serves and what businesses can accomplish with it.
4.4 Empower people-centric IT – Microsoft Virtual Desktop Infrastructure (VDI):
- Enable Modern Work Styles with Microsoft VDI [MSCloudOS YouTube channel, Nov 1, 2013]
- Microsoft Session: VDI solutions with Windows Server 2012 R2 – at Dell World 2013 [Dell YouTube channel, Dec 13, 2013]
- Microsoft and Dell’s Continued Collaboration on VDI Solutions on Display at Dell World [Windows Server Blog, Dec 11, 2013]
In October, we announced Windows Server 2012 R2, which delivers several exciting improvements for VDI solutions. Among the benefits, Windows Server 2012 R2 reduces the cost per seat for VDI and enhances your end users’ experience. The following are just some of the features and benefits of Windows Server 2012 R2 for VDI:
- Online data deduplication on actively running VMs reduces storage capacity requirements by up to 90% on persistent desktops.
- Tiered storage spaces manage your tiers of storage (fast SSDs vs. slower HDDs) intelligently so that the most frequently accessed data blocks are automatically moved onto the faster-tier drives. Likewise, older or seldom-accessed files are moved onto the cheaper, slower drives.
- The Microsoft Remote Desktop App provides easy access from a variety of devices and platforms including Windows, Windows RT, iOS, Mac OS X and Android. This is good news for your end users and your mobility/BYOD strategy!
- Your user experience is also enhanced due to improvements on several fronts including RemoteFX, DirectX 11.1 support, RemoteApp, quick reconnect, session shadowing, dynamic monitor and resolution changes.
If your VDI solutions run on Dell servers or if you are looking at deploying new VDI infrastructure, we are excited to let you know about the work we have been doing in partnership with Dell around VDI. Dell recently updated their Desktop Virtualization Solution (DVS) for Windows Server to support Windows Server 2012 R2, and DVS now delivers all of the benefits mentioned above. Dell is also delivering additional enhancements into Dell DVS for Windows Server so it will also support:
- Windows 8.1 with touch screen devices and new Intel Haswell processors
- Unified Communication with Lync 2013, via an endpoint plug-in that enables P2P audio and video. (Dell Wyse has certified selected Windows thin clients to this effect, such as the D90 and Z90.)
- Virtualized shared graphics on NVidia GRID K1/K2 and AMD FirePro cards using Microsoft RemoteFX technology
- Affordable persistent desktops
- Highly-secure and dual/quad core Dell Wyse thin clients, for a true end-to-end capability, even when using high-end server graphics cards or running UC on Lync 2013
- Optional Dell vWorkspace software, also supporting Windows Server 2012 R2, that brings scalability to tens of thousands of seats, advanced VM provisioning, IOPS efficiency to reduce storage requirement and improve performance, diagnostics and monitoring, flexible resource assignments, support for multi-tenancy and more.
- Availability in more than 30 countries
Depending on where you stand in the VDI deployment cycle in your organization, Dell DVS for Windows Server is already supported today on multiple Dell PowerEdge server platforms:
- The T110 for a pilot/POC up to 10 seats
- The VRTX for implementation in a remote or branch office of up to about 500 users
- The R720 for a traditional enterprise-like, flexible and scalable implementation to several thousand seats. It supports flexible deployments such as application virtualization, RDSH, pooled and persistent VMs.
This week, Microsoft and Dell will present a technology showcase at Dell World in Austin (TX), USA. If you happen to be at the show, you will be able to see for yourself how well Windows Server 2012 R2 and Windows 8.1 integrate into Dell DVS. We will show:
- The single management console of Windows Server 2012 installed on a Dell PowerEdge VRTX, demonstrating how easy it can be for an IT administrator to manage VDI workloads based on Hyper-V in a remote or branch office environment
- How users can chat, talk, share, meet, transfer files and conduct video conferencing within virtualized desktops set up for unified communication
- That you can watch HD multimedia and 3D graphics files on multiple virtual desktops sharing a graphic card installed remotely in a server
- How affordable it is to run persistent desktops with DVS and Windows Server 2012 R2
We are excited about the work that we are doing with Dell around VDI and hope you have a chance to come visit our joint VDI showcase in Austin. We will be located in the middle of the Dell booth in the show expo hall. Also, we will show a VDI demo as part of the Microsoft Cloud OS breakout session at noon on Thursday (December 12th) in room 9AB. Finally, we will show a longer VDI demo in the show expo theater (next to the Microsoft booth) at 10am on Friday (December 13th). We are looking forward to seeing you there.
- Microsoft Remote Desktop App [iPhone & iPad] [BJTechNews YouTube channel, Nov 7, 2013]
- Wow: Remote Desktop Goes Cross Platform! [In the Cloud blog, Oct 18, 2013]
Post from Brad Anderson, Corporate Vice President of Windows Server & System Center at Microsoft.

As of yesterday afternoon, the Microsoft Remote Desktop App is available in the Android, iOS, and Mac stores (see screen shots below). There was a time, in the very recent past, when many thought something like this would never happen.
If your company has users who work on iPads, Android, and Windows RT devices, you also likely have a strategy (or at least a point of view) for how you will deliver Windows applications to those devices. With the Remote Desktop App and the 2012 R2 platforms made available earlier today, you now have a great solution from Microsoft to deliver Windows applications to your users across all the devices they are using.
As I have written about before, one of the things I am actively encouraging organizations to do is to step back and look at their strategy for delivering applications and protecting data across all of their devices. Today, most enterprises are using different tools for enabling users on PCs, and then they deploy another tool for enabling users on their tablets and smart phones. This kind of overhead and the associated costs are unnecessary – but, even more important (or maybe I should say worse), is that your end users therefore have different and fragmented experiences as they transition across their various devices. A big part of an IT team’s job must be to radically simplify the experience end users have in accomplishing their work – and users are doing that work across all their devices.
I keep bolding “all” here because I am really trying to make a point: Let’s stop thinking about PCs and devices in a fragmented way. What we are trying to accomplish is pretty straightforward: Enable users to access the apps and data they need to be productive in a way that ensures the corporate assets are secure. Notice that nowhere in that sentence did I mention devices. We should stop talking about PC Lifecycle Management, Mobile Device Management and Mobile Application Management – and instead focus our conversation on how we are enabling users. We need a user-enablement Magic Quadrant!
OK – stepping off my soapbox.
Delivering Windows applications in a server-computing model, through solutions like Remote Desktop Services, is a key requirement in your strategy for application access management. But keep in mind that this is only one of many ways applications can be delivered – and we should consider and account for all of them.
For example, you also have to consider Win32 apps running in a distributed model, modern Windows apps, iOS native apps (side-loaded and deep-linked), Android native apps (side-loaded and deep-linked), SaaS applications, and web applications.
Things have really changed from just 5 years ago when we really only had to worry about Windows apps being delivered to Windows devices.
As you are rethinking your application access strategy, you need solutions that enable you to intelligently manage all these applications types across all the devices your workforce will use.
You should also consider that the Remote Desktop Apps released yesterday are proof of Microsoft’s commitment to enable you to have a single solution to manage all the devices your users will use.
Microsoft describes itself as a “devices and services company.” Let me provide a little more insight into this.
Devices: We will do everything we can to earn your business on Windows devices.
Services: We will light up those Windows devices with the cloud services that we build, and these cloud services will also light up all (there’s that bold again) your other devices.
The funny thing about cloud services is that they want every device possible to connect to them – we are working to make sure the cloud services that we are building for the enterprise will bring value to all (again!) the devices your users will want to use – whether those are Windows, iOS, or Android.
The RDP clients that we released into the stores yesterday are not v1 apps. Back in June, we acquired IP assets from an organization in Austria (HLW Software Development GMBH) that had been building and delivering RDP clients for a number of years. In fact, there were more than 1 million downloads of their RDP clients from the Apple and Android stores. The team has done an incredible job using them as a base for development of our Remote Desktop App, creating a very simple and compelling experience on iOS, Mac OS X and Android. You should definitely give them a try!
Also: Did I mention they are free?
To start using the Microsoft Remote Desktop App for any of these platforms, simply follow these links:
- Android: https://play.google.com/store/apps/details?id=com.microsoft.rdc.android
- iOS: https://itunes.apple.com/us/app/microsoft-remote-desktop/id714464092?mt=8
- Mac: https://itunes.apple.com/us/app/microsoft-remote-desktop/id715768417?mt=12&ls=1
- Microsoft Remote Desktop Client (RD Client App) run on Android device [Andr.oid Eric YouTube channel, Oct 23, 2013]
Disaggregation in the next-generation datacenter and HP’s Moonshot approach for the upcoming HP CloudSystem “private cloud in-a-box” with the promised HP Cloud OS based on the four-year-old OpenStack effort with others
My Software defined server without Microsoft: HP Moonshot [‘Experiencing the Cloud’, April 10, 2013 – updated Dec 6, 2013] post already introduced the HP Moonshot System. This post discusses Moonshot in a much wider context, and provides the information which came after Dec 6, 2013, particularly from the HP Discover Barcelona 2013 event:
1. The essence of IT industry’s state-of-the-art regarding the datacenter and the cloud
2. Recent academic research: the disaggregated datacenter phenomenon
3. Details about HP’s converged systems and next-gen cloud technology
4. Latest details about HP’s Moonshot technology
1. The essence of the IT industry’s state-of-the-art regarding the datacenter and the cloud
There is a new way of thinking in the IT industry which is best represented by the No silo left behind: Convergence in the age of virtualization, cloud, and Big Data [HP Discover YouTube channel, recorded on Dec 10, 11:20 AM – 12:20 PM; published on Dec 11, 2013] presentation by HP at its HP Discover Barcelona 2013 event:
As far as the cloud is concerned today’s issue is Making hybrid real for IT and business success [HP Discover YouTube channel, recorded on Dec 10, 12:40 PM – 1:40 PM; published on Dec 11, 2013]
Then one should at least briefly understand HP Cloud strategy and benefit of leveraging a portfolio of solutions [HP Discover YouTube channel, Dec 12, 2013]
And HP is just about half a year from the point (in time) when it will have its final answer to the question: How open source will reinvent cloud computing – again [HP Discover YouTube channel, Dec 12, 2013], the presentation which was originally announced under the title “The Rise of Open Source Clouds” and finally delivered with the following slides (to whet your appetite for watching the recording of the presentation):
“Different delivery models being private, managed and public. … On the top you can see the six workload areas. These areas are basically what we’ll build our product portfolio against. So we’ll be moving away from just sort of a catalogue of SKUs and piece parts into building offers on a workload basis – things like dev/test, business continuity, technical computing or HPC, and of course things like analytics and infrastructure.”
Now we can take a brief Tour of the Cloud Booth at HP Discover Barcelona [hpcloud YouTube channel, Dec 11, 2013] in order to understand the cloud-related announcements made by HP (some of these will be detailed later in this post, as they relate to the title of the post)
And Moonshot-specific announcements are briefly summarized in HP Moonshot latest innovations allow your business to embrace the new style of IT [HP Discover YouTube channel, Dec 12, 2013]
Finally The future according to HP Labs [HP Discover YouTube channel, Dec 12, 2013]
This is the essence of IT industry’s state-of-the-art regarding the datacenter and the cloud.
2. On the other hand, recent academic research has only just been awakening to what it calls the disaggregated datacenter phenomenon, already happening as the “next big thing” in the industry, as evidenced by the following excerpts from Network Support for Resource Disaggregation in Next-Generation Datacenters [research paper* at HotNets-XII**, Nov 21-22, 2013]:
Datacenters have traditionally been architected as a collection of servers wherein each server aggregates a fixed amount of computing, memory, storage, and communication resources. In this paper, we advocate an alternative construction in which the resources within a server are disaggregated and the datacenter is instead architected as a collection of standalone resources.
Disaggregation brings greater modularity to datacenter infrastructure, allowing operators to optimize their deployments for improved efficiency and performance. However, the key enabling or blocking factor for disaggregation will be the network since communication that was previously contained within a single server now traverses the datacenter fabric. This paper thus explores the question of whether we can build networks that enable disaggregation at datacenter scales.
…
Figure 2: Architectural differences between server-centric and resource-centric datacenters***
As illustrated in Figure 2, the high-level idea behind disaggregation is to develop standalone hardware “blades” for each resource type including CPUs, memory, storage, and network interfaces as well as specialized components (GPUs, various ASIC accelerators, etc.). Those resource blades are interconnected by a datacenter-wide network fabric. Understanding the specifications and nature of this network fabric is our focus in this paper.
Abbreviations used above for Figure 2. (in addition to “C” for CPU and “M” for Memory):
Martin Fink, CTO and Director of HP Labs, speaks at NTH Generation’s 13th Annual Symposium.
* Sangjin Han (U.C.Berkeley), Norbert Egi (Huawei Corp.), Aurojit Panda, Sylvia Ratnasamy (U.C.Berkeley), Guangyu Shi (Huawei Corp.), Scott Shenker (U.C.Berkeley and ICSI)
** Twelfth ACM Workshop on Hot Topics in Networks
*** I should emphasize here that a disaggregated datacenter with shared disaggregated memory (as in part (b) of Figure 2 above) is NOT a kind of academic exaggeration but a relatively “near-term reality” of the future. It became somewhat obvious from the recent The future according to HP Labs video included at the end of the first section above, especially when Moonshot was mentioned. To provide more evidence, watch the Tectonic shifts: Where the future of convergence is taking us [NTH Generation Computing, Inc. YouTube channel, recorded on Aug 1; published on Aug 20, 2013] keynote presentation above. In this presentation, HP’s CTO Martin Fink said that a new type of device HP has been working on for years, called the memristor, could be made into a non-volatile and non-hierarchical, i.e. universal, memory system, replacing both DRAM and flash, as well as magnetic storage in perspective. He also hinted at specialized Moonshot cartridges, possibly using memristor memory instead of DRAM, linked by terabit-class photonic interconnects to memristor storage arrays. He was already showing a prototype memristor wafer as well. There is no wonder therefore that according to HP’s own Six IT technologies to watch [Enterprise 20/20 Blog, Sept 5, 2013] article:
Such a device could store up to 1 petabit of information per square centimeter and could replace both memory and storage, speeding up access to data and allowing an order-of-magnitude increase in the amount of data stored. HP has been busy preparing production of these devices; first production units should be available towards the end of 2013 or early in 2014. It will transform our storage approaches completely.
The Future of Big Data – an interview with John Sontag, VP and director of HP Labs’ Systems Research [HP Enterprise Business Community, Nov 14, 2013] provides an even bigger perspective:
If Moonshot is helping us make computers smaller and less energy-hungry, then our work on memristors will allow us to collapse the old processor/memory/storage hierarchy, and put processing right next to the data.
Next, our work on photonics will help collapse the communication fabric and bring these very large scales into closer proximity. That lets us combine systems in new and interesting ways.
On top of all that, we need to reduce costs – if we tried to process all the data that we’re predicting we’ll want to at today’s prices, we’d collapse the world economy – and we need to think about how we secure and manage that data, and how we deliver algorithms that let us transform it fast enough so that you can conduct experiments on this data literally as fast as we can think them up.
The combination of non-volatile, memristor-powered memory and very large scales is causing the people who think about storage and algorithms to realize that the tradeoff has changed. For the last 50 years, we’ve had to think of every bit of data that we process as something that eventually has to get put on a disk drive if you intend to keep it. That means you have to think about the time to fetch it, to re-sort it into whatever way you want it to rest in memory, and to put it back when you’re done as one of your costs of doing business.
If you don’t have those issues to worry about, you can leave things in memory – graphs, for example, which are powerful expressions of complex data – that at present you have to spend a lot of compute time and effort pulling apart for storage. The same goes for processing. Right now we have to worry about how we break data up, what questions we ask it and how many of us are asking it at the same time. It makes experimentation hard because you don’t know whether the answer’s going to come immediately or an hour later.
Our vision is that you can sit at your desk and know you’ll get your answer instantly. Today we can do that for small scale problems, but we want to make that happen for all of the problems that you care about. What’s great is that we can begin to do this with some questions that we have right now. We don’t have to wait for this to change all at once. We can go at it in an incremental way and have pieces at multiple stages of evolution concurrently – which is exactly what we’re doing.
There are people who have given up on thinking about certain problems because there’s no way to compactly express them with the systems we have today. They’re going to be able to look at those problems again – it’s already happening with Moonshot and HAVEn [HP’s Big Data platform], and at each stage of this evolution we’re going to allow another set of people to realize that the problem they thought was impossible is now within reach.
One example of where this already happened is aircraft design. When we moved to 64-bit processors that fit on your desktop and could hold more than four gigabytes of memory, the people who built software that modeled the mechanical stresses on aircraft realized that they could write completely different algorithms. Instead of having to have a supercomputer to run just a part of their query, they could do it on their desktop. They could hold an entire problem in memory, and then they could look at it differently. From that we got the Airbus A380, the Boeing 777 and 787, and, jumping industries, most new cars.
Now back to the academic research for Network Support for Resource Disaggregation in Next-Generation Datacenters [presentation slides at HotNets-XII*, Nov 21-22, 2013] to illustrate their understanding of the trends:
The Trends: Disaggregation
HP MoonShot
– Shared cooling/casing/power/mgmt for server blades
[Note that Moonshot is much more than that, as it was already presented in all detail in my Software defined server without Microsoft: HP Moonshot [‘Experiencing the Cloud’, April 10, 2013 – updated Dec 6, 2013] post.]
[from the research paper:]
SeaMicro’s server architecture [6] uses a looser coupling of components within a single server … the network in SeaMicro’s architecture implements a 3D torus interconnect, which only disaggregates I/O and does not scale beyond the rack … [6] SeaMicro Technology Overview.
Intel Rack Scale Architecture
[from the research paper: SeaMicro’s server architecture [6] uses a looser coupling of components within a single server,] while Intel’s Rack Scale Architecture (RSA) [15] extends this approach to rack scales. …
[15] Intel Newsroom. Intel, Facebook Collaborate on Future Data Center Rack Technologies.
Open Compute Project
Closing Remarks
- Disaggregated datacenter will be “the next big thing”
– Already happening. We [i.e. the academic research] need to catch up!
3. Next come the details about HP’s converged systems and next-gen cloud technology:
Why HP uses its own Converged Infrastructure solutions [Enterprise CIO Forum YouTube channel, Nov 11, 2013]
From “Sharks” in the press at HP Discover, Barcelona – Day One coverage [HP Converged Infrastructure blog, Dec 10, 2013]
… we were hosting a large press announcement that went out over the wire on Monday at 3 pm local time (CET).
Here’s a brief summary of the announcement that was presented by Tom Joyce, Senior Vice President and General Manager, HP Converged Systems. The HP ConvergedSystem is a new product line completely reengineered based on 21st-century assets and architectures for the New Style of IT. This is an important point, as Tom emphasized – this is not a collection of piece parts, this is a completely new engineered solution, built on core building blocks that are workload-optimized systems which are easy to buy, manage, and support – order to operations in as few as 20 days, with ONE tool to manage and, most importantly, ONE point of accountability.
Built using HP Converged Infrastructure’s best-in-class servers, storage, networking, software and services, the new HP ConvergedSystem family of products delivers a total systems experience “out of the box.”
- HP ConvergedSystem for Virtualization helps clients easily scale computing resources to meet business needs with preconfigured, modular virtualization systems supporting 50 to 1,000 virtual machines at twice the performance, and at an entry price 25 percent lower than competitive offerings.
- HP ConvergedSystem 300 for Vertica speeds big data analytics, helping organizations turn data into actionable insights at 50 to 1,000 times faster performance and 70 percent lower cost per terabyte than legacy data warehouses.
- HP ConvergedSystem 100 for Hosted Desktops, based on the award-winning HP Moonshot server, delivers a superior desktop experience compared to traditional virtual desktop infrastructure. This first PC on a chip for the data center delivers six times faster graphics performance and 44 percent lower total cost of ownership.
The press announcement itself was, in my opinion, pretty cool, and one of the better ones I have attended. The new HP ConvergedSystem for Virtualization 300 and 700 debuted on stage with the theme from Jaws, with much snapping of camera flashes. Tom explained why the sharks theme was so integral to this particular system, with core attributes of most “efficient”, “best in class”, extremely “fast”, very “agile” and that it “never sleeps”!!
The best one-liner from Tom Joyce during the session was “If I were VCE [VMware/Cisco/EMC combination] I would be getting out of the water!!”, which was captured on the HP live streaming video found here. Check it out as it is worth watching. I have also included the full “HP Shark” press release HP Introduces Innovations Built for the Data Center of the Future.
Here is a detailed press report on that: HP Targets VCE With Converged System Lineup [Dec 10, 2013].
HP ConvergedSystem: Innovation to reduce the complexity of technology integration [HP Discover YouTube channel, Dec 11, 2013]
The HP “Sharks” are in the Water [HP Converged Infrastructure blog, Dec 9, 2013]
Written by guest blogger Tom Joyce, Senior Vice President and General Manager, HP Converged Systems
Seven months ago HP announced the formation of our new Converged Systems business unit. I was excited to be asked to lead this new team because so many of our customers had told us they needed truly converged platforms for their datacenters. Over the last five years HP had developed Converged Infrastructure technologies for storage, networking and servers that enabled better and more cost effective solutions, but it was time to take it to the next level. We needed to bring all those technologies together in a way that collapsed the cost of IT infrastructure and made everything faster and easier.
Starting last summer, we built our team. We hired the best of the best from within HP and from elsewhere. We put in place an operating model and set of processes that allow us to do agile product development and deliver products to market rapidly and with high quality. And we got really creative in our thinking. We were also fortunate to get a lot of time with Meg [Whitman, HP CEO] and other top people throughout HP. This was critical because to deliver a game changing set of new products, we had to break down or change a lot of established processes in development, manufacturing, support and go-to-market. We had to break some glass, and Meg helped us do that by making this a high priority.
Based on the customer input, there were some critical things I knew we needed to do.
- Move fast. The IT market is changing quickly, and I wanted to get our first set of products out by the end of the calendar year.
- Do more than just combine existing server, storage, networking and software components. We needed to engineer these new products to deliver more with less infrastructure, and to handle the most important customer workloads exceptionally well.
- Everything had to be simple – the ordering process, the system design, management, support, easy upgrades – everything.
- Think about the “whole offer” and experience for the customer, not just the product itself. This meant providing a better process from end to end.
- Deliver exceptional economics. The new product had to be priced to market with a clear return on investment for the customer.
- Most importantly, we needed to make sure that our channel partners could make money selling this product, and could provide specialized services around it.
After developing our plan, we started “Project Sharks”. We called it this because if you think about it, a shark is perfectly engineered to accomplish its mission – it is the ideal hunting machine. When I was a kid I was fascinated by sharks. People tend to think of sharks as primitive creatures, but they are actually extremely sophisticated. Everything is designed with a purpose, and there is no waste. Sharks have a unique hydroskeleton, musculature, and skin. All these parts are connected to maximize thrust so that the animal can move fast, like a torpedo. Sharks are noted for being able to sense blood in the water, but beyond that they have an amazingly complete set of sensors – perhaps the most sophisticated set of “sensors in the sea.” 🙂
Our goal with “project sharks” was to build a perfectly designed virtual infrastructure machine. This week at HP Discover, Barcelona, we announced the new HP ConvergedSystem for Virtualization. Click here to find out more information. The two models are designed to be core building blocks for constructing a converged data center. They are very fast and efficient, delivering better raw IOPS for virtualization at a great cost point. They can handle a lot more virtual machines than a traditional configuration. They can also deliver about a 58% lower cost per VM over a 3 year period, as compared to our closest competitor.
Perhaps more important, we redesigned our whole delivery process as part of “Project Sharks”. The result is that HP or a channel partner can actually produce a configuration and quote for an HP ConvergedSystem in about 20 minutes, and the whole thing will be on one sheet of paper. The HP ConvergedSystem 300 and 700 can be installed and in production in a customer data center in as few as 20 days. We have also fully integrated the management, to make it simple, and the support. If support is needed, only one call to HP is required; you don’t need to deal with a server vendor, a storage vendor, etc. When it is time for firmware upgrades, the process for the whole system is integrated. And when you need additional capacity, we can ship a module out from our factory in one day, and it will be up and running in about five days.
These new “sharks” are not just for virtualization. We also announced the HP ConvergedSystem 300 for Vertica as a new platform for big data analytics. The HP ConvergedSystem 100 is based on HP Moonshot servers, and ships as a Citrix XenDesktop system.
In the future the HP ConvergedSystem products will support additional workloads and ISV applications, and will be used as building blocks for HP CloudSystem private clouds, so stay tuned for more.
Our new Converged Systems business unit team is very excited about the opportunity to unleash these new “sharks”, and put them in the water. We are looking forward to hearing from our customers and partners about what they want us to do next, because the spirit of innovation is alive and well at HP.
At the Dec 10 HP Discover Barcelona 2013 keynote, HP’s hybrid cloud strategy was presented with the following slides, with comments made by the presenter added only for the HP CloudSystem private clouds part:
Bill Hilf, Vice President, Converged Cloud Products and Services, is driving HP’s entire cloud roadmap (he came to HP 6 months ago from Microsoft, where he was GM of Windows Azure Product Management): “HP Next Gen CloudSystem … to be released in the 1st half of 2014” with the following major characteristics:
Consistency – Choice – Confidence
More information:
– HP Unveils Innovations in Cloud to help Customers Thrive in a Hybrid World [The HP Blogs Hub, Dec 11, 2013] in which it is stated “As the foundation of a hybrid cloud solution, HP CloudSystem bursts to multiple public cloud platforms, including three new ones: Microsoft® Windows® Azure, and platforms from Arsys, a European-based cloud computing provider, and SFR, a French telecommunications company. “
– A press release of similar title with additional lead and closing “Pricing and availability” parts
– HP CloudSystems stand apart [HP Enterprise Business community blog, Dec 10, 2013]
– How HP CloudSystem stacks up against competitors [Porter Consulting, June 14, 2013] Comparison of offerings from HP, IBM [PureSystems], and VCE [formed as a joint venture by Cisco and EMC, with minor investments from VMware and Intel; resulting in Vblock products based on Cisco UCS servers, Cisco network components, EMC storage arrays, and the VMware virtualization suite]
“We created a killer interface. An easy to use, consumer-inspired interface that is consistent across multiple types of experiences (from classic PC, administration, to mobile experiences). We also designed and optimized the interface for the different types of roles in the organization (from the architect who might be designing a service, to the end user or consumer of that service, as well as for the IT operator and administrator).”
More information: Empowering users and the new face of cloud [HP Enterprise Business community blog, Dec 11, 2013] written by Ken Spear, Senior Marketing Manager (HP CloudSystem and OneView)
“We spent considerable effort and energy on choice and the ability to really give customers the heterogeneous workload support they need. And now we are taking openness to an entirely new level. And so for the first time with CloudSystem we are shipping HP Cloud OS, which is our enterprise-class, OpenStack**** platform which gives customers the great innovation from OpenStack to build modern cloud workloads. But we are also supporting the power of Matrix, so that you can bridge today’s and tomorrow’s workloads on the same system.”
**** OpenStack APIs are compatible with Amazon EC2 (see Nova/APIFeatureComparison) and Amazon S3 (see Swift/APIFeatureComparison) and thus client applications written for Amazon Web Services can be used with OpenStack with minimal porting effort. Note that HP nixes Amazon EC2 API support — at least in its public cloud [Gigaom, Dec 6, 2013] “based upon significant input from developers and customers” as “customers want to avoid getting locked in to what he called, ‘Amazon’s spider web’ ”. Tier 1 Research analyst Carl Brooks said via email: “HP doesn’t need to support AWS APIs — OpenStack will do that for them to the limited extent it already does”.
“And finally we’re giving customers and partners more confidence than they’ve ever had before in this type of solution. … And that will be available in both a quick-ship, channel-ready fixed configuration as well as in a highly customizable solution. In addition, CloudSystem will ship with Cloud Service Automation (CSA), the industry-leading orchestration and hybrid cloud management software [read NEW! HP’s solution for managing private and hybrid clouds] that gives an easy experience and easy management of the hybrid cloud environment. That could be clouds delivered on any physical infrastructure: public, managed or private. And lastly, when customers use CloudSystem to build a private cloud there is boundless growth, because you can extend CloudSystem with public cloud resources: from the HP public cloud, or Amazon, or Savvis. And this week we are also announcing support for Windows Azure, as well as two very important European partners: SFR and arsys, a service provider right here in Spain.”
More information:
– HP Cloud Service Automation – See new, do new at HP Discover! [HP Enterprise Business community blog, Dec 11, 2013]
– HP Unveils Innovations in Cloud to help Customers Thrive in a Hybrid World [The HP Blogs Hub, Dec 11, 2013] in which it is stated “As the foundation of a hybrid cloud solution, HP CloudSystem bursts to multiple public cloud platforms, including three new ones: Microsoft® Windows® Azure, and platforms from Arsys, a European-based cloud computing provider, and SFR, a French telecommunications company. “
– A press release of similar title with additional lead and closing “Pricing and availability” parts
Underlying core technologies:
- HP Converged Cloud brings OpenStack to the Enterprise [HewlettPackardVideos YouTube channel, Nov 6, 2013]
- HP Moonshot Demo with HP Cloud OS [hpcloud YouTube channel, Dec 12, 2013]
- Open source clouds and the enterprise [The HP Blog Hub, Nov 24, 2013]
Open source has long been linked to innovation. With a history tracing back to the origins of the public web, the concept of open source relies on the assumption that shared knowledge produces more and better innovation, which is better for everyone—as well as the business world.
Some pundits believe that it is the combination of cloud and the power of the open source community that has enabled such rapid cloud development, adoption, and innovation.
OpenStack: cloud source code at the ready
OpenStack® provides the building blocks for developing private and public cloud infrastructures. OpenStack comprises a series of interrelated projects, characterized by their powerful capabilities and massive scalability.
Like all open source projects, OpenStack is a group collaboration, consisting of a global community of developers and cloud computing technologists. HP is a top contributor and driving force behind OpenStack, helping it to become a leading software for open cloud platforms.
In other words, there’s a bright future for OpenStack, which is why HP chose it as the foundation for its hybrid cloud solutions.
HP Cloud OS
HP Cloud OS is the world’s first OpenStack-based cloud technology platform for hybrid delivery. HP Cloud OS enables our existing cloud solutions portfolio and new innovative offerings by providing a common architecture that is flexible, scalable, and easy to build on.
“We are in a new phase of cloud computing. Enterprises, government agencies, and industry are all placing demands on cloud computing technologies that exceed a singular, one-size-fits all delivery model,” says Bill Hilf, vice president of product management for HP Cloud. “HP Cloud OS, built on the power of OpenStack, is the foundation for the HP Cloud portfolio and a key part of the HP solutions that enable real customer choice and consistency.”
Watch the HP Cloud OS story at HP Discover
Attendees at HP Discover 2013 in Barcelona, don’t miss this opportunity to hear the inside story of HP’s development of HP Cloud OS. Join the Innovation Theater session:
IT3261 – The rise of open source clouds
In this session, Bill Hilf will walk you through his experiences working with large public cloud systems, the rise of open source clouds in the enterprise, and HP’s strategy and innovation with OpenStack, including a discussion of HP Cloud OS (Wednesday, 12/11/13, 4:30 pm).
Highlights from the presentation include:
- How open source has affected the development of the cloud
- The requirements of enterprises related to cloud computing
- How OpenStack enables HP’s cloud platform
- Top ten lessons learned when building HP’s public cloud
- HP’s overall cloud strategy
- William Franklin on HP Cloud and OpenStack Strategy for HP [hpcloud YouTube channel, Nov 5, 2013]
- OpenStack Technology [HewlettPackardVideos YouTube channel, Oct 29, 2013]
Gartner’s Alessandro Perilli’s latest observations about OpenStack (he focuses on private cloud computing in the Gartner for Technical Professionals (GTP) division):
– What I saw at the OpenStack Summit [Nov 12, 2013], in which he describes how OpenStack vendors are divided into two camps that he calls “purists” and “pragmatists”. He notes that purists tend to ignore the fact that many large enterprises are interested in OpenStack as a way to reduce their dependency on VMware, and that these enterprises are frightened by the prospect of rewriting their traditional multi-tier LoB applications as the new cloud-aware applications advocated by the purists.
– Why vendors can’t sell OpenStack to enterprises [Nov 19, 2013] where he notes that: “In fact, for the largest part, vendors don’t know how to articulate the OpenStack story to win enterprises. They simply don’t know how to sell it.” Then he gives at least four reasons for why vendors can’t tell a resonating story about OpenStack to enterprise prospects:
1. “Lack of clarity about what OpenStack does and does not.”
2. “Lack of transparency about the business model around OpenStack.”
3. “Lack of vision and long term differentiation.”
4. “Lack of pragmatism”, i.e. “purist” approach described in his previous post.
- HP Cloud OS [Technology Preview] Technical Overview [hpcloud YouTube channel, Nov 5, 2013]
- Converged Cloud: HP Cloud OS Whiteboard Demo [hpcloud YouTube channel, June 12, 2013]
- HP Cloud OS Whiteboard Demo – Hybrid Cloud [hpcloud YouTube channel, Oct 29, 2013]
- An Open Architecture for Hybrid Cloud Delivery [hpcloud YouTube channel, Dec 10, 2013]
4. Finally, the latest details about HP’s Moonshot technology:
Moonshot: one of the “INFRA” building blocks (see above in the “HP Cloud OS Whiteboard Demo” video) for HP Cloud OS, actually the most future-oriented one
The Power of Moonshot [HP Discover YouTube channel, Dec 10, 2013]
My Software defined server without Microsoft: HP Moonshot [‘Experiencing the Cloud’, April 10, 2013 – updated Dec 6, 2013] post introduced the HP Moonshot System as follows:
On the right is the Moonshot System with the very first Moonshot servers (“microservers/server appliances” as called by the industry) based on Intel® Atom S1200 processors and supporting web-hosting workloads (see also the right part of the image below). Currently there is also a storage cartridge (on the left of the image below) and a multinode for highly dense computing solutions (see in the hands of the presenter in the image below). Many more are to come later on.
Also the Dec 6 update to the above post already provided significant roadmap information:

With Martin Fink, CTO and Director of HP Labs, Hewlett-Packard Company [Oct 29, 2013] saying
We’ve actually announced three ARM-based cartridges. These are available in our Discovery Labs now, and they’ll be shipping next year with new processor technology. [When talking about the slide shown above.]
For the details about the ARM SoC technologies behind that go to the Software defined server without Microsoft: HP Moonshot [‘Experiencing the Cloud’, April 10, 2013 – updated Dec 6, 2013] post!
But the initial Moonshot System launched in April ’13 had support just for light workloads, such as website front ends and simple content delivery. This meant, nevertheless, a lot in the hosting space, as evidenced by the serverCONDO Builds its Business on Moonshot [Janet Bartleson YouTube channel, Dec 9, 2013] video:
More information from the same source:
– Why serverCONDO is in the Dedicated Hosting Business
– Old School and New School Cloud Servers (serverCONDO)
OR taking a true large-scale example watch this HP.com Takes 3M Hits on Moonshot [Janet Bartleson YouTube channel, Nov 26, 2013] video:
According to Meg Whitman’s keynote at Discover 2013 on Dec 10 they would be able to go from 6 datacenters to 4 thanks to Moonshot, even considering the future needs and workloads. Something as dramatic as when HP moved previously (3 years ago) from 86 datacenters to 6 datacenters.
So, to appreciate the full potential of Moonshot, one should also understand the following system architecture information provided in the HP Moonshot System, the world’s first software defined servers [April 10, 2013] technical whitepaper:
HP Moonshot System
HP Moonshot System is the world’s first software defined server, accelerating innovation while delivering breakthrough efficiency and scale with a unique federated environment and processor-neutral architecture. Traditional servers rely on dedicated components, including management, networking, storage, power cords and cooling fans, in a single enclosure. In contrast, the HP Moonshot System shares these enclosure components. The HP Moonshot 1500 Chassis has a maximum capacity of 1800 servers per 47U rack with quad server cartridges. This gives you more compute power in a smaller footprint, while significantly driving down complexity, energy use and costs.
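Insert of mine: the rack-level figure is easy to sanity-check. A minimal sketch in Python, assuming a 4.3U chassis height (so ten chassis per 47U rack), which is my assumption rather than something stated in the excerpt:

```python
# Back-of-the-envelope check (mine) of the "1800 servers per 47U rack"
# figure. Assumption: a Moonshot 1500 Chassis occupies 4.3U.
CARTRIDGES_PER_CHASSIS = 45
SERVERS_PER_QUAD_CARTRIDGE = 4          # "quad server cartridges"
CHASSIS_HEIGHT_U = 4.3
RACK_HEIGHT_U = 47

chassis_per_rack = int(RACK_HEIGHT_U / CHASSIS_HEIGHT_U)                   # 10
servers_per_chassis = CARTRIDGES_PER_CHASSIS * SERVERS_PER_QUAD_CARTRIDGE  # 180
print(servers_per_chassis * chassis_per_rack)                              # 1800
```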
The first server available on HP Moonshot System is HP ProLiant Moonshot Server based on Intel® Atom™ processor S1260, and it provides an ideal solution for web serving, offline analytics and hosting.
HP Moonshot 1500 Chassis design
The HP Moonshot 1500 Chassis incorporates an independent component design and hosts 45 cartridges, two network switches, and the infrastructure components within the chassis. The chassis’ electrically passive design is what makes completely hot-pluggable operation possible: the Moonshot 1500 Chassis uses no active electrical components, other than the EEPROMs required for manufacturing and configuration control purposes.
Figure 1 shows the elements of the Moonshot 1500 Chassis. HP controls the design of all elements of the chassis except for the server cartridges (the initial cartridges contain a single server) and the network switch modules, which may be designed by Moonshot server or network switch partners.
Figure 1.
The HP Moonshot 1500 Chassis accommodates up to 45 individually serviceable hot-plug cartridges. Two high-density, low-power HP Moonshot 45G Switch Modules, each with an HP Moonshot 6SFP Uplink Module (6x 10Gb SFP+ ports), handle network communication for all cartridges in the chassis. These switches provide Layer 2/Layer 3 routing, QoS and management (CLI, SNMP, sFlow), and require no license keys. The dual network switches and I/O modules provide traffic isolation, or stacking capability for resiliency. Rack-level stacking simplifies the management domain.
The Moonshot System uses the HP Moonshot 1500 Chassis Management (CM) module for complete chassis management, including power management with shared cooling. The server platform is powered by four 1200W Common Slot Power Supplies in an N+1 configuration and cooled by five hot-pluggable fans, also in an N+1 configuration. The CM uses component-based satellite controllers to communicate with and manage chassis elements. The modular faceplate design allows for future feature development.
…
HP ProLiant Moonshot Server
Each software defined server contains its own dedicated memory, storage, storage controller, and two NICs [Network Interface Controllers] (1Gb). For monitoring and management, each server contains management logic in the form of a Satellite Controller with a dedicated internal network connection (100 Mb). Figure 5 shows the HP ProLiant Moonshot Server with a single Intel® Atom™ processor S1260 and a single SFF drive.
Figure 5. HP ProLiant Moonshot Server and functional block diagram
These servers provide the base hardware functionality of the system. Future software defined servers can take the following forms:
- One or more discrete servers with separate compute, storage, memory and I/O
- One or more complete cartridge designs with integrated compute, storage, memory, and I/O
- One or more forms of storage accessible to adjacent cartridges
Future servers will follow one or more of these forms to provide a wide degree of flexibility for customizing and tuning based on the desired performance, cost, density, and power constraints.
The available ProLiant Moonshot server design includes one processor and a single HDD or SSD. This server is ideal for application workloads such as website front ends and simple content delivery. Table 1 gives you the current server component descriptions.
The Intel Atom is the world’s first 6-watt server-class processor. In addition to lower power requirements, it includes data-center-class features such as 64-bit support, error correcting code (ECC) memory, increased performance, and a broad software ecosystem. These features, coupled with the revolutionary HP Moonshot System design, are ideal for workloads where many extremely low-energy servers densely packed into a small footprint can be much more efficient than fewer standalone servers.
The Intel® Atom™ processor S1260 integrates two CPU cores, a single-channel memory controller, and a PCI Express 2.0 interface. Each CPU core has its own dedicated 32 KB instruction and 24 KB data L1 caches, and a 512 KB L2 cache. The processor incorporates Hyper-Threading, which allows it to run up to 4 threads simultaneously (2 per core). Additionally, the chip has VT-x virtualization enabled.
Each Moonshot server boots from a local hard drive, or from the network using PXE [Preboot eXecution Environment]. The Moonshot System uses an HP BIOS and “headless” operation (no video or USB). No additional HP software is required to run the cartridge. NIC, storage, and other drivers are included in the compatible Linux distributions (described later in the OS management section).
…
Fabrics and topology
We designed the HP Moonshot System to provide application-specific processing for targeted workloads. Creating a fabric infrastructure capable of accommodating a wide range of application-specific workloads requires highly flexible fabric connectivity. This flexibility allows the Moonshot System fabric architecture to adapt to changing requirements of hyperscale workload interconnectivity.
The Moonshot System design includes three physical production fabrics: the Radial Fabric, the Storage Fabric, and the 2D Torus Mesh Fabric. The fabrics connect the 45 cartridge slots, two slots for the network switches, and two corresponding I/O modules.
Figure 9 shows the eight 10Gb lanes routed from each of the cartridge slots to the pair of core network fabric slots in the center of the Moonshot 1500 chassis. Four lanes from each cartridge go to one core network fabric slot and four to the other (A and B). From each core fabric slot there are 16 10Gb lanes routed to the back of the chassis to attach to an I/O module.
Figure 9.
Radial Fabric
The Radial Fabric provides a high-speed interface between each cartridge and the two core fabric slots.
The Radial fabric includes these links:
• 2x GbE channels
• One port to each network switch
Figure 10 illustrates a torus topology interlinking cartridge to cartridge in combination with the radial topology linking to the network switches.
Figure 10.
The Radial fabric handles all Ethernet-based traffic between the cartridge and external targets. The exception is iLO* management network traffic using the dedicated iLO port.
*[iLO: Integrated Lights-Out]
Storage fabric
A Moonshot System Storage Fabric will use existing Moonshot 1500 Chassis connections to span each 3×3 cartridge-slot subsection within the chassis baseboard (Figure 11). The Storage Fabric will be part of future HP Moonshot System releases and will serve as the connection between servers and local storage devices.
Figure 11.
In this implementation, SAS/SATA is sent over lanes between each adjacent cartridge for primary storage, along with additional lanes to other cartridges in the subsection for redundancy or other storage requirements. Although the figure shows a specific configuration of compute and storage nodes, there is flexibility to configure the subsections in different ways as long as doing so does not violate the rules of the interface or storage technology. While the example in Figure 11 shows the proximal fabric being used for SAS/SATA, any type of communication is possible due to the dynamic nature of the fabric.
2D Torus Mesh Fabric
Like the Storage Fabric, future releases of the HP Moonshot System will use existing Moonshot 1500 Chassis connections to implement the 2D Torus Mesh Fabric, providing a high speed general purpose interface among the cartridges for those applications that benefit from high bandwidth node-to-node communication. The 2D Torus Mesh fabric can be used as Ethernet, PCIe, or any other interface protocol. At chassis power on, the CM [Chassis Management] ensures the compatibility on all interfaces before allowing the cartridges to power on.
The 2D Torus Mesh fabric is routed in a torus ring configuration capable of providing 4x 10Gb of bandwidth in each direction to its north, south, east and west neighbors (see the sketch after the list below). This allows the HP Moonshot System to address many unique HPC [High-Performance Computing] applications where efficient localized traffic is needed.
- 16 lanes from each cartridge
- Four up, four down, four left, and four right
- Can support speeds up to 10Gb
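Insert of mine: a minimal sketch of how 2D torus neighbor addressing works. The actual slot-to-grid mapping is not given in the whitepaper; the 5×9 arrangement of the 45 cartridge slots below is purely my assumption for illustration:

```python
# Sketch (mine) of 2D torus neighbor addressing. In a torus the grid
# edges wrap around, so every node has exactly four neighbors (north,
# south, east, west), matching the "four up, four down, four left,
# four right" lane layout. The 5x9 grid is a hypothetical arrangement
# of the 45 cartridge slots.
ROWS, COLS = 5, 9

def torus_neighbors(row: int, col: int) -> dict:
    return {
        "north": ((row - 1) % ROWS, col),
        "south": ((row + 1) % ROWS, col),
        "west":  (row, (col - 1) % COLS),
        "east":  (row, (col + 1) % COLS),
    }

print(torus_neighbors(0, 0))
# {'north': (4, 0), 'south': (1, 0), 'west': (0, 8), 'east': (0, 1)}
```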
Topologies
Topologies utilize the physical fabric infrastructure to achieve a desired configuration. In this case, the Radial and 2D Torus Mesh fabrics are the desired Moonshot topologies. The Radial Fabric pathways are optimized for a network topology utilizing two Ethernet switches. The 2D Torus Mesh fabric pathways are passive copper connections negotiated with neighbors and optimized for topology protocols that can change over time to accommodate future Moonshot System releases.
Moonshot System network configurations
Moonshot System network switches and uplink modules provide resiliency and efficiency when configured as stand-alone or stackable networks. This feature allows you to connect up to nine Moonshot 1500 Chassis and then to your core network, eliminating the need for a top of rack (TOR) switch.
- Dual switches provide traffic isolation or can be stacked
- Rack level stacking simplifies management domain
- Redundant switch configurations provide a more resilient infrastructure
- Layer 2, Layer 3 Routing & QoS, Management (CLI, SNMP, SFLOW). No license keys
Moonshot 1500 Chassis stacking
Stacking allows you to select a tradeoff between overall performance and cost of TOR switches. Stacking can eliminate the cost of TOR switches for workloads able to tolerate extra latency. The switch firmware architecture elects a master management processor to control all stacked switches. Stacking does not scale in a linear way; stacking size is constrained by the capability of a single management processor. The P2020 [switch management] processor is sized to reliably stack nine network switches (405 ports).
We can create two stacked switches in a single rack with no performance issues. Up to nine modules can be stacked to form a single logical switch. A simple loop consumes two ports per I/O module in this Figure 12 layout.
Figure 12.
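Insert of mine: a trivial check of the stacking arithmetic quoted above (the 45 cartridge-facing ports per switch module follow from the chassis description earlier):

```python
# Trivial check (mine) of the stacking limit quoted above.
PORTS_PER_SWITCH = 45        # one downlink port per cartridge slot
MAX_STACKED_SWITCHES = 9     # what one P2020 management processor handles
print(PORTS_PER_SWITCH * MAX_STACKED_SWITCHES)   # 405 ports, one logical switch
```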
Management
The HP Moonshot System relies on a federated iLO system. Federation requires the physical or logical sharing of compute, storage or networking resources within the Moonshot 1500 Chassis. The chassis shares four individual iLO4 ASICs [Application-Specific Integrated Circuits] in the CM module with high-speed connections to the management network through a single management port uplink.
The CM provides a single point of management for up to 45 cartridges, and all other components in the Moonshot 1500 Chassis, using Ethernet connections to the internal private network. Each hot pluggable component includes a resident satellite controller. The CM and satellite controllers use data structures embedded in non-volatile memory for discovery, monitoring, and control of each component.
HP Moonshot 1500 Chassis Management module
The CM includes four iLO processors sharing the management responsibility for the 45 cartridges, the power and cooling processor, the two network switches and Moonshot 1500 chassis management. We’ve federated the iLO system functionality by assigning certain iLO processors responsibility for managing certain hardware interfaces. We balanced the workload among the three cartridge zones in the chassis (physically separated by the network switches), and dedicated one iLO processor to manage the chassis hardware and the switches. Communication between the CM and the Satellite Controllers takes place over an internal private Ethernet network. This eliminates the requirement for a large number of IP addresses on the production network.
The iLO subsystem includes an intelligent microprocessor, separate memory, and a dedicated network interface. iLO uses the management logic on each cartridge and module, and up to 1,500 sensors within the Moonshot 1500 Chassis, to monitor component thermal conditions. This design makes iLO independent of the host servers and their operating systems.
iLO monitors all key Moonshot components. The CM user interfaces and APIs include a Command-Line Interface (CLI) and Intelligent Platform Management Interface (IPMI) support. These provide the primary gateway for node management, aggregation and inventory. A text-based interface is available for power capping, firmware management and aggregation, asset management and deployment. Alerts are generated directly from iLO, regardless of the host operating system, even if no host operating system is installed. Using iLO, you can do the following (a scripted example follows the list):
- Securely and remotely control the power state of the Moonshot cartridges (text-based Remote Console)
- Obtain access to each and all serial ports using a secure Virtual Serial Port (VSP) session
- Obtain asset and hardware specific information (MAC Addresses, SN)
- Control cartridge boot configuration
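Insert of mine: since the CM exposes standard IPMI, the list above can be scripted with the generic ipmitool CLI. A minimal sketch, with a hypothetical chassis manager address and credentials; only generic ipmitool chassis commands are used, no Moonshot-specific extensions:

```python
# Sketch (mine) of driving the CM's documented IPMI interface with the
# standard ipmitool CLI. Address and credentials are hypothetical.
import subprocess

CM_ADDRESS = "10.0.0.42"        # hypothetical Chassis Management address
USER, PASSWORD = "admin", "secret"

def ipmi(*args: str) -> str:
    """Run one ipmitool command against the CM over IPMI-over-LAN."""
    cmd = ["ipmitool", "-I", "lanplus",
           "-H", CM_ADDRESS, "-U", USER, "-P", PASSWORD, *args]
    return subprocess.run(cmd, capture_output=True,
                          text=True, check=True).stdout

print(ipmi("chassis", "power", "status"))    # query the power state
# ipmi("chassis", "power", "on")             # power a node on
```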
…
OS deployment and support
The Moonshot System hosts multiple individual systems and network switches. Unlike other HP ProLiant or BladeSystem-class servers, Moonshot cartridges provide OS installation only through network installation, with console access provided by an integrated Virtual Serial Port to each server. Network installation is performed in a manner similar to other HP ProLiant or standard x86 servers, the only required modification being the specification of the serial console instead of a standard VGA display (described below).
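Insert of mine: a minimal sketch of what “serial console instead of VGA” means in practice for a PXE network install. The TFTP paths, image names and kickstart URL are hypothetical; console=ttyS0,115200n8 is the standard Linux kernel argument for redirecting the console to a serial port:

```python
# Sketch (mine) of generating a PXELINUX boot entry for a headless node.
# Kernel/initrd paths and the kickstart URL are hypothetical.
PXE_ENTRY = """\
default moonshot-install
label moonshot-install
    kernel {kernel}
    append initrd={initrd} console=ttyS0,115200n8 ks={ks_url}
"""

def render_pxe_entry(kernel: str, initrd: str, ks_url: str) -> str:
    # console=ttyS0,115200n8 points the installer at the serial port
    # in place of the VGA display the cartridges do not have.
    return PXE_ENTRY.format(kernel=kernel, initrd=initrd, ks_url=ks_url)

print(render_pxe_entry(kernel="rhel64/vmlinuz",
                       initrd="rhel64/initrd.img",
                       ks_url="http://10.0.0.1/ks/moonshot.cfg"))
```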
Linux Distributions
The initial release of the HP Moonshot System is compatible with these versions of Linux:
• Red Hat Enterprise Linux 6.4
• SUSE SLES 11 SP2
• Ubuntu 12.04
HP Insight Cluster Management Utility
The HP Insight Cluster Management Utility (CMU) is well suited for performing network installations, image capture and deploy, and ongoing management of large numbers of servers such as the density provided by the Moonshot 1500 Chassis. If you are using CMU, the directions included in the following “Setting up an installation server” section are not required, and you should instead refer to the CMU documentation.
The CMU is optional and basic network installation of the OS may be performed using a standard PXE-based installation server.
Conclusion
The HP Moonshot System addresses the needs of data centers deploying servers at a massive scale for the new era of IoT. Industry sources estimate that lightweight web serving and analytics workloads will equal 14% of the x86 server market by 2015. The HP Moonshot System changes the current computing paradigm with an innovative completely hot pluggable architecture that increases the value of your investment and reduces TCO. You get a significant reduction in power usage, hardware costs, and use of space. You’ll see simplification in the areas of network switches, cabling, and management. Moonshot System’s use of shared hot pluggable infrastructure includes power supplies and fans. The HP Moonshot 1500 Chassis Management module, with proven HP iLO management processors, gives you detailed reporting on all platform components while the power and cooling controller manages the N+1 fan and power supply configurations. Dual network switches and I/O modules increase Moonshot’s resiliency and flexibility, allowing you to stack HP Moonshot Switch Modules. The Moonshot System is the first software defined, application-optimized server platform in the industry. Look for a growing library of software defined servers from multiple HP partners targeting specific IoT workloads compatible with emerging web, cloud, and massive scale environments, as well as analytics and telecommunications.
Now we have two additional cartridges: the m300 and the m700.
Moonshot ProLiant m300 Server Cartridge Overview [Janet Bartleson YouTube channel, Nov 27, 2013]
A new big little HP Moonshot server cartridge is shipping!! [The HP Blog Hub, Dec 10, 2013]
Guest blog written by Nigel Church, HP Servers
We call it the HP ProLiant m300 Server cartridge for the HP Moonshot System. This is the “big brother” of the current HP ProLiant Moonshot server cartridge, sporting the new Intel Atom “Avoton”, an eight-core processor running at 2.4GHz with 32GB of memory [and 1 TB disk storage on the cartridge], delivering up to six times the energy efficiency and up to seven times more performance.
Now, in just one Moonshot System with 45 ProLiant m300 Servers you have 360 cores, 1,440GB memory and up to 45TB of storage. For the right workloads, you can accomplish the same work using just 19% of the power of a traditional server!
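Insert of mine: the per-chassis totals follow directly from the cartridge specs; a trivial check:

```python
# Trivial check (mine) of the per-chassis figures quoted above.
CARTRIDGES = 45
CORES_PER_M300 = 8          # Intel Atom "Avoton", eight cores
MEMORY_GB_PER_M300 = 32
STORAGE_TB_PER_M300 = 1

print(CARTRIDGES * CORES_PER_M300)       # 360 cores
print(CARTRIDGES * MEMORY_GB_PER_M300)   # 1440 GB of memory
print(CARTRIDGES * STORAGE_TB_PER_M300)  # 45 TB of storage
```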
What workloads can it support? If you have a growing web site serving dynamic content [note that for the first Atom-based server cartridge, static content was mentioned when describing the type of workload supported] currently running on ageing traditional servers, you must take a look at Moonshot to save space and power and to prepare yourself for the future.
If you’re attending HP Discover in Barcelona, come to the show floor and see HP Moonshot in action–or visit the HP Discover News & Social Buzz page and get the latest updates! Otherwise, visit the HP ProLiant m300 Server Cartridge web page for more details on the newest Moonshot Cartridge.
HP ProLiant m300 Server Cartridge [HP product page, Dec 11, 2013]
Overview
- Are traditional servers more than you need for your scale-out big data, Web and content delivery network workloads? Are you paying for underutilized servers that use more and more space and energy? Companies running scale-out big data applications, serving web pages, images, videos, or downloads over the Internet often need to carry out simultaneous lightweight computing tasks over and over, at widely distributed locations. The HP ProLiant m300 Server Cartridge based on the Intel® Atom™ System on a Chip (SOC) delivers breakthrough performance and scale with up to 360 processor cores, 1,440 GB of memory and 45 TB of storage in a single Moonshot System.
Features
A Platform for Big Data with NoSQL/NewSQL
- NoSQL/NewSQL on HP ProLiant m300 Server Cartridges gives cost-effective scalable performance for online transactional processing and maintains the ACID (Atomicity, Consistency, Isolation, Durability) of traditional databases.
- NoSQL/NewSQL thrives in a distributed cluster of shared-nothing nodes like the HP ProLiant m300 Server Cartridges. SQL queries are split into query fragments and sent to the node that owns the data (a minimal sketch of this scatter/gather pattern follows this list). These databases are able to scale linearly as nodes are added, without suffering from bottlenecks.
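Insert of mine: a minimal sketch of that scatter/gather pattern. The node count, table contents and aggregate query are made up for illustration and do not correspond to any specific NoSQL/NewSQL engine:

```python
# Sketch (mine) of the shared-nothing scatter/gather pattern described
# above. Each node owns a partition of the rows; a query fragment runs
# locally on every node and the coordinator merges the partial results.
NODES = [
    {"orders": [("beijing", 120), ("seoul", 80)]},    # node 0's partition
    {"orders": [("shanghai", 200), ("tokyo", 50)]},   # node 1's partition
]

def owner(key: str) -> int:
    # Hash partitioning: a keyed query fragment is routed only to the
    # node owning that key's partition (note: hash() varies per run).
    return hash(key) % len(NODES)

def fragment_sum(node: dict) -> int:
    # The "query fragment" each node executes locally over its own rows.
    return sum(amount for _, amount in node["orders"])

# Scatter the fragment to every node, then gather the partial results.
print(sum(fragment_sum(node) for node in NODES))   # 450
print(owner("beijing"))                            # 0 or 1: the routing target
```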
Scale-out Platform for Your Web Needs
- Companies need the scalability of the HP ProLiant m300 Server Cartridge to serve web pages, including image and video downloads while carrying out simultaneous lightweight computing tasks over and over, at widely distributed locations.
- For Web workloads, a platform based on the HP ProLiant m300 Server Cartridge means you don’t waste energy, space, and money on a high-end server when a low-cost density-optimized server can handle the job.
Content Delivery Anytime from Any Device
- The m300 Server Cartridge provides high-speed efficient transcoding of media streams to match specific user devices. This allows efficient management of content by reducing library size and transcoding on demand, for specific device characteristics.
- Using less energy and space at a lower cost compared to traditional servers, the compact m300 Server Cartridge has Intel Atom-based SOCs to quickly deliver Web content to a variety of mobile devices.
- System Features
Compute: Intel® Atom™ Processor C2750, 2.4 GHz
Memory: DDR3 PC3-12800 SDRAM (1600 MHz); Four (4) SODIMM slots; 32GB (4x8GB)
Storage: (1) SFF 500GB HDD, 1TB HDD, and 240GB SSD
Networking: (Internal) dual port 1GbE per CPU; HP Moonshot 45G Switch Module Kit; HP Moonshot 6SFP Uplink Module Kit
Enclosure: Moonshot 1500 Chassis
Warranty: 1 year
Intel® Atom™ Processor C2750 (4M Cache, 2.40 GHz) [Intel product page, Dec 3, 2013]
SPECIFICATIONS

Essentials
- Status: Launched
- Launch Date: Q3’13
- Processor Number: C2750
- # of Cores: 8
- # of Threads: 8
- Clock Speed: 2.4 GHz
- Max Turbo Frequency: 2.6 GHz
- Cache: 4 MB
- Instruction Set: 64-bit
- Embedded Options Available: No
- Lithography: 22 nm
- Max TDP: 20 W
- Recommended Customer Price: TRAY: $171.00

Memory Specifications
- Max Memory Size (dependent on memory type): 64 GB
- Memory Types: DDR3/DDR3L 1600
- # of Memory Channels: 2
- Max Memory Bandwidth: 25.6 GB/s
- Physical Address Extensions: 36-bit
- ECC Memory Supported: Yes

Expansion Options
- PCI Express Revision: 2.0
- PCI Express Configurations: x1, x2, x4, x8, x16
- Max # of PCI Express Lanes: 16

I/O Specifications
- USB Revision: 2.0
- # of USB Ports: 4
- Total # of SATA Ports: 6
- Integrated LAN: 4x 2.5 GbE
- UART: 2
- Max # of SATA 6.0 Gb/s Ports: 2

Package Specifications
- TCASE: 97°C
- Package Size: 34 mm x 28 mm
- Sockets Supported: FCBGA1283
- Low Halogen Options Available: See MDDS

Advanced Technologies
- Intel® Turbo Boost Technology: 2.0
- Intel® Virtualization Technology (VT-x): Yes
- Intel® Data Protection Technology (AES New Instructions): Yes
HP’s Moonshot and AMD are taking cloud computing to a whole new level [AMD YouTube channel, published on Dec 4, 2013]
ProLiant m700 Server Cartridge in HP Moonshot Overview [Janet Bartleson YouTube channel, Dec 9, 2013]
HP ProLiant m700 Server Cartridge [HP product page, Dec 11, 2013]
Overview
- Looking for a cost-effective solution for hosted desktop infrastructure, mobile gaming or cloud multi-media workloads? The HP ProLiant m700 Server Cartridge in a Moonshot 1500 Chassis offers lower cost (price per seat), simplified systems management and user support, vastly improved system/data security, and efficient systems resource use for your hosted desktop infrastructure (HDI) and cloud multi-media workloads. Each m700 Server Cartridge has four servers, each with an AMD Opteron™ X2150 APU with fully-integrated graphics processing and CPU. The m700 Server Cartridge delivers outstanding compute density and price/performance for cloud multi-media workloads.
- You can power mobile games or other web content, objects, or applications, as well as live and on-demand streaming media.
Features
Hosted Desktop Infrastructure (HDI) Solution with Power and Scalability
- The centralized nature of hosting desktops on the HP ProLiant m700 Server Cartridge provides lower cost (price per seat), simplified system management and user support, vastly improved system/data security, and efficient system resource use.
- Each cartridge has four AMD-processor-based servers. Each server contains the AMD Opteron™ X2150 APU with graphics processing and CPU.
- The overall density means that you can cost-effectively have 180 servers in less than 5U of rack space.
Mobile Content and Gaming Any Time from Any Device
- The HP ProLiant m700 Server Cartridge excels at powering graphics-intensive content delivery such as hosted videos and mobile games.
- The cartridge provides high-speed, efficient transcoding of source media streams to match specific user devices. This allows efficient management of content by reducing library size and transcoding closer to the customer, on demand, for specific device characteristics.
- Using less energy and space at a lower cost compared to traditional servers, the m700 Server Cartridge has four AMD Opteron x2150-based servers, each with integrated graphics processing capabilities to quickly deliver mobile games to your device, wherever you are.
- System features
Compute: AMD Opteron™ X2150 APU, 1.5 GHz, with AMD Radeon™ HD 8000 graphics
Memory: DDR3 PC3-12800 SDRAM (1600 MHz); Four (4) SODIMM slots; 32GB (8GB per SoC)
Storage: 4 x 32 GB iSSD (1 per SoC)
Networking: (Internal) BCM5720 dual port 1GbE per CPU; HP Moonshot-180G Switch Module; HP Moonshot-4QSFP+ Uplink Module
Enclosure: Moonshot 1500 Chassis
Warranty: 1 year
AMD Opteron™ X2150 APU [AMD product page, May 29, 2013]
Introducing the World’s First Server-class x86 APU SoC
Specifications

Features
- 4 Energy Efficient x86 Cores, Codenamed “Jaguar”
  Function: Optimizes x86 performance/watt for microservers.
  Benefit: Helps enable low datacenter TCO.
- Flexible TDP
  Function: Allows users to control their own power profile by adjusting CPU and GPU frequencies in the BIOS to match their application needs (GPU integrated in X2150 only).
  Benefit: Gives users more control over their workload performance and power consumption.
- Integrated I/O
  Function: Integrates legacy Northbridge and Southbridge functionality directly on the processor.
  Benefit: Smaller footprint enables dense microserver designs.
- Core, Northbridge and Memory P-states
  Function: Dynamically adjusts performance levels based on application requirements.
  Benefit: Helps reduce power consumption.

Server infrastructure support
- DDR3 Memory with ECC Support
  Function: High-speed, highly reliable server-class memory.
  Benefit: Helps reduce server failures due to memory.
- Integrated I/O
  Function: Integrates PCIe Gen2, SATA 2/3, USB 2.0 and USB 3.0 functionality onto the processor.
  Benefit: Enables enterprise-class functionality in a single-chip solution.
- Server Processor Reliability
  Function: The processor undergoes a back-end test flow to ensure proper quality.
  Benefit: Ensures product quality matches that of other server-class products for greater reliability.

Integrated graphics
- Graphics Core Next Architecture with AMD Radeon™ HD 8000 Series Graphics
  Function: Provides high-quality graphics capabilities in a server SoC.
  Benefit: Outstanding performance in media-oriented workloads such as remote desktop, online gaming and imaging.
- Display Controller Engine
  Function: Allows for VGA and HDMI display capabilities.
  Benefit: Helps reduce cost by eliminating the need for add-on display cards.
- Unified Video Decoder 4.2
  Function: Dedicated hardware video decoding block.
  Benefit: Helps enable a near-native experience in remote desktop applications.
- Video Compression Engine 2.0
  Function: Hardware-assisted encoding of HD video streams.
  Benefit: Helps enable a near-native experience in remote desktop applications.
Citrix hosted desktops–powered by HP Moonshot [The HP Blog Hub, Dec 10, 2013]
Written by Citrix Guest Blogger Kevin Strohmeyer, Director Product Marketing, Citrix
Veterans of server-based computing and VDI are all too familiar with the complexities of buying and deploying desktop virtualization. Great strides have been made to simplify the sizing and configuration of desktop virtualization infrastructure, but ultimately, when you build and deliver shared resources, you should carefully consider how those resources will be used; and decide how much excess capacity you need to ensure peak usage can be supported.
The distributed nature of PCs, coupled with management challenges of patching and updates plus the vulnerability of unsecured, sensitive data has left IT looking for a better answer. This brings us right back to centralized desktop virtualization.
The HP ConvergedSystem 100 for Hosted Desktops with Citrix XenDesktop is a new and unique type of desktop virtualization. Instead of just leveraging a hypervisor to abstract the OS from the hardware, XenDesktop streams an OS right to bare metal: dedicated microsystems with dedicated CPU, memory and graphics, all neatly arranged in a rack-mount chassis. This eliminates the overhead and complexity of abstracting the hardware and managing VMs. It also eliminates the system overhead required to share those resources, leaving more power for the desktop. All in all, the solution presents a very interesting alternative to VDI.
The HP ConvergedSystem 100 for Hosted Desktops is an all-in-one compute, storage and networking system based on HP Moonshot, delivering 180 desktops for Citrix XenDesktop environments. The system provides an independent, remote PC experience with business graphics and multimedia performance essential for mainstream knowledge workers, and all while delivering up to 44% improvement in TCO and 63% lower power requirements. Other benefits include:
- Predictable, fixed cost per user reduces OPEX
- Independent compute and graphics delivers consistent end user performance
- Deploy with Citrix XenDesktop in approximately 2 hours
At the same time, this solution is a great example of the power of FlexCast technology from Citrix. That power is reflected in the way the FlexCast management infrastructure is designed to promote innovative solutions like this one, which leverage common image management, profile management and app virtualization in a common delivery architecture. The unique Citrix Provisioning Services (PVS) technology that enables bare-metal and just-in-time OS provisioning provides all the benefits of VDI without hypervisor management.
What makes this solution most interesting is the ease of purchasing and deploying. There is no configuration work required to figure out how much hardware or storage to purchase: you simply buy as many systems as you need and rack and stack as you grow, from the first 180 desktops on up. This alone could make the solution very attractive to organizations desiring the security and management of centralized virtual desktops, but who want to avoid the management of virtual infrastructure.
If you are attending HP Discover in Barcelona this week, come by to see the ConvergedSystem 100 for Hosted Desktops in the Discover Zone.
Learn more about the new HP ConvergedSystem 100 for Hosted Desktops.
Offering a no compromise PC experience [The HP Blog Hub, Dec 9, 2013]
By HP guest blogger Dan Nordhues, HP Client Virtualization Worldwide Manager
Poor performance is one of the major reasons users reject VDI or remote desktop implementations. While all your workers may sit at PCs, each user population has unique needs that dictate requirements. For example, task workers need only a couple of applications to do their jobs, but workstation-class users require accelerated graphics capabilities to handle workloads like CAD/CAM and Oil and Gas applications.
Right in the middle of the PC-user continuum sits the mainstream knowledge worker—the largest segment of the PC user population— with unique requirements of their own. Meeting the needs of these users is the goal of HP ConvergedSystem 100 for Hosted Desktops powered by HP Moonshot—a next-generation solution engineered specifically for meeting the needs of today’s knowledge workers, while also meeting your requirements for simplicity, lower deployment cost, and energy efficiency.
HP ConvergedSystem 100 for Hosted Desktops provides an all-in-one compute, storage, and networking system that delivers desktops for Citrix XenDesktop non-persistent users. Provide your mainstream users a dedicated PC experience with the business graphics and multimedia performance they need, while reducing TCO by up to 44 percent and lowering power requirements up to 63 percent.
If you plan to attend HP Discover Barcelona 2013, you can take advantage of great hands-on experience with HP Converged Systems. And check out these sessions for more information on HP’s client virtualization portfolio:
- BB2391 – Architecting client virtualization for task worker to workstation-class users 10 December 10-11am
- DT3108 – Moonshot-hosted desktop infrastructure: an innovative way for hosting end-user desktops 11 December, 11:30-12
- DT3177 – Moonshot-hosted desktop infrastructure: an innovative way for hosting end-user desktops, Part II 12 December, 11:30-12
Learn more about the new HP ConvergedSystem 100 for Hosted Desktops.
Precedence given to TD-LTE by the Chinese government benefits China Mobile, which could launch its China-originated 4G service as early as Dec 18, 2013
… it looks like the government was waiting till China Mobile was ready to launch, meanwhile delaying FDD-LTE by declaring a necessity to “test a converged TD-LTE/LTE FDD network at a later date”.
4G TD-LTE Licenses Officially Issued by MIIT [Global TD-LTE Initiative Updates, Dec 4, 2013]
After months of waiting and dithering, China is moving into the 4G era.
Today Chinese Ministry of Industry and Information Technology (MIIT) has finally issued the first batch of 4G licenses to China Mobile, China Unicom and China Telecom. China Mobile gets access to 130MHz of spectrum (1880-1900 MHz, 2320-2370 MHz, 2575-2635 MHz), China Unicom gets 40MHz (2300-2320 MHz, 2555-2575 MHz) and China Telecom has 40MHz (2370-2390 MHz, 2635-2655 MHz) for TD-LTE operation. The commercialization of TD-LTE in China by these three operators will certainly promote the TD-LTE scale deployment globally.
China issues 4G licenses [Xinhua, Dec 4, 2013]
China’s Ministry of Industry and Information Technology (MIIT) on Wednesday issued 4G licenses to three Chinese telecom operators, marking the beginning of a new era in China’s high-speed mobile network.
China Mobile, China Telecom and China Unicom received permits to offer fourth-generation (4G) mobile network services employing homegrown TD-LTE technology.
The ministry said the three companies have conducted large-scale tests of TD-LTE, or Time-Division Long-Term Evolution, one of two international standards, and their technology is ready for commercial service.
Zhang Feng, the MIIT’s spokesman, said 4G technology will lower bandwidth costs and promise faster mobile broadband.
The ministry’s figures showed that the Internet speed of 4G networks is 10 times that of 3G services, and allows mobile users to download a 7-megabyte music file in less than one second.
China Mobile said the rates for 4G services will be cheaper than those for 3G. In some cities where the company has launched the 4G network for trial commercial use, the tariff is 20 percent less than similar 3G network plans.
Li Yue, president of China Mobile, said the price of 4G smartphones will go down quickly following the approval of the 4G network for commercial use.
Now only a number of smartphone models in China are equipped with modules that support home-grown 4G TD-LTE technology, with their prices ranging from 350 U.S. dollars to 800 U.S. dollars.
Li said 4G terminals for as little as 150 U.S. dollars will be available on the market by the end of this year.
The MIIT also said Wednesday it will test a converged TD-LTE/LTE FDD network at a later date.
China is the major promoter of the TD-LTE standard and is also a major owner of the standard’s core patents. LTE FDD is the other international 4G standard and is popular in Europe.
The MIIT said the convergence of the two standards is gaining momentum in the global telecom industry. A total of 10 converged TD-LTE/LTE FDD commercial networks have been established so far worldwide.
“China will issue licenses for LTE FDD when the condition is ripe,” said the ministry.
Experts believe the commercialization of TD-LTE will create a new impetus for China’s economic growth, as the country is home to the largest number of mobile phone users in the world.
The ministry’s statistics showed that the 3G network contributed 211 billion yuan (34 billion U.S. dollars) to China’s GDP in its first three years of commercial use.
“The 4G industry chain, which involves terminal manufacturing and the software sector, will further improve the services of China’s telecom sector,” said spokesman Zhang Feng.
60% of phone users in China have no plans to upgrade to 4G: report [Want China Times, Dec 6, 2013, 14:46 (GMT+8)]
More than 60% of China’s cell phone users have no plans to switch to the latest 4G technology, the Guangzhou-based Southern Daily reported on Dec. 5.
Though the paper did not give detailed information on how its poll was conducted, it said more than 60% of the people it surveyed said they are happy with their 3G smartphones and that they do not feel the need to upgrade.
Those polled said they have a greater choice of 3G smartphones at more competitive prices than the 4G options currently available.
Southern Daily said 4G services, for which the government began to issue licenses this week, would be attractive for the younger generation in particular but telecom carriers may need to offer more promotions and incentives to persuade people to retire their current cell phones.
3G vs. LTE Network Architecture – SixtySec [ExploreGate YouTube channel, May 4, 2012]
What are the differences between TDD LTE (TD-LTE) and FDD LTE (FD-LTE)? [Global TD-LTE Initiative, Nov 4, 2013]
FDD LTE and TDD LTE are two different modes of LTE 4G technology. LTE is a high-speed wireless technology from the 3GPP standard. 3G growth reached its end at HSPA+, and mobile operators have already started deploying 4G networks to provide much more bandwidth for mobile users. 4G speeds will provide a virtual LAN experience on mobile handsets by offering very high-speed Internet access for real triple-play services such as data, voice and video from a mobile network.
LTE is defined to support both the paired spectrum for Frequency Division Duplex (FDD) and unpaired spectrum for Time Division Duplex (TDD). LTE FDD uses a paired spectrum that comes from a migration path of the 3G network, whereas TDD LTE uses an unpaired spectrum that evolved from TD-SCDMA.
TD-LTE does not require paired spectrum, since transmission and reception occur in the same channel. FD-LTE requires paired spectrum, with different frequencies separated by a guard band.
TD-LTE is cheaper than FD-LTE since in TD-LTE there is no need for a diplexer to isolate transmission and receptions.
In TD-LTE, it’s possible to change the uplink and downlink capacity ratio dynamically according to the needs. In FD-LTE, capacity is determined by frequency allocation by regulatory authorities, making it difficult to make a dynamic change.
In TD-LTE, a larger guard period is necessary to maintain the uplink and downlink separation that will affect the capacity. In FD-LTE, the same concept is referred to as a guard band for isolation of uplink and downlink, which will not affect capacity.
Cross slot interference exists in TD-LTE, which is not applicable to FD-LTE.
What are TD-LTE’s technical highlights? [Global TD-LTE Initiative, Nov 4, 2013]
TD-LTE transmissions travel in both directions on the same frequency band, a methodology formally known as “unpaired spectrum.” It is distinct from “paired spectrum,” where two frequencies are allocated, one for the transmit channel and the other for the receive channel (formally called “Frequency Division”). “Time Division” means the receive channel and the transmit channel take turns (i.e., divide the time between them) on the same frequency band. The time divisions are asymmetric, meaning that more time-slots are allocated to data going from the tower to the phone than from the phone to the tower. The usage patterns of the future (fewer phone calls, more Internet) are asymmetric in this manner.
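Insert of mine: a toy model of the time-division asymmetry described above. The 100 Mbps raw channel rate and the 3:1 downlink:uplink slot split are illustrative numbers only, not a specific TD-LTE configuration:

```python
# Toy model (mine) of TDD time division: a single channel's raw rate is
# split between downlink and uplink according to the slot allocation,
# whereas FDD dedicates one full paired channel to each direction.
raw_channel_rate_mbps = 100.0
downlink_slots, uplink_slots = 3, 1     # asymmetric: favor the downlink

total_slots = downlink_slots + uplink_slots
downlink_mbps = raw_channel_rate_mbps * downlink_slots / total_slots  # 75.0
uplink_mbps = raw_channel_rate_mbps * uplink_slots / total_slots      # 25.0
print(downlink_mbps, uplink_mbps)
```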
The frequency bands used by TD-LTE are 3.4–3.6 GHz in Australia and the UK, 2.57−2.62 GHz in the US and China, 2.545-2.575 GHz in Japan, and 2.3–2.4 GHz in India and Australia. The technology supports scalable channel bandwidth, between 1.4 and 20 MHz. A typical range measures up to 200 meters indoors on a 2.57–2.62 GHz radio frequency link.
China Telecommunications: Who says TD-LTE doesn’t work? [Global TD-LTE Initiative Updates, Nov 25, 2013]
Our existing ‘counter consensus’ view on the outlook for Chinese Telecoms is based on the belief that LTE will cause a reversal of fortune among the key players. China Mobile will solve the biggest problem identified in our consumer research (slow data speeds) and will once again have the ‘best’ mobile network in China on all dimensions. China Unicom, having gained strong momentum on the basis of their superior 3G data speeds, will face a slowing of momentum – at least among high value customers seeking the latest technology.
Over the last few weeks we have heard many arguments from China Mobile Bears as to why our hypothesis will be wrong. The initial arguments are usually targeted at the technology itself – that TD-LTE is a Chinese standard and a poor cousin to the much better FD-LTE more popular in Europe (it isn’t), that it doesn’t handle voice calls well (irrelevant – no operator in the world has launched a new LTE network with voice over LTE – in all cases they use existing 2G or 3G networks for voice), that handsets will not be available (ever heard of the iPhone? Not to mention Samsung, Sony, HTC, Huawei…)
China Mobile launched its TD-LTE network in Shenzhen for ‘test’ operations in early November. We thought the best way to address the Bears’ technology concerns was to go test the network for ourselves. Nearly 120 speed tests conducted from different indoor and outdoor locations supported our hypothesis that TD-LTE will be demonstrably better than Unicom’s existing 3G network in data speeds. On average we experienced download speeds 10 times faster, upload speeds 7 times faster and a dramatic improvement in latency. We concur that service coverage for LTE is currently weaker, but locations meaningful to high value customers are already largely covered. Coverage will continue to improve as China Mobile rolls out new sites.
Over the last few years, China Mobile has underperformed the market while Unicom has outperformed – we attribute most of the difference in fortune of these two companies to the relative data speeds of their respective 3G networks. We believe the launch of TD-LTE services by China Mobile will start the process of reversing this. Speed tests in Shenzhen affirm our belief that TD-LTE technology works and is demonstrably superior to W-CDMA in data speeds.
Click to download:
China Telecommunications: Who says TD-LTE doesn’t work?
We experienced lightning speeds in Shenzhen
[a 10-page whitepaper by Bernstein Research, Nov 18, 2013]
Some important excerpts from that:
China Mobile has been selling TD-LTE devices and rate plans in Shenzhen since November 1st. As 4G licenses are not yet issued, these sales are described as “trials” and are limited to a small number of devices and are only available in a few cities. The LTE rate plans are provisional: service contracts are signed under a 3G rate plan which will transfer to a 4G plan in January. We believe that sales of 4G services in advance of an actual license is an aggressive move, and highlights how important 4G is for China Mobile’s management.
We conducted over 100 speed tests in Shenzhen to compare the new TD-LTE network versus Unicom’s existing 3G network. Unicom has benefited tremendously from China Mobile’s misfortune with TD-SCDMA and its own good fortune of being licensed with WCDMA. Unicom also stands to suffer the most if its leadership on speed is lost. Our proprietary customer research indicated this was a key buying factor for many of Unicom’s existing customers. We went to Shenzhen (one of the cities where China Mobile is already selling 4G services) to pit China Unicom and China Mobile’s networks head-to-head. We conducted ~120 tests across various locations (indoors, outdoors, in-transit, and under-ground) to reach robust conclusions on speed, latency and coverage. Our test approach and sampling criteria are shown on Exhibit 1; our 4G test equipments are shown in Exhibits 2 and 3.
…
As expected, our test highlighted that TD-SCDMA lags Unicom’s WCDMA in 3G data speeds. First we wanted to confirm Unicom’s data speed superiority over China Mobile on 3G network. As expected we found Unicom’s WCDMA to download and upload around 3 times faster than China Mobile’s TD-SCDMA. TD-SCDMA clocked an average of 1.1MB/s on download and 0.2MB/s on upload, compared to 2.7MB/s and 0.7MB/s for WCDMA. These results were broadly similar to field tests done by the Chinese Ministry of Industry and Information Technology (MIIT) in 2010 (see Exhibits 4 and 5).
However, China Mobile’s TD-LTE is everything it is promised to be: the new leader in data speed. We then moved on to test TD-LTE… We found it had 3 times less latency (Exhibit 6) which improves the browsing experience making the phone feel more responsive. Download speeds clocked an average of 26.2MB/s, which was ~10 times faster than Unicom’s 3G network (Exhibit 7). Upload speeds averaged 5MB/s, which was 7 times faster than Unicom’s 3G (Exhibit 8). These performance levels were consistently observed across all locations where there was a signal. Part of TD-LTE’s outperformance is due to a lack of users on the network, however, given the large amount of spectrum expected to be allocated for LTE services we believe there will continue to be a material performance advantage over WCDMA even as the subscriber base expands.
The TD-LTE network had more coverage gaps but this will improve over time. China Mobile’s TD-LTE network did have some coverage issues, even within urban Shenzhen. However the problem was less significant than feared. All the outdoor sites tested received good signals, and high traffic indoor locations (e.g. shopping malls, cafes) are also covered. The only test site where we failed to receive a signal was the underground metro station (Refer back to Exhibit 1). We suspect there are many more ‘gaps’ around, but these will be progressively fixed over time.
…
Anecdotally there appears to be pent-up demand for TD-LTE services; improving availability of handsets will be key to unlocking this. Currently there are only two LTE handsets available from China Mobile: a Samsung Galaxy Note II at 5299RMB [$871] and a cheaper Huawei model at 2888RMB [$475]. One clerk told us that since launching 4G “trials” 2 weeks ago, her store had only sold one TD-LTE phone. However many customers with TD-LTE compatible iPhones (5S/5C models bought in Hong Kong) are signing up to 4G plans. We are wary of making too much from this, but agree that improving handset availability will be key to a broader uptake of the service. With integrated 2G/3G/4G chipsets available and China now being the largest smartphone market, we believe it will not be long before a large number of mid to low end devices start to appear on the market.
More than Half of Asian Population Will Be Covered by LTE-TDD by 2018 [ABI Research News, Nov 4, 2013]
LTE network deployments will continue to grow rapidly globally. Time-division duplex (TDD) network is picking up the pace and gaining more market traction. In Asia-Pacific, LTE-TDD networks will cover more than 53% of the population by 2018 at a compound annual growth rate (CAGR) of 41.1% between 2012 and 2018, while frequency-division duplex (FDD) networks will reach 49% population coverage by the end of 2018.
“The increase of LTE-TDD population coverage is mainly driven by wide deployment in some Asian countries with large populations, such as China, India, and Japan,” comments Marina Lu, research associate at ABI Research. “Due to its complementarity of using unpaired spectrum, a number of LTE-FDD operators will expand their networks with LTE-TDD in additional spectrum to improve network capacity.”
Among Asia-Pacific’s recently completed, on-going, and upcoming 4G spectrum auctions, 25% concern 2,600 MHz, 25% 1,800 MHz, and 20% 800 MHz, which is consistent with the popularity of the 2,600 MHz band for LTE-TDD networks. “Asia-Pacific will be the region with the most LTE-TDD networks,” adds Jake Saunders, VP and practice director. “Of global LTE-TDD concluded contracts awarded to vendors so far, 47% come from Asia-Pacific and the second largest portion of 18% is contributed by the Middle East.”
Considering spectrum efficiency, spectrum bandwidth, network capacity, etc., a number of operators are preparing to upgrade LTE networks to LTE-Advanced networks. In ABI Research’s latest survey, there have been 29 LTE Advanced network commitments worldwide by Q3 2013, of which 10 commitments come from Western Europe, 9 from Asia-Pacific, and 5 from North America.
TD-LTE global market overview [Global TD-LTE Initiative Updates, Sept 13, 2013]
With the Long Term Evolution (LTE) standard continuing to develop, international differences in planning and frequency allocation timetables have resulted in different frequency bands being used in different countries. The TD-LTE standard’s greater efficiency in terms of frequency spectrum usage has attracted the attention of carriers in a number of other countries.
21 TD-LTE commercial networks have been launched as of August, 2013, and 39 LTE TDD commercial networks are in progress or planned. (Source: GSA)
TD-LTE’s unique features have also played an important part in the technology’s growing stature in the market. Because TD-LTE makes asymmetrical use of unpaired spectrum for both uplink and downlink, it is a spectrally efficient technology. Spectrum is a valuable commodity for mobile operators, especially those who operate in countries where there is a limited amount of available FDD spectrum, or where only a single unpaired frequency is available. Driven by its spectral efficiency, TD-LTE is now increasingly being viewed as an attractive proposition in markets.
GSA confirms 244 LTE networks are commercially launched, LTE1800 now mainstream [news article by GSA, Dec 5, 2013]
The latest update of the Evolution to LTE report from GSA (Global mobile Suppliers Association) confirms that 244 operators have commercially launched LTE services in 92 countries.
98 LTE networks have been commercially launched so far in 2013.
The report confirms that 499 operators are investing in LTE in 143 countries. This is made up of 448 firm operator commitments to build LTE networks in 134 countries, plus 51 additional operators engaged in various trials, studies, etc. in a further 9 countries.
From amongst the committed operators, 244 have commercially launched services, which is 78% more than a year ago.
GSA forecasts there will be 260 LTE networks in commercial service by the end of this year.
The majority of LTE operators have deployed the FDD mode of the standard. The most widely used band in network deployments continues to be 1800 MHz which is used in over 44% of commercially launched LTE networks. 108 operators worldwide have launched LTE1800 (band 3) systems, 157% more than a year ago, in 58 countries, either as a single band system, or as part of a multi-band deployment.
1800 MHz spectrum is typically refarmed from its original use for 2G/GSM, facilitated by technology-neutral licensing policies.
As 1800 MHz is the prime band for LTE deployments worldwide, it will greatly assist international roaming for mobile broadband. Mobile licences for 1800 MHz have been awarded to 350+ operators in nearly 150 countries.
The number of LTE1800 terminals has tripled in each of the past 2 years. One third of all announced LTE user devices can operate in 1800 MHz band 3 spectrum. LTE1800 is a mature, mainstream technology.
The next most popular contiguous bands are 2.6 GHz (band 7) as used in 29% of networks in commercial service today, followed by 800 MHz (band 20) in 12% of networks, and AWS (band 4) in 8% of networks.
Interest in the TDD mode continues to strengthen globally ahead of the large-scale commercial deployments in China. Worldwide, 25 LTE TDD (TD-LTE) systems are commercially launched in 20 countries, of which 12 are deployed in combined LTE FDD & TDD operations.
The report includes a growing list of operators who have commercially launched or are preparing to introduce enhancements to their networks, including multicarrier support for Category 4 user devices (150 Mbps theoretical peak downlink speed), and LTE-Advanced features, especially carrier aggregation, which is a key trend.
The report also confirms how voice service has moved up the agenda for many LTE operators as network coverage has improved (nationwide in many cases) and as the penetration and usage of LTE-capable smartphones has increased. VoLTE services have been launched by operators in Asia, Europe, and North America and several more operators have committed to VoLTE deployments and launches over the next few months.
The Evolution to LTE report (December 5, 2013) is a free download for registered site users
Registration page for new users: http://www.gsacom.com/user/register
Numerous charts, maps etc. confirming the progress of mobile broadband developments including LTE are also available on the home page and at www.gsacom.com/news/statistics.
GSA confirms 1,240 LTE user devices launched, support building for LTE-Advanced systems [news article by GSA, Nov 7, 2013]
The latest update to the ‘Status of the LTE Ecosystem’ report published by the GSA (Global mobile Suppliers Association) confirms that 120 manufacturers have announced 1,240 LTE-enabled user devices, including frequency and carrier variants.
680 new LTE user devices were announced in the past year. The number of manufacturers increased by 44% in this period. Smartphones continue to be the largest LTE device category with 455 products released, representing 36% share of all LTE device types. 99% of LTE smartphones also operate on 3G networks (HSPA/HSPA+ or EV-DO or TD-SCDMA technologies).
The report embraces devices that operate in the FDD and/or TDD modes of the LTE system. The majority of products are designed for operation in the FDD mode. However, 274 devices can operate in the LTE TDD (TD-LTE) mode, a figure 159 higher than a year ago.
The largest LTE device ecosystems for the FDD bands are as follows:
– 2600 MHz band 7 = 448 devices
– 1800 MHz band 3 = 412 devices
– 800 MHz band 20 = 314 devices
– 2100 MHz band 1 = 305 devices
– 700 MHz bands 12, 17 = 289 devices
– AWS band 4 = 279 devices
– 700 MHz band 13 = 250 devices
– 850 MHz band 5 = 189 devices
– 900 MHz band 8 = 174 devices
– 1900 MHz band 2 = 134 devices
TDD bands:
– 2600 MHz band 38 = 197 devices
– 2300 MHz band 40 = 184 devices
– 1900 MHz band 39 = 71 devices
– 2600 MHz band 41 = 63 devices
– 2500 MHz bands 42, 43 = 15 devices
(totals include carrier and operator variants)
…
The Evolution to LTE report (October 17, 2013) is also available as a free download to registered site users via the link at http://www.gsacom.com/gsm3g/infopapers
Note that by the time of 4G based on TD-LTE, the leading edge of LTE will be much further ahead, as SK Telecom Demonstrates 225 Mbps LTE-Advanced [press release, Nov 28, 2013] shows:
- Successfully demonstrates the upgraded LTE-Advanced: Aggregates 20MHz bandwidth in 1.8GHz band and 10MHz bandwidth in 800MHz band to offer up to 225Mbps of speed
- Expects to launch the ‘20MHz+10MHz’ LTE-Advanced service in the second half of 2014 and plans to introduce 3 Band Carrier Aggregation in an early manner
SK Telecom (NYSE:SKM) today held a press conference to demonstrate the upgraded LTE-Advanced service that offers up to 225Mbps of speed by aggregating 20MHz bandwidth in 1.8GHz band and 10MHz bandwidth in 800MHz band.
LTE can only offer up to 150Mbps of speeds using a maximum of 20MHz of continuous spectrum in one band, while LTE-Advanced can support speeds over 150Mbps by combining different bands through Carrier Aggregation (CA).
Insert of mine:
[WIS2013] SK텔레콤 LTE-Advanced [SK telecom YouTube channel, May 20, 2013]
In June 2013, SK Telecom has commercialized, for the first time in the world, LTE-Advanced service using 10MHz bandwidth in 1.8GHz band and 10MHz bandwidth in 800MHz band. Backed by a wide range of mobile value added services specially designed for the LTE-Advanced network, and a rich lineup of LTE-Advanced capable devices (8 different smartphone models), SK Telecom’s LTE-Advanced service is attracting subscribers at a rapid pace.
Moreover, on August 30, 2013, SK Telecom has gained authorization to operate the 35 MHz bandwidth (20 downlink + 15 uplink) in 1.8GHz band, and immediately launched diverse measures to strengthen both its LTE and LTE-Advanced services by utilizing the newly acquired bandwidth.
Once SK Telecom commercializes the upgraded LTE-Advanced (20MHz+10MHz), customers will be able to download an 800MB movie in just 28 seconds, significantly faster than other networks. Measured at their maximum speeds, downloading the same movie file via 3G, LTE, and the existing LTE-Advanced (10MHz+10MHz) would take 7 minutes and 24 seconds, 1 minute and 25 seconds, and 43 seconds, respectively.
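Insert of mine: both the 225Mbps peak and the 28-second download figure follow from simple arithmetic, assuming LTE peak rate scales roughly linearly with bandwidth (about 150 Mbps per 20MHz carrier):

```python
# Carrier aggregation arithmetic (mine): LTE peak rate scales roughly
# linearly with bandwidth (~150 Mbps per 20 MHz, i.e. 7.5 Mbps/MHz),
# so aggregated component carriers add their peak rates.
PEAK_MBPS_PER_MHZ = 150 / 20            # ~7.5 Mbps per MHz

def peak_rate_mbps(carriers_mhz):
    return sum(bw * PEAK_MBPS_PER_MHZ for bw in carriers_mhz)

rate = peak_rate_mbps([20, 10])         # 1.8 GHz (20MHz) + 800 MHz (10MHz)
print(rate)                             # 225.0 Mbps

movie_megabits = 800 * 8                # an 800 MB movie
print(movie_megabits / rate)            # ~28.4 seconds at peak speed
```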
The company said that it expects to launch the ‘20MHz+10MHz’ LTE-Advanced service nationwide through smartphones in the second half of 2014 as the smartphone chipset that supports 225 Mbps of speeds is currently being developed.
Furthermore, by successfully demonstrating the ‘20MHz+10MHz’ CA, SK Telecom moves one step closer to realizing the next level of LTE-Advanced technology: Aggregating three component carriers (20MHz+10MHz+10MHz) to support up to 300Mbps of speed.
Alex Jinsung Choi, Executive Vice President and Head of ICT R&D Division at SK Telecom said, “SK Telecom has been leading the development of wireless networks since it commercialized CDMA (2G) technology for the world’s first time in 1996. Today’s successful demonstration of 225 Mbps LTE-Advanced will serve as a momentum for SK Telecom to realize more innovative network technologies, which will also lead to the growth of relevant industries, including device, content and convergence fields.”
But SK Telecom is already going further, as SK Telecom, China Mobile agree on automatic LTE roaming service [Yonhap, Dec 5, 2013] reports:
SK Telecom Co., South Korea’s largest mobile operator, said Thursday that it has agreed to launch an automatic international Long Term Evolution (LTE) roaming service with China Mobile Ltd., as well as other LTE services.
Under the deal, travelers and businesspeople will be able to use their regular LTE services offered by the two mobile carriers more easily between the two countries, according to SK Telecom.
About 6.8 million Koreans and Chinese traveled between the two countries last year.
Early this year, SK Telecom and CSL Ltd. of Hong Kong successfully demonstrated the compatibility of their two LTE networks. The international automatic LTE roaming service has been available since June this year.
Since October, SK Telecom also has offered a similar roaming service with Saudi Arabia.
SK Telecom CEO Ha Sung-min (R) and China Mobile’s Chairman Xi Guohua
pose for a photo at SK Telecom’s headquarters in Seoul.
China Mobile:
New era for mobiles as 4G licenses issued to carriers [Xinhuanet, Dec 5, 2013]
China issued long-awaited 4G licenses to three telecommunications carriers yesterday, which would offer mobile Internet access 20 to 50 times faster than the current 3G network and create a new trillion-yuan market for devices and services.
China, the world’s biggest mobile phone market, has now officially entered the 4G era five years after it issued 3G licenses. The technology is widely adopted in the United States, Europe, Japan, South Korea and other regional markets.
The network, along with e-commerce and software businesses, is expected to boost information consumption and market demand, and encourage innovation in China, according to the Ministry of Industry and Information Technology.
China Mobile will launch 4G services in Shanghai, Beijing and 11 other cities by the end of this year. The number of cities will expand to 340 by the end of 2014.
Users can upgrade to the 4G network without changing phone numbers, China Mobile said yesterday. It has been testing 4G networks for two years.
China Mobile, China Unicom and China Telecom all got 4G licenses based on TD-LTE (time division-long term evolution) technology. China Unicom and China Telecom also got approval to test another 4G technology FD-LTE (frequency division-LTE), which is mainly used in overseas markets.
China will issue FD-LTE 4G licenses later, the ministry said.
China Mobile also got the approval to operate fixed-line business including family broadband, which makes it possible to launch bundled services, the ministry added.
“It’s a national strategy to promote commercial 4G development in order to boost consumption and fuel related investment,” the ministry said on its website.
The ministry said that 4G had become an engine for the development of the whole IT industry, fueling demand for the latest smartphones. With greatly improved speed and more powerful phones, new mobile Internet services will appear that will enrich people’s daily lives, the ministry said.
With 4G, mobile users can download a film (700 megabytes) in two minutes and a high-quality song (7MB) in less than a second. More 4G-related services such as video on demand, conferencing, high-quality music streaming, multiplayer games and remote video monitoring for medical and security services are being tested, industry insiders said.
The initial investment for 4G will reach 500 billion yuan (US$82 billion) in a few years, and is expected to hit 1 trillion yuan with the industry’s development.
“4G LTE is the fastest growing mobile technology since the inception of mobility some 25 years ago. And we know that mobile broadband will have a huge impact on people, business and society and be one of the most critical infrastructures for any country,” said Hans Vestberg, chief executive of Ericsson, the world’s largest telecommunications equipment vendor.
By 2019, China will be home to 700 million mobile subscribers on 4G, making it the world’s biggest 4G market, according to Ericsson.
Equipment makers including Ericsson, Huawei, ZTE and Alcatel-Lucent Shanghai Bell are going to benefit from the 4G wave.
“We are fully prepared for providing handsets for China’s own 4G technology, from entry-level to high-end phones,” said Cher Wang, HTC’s chairman.
China Mobile is going to launch 4G services with a new brand He, meaning harmony in Chinese, on December 17. The carrier may offer iPhones supporting TD-LTE then, according to industry sources.
In cities such as Beijing and Shenzhen, China Mobile has allowed users to apply for trial commercial use of 4G services with their own devices. In Shanghai, more than 1,800 people had been invited to test 4G services.
Its target is to cover 100 cities by the middle of next year and 340 by the end of 2014, when it plans to launch 4G phones that cost less than 1,000 yuan each. In the first half of 2014, it will launch 50 new 4G phones.
In Shanghai, nine TD-LTE phones will be available by the end of this year. Users can apply for 4G services at China Mobile’s outlets on Madang Road and Minsheng Road initially, to be expanded to 20 outlets citywide.
Shanghai Mobile also plans to establish an additional 3,000 4G base stations next year from the current 700, to cover the whole city including suburban and rural regions.
(Source: Shanghai Daily)
From 2013 Interim Results Presentation as of Aug 15, 2013
From China Mobile 2012 Annual Report [April 25, 2013]
Business Overview
… starting from 2013, we commenced investments in the development of TD-LTE network. We intend to use the TD-LTE network to primarily carry high bandwidth and high quality wireless broadband businesses. In 2012, the extended large scale trial of the TD-LTE network was carried out in 15 cities in Mainland China and approximately 20,000 base stations were built. The quality and scale of the TD-LTE networks in Hangzhou, Guangzhou and Shenzhen have reached pre-commercial standard. In addition, we started providing commercial 4G services in Hong Kong in 2012 with the LTE FDD and TD-LTE bandwidths we previously obtained from the Office of the Telecommunications Authority of Hong Kong in 2009 and 2012, respectively. We plan to construct more than 200,000 TD-LTE base stations in 2013. [Certain 3G base stations may also be upgraded to TD-LTE base stations.]
China Mobile lifts hopes of Apple deal and 4G launch [Shanghai Daily via Xinhuanet, Oct 31, 2013]
China Mobile is raising consumer hopes that the next-generation 4G mobile network will be launched soon and that a long-awaited deal between the world’s largest telco and Apple Inc may be unveiled as early as next week.
The telco’s website displays a cartoon tornado advertisement that announces “the invasion of 4G” and “November 9-11.” The ad links to a page showing two images of smartphones that resemble iPhones and a caption that says “special discounts.”
November 11, or Singles’ Day, is the busiest shopping day of the year in China. Last year, it generated 4 billion U.S. dollars in online sales alone, according to retail consultant McKinsey Global Institute.
China Mobile declined to comment but its senior executives said earlier that it would distribute 4G phones, including Apple’s latest iPhone 5S, after China issues 4G licenses expected by the end of this year.
Meanwhile, the Ministry of Industry and Information Technology has approved the sale of several 4G models made by Sony, ZTE and other vendors.
China Mobile hopes the expected tie-up with Apple will boost revenue and profit, especially in the high-end market segment, after its net profit for the first three quarters of this year fell, for the first time, by 1.9 percent to 91.5 billion yuan (14.8 billion U.S. dollars).
China Mobile’s Beijing branch jumps on 4G technology wave [China Daily USA, Nov 6, 2013]
Carrier to begin sales of newest network-enabled smartphones
Beijing has become the latest Chinese city to join the wave of tests for fourth generation, or 4G, mobile networks, despite the fact that the government has yet to issue 4G licenses to telecom carriers.
On Tuesday, China Mobile Ltd’s Beijing branch said it would start sales of 4G smartphones on Wednesday. The first batch of 4G handsets includes two models – Sony Corp’s M35T and Samsung Electronics Co Ltd’s Galaxy Note 2.
Customers do not need to change their phone numbers but just have to get a new SIM card for their 4G handsets, according to a statement from China Mobile. Fourth-generation wireless networks achieve data download speeds of up to 80 megabits per second, four times faster than 3G networks.
However, the coverage of 4G networks in Beijing is limited, said Gao Shu, a spokeswoman for China Mobile’s Beijing branch. Only people in areas inside the capital’s Third Ring Road will be able to access the network.
“Our 4G smartphones are aimed at high-end, white-collar workers in Beijing,” Gao said.
Before Beijing, a handful of affluent Chinese cities, including Guangzhou and Hangzhou, have started offering 4G services on a trial basis.
China Mobile – the only operator in the country currently testing 4G networks – has adopted the domestic Time Division-Long Term Evolution (TD-LTE) 4G technology.
The number of applicants for 4G services is expected to surpass 100,000 in major cities, according to a China Mobile official, who asked not to be named.
Meanwhile, the lack of mature 4G smartphones has long been seen as a major obstacle for the expansion of China Mobile’s 4G business. But the situation has improved in recent months. According to a report from Bank of China International Securities, as of September, 11 smartphone models had received permission from Chinese authorities to run on 4G networks. The new smartphones are being made by domestic and international companies, including Samsung, Sony, Huawei Technologies Co Ltd and ZTE Corp, the report said.
“The planned 4G commercial rollout is very good news for China Mobile, as well as for smartphone companies and mobile Internet companies,” said Wang Jun, an analyst with Beijing-based research firm Analysys International.
China Mobile’s net profit dropped 9 percent in the third quarter partly due to the increasing challenges posed by mobile Internet applications such as Tencent Holdings Ltd’s WeChat.
“The 4G business can help the carrier to attract more high-end users from rivals,” Wang said.
Apple Inc has also said that its latest iPhone 5S and iPhone 5C handsets may support TD-LTE technology.
James Yan, an analyst with IDC China, pointed out that the timing for launching 4G services in China is right.
“The environment could not be better. Customers favor smartphones, carriers have the motivation to do 4G services, and distributors know how to sell 4G products to people,” Yan said.
The launch of 4G services in China will definitely be a new driver for the growth of the nation’s smartphone market, he added.
“4G will be an important factor to make people buy new phones,” Yan said.
Ryan Reith, program director at IDC’s Worldwide Quarterly Mobile Phone Tracker, said that China has become one of the fastest-growing smartphone markets in the world, accounting for more than one-third of total shipments in the third quarter of the year.
China Mobile to launch all-service brand [China Daily, Nov 20, 2013]
China Mobile Ltd, the nation’s biggest telecom carrier by subscriber numbers, revealed on Tuesday that it would officially launch a new brand “He” (And) on Dec 18, mainly targeting the upcoming fourth generation (4G) mobile business.
The new brand’s logo features grass green and peach blossom colors. According to China Mobile officials, the company’s currently running brands – GoTone, EasyOwn, M-Zone and G3 for 3G mobile services – will be phased out after the launch of “He”.
That means “He” will take the stage as an all-service brand for China Mobile and provide customers with integrated 2G, 3G and 4G mobile services.
Commercial 4G to start December 18 [Shanghai Daily, Nov 25, 2013]
China will start commercial 4G mobile communications services on December 18, bringing the most advanced telecommunications technology to the country’s more than 1 billion mobile users.
China Mobile, the country’s No. 1 mobile operator with over 700 million users, will start 4G services on that date with a new brand He, meaning harmonious in the Chinese language.
China is expected to issue licences for 4G before the telco’s new services start.
“It will be a national event and users are allowed to apply for 4G services without changing numbers,” said a Shanghai Mobile official.
Users in Beijing, Guangzhou and Chongqing will be the first to enjoy commercial 4G, or fourth generation, services. Shanghai, which is still building a citywide 4G network, will launch the services later.
Though China is the world’s biggest mobile phone market with more than 1 billion users on its mainland, it lacks the 4G technology that is used in some other countries and regions including the United States, South Korea, Japan, Singapore and Hong Kong.
4G phones will rapidly become popular on China’s mainland thanks to their low cost, according to Li Yue, China Mobile’s president, who expects some 4G phones priced below 1,000 yuan (US$162) to appear in the second half of next year.
Apple Inc is also set to introduce iPhones supporting the 4G network in China, industry insiders said. The US giant and China Mobile are in negotiations over the 4G iPhone and they will launch it officially on December 18.
China Telecom and China Unicom are now Apple’s carrier partners for its smartphone on the Chinese mainland.
Apple will partner with China Mobile [CNN YouTube channel, Dec 5, 2013]
China Mobile still talking to Apple on iPhones [Reuters, Dec 5, 2013 9:27am EST]
Earlier in the day, the Wall Street Journal reported that the two giants had signed a deal, citing an anonymous source familiar with the matter.
“We are still negotiating with Apple, but for now we have nothing new to announce,” China Mobile spokeswoman Rainie Lei said, declining to elaborate. Apple also declined comment.
Moody’s: TD-LTE License Is Credit Positive for China Mobile [Moody’s Global Credit Research announcement, Dec 6, 2013]
Hong Kong, December 06, 2013 — Moody’s Investors Service says that the Chinese government’s decision to issue a Time-Division Long-Term Evolution (TD-LTE), or 4G, license, is credit positive for China Mobile Limited (Aa3 stable) as this will help strengthen its market position in the growing wireless data business.
On 4 December, China Mobile announced that the Ministry of Industry and Information Technology had granted its parent, China Mobile Communications Corporation (CMCC, unrated), permission to operate the TD-LTE business and China Mobile will assist CMCC in the construction and operations of the TD-LTE network.
China Mobile is likely to enjoy the first mover advantage in the TD-LTE business as it has been investing in the technology since early 2013, well ahead of its competitors.
China Mobile targets to build over 200,000 commercial-ready base stations and expand its network coverage to 100 major cities by the end of this year. It has already started trials in some of the major cities, including Beijing.
While its two major competitors — China United Network Communications Group Co Ltd (China Unicom, unrated) and China Telecom Corporation (unrated) — also obtained TD-LTE licenses at the same time, we expect these companies to only start major investments in 2014.
In fact, these companies plan to use Frequency Division Duplex (FDD)-LTE — an international standard used outside China — as their mainstream 4G technology. However, the FDD-LTE licenses have not yet been granted and any delay in the issuance of the licenses will be advantageous for China Mobile.
Although TD-LTE is a home-grown technology, China Mobile is unlikely to be hampered by the lack of choice in 4G handsets, as was the case with its 3G indigenous technology platform (Time Division-Code Division Multiple Access, or TD-SCDMA).
TD-LTE technology has been accepted internationally, with 59 operators and 54 manufacturers joining the global TD-LTE initiative as of H1 2013. In addition, 25 models of TD-LTE trial devices were launched and over 100 models are under development, of which 15 handsets are intended for commercial use.
Moody’s believes that Apple’s new iPhones have also become technologically compatible with TD-LTE, as well as TD-SCDMA, although China Mobile has not yet started selling iPhones.
The launch of TD-LTE is strategically important for China Mobile to strengthen its market position in the growing wireless data business.
China Mobile had about 759 million customers as of October 2013, of which 176 million were 3G customers. Its 3G subscribers are growing rapidly with over 100% growth since May 2013 on a year-over-year basis.
Moody’s expects its wireless data business to continue its solid growth. The wireless data revenue has grown 62% in H1 2013 on a year-over-year basis. In H1 2013 the business accounted for 17% of its telecommunications services revenue, up from 11% in H1 2012.
However, China Mobile’s market share in 3G services has been much smaller than its overall mobile market share, largely because of its use of TD-SCDMA. As of October 2013, despite recent improvement, its 3G market share was 45% (China Unicom 30% and China Telecom 25%), while its overall mobile market share was 62% (China Unicom 23% and China Telecom 15%).
Moody’s expects the launch of TD-LTE will help China Mobile improve its market position in the wireless data segment and slow the pace of declines in average revenue per user (ARPU), as the ARPU of data users tends to be higher.
The large investments in TD-LTE will continue to pressure China Mobile’s cash flow. Moody’s expects its adjusted free cash flow (FCF)/debt to fall to below 0% in 2013 and 2014 from over 60% in 2012.
Moody’s expects that the company’s adjusted capital expenditure as a percentage of revenue from telecommunications services will increase to over 30% in 2013 and 2014, from below 25% of its revenue in 2012.
Nevertheless, its overall credit profile will remain in line with its rating, supported by its solid overall operating and financial profiles, as well as its excellent liquidity. For example, Moody’s expects China Mobile’s adjusted debt/EBITDA to remain at approximately 0.3x.
The principal methodology used in this rating was the Global Telecommunications Industry published in December 2010. Please see the Credit Policy page on http://www.moodys.com for a copy of this methodology.
China Mobile is the leading provider of mobile telecommunications services in China, offering voice and data services in all 31 provinces and autonomous regions, as well as in Hong Kong. It is 74% owned by CMCC, which in turn is wholly owned by China’s State-owned Assets Supervision and Administration Commission.
China Telecom:
LTE/4G DIGITAL CELLULAR MOBILE SERVICE OPERATION PERMIT [China Telecom’s regulatory announcement for the Hong Kong Exchange, Dec 4, 2013]
This announcement is made pursuant to Rule 13.09 of the Rules Governing the Listing of the Securities on The Stock Exchange of Hong Kong Limited and Part XIVA of the Securities and Futures Ordinance (Cap. 571 of the Laws of Hong Kong).
The Board (the “Board”) of directors of China Telecom Corporation Limited (the “Company”) announced that the Company was notified by China Telecommunications Corporation (the parent company of the Company) that China Telecom has been granted by the Ministry of Industry and Information Technology of the PRC the permit to operate the LTE/4G digital cellular mobile service (TD-LTE). Meanwhile, China Telecom will apply for the permit to operate the LTE/4G digital cellular mobile service (LTE FDD) as soon as practicable.
In order to proactively implement national innovation strategy and leverage collaborated use of different spectrum resources to meet customers’ demand, the Company aims to adopt a flexible approach in deployment of LTE network with one hybrid network of integrated resources. The Company will flexibly deploy the LTE network with regard to data business growth and value chain development. In particular, the LTE deployment would only start from densely populated areas, overlaying on existing superior 3G network for long-term integrated operation. The Company would grasp the rapidly growing data business opportunities with an aim to better enhance customers experience and corporate return.
The Company believes that the issue of 4G digital cellular mobile service operation permit will be beneficial to the sustainable development of the telecommunications industry. It will also foster the informatisation consumption and economic growth. However, it will simultaneously intensify market competition. The Company will proactively leverage its operation edge and strive to foster the sustainable development of its business.
In the meantime, investors are advised to exercise caution in dealing in the securities of the Company.
By Order of the Board
China Telecom Corporation Limited
Wang Xiaochu
Chairman and Chief Executive Officer
From Edited Transcript of 2013 Interim Results Investor Presentation and 2013 Interim Results Presentation of Aug 21, 2013:
Slide 10: To Deploy LTE Trial Network Timely & Appropriately
To support national technology innovations and allow flexible use of spectrum resources to meet customer demand, we plan to deploy one hybrid LTE network of integrated resources, sharing the core network with wireless access through both TDD and FDD. Thus, most of the LTE network investments would support both TDD and FDD services, offering us flexibility in long term development and return enhancement.
We will continue to fully leverage existing nationwide superior 3G and fibre broadband networks to serve our customers. LTE deployment would only start from densely populated areas.
We plan to flexibly deploy LTE network with regard to future LTE licensing, data business growth & value chain development, overlaying on existing superior 3G network for long-term integrated operation to enhance customer experience & return.
China Telecom to launch TD-LTE trial network construction [Global TD-LTE Initiative Updates, Oct 25, 2013]
According to informed sources, the Ministry has recently approved China Telecom’s construction of a TD-LTE trial network and the related pre-commercial business. This means that China Telecom’s future 4G will rest on two licenses, with an integrated FDD LTE/TD-LTE network.
“China Telecom will build its 4G network using an integrated FDD LTE/TD-LTE approach,” China Telecom Chairman Mr. Wang had previously stated in public. “Since spectrum is the core resource constraining operators in the 4G era, network integration is inevitable.”
A week ago, China Telecom completed its centralized procurement tender for LTE core network (EPC) equipment. It is understood that although the bidding amount was not large, the tender covered all 31 provinces of the country, and both domestic and international mainstream equipment manufacturers (including ZTE, Huawei, Shanghai Bell and Ericsson) received a certain share, with ZTE, Huawei and Shanghai Bell winning relatively large shares.
It is understood that the winning vendors’ equipment supports FDD/TDD multi-mode networks, which also shows that China Telecom has begun preparations for the deployment of TD-LTE.
Late last year, China Telecom ran 4G trials in Shanghai, Nanjing and cities in Guangdong; these trials, however, were mainly dominated by FDD LTE networks. The Ministry’s approval indicates that China Telecom has decided to build its 4G test network as an FDD LTE/TD-LTE hybrid.
As Mr. Wang described earlier with regard to China Telecom’s 4G network planning, the large-scale, wide-coverage 4G network will use the FDD standard, while densely populated urban areas will use the TDD system; this integrated scheme will be able to meet all user needs.
In addition, China Telecom’s terminal planning shows that its 4G mobile phones will mainly be standard FDD LTE multimode phones, while its data cards will mainly use TD-LTE network resources.
China Unicom:
Announcement LTE/4G Digital Cellular Mobile Service Operation (TD-LTE) Permit [China Unicom’s regulatory announcement for the Hong Kong Exchange, Dec 4, 2013]
This announcement is made pursuant to Rule 13.09 of the Rules Governing the Listing of Securities on The Stock Exchange of Hong Kong Limited (the “Listing Rules”) and Part XIVA of the Securities and Futures Ordinance (Cap. 571).
On 4 December 2013, China Unicom (Hong Kong) Limited (the “Company”) was notified by its ultimate parent company, China United Network Communications Group Company Limited (中國聯合網絡通信集團有限公司) (“Unicom Parent”), that Unicom Parent has been granted the license to operate LTE/4G digital cellular mobile service (TD-LTE) by the Ministry of Industry and Information Technology of the People’s Republic of China (“MIIT”) on 4 December 2013. MIIT has also granted approval for Unicom Parent to license China United Network Communications Corporation Limited (中國聯合網絡通信有限公司), a wholly-owned subsidiary of the Company, to operate LTE/4G digital cellular mobile service (TD-LTE) nationwide in China.
Meanwhile, the Company will continue to proactively apply for the launch of LTE FDD technology test run. It aims to leverage on the 3G network in order to provide users with mobile broadband data services with a higher speed.
By Order of the Board
CHINA UNICOM (HONG KONG) LIMITED
CHU KA YEE
Company Secretary
From 2013 Interim Results Presentation as of Aug 8, 2013
From INTERIM REPORT 2013 as of August 8, 2013
[p. 3]
To support its sustainable growth in the future, the Company further enhanced its network capabilities with a focus on network architecture as well as mobile, broadband and transmission networks so as to strengthen its network advantages in broadband and mobile Internet. In the first half year, the Company added 33 thousand new 3G base stations, and opened HSPA+ 21Mbps services over the whole 3G network, with speed up to 42Mbps at some urban hot spot areas. The Company accelerated fiber optic deployment. Its broadband access ports increased by 19.9% year-on-year, and FTTH/B accounted for 63% of total access ports, representing an increase of 10 percentage points over the same period last year. In order to better meet the demand from HSPA+, LTE and integrated services, the Company optimised the structure and enhanced the coverage of its infrastructure and transmission networks.
From China’s telecom firms reveal 4G strategies [Xinhuanet, June 27, 2013]
… the other two smaller Chinese telecom operators – China Unicom (Hong Kong) Ltd and China Telecom Corp Ltd – have expressed their willingness to adopt the Frequency Division Duplex-Long Term Evolution, or FDD-LTE, technology, or at least to build a converged network under both standards.
TD-LTE and FDD-LTE are the two major 4G international standards, but the latter has gained more popularity across the globe and has stronger industry support.
Lu Yimin, general manager of China Unicom, said the company is conducting tests for 4G wireless networks with mixed technologies. It is the first time that China Unicom has admitted that it is actively preparing to launch 4G services.
However, Lu added that because the Chinese government has not yet awarded the 4G licenses, China Unicom’s final strategy is still “uncertain.” Lu also made the remarks at Shanghai’s Mobile Asia Expo.
Last weekend, Wang Xiaochu, China Telecom’s chairman, confirmed that the company is stepping up efforts for its LTE network trials.
“It’s inevitable (for China Telecom) to adopt a converged network, since the spectrum is at the core of every carrier’s resources,” Wang said.
China Unicom tests 4G network [China Daily via Xinhuanet, Aug 9, 2013]
China United Network Communications Co Ltd, known as China Unicom, said on Thursday that it has started testing a TD-LTE 4G network, which it will use if the government doesn’t allow it to use its favored FDD-LTE technology in the upcoming 4G licensing process.
China’s second-biggest mobile operator by subscribers is said to have taken the preemptive action because it expects the government to follow a strategy similar to its 3G licensing and first award licenses for TD-LTE networks, a technology mostly backed by its arch-rival China Mobile Ltd, which has the most subscribers in the country.
The government is widely expected to award 4G licenses before the end of the year. And if it licenses TD-LTE networks first, it will give China Mobile a big edge in the 4G market over its competitors.
After reporting a 55 percent jump in its first-half profit, Chang Xiaobing, the company’s chairman, said investment on TD-LTE technology has already started and testing will begin in major cities. Funds will come from Hong Kong-listed China Unicom, rather than from its controlling company China United Network Communications Corp Ltd, which previously funded some of China Unicom’s network tests.
“I expect Beijing to license TD-LTE first, so we have to prepare,” Chang told a news conference in Hong Kong on Thursday.
Beijing favors TD-LTE, or Time-Division Long-Term Evolution, because the network’s core technologies are developed by Chinese companies. The technology was developed specifically for the Chinese market and is expected to serve a quarter of the global market by 2016.
China Unicom’s infrastructure mainly supports FDD-LTE, or Frequency Division Duplexing Long-Term Evolution, which is the world’s dominant 4G technology. Out of the 156 commercial 4G networks operating around the world in March 2013, 142 were FDD-LTE and 14 were TD-LTE networks. China Mobile operates an FDD-LTE network in Hong Kong and is trying to integrate it with the mainland’s TD-LTE market.
Chang said China Unicom’s capital expenditure will stay within the full-year budget of 80 billion yuan (12.96 billion U.S. dollars), despite the planned investment in TD-LTE networks.
Media reports said that China Telecom Corp Ltd, the other major operator in China, will rent China Mobile’s TD-LTE 4G infrastructure. Chang refused to say if China Unicom will do the same.
China Unicom’s first-half profit surged to 5.32 billion yuan compared with 3.43 billion yuan in the same period in 2012. Revenue was up 18.6 percent to 144.3 billion yuan, boosted by a 52 percent increase in income from 3G services to 40.9 billion yuan. The company’s 3G subscribers grew a stunning 74 percent to more than 100 million.
China Unicom shares gained 2.67 percent on Thursday. Trading of the stocks was suspended in the afternoon, after the website of the State-owned Asset Supervision and Administration Commission published the company’s earnings before they were reported to the Hong Kong stock exchange. China Unicom shares surged after the disclosure at around 3:30 pm.
A China Unicom spokesman apologized for the incident and promised it won’t happen again.
China Unicom to procure TD-, FDD-LTE equipment, says report [DIGITIMES, Oct 24, 2013]
China United Network Communications (China Unicom) has started an open-bid process for procuring 34,000 FDD-LTE base stations, 10,000 TD-LTE base stations and 8,000 FDD-LTE small cells, according to China-based tech.sina.com.
Of the mobile telecom carriers in China, China Mobile has adopted TD-LTE only, while China Telecom and China Unicom have adopted FDD LTE as their main 4G standard and TD-LTE as an auxiliary in line with the China government’s policy promoting TD-LTE.
China Telecom procured about 50,000 FDD-LTE base stations and about 20,000 TD-LTE ones in the third quarter of 2013.
Xamarin: C# developers of native “business” and “mobile workforce” applications now can easily work cross-platform, for Android and iOS clients as well
… while other cross-platform applications, i.e. “applications for consumers only”, remain out of reach for C# developers because of the still high price of Xamarin, an exclusion that essentially affects indie and start-up developers only
The mobile application development technology behind this, from the cloud to the clients, was extensively covered in Windows Phone 8: getting much closer to a unified development platform with Windows 8 [‘Experiencing the Cloud’, Nov 8, 2012] post of mine (including the cross-platform possibilities with Xamarin already), and then continued in Windows Azure becoming an unbeatable offering on the cloud computing market [‘Experiencing the Cloud’, June 28, 2013] and Microsoft partners empowered with ‘cloud first’, high-value and next-gen experiences for big data, enterprise social, and mobility on wide variety of Windows devices and Windows Server + Windows Azure + Visual Studio as the platform [‘Experiencing the Cloud’, July 10, 2013] posts for the cloud part.
Note: Decide for yourself how that “consumer-only applications by indie and start-up developers” type of exclusion will affect cross-platform development needs, after you take a look at the current state of the evolution of the smartphone and tablet markets:
Details
For one of the problems solved now by Microsoft see my Obstacles for .NET on other platforms [‘Experiencing the Cloud’, Oct 15, 2013] post.
To understand the current situation, I will start with:
- Phil Haack working at GitHub “doing crazy”:
In: Cross Platform .NET Just A Lot Got Better [Haacked blog, Nov 13, 2013]
Not long ago I wrote a blog post about how platform restrictions harm .NET. This led to a lot of discussion online and on Twitter. At some point David Kean suggested a more productive approach would be to create a UserVoice issue. So I did and it quickly gathered a lot of votes.
…
Phil Haack – Customer Feedback for Microsoft http://visualstudio.uservoice.com/users/40986152-phil-haack:
Remove the platform restriction on Microsoft NuGet packages 4,929 votes
Phil Haack shared this idea and gave it 3 votes · Sep 26, 2013
COMPLETED · Visual Studio team (Product Team, Microsoft) responded
Thanks a lot for this suggestion and all the votes. We’re happy to announce that we’ve removed the Windows-only restriction from our license. We’ve applied this new license to most of our packages and will continue to use this license moving forward.
Here is our announcement:
http://blogs.msdn.com/b/dotnet/archive/2013/11/13/pcl-and-net-nuget-libraries-are-now-enabled-for-xamarin.aspx
For reference, the license for stable packages can be found here:
http://go.microsoft.com/fwlink/?LinkId=329770
Thanks,
Immo Landwerth
Program Manager, .NET Framework Team
Phil Haack commented · Nov 13, 2013
Amazing! Thanks! This is great!
Bravo!
Serious Kudos to the .NET team for this. It looks like most of the interesting PCL packages are now licensed without platform restrictions. As an example of how this small change sends out ripples of goodness, we can now make Octokit.net depend on portable HttpClient and make Octokit.net itself more cross platform and portable without a huge amount of work.
I’m also excited about the partnership between Microsoft and Xamarin this represents. I do believe C# is a great language for cross-platform development and it’s good to see Microsoft jumping back on board with this. This is a marked change from the situation I wrote about in 2012.
- then will go to S. Somasegar, Corporate Vice President of the Developer Division at Microsoft:
In: Visual Studio 2013 Launch: Announcing Visual Studio Online [Somasegar’s blog, Nov 13, 2013]
… Microsoft and Xamarin are collaborating to help .NET developers broaden the reach of their applications to additional devices, including iOS and Android …
…
Partner News
With today’s launch of Visual Studio 2013, we have 123 products from 74 partners available already as Visual Studio 2013 extensions. As part of an ecosystem of developer tools experiences, Visual Studio continues to be a platform for delivering a great breadth of developer experiences.
Xamarin
The devices and services transformation is driving developers to think about how they will build applications that reach the greatest breadth of devices and end-user experiences. We’ve offered great HTML-based cross platform development experiences in Visual Studio with ASP.NET and JavaScript. But our .NET developers have also asked us how they can broaden the reach of their applications and skills.
Today, I am excited to announce a broad collaboration between Microsoft and Xamarin. Xamarin’s solution enables developers to leverage Visual Studio, Windows Azure and .NET to further extend the reach of their business applications across multiple devices, including iOS and Android.
The collaboration between Xamarin and Microsoft brings several benefits for developers today. First, as an initial step in a technical partnership, Xamarin’s next release that is being announced today will support Portable Class Libraries, enabling developers to share libraries and components across a breadth of Microsoft and non-Microsoft platforms. Second, Professional, Premium and Ultimate MSDN subscribers will have access to exclusive benefits for getting started with Xamarin, including new training resources, extended evaluation access to Xamarin’s Visual Studio integration and special pricing on Xamarin products.
…
- followed by the Microsoft and Xamarin Partner Globally to Enable Microsoft Developers to Develop Native iOS and Android Apps With C# and Visual Studio [Xamarin press release, Nov 13, 2013]
Xamarin, the company that empowers developers to build fully native apps for iOS, Android, Windows and Mac from a single shared code base, today announced a global collaboration with Microsoft that makes it easy for mobile developers to build native mobile apps for all major platforms in Visual Studio. Xamarin is the only solution that unifies native iOS, Android and Windows app development in Visual Studio—bridging one of the largest developer bases in the world to the most successful mobile device platforms.
A highly competitive app marketplace and the consumerization of IT have put tremendous pressure on developers to deliver high quality mobile user experiences for both consumers and employees. A small bug or crash can lead to permanent app abandonment or poor reviews. Device fragmentation, with hundreds of devices on the market for iOS and Android alone, multiplies testing efforts resulting in a time-consuming and costly development process. This is further complicated by faster release cycles for mobile, necessitating more stringent and efficient regression testing.
The collaboration spans three areas:
- A technical collaboration to better integrate Xamarin technology with Microsoft developer tools and services.
Aligned with this goal, Xamarin is a SimShip partner for Visual Studio 2013, releasing same-day support for Microsoft’s latest Visual Studio release that launched today. In addition, Xamarin has released today full integration for Microsoft’s Portable Library projects in iOS and Android apps, making it easier than ever for developers to share code across devices.
- Xamarin’s recently launched Xamarin University is now free to MSDN subscribers. The training course helps developers become successful with native iOS and Android development over the course of 30 days. Classes for the $1,995 program kick off in January 2014, with a limited number of seats available at no cost for MSDN subscribers.
- MSDN subscribers have exclusive trial and pricing options to Xamarin subscriptions for individuals and teams.
Get a 90-day trial to Xamarin, sign up for Xamarin University for free (normally $1,995), and save 30-50% on Xamarin with special MSDN pricing.
All the productivity you love in Visual Studio and C#,
on iOS and Android.
“The broad collaboration between Microsoft and Xamarin which we announced today is targeted at supporting developers interested in extending their applications across multiple devices,” said S. Somasegar, Corporate Vice President, Microsoft Corporation. “With Xamarin, developers combine all of the productivity benefits of C#, Visual Studio 2013 and Windows Azure with the flexibility to quickly build for multiple device targets.”
According to Gartner, by 2016, 70 percent of the mobile workforce will have a smartphone, half of which will be purchased by the employee, and 90 percent of enterprises will have two or more platforms to support. Faced with high expectations for mobile user experiences and the pressures of BYOD, companies and developers alike are looking for scalable ways to migrate business practices and customer interactions to high-performance, native apps on multiple platforms.
To meet this need to support heterogeneous mobile environments, Microsoft and Xamarin are making it easy for developers to mobilize their existing skills and code. By standardizing mobile app development with Xamarin and C#, developers are able to share on average 75 percent of their source code across device platforms, while still delivering fully native apps. Xamarin supports 100 percent of both iOS and Android APIs—anything that can be done in Objective-C or Java can be done in C# with Xamarin.
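Insert of mine: to illustrate that claim with a minimal sketch of my own (not from the press release), here is how the familiar Android Toast API looks when called from C# with Xamarin.Android; the Java idiom Toast.makeText(...).show() maps one-to-one onto C#:

// Minimal Xamarin.Android activity (a sketch of mine): the Android Java API
// surfaced in C#, one-to-one with what a Java developer would write.
using Android.App;
using Android.OS;
using Android.Widget;

[Activity(Label = "HelloXamarin", MainLauncher = true)]
public class MainActivity : Activity
{
    protected override void OnCreate(Bundle savedInstanceState)
    {
        base.OnCreate(savedInstanceState);
        // Java: Toast.makeText(this, "Hello from C#", Toast.LENGTH_SHORT).show();
        Toast.MakeText(this, "Hello from C#", ToastLength.Short).Show();
    }
}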
In just two years, Xamarin has amassed a community of over 440,000 developers in 70 countries, more than 20,000 paying accounts and a network of over 120 consulting partners globally.
“We live in a multi-platform world, and by embracing Xamarin, Microsoft is enabling its developer community to thrive as mobile developers,” said Nat Friedman, CEO and cofounder, Xamarin. “Our collaboration with Microsoft will accelerate enterprise mobility for millions of developers.”
The groundbreaking partnership was announced as part of the Visual Studio Live 2013 launch event in New York City. In addition, Xamarin and Microsoft have teamed up with the popular podcast, .NET Rocks!, for a 20-city nationwide road show featuring live demos on how to use Visual Studio 2013, Xamarin and Windows Azure to build and scale mobile apps for iOS, Android and Windows. For a full list of cities and to sign up for an event, please visit: xamarin.com/modern-apps-roadshow
About Xamarin
Xamarin is the new standard for enterprise mobile development. No other platform enables businesses to reach all major devices—iOS, Android, Mac and Windows—with 100 percent fully native apps from a single code base. With Xamarin, businesses standardize mobile app development in C#, share on average 75 percent source code across platforms, and leverage their existing skills, teams, tools and code to rapidly deliver great apps with broad reach. Xamarin is used by over 430,000 developers from more than 100 Fortune 500 companies and over 20,000 paying customers including Clear Channel, Bosch, McKesson, Halliburton, Cognizant, GitHub, Rdio and WebMD, to accelerate the creation of mission-critical consumer and enterprise apps. For more information, please visit: xamarin.com, read our blog, and follow us on Twitter @xamarinhq.
- as well as the PCL and .NET NuGet Libraries are now enabled for Xamarin [.NET Framework Blog, Nov 13, 2013] post
Earlier today, Soma announced a collaboration between Microsoft and Xamarin. As you probably know, Xamarin’s Visual Studio extension enables developers to use VS and .NET to extend the reach of their apps across multiple devices, including iOS and Android. As part of that collaboration, today, we are announcing two releases around the .NET portable class libraries (PCLs) that support this collaboration:
- We are making portable Microsoft .NET NuGet libraries available under a new license that enables use on all platforms. This includes HttpClient, Immutable Collections, SignalR, ODataLib and several others. Beyond that, we intend to use this license going forward.
- We are also making the RTM version of the portable reference assemblies available for use on all platforms. This announcement builds on the announcement we made a month ago around the RC release of these reference assemblies.
Microsoft .NET NuGet Libraries Released
Today we released the following portable libraries with our new license, on NuGet.org:
- Async for .NET Framework 4, Silverlight 4 and 5, and Windows Phone 7.5 and 8
- Microsoft ASP.NET SignalR .NET Client
- Microsoft BCL Build Components
- Microsoft BCL Portability Pack
- Microsoft Composition
- Microsoft Compression
- Microsoft HTTP Client Libraries
- Microsoft Immutable Collections
- ODataLib
You can now start using these libraries with Xamarin tools, either directly or as the dependencies of portable libraries that you reference.
We also took the opportunity to apply the same license to Microsoft .NET NuGet libraries, which aren’t fully portable today, like Entity Framework and all of the Microsoft AspNet packages. These libraries target the full .NET Framework, so they’re not intended to be used with Xamarin’s iOS and Android tools (just like they don’t target Windows Phone or Windows Store).
These releases will enable significantly more use of these common libraries across Windows and non-Windows platforms, including in open source projects.
Cross-platform app developers can now use PCL
Portable class libraries are a great option for app developers building for Microsoft platforms in Visual Studio, to share key business functionality across Microsoft platforms. Many developers use the PCL technology today, for example, to share app logic across Windows Store and Windows Phone. Today’s announcement enables developers using Xamarin’s tools to share these libraries as well.
In Visual Studio, you’ll continue to use Portable Class Library projects but will be able to reference them from within Xamarin’s tools for VS. That means that you can write rich cross-platform libraries and take advantage of them from all of your .NET apps.
The following image demonstrates an example set of .NET NuGet library references that you can use within one of your portable libraries. The .NET NuGet libraries will enable new scenarios and great new libraries built on top of them.
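Insert of mine: a minimal sketch of what such a portable library class could look like (the class and its names are hypothetical, my own illustration). It depends only on the “Microsoft HTTP Client Libraries” NuGet package listed above, so the very same compiled assembly can be referenced from Windows Store, Windows Phone, Xamarin.iOS and Xamarin.Android projects:

// A hypothetical Portable Class Library class (a sketch of mine). Because it
// depends only on the portable HttpClient NuGet package, one compiled
// assembly serves every platform the PCL profile targets.
using System.Net.Http;
using System.Threading.Tasks;

public class PortableDownloader
{
    private static readonly HttpClient Client = new HttpClient();

    // Fetches a resource as a string; async/await works the same way
    // on Windows, Windows Phone, iOS and Android.
    public async Task<string> DownloadStringAsync(string url)
    {
        return await Client.GetStringAsync(url);
    }
}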
You can build cross-platform libraries with .NET
This announcement also benefits .NET developers writing reusable and open source libraries. You’ve probably used some of these libraries, for example Json.NET. These developers have been very vocal about wanting this change. This announcement greatly benefits those library developers, enabling them to leverage our portable libraries in their libraries.
Getting started with portable libraries and Xamarin
You can start by building portable libraries in Visual Studio, as you can see in the screenshot above. You can take advantage of the portable libraries that we released today. Write code!
You’ll need an updated NuGet client, to take advantage of this new scenario. Make sure that you are using NuGet 2.7.2 or higher, or just download the latest NuGet for your VS version from the Installing NuGet page.
We are working closely with Xamarin to ensure that our NuGet libraries work well with Xamarin tools, as well as PCL generally. Please tell us if you find any issues. We’ll get them resolved and post them to our known issues page.
Thank You
Thank you for the feedback on UserVoice. With today’s announcement, we can mark the request to Remove the platform restriction on Microsoft NuGet packages as complete. Thanks to Phil Haack for filing the issue. Coupled with our collaboration with Xamarin, .NET developers have some compelling tools, especially for targeting mobile devices.
Both Microsoft and Xamarin want to see this scenario succeed. We’d love your feedback. Please tell us how the new features are working for you.
This post was written by Rich Lander, a Program Manager on the .NET Team.
[Some] Comments
Immo Landwerth [MSFT] 13 Nov 2013 1:24 PM
Thanks a lot for the kind words!
@Curt: We absolutely understand that PCL support in Visual Studio express editions is super important to many of our developers. That’s why it’s on our list. However, I can’t promise that we actually end up delivering it in the VS 2013 time frame. As you’ve seen today, there is a lot of great stuff going on and resources are always more scarce than one would hope.
Gz 14 Nov 2013 4:19 AM
Xamarin is great but their pricing is insane! even with the MSDN discount. We’re a tiny start-up development house that has benefited from the MS BizSpark programme and we simply cannot stretch to paying out a thousand bucks per platform, per year, per developer – mobile isn’t even a revenue generator for us – it would merely be extending some functionality from our main apps to mobile and we’d give it to customers for free. I know they have a free & an indie edition blah blah blah but we wanna work in VS. The good news is that Xamarin will soon have a competitor in this space that could potentially blow them out of the water with full VS support and direct access to native APIs on each platform (iOS, Android & Mac) and their pricing will be less than 1/3rd of Xamarin’s. I’ve been sworn to secrecy about it but expect to have a cost-effective Xamarin alternative before the end of the year. (No I don’t work for the company, just got some info about it recently).
Stilgar 14 Nov 2013 8:30 AM
I second the need for PCLs in Express editions. Otherwise your company’s constant claims that the tooling for Windows 8 and Windows Phone development is free is pure hypocrisy.
- and finally end with the New and improved EULA! [WCF Data Services Blog, Nov 13, 2013] post:
TL;DR: You can now (legally) use our .NET OData client and ODataLib on Android and iOS.
Backstory
For a while now we have been working with our legal team to improve the terms you agree to when you use one of our libraries (WCF Data Services, our OData client, or ODataLib). A year and a half ago, we announced that our EULA would include a redistribution clause. With the release of WCF Data Services 5.6.0, we introduced portable libraries for two primary reasons:
- Portable libraries reduce the amount of duplicate code and #ifdefs in our code base.
- Portable libraries increase our reach through third-party tooling like Xamarin (more on that later).
It took some work to get there, and we had to make some sacrifices along the way, but we are now focused exclusively on portable libraries for client-side code. Unfortunately, our EULA still contained a clause that prevented the redistributable code from being legally used on a platform other than Windows.
OData and Xamarin: Extending developer reach to many platforms
We are really excited about Microsoft’s new collaboration with Xamarin. As Soma says, this collaboration will allow .NET developers to broaden the reach of their applications and skills. This has long been the mantra of OData – a standardized ecosystem of services and consumers that enables consumers on any platform to easily consume services developed on any platform. This collaboration will make it much easier to write a shared code base that allows consumption of OData on Windows, Android or iOS.
EULA change
To fully enable this scenario, we needed to update our EULA. We, along with several other teams at Microsoft, are rolling out a new EULA today that has relaxed the distribution requirements. Most importantly, we removed the clause that prevented redistributable code from being used on Android and iOS.
The new EULA is effective immediately for all of our NuGet packages. This means that (even though we already released 5.6.0) you can create a Xamarin project today, take a new dependency on our OData client, and legally run that application on any platform you wish.
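Insert of mine: a minimal sketch of what this now enables (my own illustration, not from the OData team’s post). It assumes the Microsoft.Data.Services.Client NuGet package and queries the public sample service at odata.org, whose Products entity set exposes ID and Name fields:

// Consuming an OData service with the WCF Data Services 5.6.0 portable
// client from a Xamarin (or any .NET) project; a sketch of mine.
using System;
using System.Linq;
using System.Data.Services.Client;

public class Product
{
    public int ID { get; set; }   // "ID" follows the client's key-naming convention
    public string Name { get; set; }
}

class ODataSample
{
    static void Main()
    {
        var context = new DataServiceContext(
            new Uri("http://services.odata.org/V3/OData/OData.svc/"));
        // CreateQuery<T> exposes an entity set as an IQueryable; the LINQ
        // Take(5) below is translated into a $top=5 OData URL option.
        foreach (var p in context.CreateQuery<Product>("Products").Take(5))
            Console.WriteLine("{0}: {1}", p.ID, p.Name);
    }
}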
Thanks
As always, we really appreciate your feedback. It frequently takes us some time to react, but the credit for this change is due entirely to customer feedback. We hear you. Keep it coming.
Thanks,
The OData Team
Q3’13 smartphone and overall mobile phone markets: Android smartphones surpassed 80% of the market, with Samsung increasing its share to 32.1% against Apple’s mere 12.1%; while Nokia achieved a strong niche market position both in “proper” (Lumia) and “de facto” (Asha Touch) smartphones
Details about Samsung’s strengths can be found inside the Samsung has unbeatable supply chain management, it is incredibly good in everything which is consumer hardware, but vulnerability remains in software and M&A [‘Experiencing the Cloud’, Nov 11, 2013] post of mine.
My findings supporting the above title:
- 205 million Android smartphones were delivered in Q3’13, representing 15.2% growth sequentially (Q/Q) and 67.3% growth relative to the same period of last year (Y/Y)
- Meanwhile the number of Apple iPhones shipped increased only to 33.8 million, growing by 8.3% sequentially (Q/Q), but still representing a 25.65% growth relative to the same period of last year (Y/Y)
- The shipment of “proper” smartphones from Nokia (S60/Symbian and Lumia/Windows Phone) increased to 8.8 million units, representing 18.9% growth sequentially (Q/Q) and 39.7% growth relative to the same period of last year (Y/Y)
- Meanwhile the shipment of “de facto” smartphones from Nokia (S60/Symbian, Lumia/Windows Phone and Asha Full Touch in S40 Series) increased to 14.7 million units, representing 25.6% growth sequentially (Q/Q) and 14.8% growth relative to the same period of last year (Y/Y). It is also important that the decline of Asha Full Touch after its peak of 9.3 million units sold in Q4’12 has been reversed with 5.9 million units shipped, representing a sizable 37.2% growth sequentially (Q/Q).
- The new (in Q3’13) Asha 501 became the most popular smartphone on the Indian market in the $60-80 price range (as per Flipkart, see above), successfully beating the best competing offerings from Samsung and the two leading local brands, Micromax and Karbonn. This is another positive sign of the successful revival of the Asha Touch platform, started with the Asha 501 (via the Asha Software Platform 1.0) as described in the New Nokia Asha platform for developers [‘Experiencing the Cloud’, May 9, 2013] and New Asha platform and ecosystem to deliver a breakthrough category of affordable smartphone from Nokia [‘Experiencing the Cloud’, May 9 – July 5, 2013] posts of mine. All of this is well represented by comparing the “micro reports” included in the bottom left corner of the overall chart a quarter ago and now:

- As one can currently see, this Nokia (the devices part of which will soon become part of Microsoft*) could realise its goal of selling “100 million of the new generation Asha smartphones over the coming years, beginning with the Nokia Asha 501”. The Asha 500, Asha 502 and Asha 503, introduced on October 22, could already deliver a huge jump in shipments of “de facto smartphones” under the Asha brand, helping to defend further and even improve Nokia’s market position against the sub-$100 Android smartphones in Q4’13. Note also that the Asha 500 was announced at a $69 list price (before taxes or subsidies), which, depending on “race to the bottom” competition, could easily mean a street price of $60+ on the Indian market.
* See also the previous posts of mine:
– Unique Nokia assets (from factories to global device distribution & sales, and the Asha sub $100 smartphone platform etc.) will now empower the One Microsoft devices and services strategy [‘Experiencing the Cloud’, Sept 3 – Oct 23, 2013]
– Microsoft answers to the questions about Nokia devices and services acquisition: tablets, Windows downscaling, reorg effects, Windows Phone OEMs, cost rationalization, ‘One Microsoft’ empowerment, and supporting developers for an aggressive growth in market share [‘Experiencing the Cloud’, Sept 3 – Oct 23, 2013]
– Microsoft Nokia Transaction Conference Call with slides from Microsoft Strategic Rationale inserted – ebook, 3-Sept-2013, edited by Sándor Nacsa from those two sources into an ebook-format PDF
– Leading edge Nokia phablets for both entertainment and productivity: Lumia 1320 targeting the masses at $339, and Lumia 1520 the imaging conscious business users and individuals at $749 [‘Experiencing the Cloud’, Oct 26, 2013]
- The Asha Touch revival was also able to stop the decline of the overall Nokia “mobile phones” category (Nokia S30, S40, Asha and Asha Full Touch phones) at exactly 55.8 million units, the same number as in Q1’13.
- In addition there are now the Leading edge Nokia phablets for both entertainment and productivity: Lumia 1320 targeting the masses at $339, and Lumia 1520 the imaging conscious business users and individuals at $749 [‘Experiencing the Cloud’, Oct 26, 2013].
- With that, Nokia established a strong niche market position on both the $130+ market (starting with the Lumia 520 sold at that price in India, also the most popular device on Flipkart in the $80-160 price range) and the sub-$80 market against the onslaught of Android devices. The rest now depends only on Microsoft.

Then, for the leading smartphone market, i.e. Mainland China, I will include here:
- China market: Smartphone sales top 93 million units in 3Q13, says Analysys [Digitimes, Nov 12, 2013]
There were 102.66 million handsets sold in the China market during the third quarter of 2013, growing 13.6% on quarter and 54.5% on year, of which 93.08 million units were smartphones, increasing 20.7% on quarter and 89.3% on year, according to China-based consulting company Analysys International.
While for the worldwide market:
- China-based smartphone vendors set to rise in 2013 rankings, says IC Insights [Digitimes, Nov 13, 2013]
Lenovo, ZTE, Huawei and Yulong/Coolpad have taken advantage of the surging low-end smartphone market. According to IC Insights, the four major China-based handset companies are forecast to ship 168 million smartphones in 2013 and together hold a 17% share of the worldwide smartphone market.
Lenovo, ZTE, Huawei and Yulong/Coolpad shipped a combined 98 million smartphones in 2012, a more than 300% surge from the 29 million units shipped in 2011, IC Insights disclosed. It should be noted that the China-based suppliers of smartphones are primarily serving the China and Asia-Pacific marketplace, and offer low-end models that typically sell for less than US$200.
Low-end smartphones are expected to represent just under one-third (310 million) of the total 975 million smartphones shipped in 2013. IC Insights forecast that by 2017, low-end smartphone shipments will represent 46% of the total smartphone market with China and the Asia-Pacific region to remain the primary markets for these low-end models.
Samsung Electronics and Apple are set to continue dominating the total smartphone market in 2013. The two vendors are forecast to ship 457 million units and together hold a 47% share of the total smartphone market in 2013, IC Insights said. In 2012, Samsung and Apple shipped 354 million smartphones and took a combined 50% share of the total smartphone market.
Nokia was the third-largest supplier of smartphones behind Samsung and Apple in 2011, but has seen its share of the smartphone market fall. Nokia’s smartphone shipments are forecast to decline by another 4% and grab only a 3% share of the total smartphone market in 2013, IC Insights indicated.
Other smartphone producers that have fallen on hard times include RIM and HTC. While each of these companies had about a 10% share of the smartphone market in 2011, IC Insights estimated they will have only about 2% shares of the 2013 smartphone market.
Gartner Says Smartphone Sales Accounted for 55 Percent of Overall Mobile Phone Sales in Third Quarter of 2013 [press release, Nov 14, 2013]
– Western Europe Grew for the First Time this Year
– Lenovo Became the No. 3 Worldwide Smartphone Vendor for the First Time
Worldwide mobile phone sales to end users totaled 455.6 million units in the third quarter of 2013, an increase of 5.7 percent from the same period last year, according to Gartner, Inc. Sales of smartphones accounted for 55 percent of overall mobile phone sales in the third quarter of 2013, and reached their highest share to date.
Worldwide smartphone sales to end users reached 250.2 million units, up 45.8 percent from the third quarter of 2012. Asia/Pacific led the growth in both markets – the smartphone segment with a 77.3 percent increase and the mobile phone segment with 11.9 percent growth. The other regions to show an increase in the overall mobile phone market were Western Europe, which returned to growth for the first time this year, and the Americas.
“Sales of feature phones continued to decline, and the decrease was more pronounced in markets where the average selling price (ASP) for feature phones was much closer to the ASP of affordable smartphones,” said Anshul Gupta, principal research analyst at Gartner. “In markets such as China and Latin America, demand for feature phones fell significantly as users rushed to replace their old models with smartphones.”
Gartner analysts said global mobile phone sales are on pace to reach 1.81 billion units in 2013, a 3.4 percent increase from 2012. “We will see several new tablets enter the market for the holiday season, and we expect consumers in mature markets will favor the purchase of smaller-sized tablets over the replacement of their older smartphones,” said Mr. Gupta.
While Samsung’s share was flat in the third quarter of 2013, Samsung increased its lead over Apple in the global smartphone market (see Table 1). The launch of the Samsung Note 3 helped reaffirm Samsung as the clear leader in the large display smartphone market, which it pioneered.
Lenovo’s sales of smartphones grew to 12.9 million units, up 84.5 percent year-on-year. It has steadily raised its share of the Chinese smartphone market.
Apple’s smartphone sales reached 30.3 million units in the third quarter of 2013, up 23.2 percent from a year ago. “While the arrival of the new iPhones 5s and 5c had a positive impact on overall sales, such impact could have been greater had they not started shipping late in the quarter. While we saw some inventory built up for the iPhone 5c, there was good demand for iPhone 5s with stock out in many markets,” said Mr. Gupta.
In the smartphone operating system (OS) market (see Table 2), Android surpassed 80 percent market share in the third quarter of 2013, which helped extend its leading position. “However, the winner of this quarter is Microsoft which grew 123 percent. Microsoft announced the intent to acquire Nokia’s devices and services business, which we believe will unify effort and help drive appeal of Windows ecosystem,” said Mr. Gupta. Forty-one percent of all Android sales were in mainland China, compared to 34 percent a year ago. Samsung is the only non-Chinese vendor in the top 10 Android players ranking in China. Whitebox Yulong [Coolpad] is the third largest Android vendor in China with a 9.7 percent market share in the third quarter of 2013. Xiaomi represented 4.3 percent of Android sales in the third quarter of 2013, up from 1.4 percent a year ago.
Mobile Phone Vendor Perspective
Samsung: Samsung extended its lead in the overall mobile phone market, as its market share totaled 25.7 percent in the third quarter of 2013 (see Table 3). “While Samsung has started to address its user experience, better design is another area where Samsung needs to focus,” said Mr. Gupta. “Samsung’s recent joint venture with carbon fiber company SGL Group could bring improvements in this area in future products.”
Nokia: Nokia did better than anticipated in the third quarter of 2013, reaching 63 million mobile phones, thanks to sales of both Lumia and Asha series devices. Increased smartphone sales, supported by an expanded Lumia portfolio, helped Nokia move up to the No. 8 spot in the global smartphone market. But regional and Chinese Android device manufacturers continued to beat market demand, taking larger share and creating a tough competitive environment for Lumia devices.
Apple: Gartner believes the price difference between the iPhone 5c and 5s is not enough in mature markets, where prices are skewed by operator subsidies, to drive users away from the top of the line model. In emerging markets, the iPhone 4S will continue to be the volume driver at the low end as the lack of subsidy in most markets leaves the iPhone 5c too highly priced to help drive further penetration.
Lenovo: Lenovo moved to the No. 7 spot in the global mobile phone market, with sales reaching approximately 13 million units in the third quarter of 2013. “Lenovo continues to rely heavily on its home market, which represents more than 95 per cent of its overall mobile phone sales. This could limit its growth after 2014, when the Chinese market is expected to decelerate,” said Mr. Gupta.
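As a quick cross-check of the Gartner headline numbers quoted above, here is a small Python sketch (my own illustration; the year-ago bases are implied from the quoted growth rates rather than Gartner-reported figures):

```python
# Cross-check of Gartner's Q3'13 headline figures (units in millions).
total_q3_13 = 455.6   # worldwide mobile phone sales to end users
smart_q3_13 = 250.2   # worldwide smartphone sales to end users

# Smartphone share of overall mobile phone sales:
share = smart_q3_13 / total_q3_13 * 100
print(f"Smartphone share: {share:.1f}%")   # ~54.9%, quoted as 55 percent

# Implied year-ago bases from the quoted Y/Y growth rates (derived, not reported):
total_q3_12 = total_q3_13 / 1.057          # overall market grew 5.7% Y/Y
smart_q3_12 = smart_q3_13 / 1.458          # smartphones grew 45.8% Y/Y
implied_share = smart_q3_12 / total_q3_12 * 100
print(f"Implied Q3'12 smartphone share: {implied_share:.1f}%")  # ~39.8%
```

The jump from an implied ~40% smartphone share a year ago to ~55% now is what makes the “highest share to date” statement in the release plausible.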
The tablet market in Q1-Q3’13: it was mainly shaped by white-box vendors, while Samsung quite successfully attacked both Apple and the white-box vendors with triple-digit growth, both worldwide and in Mainland China
Details about Samsung’s strengths can be found in the Samsung has unbeatable supply chain management, it is incredibly good in everything which is consumer hardware, but vulnerability remains in software and M&A [‘Experiencing the Cloud’, Nov 11, 2013] post of mine.
|
Note what was communicated in the 2013 global tablet forecast [Dec 11, 2012]:
|
My findings behind the title statement:
- White-box vendors from Mainland China delivered 62.6 million tablets in Q1-Q3’13 vs. 35.4 million a year ago (76.8% growth) per DIGITIMES Research
(the two latest sources used for that are included at the end)
- Apple delivered 48.2 million tablets in Q1-Q3’13 vs. 42.8 million a year ago (12.6% growth) per IDC
(the IDC sources used are the corresponding quarterly press releases)
- Samsung delivered 27.3 million tablets in Q1-Q3’13 vs. 8.7 million a year ago (214% growth) per IDC (with a H1’13 correction from Samsung itself)
- IDC’s latest forecast could not properly account for the group of white-box vendors (44.6 million in the “Others” category vs. 62.6 million), an even bigger undercount than a year ago (25.8 million in the “Others” category vs. 35.4 million)
- With such an error for Q1-Q3’13, there was a 142.6-million-strong worldwide market per IDC vs. 76.4 million a year ago (86.7% growth)
- Together, the white-box vendors, Apple and Samsung, as the market-changing vendors/vendor group, delivered 132.7 million tablets in Q1-Q3’13 vs. 86.9 million a year ago (52.7% growth)
- Meanwhile IDC’s “Others” group (with its improper inclusion of white-box vendors) delivered 49.8 million tablets in Q1-Q3’13 vs. 25.8 million a year ago (93% growth)
- Mainland China was a 4.4-million-strong tablet market in Q3’13 vs. the 44.6-million worldwide market as per IDC. Since white-box vendors sold 25 million tablets worldwide in Q3’13 (according to DIGITIMES Research) vs. only 16.8 million in IDC’s ‘Others’ category, we can safely raise the 49.8 million number by up to 10 million, to up to 60 million. This means that in the current quarter Mainland China constituted at least 8.8% of the worldwide tablet market (see the back-of-the-envelope check after this list).
- The sequential (Q/Q) growth rate on the Mainland China market per Analysys International is:
- Meanwhile, the sequential (Q/Q) growth rate on the worldwide market per IDC is:
- This means that Mainland China has much less seasonality than the worldwide market, which is a sign of greater untapped tablet demand than in other markets of the world. Considering that an unusually large group of local tablet vendors are playing the local-brand game in China, while playing the white-box game abroad, any global-brand tablet vendor should already be participating in the Mainland China market in order to succeed worldwide. Lenovo, Samsung and Microsoft have clearly recognised this:
(the two latest Analysys International sources used for that are indicated later)
- Samsung dramatically increased its market-penetration efforts in Q3’13 and succeeded quite well. In fact, it was able to push back somewhat the growth rate of the group of local brand vendors (from a 170% Q/Q growth rate in Q2’13 to 150% in Q3’13) while significantly increasing its own growth rate (from 170% to a whopping 220%).
- Therefore, if things stay as they are (see the above chart), Samsung will outgrow the local brand vendors on the Mainland China market within a year.
- Otherwise, if the group of local brand vendors is able to withstand Samsung’s local efforts and significantly improve the value of their own brands, then the outlook may return to the view that could have been forecast after Q2’13 (see the chart below):
- Meanwhile two local brands, Teclast (台电) and Onda (昂达), were each able to beat two global brands, Asus and Acer, on the Mainland China market in the last two quarters.
- The group of ‘Others’, i.e. the other local brands taken together, was able to grow at a similar rate in the last two quarters, which shows that with the ongoing consolidation of the local brands (details omitted here) a few of them may join Teclast and Onda as the strongest local vendors, with an opportunity to shed their white-box vendor status abroad (and grow globally under their own brands as well).
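The back-of-the-envelope check referred to in the findings above, as a short Python sketch (all inputs are the figures quoted in the list; nothing here is new data):

```python
# Back-of-the-envelope check of the Q1-Q3'13 tablet findings (millions of units).

def yoy(now: float, year_ago: float) -> float:
    """Year-over-year growth in percent."""
    return (now / year_ago - 1.0) * 100.0

print(f"White-box vendors: {yoy(62.6, 35.4):.1f}%")   # ~76.8% (DIGITIMES Research)
print(f"Apple:             {yoy(48.2, 42.8):.1f}%")   # ~12.6% (IDC)
print(f"Samsung:           {yoy(27.3, 8.7):.0f}%")    # ~214% (IDC)
print(f"Worldwide (IDC):   {yoy(142.6, 76.4):.1f}%")  # ~86.6%, quoted as 86.7%
print(f"'Others' (IDC):    {yoy(49.8, 25.8):.0f}%")   # ~93%

# Q3'13 undercount of white-box vendors in IDC's 'Others' category:
undercount = 25.0 - 16.8   # white-box worldwide (DIGITIMES) minus IDC 'Others'
print(f"Q3'13 white-box undercount: ~{undercount:.1f}M")  # ~8.2M, "up to 10 million"

# Mainland China's share of the Q3'13 worldwide market then falls roughly between
# 4.4 / (44.6 + 10) and 4.4 / 44.6, bracketing the "at least 8.8%" estimate above:
print(f"China share range: {4.4 / 54.6 * 100:.1f}% .. {4.4 / 44.6 * 100:.1f}%")  # ~8.1% .. ~9.9%
```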
The Q3’13 and Q2’13 Analysys International sources:
– Nov 8, 2013: http://www.enfodesk.com/SMinisite/maininfo/articledetail-id-389539.html
– Aug 28, 2013: http://www.enfodesk.com/SMinisite/maininfo/articledetail-id-376953.html
The Q3’13 and Q2’13 DIGITIMES Research sources:
- Digitimes Research: White-box tablet shipments to reach 25 million in 3Q13 [DIGITIMES Research, Nov 11, 2013]
China white-box tablet shipments reached about 25 million units in the third quarter of 2013, up 56.3% sequentially and 40.4% on year thanks to strong overseas shipments, which accounted for 80% of the total volume. Among white-box tablet shipments, 7-inch models accounted for the largest share, while 8-inch models, which were originally expected to become new star products, were unable to do so because of high costs from the bezel design and limited supply of 8-inch panels.
Although white-box tablets are expected to see extraordinary growth in 2013, they are also expected to face more obstacles and challenges in the future. First, they will see strong price competition from large brand vendors, which will offer Android-based products at price levels similar to those of white-box models. Second, the tablet market will gradually reach saturation and should no longer see demand as strong as before.
Third, white-box tablet costs have already hit the bottom margin, causing related assembly service providers and component suppliers to see limited profits. Several unhealthy players had already been eliminated from the market at the end of the second quarter, while the remaining players will need to rely on pumping up their shipments to support their profitability. However, such a strategy is unlikely to be sustainable for long, Digitimes Research noted.
Digitimes Research also found that white-box tablets in Europe or North America are mostly used as gifts in product promotions or bundling deals and therefore specifications are not as high as those of regular tablets. As for emerging markets such as Eastern Europe, Southeast Asia and Latin America, most consumers are buying white-box tablets with a single-core processor, because of limited purchasing power.
As for application processors (APs), 70% of white-box tablets with phone functions adopted solutions from MediaTek in the third quarter, replacing the solutions from China-based Allwinner, the original favorite. Digitimes Research estimates that the proportion of white-box Wi-Fi-only tablets using MediaTek’s solution will also increase dramatically starting the fourth quarter, further impacting China-based Allwinner and Rockchip’s AP shipments. In addition to low prices, China-based AP suppliers will also need to consider how to create additional value for their APs to survive the competition.
- Digitimes Research: White-box tablet shipments suffer over 25% drop in 2Q13 [DIGITIMES Research, Sept 2, 2013]
White-box tablet shipments reached only 15.9 million units in the second quarter of 2013, down 26.3% sequentially due to weakening tablet demand in May and June. Many smaller white-box players were also forced to quit the market, according to Digitimes Research’s latest figures.
Although white-box tablet shipments peaked in April 2013, increasing component costs and the fact that consumers are becoming more sensitive to tablet pricing are impacting white-box players’ profitability.
For component supply, competition among China-based chipmakers is gradually becoming fierce for both single-core and dual-core processors. In August 2013, some single-core processor prices were as low as US$5. By the end of 2013, dual-core processors will become the basic specification for entry-level white-box tablets, while mid-range models will turn completely to quad-core processors, Digitimes Research noted.
DRAM and NAND Flash remained at high price points in the second quarter of 2013, but as related players are increasing their supplies in the third quarter, prices are dropping.
As for panels, an entry-level 7-inch TN panel was priced at about US$10-11 at the beginning of the third quarter, and the price has been rising. Although the industry is seeing tight panel supply, the issue is expected to be eased as more panel players will open up new production lines to manufacture small-to-medium size panels in the first half of 2014.
White-box vendors’ over-optimism about demand in the first half created high tablet inventories for the vendors. Weak demand in Europe and North America has affected sales of both first-tier brand vendors and white-box players.
As for China, local first-tier brand vendors’ increasing sales have impacted white-box models’ demand in the country. Emerging markets such as India, Russia, countries in Eastern Europe, Latin America and Southeast Asia, are only providing limited contributions to white-box tablet players because shipments to these countries have just recently started.
Currently, strengthening their inventory management and expanding into overseas emerging markets will be important tasks for white-box tablet players to survive in the tablet market.
On the right is the Moonshot System with the very first Moonshot servers (“
