
Author Archives: Nacsa Sándor

Intel’s HPC-like exascale approach to next-gen of Big Data as well

We will need 1000x more compute (Exascale) than we have today, and we can get there via a proper exascale architecture for general-purpose computing (i.e. without the special-purpose computing approaches proposed by Intel’s competitors) – this is the latest message from Intel.

Just two recent headlines from the media:

And two more headlines reflecting another aspect of Intel’s move:

Referring to: Chip Shot: Intel Reveals More Details of Its Next Generation Intel® Xeon Phi™ Processor at SC’13 [Intel Newsroom, Nov 19, 2013]

Today at the Supercomputing Conference in Denver, Intel discussed form factors and memory configuration details of the next generation Intel® Xeon Phi™ processor (code named “Knights Landing”). The new revolutionary design will be based on the leading edge 14nm manufacturing technology and will be available as a host CPU with high-bandwidth memory on a processor package. This first-of-a-kind, highly integrated, many-core CPU will be more easily programmable for developers and improve performance by removing “off-loading” to PCIe devices, and increase cost effectiveness by reducing the number of components compared to current solutions. The company has also announced collaboration with the HPC community designed to deliver customized products to meet the diverse needs of customers, and introduced new Intel® HPC Distribution for Apache Hadoop* and Intel® Cloud Edition for Lustre* software tools to bring the benefits of Big Data analytics and HPC together. View the tech briefing.


High-bandwidth In-Package Memory:
• Performance for memory-bound workloads
• Flexible memory usage models


From: Intel Brings Supercomputing Horsepower to Big Data Analytics [press release, Nov 19, 2013]

  • Intel discloses form factors and memory configuration details of the CPU version of the next generation Intel® Xeon Phi™ processor (code named “Knights Landing”), to ease programmability for developers while improving performance.

During the Supercomputing Conference (SC’13), Intel unveiled how the next generation Intel Xeon Phi product (codenamed “Knights Landing”), available as a host processor, will fit into standard rack architectures and run applications entirely natively instead of requiring data to be offloaded to the coprocessor. This will significantly reduce programming complexity and eliminate “offloading” of the data, thus improving performance and decreasing latencies caused by memory, PCIe and networking.

Knights Landing will also offer developers three memory options to optimize performance. Unlike other Exascale concepts requiring programmers to develop code specific to one machine, new Intel Xeon Phi processors will provide the simplicity and elegance of standard memory programming models.

In addition, Intel and Fujitsu recently announced an initiative that could potentially replace a computer’s electrical wiring with fiber optic links to carry Ethernet or PCI Express traffic over an Intel® Silicon Photonics link. This enables Intel Xeon Phi coprocessors to be installed in an expansion box, separated from host Intel Xeon processors, but function as if they were still located on the motherboard. This allows for much higher density of installed coprocessors and scaling the computer capacity without affecting host server operations.

Several companies are already adopting Intel’s technology. For example, Fovia Medical*, a world leader in volume rendering technology, created high-definition, 3D models to help medical professionals better visualize a patient’s body without invasive surgery. A demonstration from the University of Oklahoma’s Center for Analysis and Prediction of Storms (CAPS) showed a 2D simulation of an F4 tornado, and addressed how a forecaster will be able to experience an immersive 3D simulation and “walk around a storm” to better pinpoint its path. Both applications use Intel® Xeon® technology.

Intel @ SC13 [HPCwire YouTube channel, Nov 22, 2013]

Intel presents technical computing solutions from SC13 in Denver, CO. [The CAPS demo is from [4:00] on]

From: Exascale Challenges and General Purpose Processors [Intel presentation, Oct 24, 2013]

CERN Talk 2013 presentation by Avinash Sodani, Chief Architect, Knights Landing Processor, Intel Corporation

The demand for high performance computing will continue to grow exponentially, driving to Exascale in 2018/19. Among the many challenges that Exascale computing poses, power and memory are two important ones. There is a commonly held belief that we need special purpose computing to meet these challenges. This talk will dispel this myth and show how general purpose computing can reach Exascale efficiencies without sacrificing the benefits of general purpose programming. It will talk about future architectural trends in Xeon-Phi and what they mean for programmers.
About the speaker
Avinash Sodani is the chief architect of the future Xeon-Phi processor from Intel called Knights Landing. Previously, Avinash was one of the primary architects of the first Core i7/i5 processor (called Nehalem). He also worked as a server architect for the Xeon line of products. Avinash has a PhD in Computer Architecture and an MS in Computer Science, both from the University of Wisconsin–Madison, and a B.Tech in Computer Science from the Indian Institute of Technology, Kharagpur in India.


Summary

  • Many challenges to reach Exascale – Power is one of them 
  • General purpose processors will achieve Exascale power efficiencies
    – Energy/op trends show a bridgeable gap of ~2x to Exascale (not 50x)
  • General purpose programming allows use of existing tools and programming methods. 
  • Effort needed to prepare SW to utilize Xeon-Phi’s full compute capability. But optimized code remains portable for general purpose processors.
  • More integration over time to reduce power and increase reliability

From: Intel Formally Introduces Next-Generation Xeon Phi “Knights Landing” [X-bit labs, Nov 19, 2013]

According to a slide from an Intel presentation that leaked to the web earlier this year, the Intel Xeon Phi code-named Knights Landing will be released sometime in late 2014 or in 2015.


The most important aspect of the Xeon Phi “Knights Landing” product is its performance, which is expected to be around or above 3 TFLOPS double precision, or 14 – 16 GFLOPS/W; up significantly from ~1 TFLOPS per current Knights Corner chip (4 – 6 GFLOPS/W). Keeping in mind that Knights Landing is 1.5 – 2 years away, a three-fold performance increase seems significant and enough to compete against its rivals. For example, Nvidia Corp.’s Kepler offers 5.7 GFLOPS/W DP performance, whereas its next-gen Maxwell (competitor for KNL) will offer something between 8 GFLOPS/W and 16 GFLOPS/W.
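As a sanity check on the efficiency figures quoted above, the ratio of peak double-precision throughput to power draw can be computed directly. The wattages below are illustrative assumptions (typical accelerator power envelopes), not Intel-published numbers:

```python
# Back-of-the-envelope check of the GFLOPS/W figures quoted above.
# The power-draw values are illustrative assumptions, not official specs.

def gflops_per_watt(peak_tflops, watts):
    """Convert peak double-precision TFLOPS and power draw to GFLOPS/W."""
    return peak_tflops * 1000 / watts

# Knights Corner: ~1 TFLOPS at an assumed ~225 W board power
knc = gflops_per_watt(1.0, 225)   # ~4.4 GFLOPS/W, inside the quoted 4-6 range

# Knights Landing: ~3 TFLOPS at an assumed ~200 W
knl = gflops_per_watt(3.0, 200)   # 15.0 GFLOPS/W, inside the quoted 14-16 range

print(f"KNC: {knc:.1f} GFLOPS/W, KNL: {knl:.1f} GFLOPS/W")
```

Under these assumed power envelopes the quoted 14 – 16 GFLOPS/W range for Knights Landing is consistent with a ~3 TFLOPS peak.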

More from: Intel Brings Supercomputing Horsepower to Big Data Analytics [press release, Nov 19, 2013]

  • New Intel® HPC Distribution for Apache Hadoop* and Intel® Cloud Edition for Lustre* software tools bring the benefits of Big Data analytics and HPC together.
  • Collaboration with HPC community designed to deliver customized products to meet the diverse needs of customers.

High Performance Computing for Data-Driven Discovery
Data intensive applications including weather forecasting and seismic analysis have been part of the HPC industry from its earliest days, and the performance of today’s systems and parallel software tools have made it possible to create larger and more complex simulations. However, with unstructured data accounting for 80 percent of all data, and growing 15 times faster than other data1, the industry is looking to tap into all of this information to uncover valuable insight.

Intel is addressing this need with the announcement of the Intel® HPC Distribution for Apache Hadoop* software (Intel® HPC Distribution) that combines the Intel® Distribution for Apache Hadoop software with Intel® Enterprise Edition of Lustre* software to deliver an enterprise-grade solution for storing and processing large data sets. This powerful combination allows users to run their MapReduce applications, without change, directly on shared, fast Lustre-powered storage, making it fast, scalable and easy to manage.
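The appeal of running MapReduce directly on a shared parallel file system is that mappers and reducers read and write one global POSIX namespace instead of HDFS block copies. A minimal, file-system-agnostic sketch of the map/reduce pattern over files in a shared directory (the directory path and helper names are hypothetical; any POSIX mount, such as a Lustre file system, would work the same way):

```python
# Minimal word-count MapReduce over plain files in a shared directory,
# illustrating the "run MapReduce directly on shared storage" idea.
# Assumes the directory contains only regular text files.
import os
from collections import Counter

def map_phase(path):
    """Emit (word, 1) pairs from one input file."""
    with open(path) as f:
        for word in f.read().split():
            yield word.lower(), 1

def reduce_phase(pairs):
    """Sum the counts per word."""
    totals = Counter()
    for word, n in pairs:
        totals[word] += n
    return totals

def word_count(shared_dir):
    """Run the map phase over every file, then reduce the combined output."""
    pairs = []
    for name in os.listdir(shared_dir):
        pairs.extend(map_phase(os.path.join(shared_dir, name)))
    return reduce_phase(pairs)
```

In a real deployment the map tasks would run in parallel across compute nodes, each reading its input split straight from the shared Lustre namespace; the sketch only shows the data flow.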

The Intel® Cloud Edition for Lustre* software is a scalable, parallel file system that is available through the Amazon Web Services Marketplace* and allows users to pay-as-you go to maximize storage performance and cost effectiveness. The software is ideally suited for dynamic applications, including rapid simulation and prototyping. In the case of urgent or unplanned work that exceeds a user’s on-premise compute or storage performance, the software can be used for cloud bursting HPC workloads to quickly provision the infrastructure needed before moving the work into the cloud.

With numerous vendors announcing pre-configured and validated hardware and software solutions featuring the Intel Enterprise Edition for Lustre, at SC’13, Intel and its ecosystem partners are bringing turnkey solutions to market to make big data processing and storage more broadly available, cost effective and easier to deploy. Partners announcing these appliances include Advanced HPC*, Aeon Computing*, ATIPA*, Boston Ltd.*, Colfax International*, E4 Computer Engineering*, NOVATTE* and System Fabric Works*.

* Other names and brands may be claimed as the property of others.

1 From IDC Digital Universe 2020 (2013)

Mark Seager: Approaching Big Data as a Technical Computing Usage Model [ieeeComputerSociety YouTube channel, recorded on October 29, published on November 12, 2013]

Mark Seager, CTO for technical computing at Intel, discusses the amazing new capabilities that are spreading across industries and reshaping the world. Watch him describe the hardware and software underlying much of the parallel processing that drives the big data revolution in his talk at the IEEE Computer Society’s “Rock Stars of Big Data” event, held 29 October 2013 at the Computer History Museum in Santa Clara, CA. Mark leads the HPC strategy for Intel’s High Performance Computing division. He is working on an ecosystem approach to develop and build HPC systems for Exascale and new storage paradigms for Big Data systems. Mark managed the Platforms portion of the Advanced Simulation and Computing (ASC) program at Lawrence Livermore National Laboratory (LLNL), where he successfully developed with industry partners and deployed five generations of TOP1 systems. In addition, Mark developed the LLNL Linux strategy and award-winning industry partnerships in storage and Linux systems development. He has won numerous awards, including the prestigious Edward Teller Award for “Major Contributions to the State-of-the-Art in High Performance Computing.”

From: Discover Your Parallel Universe [The Data Stack blog from Intel, Nov 18, 2013]

That’s Intel’s theme at SC’13 this week at the 25th anniversary of the Supercomputing Conference. We’re using it to emphasize the importance of modernizing codes and algorithms to take advantage of modern processors (think lots of cores and threads and wide vector units found in Intel Xeon processors and Intel Xeon Phi coprocessors). Or simply put, “going parallel” as we like to call it. We have a fantastic publication called Parallel Universe Magazine for more on the software and hardware side of going parallel.

But we’re also using it as inspiration for the researchers, scientists, and engineers that are changing the world every day. We’re asking them to envision the universe we’ll live in if the supercomputing community goes parallel. A few examples:

  1. In a parallel universe there is a cure
  2. In a parallel universe natural disasters are predicted
  3. In a parallel universe ideas become reality

Pretty lofty, huh? But also inevitable. We will find a 100% cure for all forms of cancer, according to the National Cancer Institute. We will be able to predict the weather 28 days in advance, according to the National Oceanic and Atmospheric Administration. And everyone will eventually use computing to turn their ideas into products.

The only problem is that it’ll be the year 2190 before we have a cure for pancreatic cancer, we’ll need 1000x more compute (Exascale) than we have today to predict the weather 28 days in advance, and the cost and learning curve of technical computing will need to continue to drop before everyone has access.

That’s our work here at Intel. We solve these problems. We drive more performance at lower cost, which gives people more compute. The more compute, the better cancer researchers will understand the disease. We’ll shift that 2190 timeline left. We’ll also solve the challenges to reaching Exascale levels of compute, which will make weather forecasts more accurate. And we’ll continue to drive open standards. This will create a broad ecosystem of hardware and software partners, which drives access on a broad scale.

From: Criteria for a Scalable Architecture 2013 OFA Developer Workshop, Monterey, CA [keynote on 2013 OpenFabrics International Developer Workshop, April 21-24, 2013]
By
Mark Seager, CTO for the HPC Ecosystem, Intel Technical Computing Group

In this video from the 2013 Open Fabrics Developer Workshop, Mark Seager from Intel presents: Criteria for a Scalable Architecture. Learn more at: https://www.openfabrics.org/press-room/2013-intl-developer-workshop.html

Exascale Systems Challenges are both Interconnect and SAN

• Design with system focus that enables end-user applications
• Scalable hardware
– Simple, Hierarchical
– New storage hierarchy with NVRAM
• Scalable Software
– Factor and solve
– Hierarchical with function shipping
• Scalable Apps
– Asynchronous comms and IO
– In-situ, in-transit and post-processing/visualization

Summary

• Integration of memory and network into the processor will help keep us on the path to Exascale
• Energy is the overwhelming challenge. We need a balanced attack that optimizes energy under real user conditions
• B:F and memory/core ratios, while they have their place, can also become impediments to progress
• Commodity interconnect can deliver scalability through improvements in bandwidth, latency and message rates

SAN: Storage Area Network     Ci: Compute nodes     NVRAM: Non-Volatile RAM
OSNj: ?Operating System and Network?    SNk: ?Storage Node?

Lustre: the dominant parallel file system for HPC and ‘Big Data’

Moving Lustre Forward: Status & Roadmap [RichReport YouTube channel, Dec 2, 2013]

In this video from the DDN User Meeting at SC13, Brent Gorda from the Intel High Performance Data Division presents: “Moving Lustre Forward: Status & Roadmap.” Learn more: http://www.whamcloud.com/about/ and http://ddn.com

Intel Expands Software Portfolio for Big Data Solutions [press release, June 12, 2013]

New Intel® Enterprise Edition for Lustre* Software Designed to Simplify Big Data Management, Storage

NEWS HIGHLIGHTS

  • Intel® Enterprise Edition for Lustre* software helps simplify configuration, monitoring, management and storage of high volumes of data.
  • With Intel® Manager for Lustre* software, Intel is able to extend the reach of Lustre into new markets such as financial services, data analytics, pharmaceuticals, and oil and gas.
  • When combined with the Intel® Distribution for Apache Hadoop* software, Hadoop users can access Lustre data files directly, saving time and resources.
  • New software offering furthers Intel’s commitment to drive new levels of performance and features through continuing contributions the open source community.

SANTA CLARA, Calif., June 12, 2013 – The amount of available data is growing at exponential rates and there is an ever-increasing need to move, process and store it to help solve the world’s most important and demanding problems. Accelerating the implementation of big data solutions, Intel Corporation announced the Intel® Enterprise Edition for Lustre* software to make performance-based storage solutions easier to deploy and manage.

Businesses and organizations of all sizes are increasingly turning to high-performance computing (HPC) technologies to store and process big data workloads due to their performance and scalability advantages. Lustre is an open source parallel distributed file system and key storage technology that ties together data and enables extremely fast access. Lustre has become the popular choice for storage in HPC environments for its ability to support tens of thousands of client systems and tens of petabytes of storage with access speeds well over 1 terabyte per second. That is equivalent to downloading all “Star Wars”* and all “Star Trek”* movies and television shows in Blu-Ray* format in one-quarter of a second.
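The movie comparison follows from simple arithmetic: at 1 TB/s, a quarter of a second moves 250 GB. A quick check (the per-disc size below is a rough assumption for illustration, not part of the press release):

```python
# Check the "all Star Wars and Star Trek on Blu-ray in a quarter second" claim.
# The average per-release size is an assumption used only for illustration.
throughput_tb_per_s = 1.0        # quoted Lustre access speed: 1 TB/s
seconds = 0.25

data_moved_gb = throughput_tb_per_s * 1000 * seconds   # 250 GB in 0.25 s

avg_bluray_gb = 20               # assumed average size of one Blu-ray release
titles_moved = data_moved_gb / avg_bluray_gb           # ~12.5 releases

print(f"{data_moved_gb:.0f} GB moved in {seconds} s, "
      f"roughly {titles_moved} Blu-ray releases")
```

At an assumed ~20 GB per release, a quarter second of 1 TB/s throughput covers a catalog of a dozen or so feature films, which is the right order of magnitude for the claim.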

“Enterprise users are looking for cost-effective and scalable tools to efficiently manage and quickly access large volumes of data to turn valuable information into actionable insight,” said Boyd Davis, vice president and general manager of Intel’s Datacenter Software Division. “The addition of the Intel Enterprise Edition for Lustre to our big data software portfolio will help make it easier and more affordable for businesses to move, store and process data quickly and efficiently.”

The Intel Enterprise Edition for Lustre software is a validated and supported distribution of Lustre featuring management tools as well as a new adaptor for the Intel® Distribution for Apache Hadoop*. This new offering provides enterprise-class reliability and performance to take full advantage of storage environments, with worldwide service, support, training and development provided by experienced Lustre engineers at Intel.

The Intel® Manager for Lustre provides a consistent view of what is happening inside the storage system regardless of where the data is stored or what type of hardware is used. This tool enables IT administrators to easily manage tasks and reporting, provides real-time system monitoring as well as the ability to quickly troubleshoot. IT departments are also able to streamline management, shorten the learning curve and lower operational expenses resulting in time and resource savings, better risk mitigation and improved business decision-making.

When paired with the Intel® Distribution for Apache Hadoop, the Intel Enterprise Edition for Lustre software allows Hadoop to be run on top of Lustre, significantly improving the speed at which data can be accessed and analyzed. This allows users to access data files directly from the global file system at faster rates and speeds up analytics time, providing more productive use of storage assets as well as simpler storage management.

As part of the company’s commitment to drive innovation and enable the open source community, Intel will contribute development and support as well as community releases to the development of Lustre. With veteran Lustre engineers and developers working at Intel contributing to the code, Lustre will continue its growth in both high-performance computing and commercial environments and is poised to enter new enterprise markets including financial services, data analytics, pharmaceuticals, and oil and gas.

The Intel Enterprise Edition for Lustre will be available early in the third quarter of this year.

Microsoft products for the Cloud OS

Part of: Microsoft Cloud OS vision, delivery and ecosystem rollout

1. The Microsoft way
2. Microsoft Cloud OS vision
3. Microsoft Cloud OS delivery and ecosystem rollout
4. Microsoft products for the Cloud OS

4.1 Windows Server 2012 R2 & System Center 2012 R2
4.2 Unlock Insights from any Data – SQL Server 2014
4.3 Unlock Insights from any Data / Big Data – Microsoft SQL Server Parallel Data Warehouse (PDW) and Windows Azure HDInsights
4.4 Empower people-centric IT – Microsoft Virtual Desktop Infrastructure (VDI)
4.5 Microsoft talking about Cloud OS and private clouds: starting with Ray Ozzie in November, 2009 (separate post)

4.5.1 Tiny excerpts from official executive and/or corporate communications
4.5.2 More official communications in details from executives and/or corporate

4.1 Windows Server 2012 R2 & System Center 2012 R2 [MPNUK YouTube channel, Nov 18, 2013]

Hosting technical training overview.

Windows Server 2012 R2: 0:00
Server Virtualization: 4:40
Storage: 11:07
Networking: 17:37
Server Management and Automation: 23:14
Web and Application Platform: 27:05

System Center 2012 R2: 31:14
Infrastructure Provisioning: 36:15
Infrastructure Monitoring: 42:48
Automation and Self-service: 45:30
Application Performance Monitoring: 48:50
IT Service Management: 51:05

More information is in the What’s New in 2012 R2 [Windows Server 2012 R2, System Center 2012 R2] series of “In the Cloud” articles by Brad Anderson:

Over the last three weeks, Microsoft has made an exciting series of announcements about its next wave of products, including Windows Server 2012 R2, System Center 2012 R2, SQL Server 2014, Visual Studio 2013, Windows Intune and several new Windows Azure services. The preview bits are now available, and the customer feedback has been incredible!

The most common reaction I have heard from our customers and partners is that they cannot believe how much innovation has been packed into these releases – especially in such a short period of time. There is a truly amazing amount of new value in these releases and, with this in mind, we want to help jump-start your understanding of the key scenarios that we are enabling.

As I’ve discussed this new wave of products with customers, partners, and press, I’ve heard the same question over and over: “How exactly did Microsoft build and deliver so much in such a short period of time?” My answer is that we have modified our own internal processes in a very specific way: We build for the cloud first.

A cloud-first design principle manifests itself in every aspect of development; it means that at every step we architect and design for the scale, security and simplicity of a high-scale cloud service. As a part of this cloud-first approach, we assembled a ‘Scenario Focus Team’ that identified the key user scenarios we needed to support – this meant that our engineers knew exactly what needed to be built at every stage of development, thus there was no time wasted debating what happened next. We knew our customers, we knew our scenarios, and that allowed all of the groups and stakeholders to work quickly and efficiently.

The cloud-first design approach also means that we build and deploy these products within our own cloud services first and then deliver them to our customers and partners. This enables us to first prove-out and battle-harden new capabilities at cloud scale, and then deliver them for enterprise use. The Windows Azure Pack is a great example of this: In Azure we built high-density web hosting where we could literally host 5,000 web servers on a single Windows Server instance. We exhaustively battle-hardened that feature, and now you can run it in your datacenters.

At Microsoft we operate more than 200 cloud services, many of which are servicing hundreds of millions of users every day. By architecting everything to deliver at that kind of scale, we are sure to meet the needs of enterprises anywhere and in any industry.

Our cloud-first approach was unique for another reason: It was the first time we had common/unified planning across Windows Client, Windows Server, System Center, Windows Azure, and Windows Intune. I know that may sound crazy, but it’s true – this is a first. We spent months planning and prioritizing the end-to-end scenarios together, with the goal of identifying and enabling all the dependencies and integration required for an effort this broad. Next we aligned on a common schedule with common engineering milestones.

The results have been fantastic. Last week, within 24 hours, we were able to release the preview bits of Windows Client 8.1, Windows Server 2012 R2, System Center 2012 R2, and SQL Server 2014.

By working together throughout the planning and build process, we established a common completion and Release to Manufacturing date, as well as a General Availability date. Because of these shared plans and development milestones, by the time we started the actual coding, the various teams were well aware of each dependency and the time to build the scenarios was much shorter.

The bottom-line impact of this Cloud-first approach is simple:  Better value, faster.

This wave of products demonstrates that the changes we’ve made internally allow us to deliver more end-to-end scenarios out of the box, with each of those scenarios delivered at a higher quality. This cloud-first approach also helps us deliver the Cloud OS vision that drives the STB business strategy.

The story behind the technologies that support the Cloud OS vision is an important part of how we enable customers to embrace cloud computing concepts. Over the next eight weeks, we’ll examine in great detail the three core pillars (see the table below) that support and inspire these R2 products: Empower People-centric IT, Transform the Datacenter, and Enable Modern Business Apps. The program managers who defined these scenarios and worked within each pillar throughout the product development process have authored in-depth overviews of these pillars and their specific scenarios, and we’ll release those on a weekly basis.

Pillar: Empower People-centric IT
Scenarios: People-centric IT (PCIT) empowers each person you support to work virtually anywhere on PCs and devices of their choice, while providing IT with an easy, consistent, and secure way to manage it all. Microsoft’s approach helps IT offer a consistent self-service experience for people, their PCs, and their devices while ensuring security. You can manage all your client devices in a single tool while reducing costs and simplifying management.

Pillar: Transform the Datacenter
Scenarios: Transforming the datacenter means driving your business with the power of a hybrid cloud infrastructure. Our goal is to help you leverage your investments, skills and people by providing a consistent datacenter and public cloud services platform, as well as products and technologies that work across your datacenter, and service provider clouds.

Pillar: Enable Modern Business Apps
Scenarios: Modern business apps live and move wherever you want, and Microsoft offers the tools and resources that deliver industry-leading performance, high availability, and security. This means boosting the impact of both new and existing applications, and easily extending applications with new capabilities – including deploying across multiple devices.

The story behind these pillars and these products is an important part of our vision for the future of corporate computing and the modern datacenter, and in the following post, David B. Cross, the Partner Director of Test and Operations for Windows Server, shares some of the insights the Windows Server & System Center team have applied during every stage of our planning, build, and deployment of this awesome new wave of products.

People want access to information and applications on the devices of their choice. IT needs to keep data protected without breaking the budget. Learn how the Microsoft People-centric IT vision helps businesses address their consumerization of IT challenges. Learn More: http://www.microsoft.com/en-us/server-cloud/cloud-os/pcit.aspx
Hear from Dell and Accenture how Microsoft Windows Server 2012 R2 and System Center 2012 R2 enable a more flexible workstyle and people-centric IT through virtual desktop infrastructure (VDI). Find solutions and services from partners that span the entire stack of Microsoft Cloud OS products and technologies: http://www.microsoft.com/en-us/server-cloud/audience/partner.aspx#fbid=631zRfiT0WJ


The modern workforce isn’t just better connected and more mobile than ever before, it’s also more discerning (and demanding) about the hardware and software used on the job. While company leaders around the world are celebrating the increased productivity and accessibility of their workforce, the exponential increase in devices and platforms that the workforce wants to use can stretch a company’s infrastructure (and IT department!) to its limit.

If your IT team is grappling with the impact and sheer magnitude of this trend, let me reiterate a fact I’ve noted several times before on this blog: The “Bring Your Own Device” (BYOD) trend is here to stay.

Building products that address this need is a major facet of the first design pillar I noted last week: People-centric IT (PCIT).

In today’s post (and in each one that follows in this series), this overview of the architecture and critical components of the PCIT pillar will be followed by a “Next Steps” section at the bottom. The “Next Steps” will include a list of new posts (each one written specifically for that day’s topic) developed by our Windows Server & System Center engineers. Every week, these engineering blogs will provide deep technical detail on the various components discussed in this main post. Today, these blogs will systematically examine and discuss the technology used to power our PCIT solution.

The PCIT solution detailed below enables IT Professionals to set access policies to corporate applications and data based on three incredibly important criteria:

  1. The identity of the user
  2. The user’s specific device
  3. The network the user is working from
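A conditional-access rule of this kind reduces to evaluating those three attributes together before granting access. A toy sketch of the idea (all names and the policy itself are hypothetical illustrations, not the actual System Center/Windows Intune model):

```python
# Toy access-policy check over the three criteria listed above:
# user identity, the user's device, and the originating network.
# This is a hypothetical policy model for illustration only.
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user: str
    device_managed: bool   # is the device enrolled in management?
    network: str           # e.g. "corporate" or "public"

def allow(req: AccessRequest, authorized_users: set) -> bool:
    """Grant access only to a known user, on a managed device, connecting
    from the corporate network (a real policy would define tiered access)."""
    return (req.user in authorized_users
            and req.device_managed
            and req.network == "corporate")

print(allow(AccessRequest("alice", True, "corporate"), {"alice"}))   # True
print(allow(AccessRequest("alice", False, "public"), {"alice"}))     # False
```

A production system would return graduated levels of access (e.g. web-only mail for unmanaged devices) rather than a plain allow/deny, but the same three inputs drive the decision.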

What’s required here is a single management solution that enables specific features where control is necessary and appropriate, and that also provides what I call “governance,” or light control when less administration is necessary. This means a single pane of glass for managing PCs and devices. Far too often I meet with companies that have two separate solutions running side-by-side – one for every PC, and the second to manage devices. Not only is this more expensive and more complex, it creates two disjointed experiences for end users and a big headache for the IT pros responsible for managing them.

In today’s post, Paul Mayfield, the Partner Program Manager for the System Center Configuration Manager/Windows Intune team, discusses how everything that Microsoft has built with this solution is focused on creating the capability for IT teams to use the same System Center Configuration Manager that they already have in place managing their PCs and now extend this management power to devices. This means double the management capabilities from within the same familiar console. This philosophy can be extended even further by using Windows Intune to manage devices where they live – i.e. cloud-based management for cloud-based devices. Cloud-based management is especially important for user-owned devices that need regular updates.

This is an incredible solution, and the benefit and ease of use for you, the consumer, is monumental.

People want access to corporate applications from anywhere, on whatever device they choose—laptop, smartphone, tablet, or PC. IT departments are challenged to provide consistent, rich experiences across all these device types, with access to native, web, and remote applications or desktops. In this video we take a look at how IT can enable people to choose their devices, reduce costs and complexity, as well as maintain security and compliance by protecting data and having comprehensive settings management across platforms.

In today’s post, we tackle a common question I get from customers: “Why move to the cloud right now?” Recently, however, this question has changed a bit to, “What should I move to the cloud first?”

An important thing to keep in mind with either of these questions is that every organization has their own unique journey to the cloud. There are a lot of different workloads that run on Windows Server, and the reality is that these various workloads are moving to the cloud at very different rates. Web servers, e-mail and collaboration are examples of workloads moving to the cloud very quickly. I believe that management, and the management of smart devices, will be one of the next workloads to make that move to the cloud – and, when the time comes, that move will happen fast.

Using a SaaS solution is a move to the cloud, and taking this approach is a game changer because of its ability to deliver an incredible amount of value and agility without an IT pro needing to manage any of the required infrastructure.

Cloud-based device management is a particularly interesting development because it allows IT pros to manage this rapidly growing population of smart, cloud-connected devices, and manage them “where they live.” Today’s smart phones and tablets were built to consume cloud services, and this is one of the reasons why I believe that a cloud-based management solution for them is so natural. As you contemplate your organization’s move to the cloud, I suggest that managing all of your smart devices from the cloud should be one of your top priorities.

I want to be clear, however, about the nature of this kind of management: We believe that there should be one consistent management experience across PCs and devices.

Achieving this single management experience was a major focus of these 2012 R2 releases, and I am incredibly proud to say we have successfully engineered products which do exactly that. The R2 releases deliver this consistent end-user experience through something we call the “Company Portal.” The Company Portal is already deployed here at Microsoft, and it is what we are currently using to upgrade our entire workforce to Windows 8.1. I’ve personally used it to upgrade my desktop, laptop, and Surface – and the process could not have been easier.

In this week’s post, Paul Mayfield, the Partner Program Manager for System Center Configuration Manager/Windows Intune, and his team return to discuss in deep technical detail some of the specific scenarios our PCIT [“People Centric IT”] team has enabled (cloud-based management, Company Portal, etc.).

Cloud computing is bringing new opportunities and new challenges to IT. Learn how Microsoft can help transform your datacenter to take advantage of the vast possibilities of the cloud while leveraging your existing resources. Learn more: http://www.microsoft.com/en-us/server-cloud/cloud-os/modern-data-center.aspx
  • Part 4, July 24, 2013: Enabling Open Source Software


There are a lot of great surprises in these new R2 releases – things that are going to make a big impact in a majority of IT departments around the world. Over the next four weeks, the 2012 R2 series will cover the 2nd pillar of this release: Transform the Datacenter. In these four posts (starting today) we’ll cover many of the investments we have made that better enable IT pros to transform their datacenter via a move to a cloud-computing model.

This discussion will outline the ambitious scale of the functionality and capability within the 2012 R2 products. As with any conversation about the cloud, however, there are key elements to consider as you read. Particularly, I believe it’s important in all these discussions – whether online or in person – to remember that cloud computing is a computing model, not a location. All too often when someone hears the term “cloud computing” they automatically think of a public cloud environment. Another important point to consider is that cloud computing is much more than just virtualization – it is something that involves change: Change in the tools you use (automation and management), change in processes, and a change in how your entire organization uses and consumes its IT infrastructure.

Microsoft’s perspective here is unique, and it is leading the industry with its investments to deliver consistency across private, hosted and public clouds. Over the course of these next four posts, we will cover our innovations in the infrastructure (storage, network, compute), in both on-premises and hybrid scenarios, support for open source, the cloud service provider & tenant experience, and much, much more.

As I noted above, it simply makes logical sense that running the Microsoft workloads in the Microsoft Clouds will deliver the best overall solution. But what about Linux? And how well does Microsoft virtualize and manage non-Windows platforms, in particular Linux?  Today we’ll address these exact questions.

Our vision regarding other operating platforms is simple: Microsoft is committed to being your cloud partner. This means end-to-end support that is versatile, flexible, and interoperable for any industry, in any environment, with any guest OS. This vision ensures we remain realistic – we know that users are going to build applications on open source operating systems, so we have built a powerful set of tools for hosting and managing them.

A great deal of the responsibility to deliver the capabilities that enable the Microsoft Clouds (private, hosted, Azure) to effectively host Linux and the associated open source applications falls heavily on the shoulders of the Windows Server and System Center team. In today’s post Erin Chapple, a Partner Group Program Manager in the Windows Server & System Center team, will detail how building the R2 wave with an open source environment in mind has led to a suite of products that are more adaptable and more powerful than ever.

As always in this series, check out the “Next Steps” at the bottom of this post for links to a variety of engineering content with hyper-technical overviews of the concepts examined in this post.

Back during the planning phase of 2012 R2, we carefully considered where to focus our investments for this release wave, and we chose to concentrate our efforts on enabling Service Providers to build out a highly-available, highly-scalable IaaS infrastructure on cost-effective hardware. With the innovations we have driven in storage, networking, and compute, we believe Service Providers can now build-out an IaaS platform that enables them to deliver VMs at 50% of the cost of competitors. I repeat: 50%. The bulk of the savings comes from our storage innovations and the low costs of our licenses.

At the core of our investments in 2012 R2 is the belief that customers are going to be using multiple clouds, and they want those clouds to be consistent.

Consistency across clouds is key to enabling flexibility and the frictionless movement of applications across those clouds; if this consistency exists, applications can be developed once and then hosted in any of them. That means consistency for the developer. And if clouds are consistent, with the same management and operations tools easily used to operate these applications, that means consistency for the IT Pro.

It really all comes down to the friction-free movement of applications and VMs across clouds. Microsoft is unique in this regard; we are the only cloud vendor investing and innovating in public, private and hosted clouds – with a promise of consistency (and no lock-in!) across all of them.

We are taking what we learn from our innovations in Windows Azure and delivering them through Windows Server, System Center and the Windows Azure Pack for you to use in your data center. This enables us to do rapid innovation in the public cloud, battle harden the innovations, and then deliver them to you to deploy. This is one of the ways in which we have been able to quicken our cadence and deliver the kind of value you see in these R2 releases. You’ll be able to see a number of areas where we are driving consistency across clouds in today’s post.

And speaking of today’s post – this IaaS topic will be published in two parts, with the second half appearing tomorrow morning.

In this first half of our two-part overview of the 2012 R2’s IaaS capabilities, Erin Chapple, a Partner Group Program Manager in the Windows Server & System Center team, examines the amazing infrastructure innovations delivered by Windows Server 2012 R2, System Center 2012 R2, and the new features in the Windows Azure Pack.

As always in this series, check out the “Next Steps” at the bottom of this post for links to a wide range of engineering content with deep, technical overviews of the concepts examined in this post.  Also, if you haven’t started your own evaluation of the 2012 R2 previews, visit the TechNet Evaluation Center and take a test drive today!

I recently had an opportunity to speak with a number of leaders from the former VMware User Group (VMUG), and it was an incredibly educational experience. I say “former” because many of the VMUG user group chapters are updating their focus/charter and are renaming themselves the Virtual Technology User Group (VTUG). This change is a direct result of how they see market share and industry momentum moving to solutions like the consistent clouds developed by Microsoft.

In a recent follow up conversation with these leaders, I asked them to describe some common topics they hear discussed in their meetings. One of the leaders commented that the community is saying something really specific: “If you want to have job security and a high paying job for the next 10 years, you better be on your way to becoming an expert in the Microsoft clouds. That is where this industry is going.” 

When I look at what is delivered in these R2 releases, the innovation is just staggering. This industry-leading innovation – the types of technical advances that VTUG groups are confidently betting on – is really exciting.

With this innovation in mind, in today’s post I want to discuss some of the work we are doing around the user experience for the teams creating the services that are offered, and I want to examine the experience that can be offered to the consumer of the cloud (i.e. the tenants). While we were developing R2, we spent a lot of time ensuring that we truly understood exactly who would be using our solutions. We exhaustively researched their needs, their motivations, and how various IT users and IT teams relate to each other. This process was incredibly important because these individuals and teams all have very different needs – and we were committed to supporting all of them.

The R2 wave of products has been built with this understanding.  The IT teams actually building and operating a cloud (or clouds) have very different needs than the individuals who are consuming the cloud (tenants).  The experience for the infrastructure teams will focus on just that – the infrastructure; the experience for the tenants will focus on the applications/services and their seamless operation and maintenance.

In yesterday’s post we focused heavily on the innovations in these R2 releases in the infrastructure – storage, network, and compute – and, in this post, Erin Chapple, a Partner Group Program Manager in the Windows Server & System Center team, will provide an in-depth look at Service Provider and Tenant experience and innovations with Windows Server 2012 R2, System Center 2012 R2, and the new features in Windows Azure Pack.

As always in this series, check out the “Next Steps” at the bottom of this post for links to a variety of engineering content with hyper-technical overviews of the concepts examined in this post.  Also, if you haven’t started your own evaluation of the 2012 R2 previews, visit the TechNet Evaluation Center and take a test drive today!

Today, people want to work anywhere, on any device and have access to all the resources they need to do their job. How do you enable your users to be productive on the device of their choice, yet retain control of information and meet compliance requirements? In this video we take a look at how the Microsoft access and information protection solutions allow you to enable your users to be productive, provide them with a single identity to access all resources, and protect your data. Learn more: http://www.microsoft.com/aip

In the 13+ years since the original Active Directory product launched with Windows 2000, it has grown to become the default identity management and access-control solution for over 95% of organizations around the world.  But, as organizations move to the cloud, their identity and access control also need to move to the cloud. As companies rely more and more on SaaS-based applications, as the range of cloud-connected devices being used to access corporate assets continues to grow, and as more hosted and public cloud capacity is used, companies must expand their identity solutions to the cloud.

Simply put, hybrid identity management is foundational for enterprise computing going forward.

With this in mind, we set out to build a solution in advance of these requirements to put our customers and partners at a competitive advantage.

To build this solution, we started with our “Cloud first” design principle. To meet the needs of enterprises working in the cloud, we built a solution that took the power and proven capabilities of Active Directory and combined it with the flexibility and scalability of Windows Azure. The outcome is the predictably named Windows Azure Active Directory.

By cloud optimizing Active Directory, enterprises can stretch their identity and access management to the cloud and better manage, govern, and ensure compliance throughout every corner of their organization, as well as across all their utilized resources.

This can take the form of seemingly simple processes (albeit very complex behind the scenes) like single sign-on, which is a massive time and energy saver for a workforce that uses multiple devices and multiple applications per person.  It can also enable the scenario where a user’s customized and personalized experience follows them from device to device, regardless of when and where they’re working. Activities like these are simply impossible without a scalable, cloud-based identity management system.
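To make the single sign-on idea concrete, here is a deliberately simplified sketch: an identity provider signs the user’s claims once, and any number of relying applications verify the token locally without prompting for credentials again. All names here are hypothetical, and real deployments (Windows Azure AD included) use standard protocols such as SAML, WS-Federation, or OAuth rather than this toy scheme.

```python
import base64
import hashlib
import hmac
import json

SECRET = b"idp-signing-key"  # held by the (hypothetical) identity provider

def issue_token(user, secret=SECRET):
    """Sign the user's claims once, at initial sign-in."""
    claims = base64.urlsafe_b64encode(json.dumps({"user": user}).encode())
    sig = hmac.new(secret, claims, hashlib.sha256).hexdigest()
    return claims.decode() + "." + sig

def validate_token(token, secret=SECRET):
    """Any relying app checks the signature locally -- no new login."""
    claims, sig = token.rsplit(".", 1)
    expected = hmac.new(secret, claims.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None  # tampered or forged token
    return json.loads(base64.urlsafe_b64decode(claims))["user"]

token = issue_token("alice")
# The same token satisfies every participating application:
assert validate_token(token) == "alice"
```

The point of the sketch is the contract, not the cryptography: authentication happens once, and trust in the issuer’s signature is what lets the experience follow the user from device to device.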

If anyone doubts how serious and enterprise-ready Windows Azure AD already is, consider these facts:

  • Since we released Windows Azure AD, we’ve had over 265 billion authentications.
  • Every two minutes Windows Azure AD services over 1,000,000 authentication requests for users and devices around the world (that’s about 9,000 requests per second).
  • There are currently more than 420,000 unique domains uploaded and now represented inside of Azure Active Directory.

Windows Azure AD is battle tested, battle hardened, and many other verbs preceded by the word “battle.”

But, perhaps even more importantly, Windows Azure AD is something Microsoft has bet its own business on: Both Office 365 (the fastest growing product in Microsoft history) and Windows Intune authenticate every user and device with Windows Azure AD.

In this post, Vijay Tewari (Principal Program Manager for Windows Server & System Center), Alex Simons (Director of Program Management for Active Directory), Sam Devasahayam (Principal Program Management Lead for Windows Azure AD), and Mark Wahl (Principal Program Manager for Active Directory) take a look at one of R2’s most innovative features, Hybrid Identity Management.

As always in this series, check out the “Next Steps” at the bottom of this post for links to a wide range of engineering content with deep, technical overviews of the concepts examined in this post.

One of the key elements in delivering hybrid cloud is networking. Learn how software-defined networking helps make hybrid real. Learn more: http://www.microsoft.com/en-us/server-cloud/solutions/software-defined-networking.aspx
[so-called Application Centric Infrastructure (ACI)] Microsoft and Cisco will deliver unique customer value through new integrated networking solutions that will combine software-enabled flexibility with hardware-enabled scale/performance. These solutions will keep apps and workloads front and center and have the network adapt to their needs. Learn more by visiting: http://www.cisco.com/web/learning/le21/onlineevts/acim/index.html

One of the foundational requirements we called out in the 2012 R2 vision document was our promise to help you transform the datacenter. A core part of delivering on that promise is enabling Hybrid IT.

By focusing on Hybrid IT we were specifically calling out the fact that almost every customer we interacted with during our planning process believed that in the future they would be using capacity from multiple clouds. That may take the form of multiple private clouds an organization had stood up, or utilizing cloud capacity from a service provider [i.e. managed cloud] or a public cloud like Azure, or using SaaS solutions running from the public cloud.

We assumed Hybrid IT would really be the norm going forward, so we challenged ourselves to understand and simplify the challenges of configuring and operating in a multi-cloud environment. Certainly one of the biggest challenges of operating in a hybrid cloud environment is the network – everything from setting up the secure connection between clouds, to ensuring you can use your own IP addresses (BYOIP) in the hosted and public clouds you choose to use.
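The idea behind BYOIP can be sketched in a few lines: each tenant keeps its own customer addresses, and the fabric maps (tenant, customer address) pairs onto unique provider addresses, so two tenants can bring identical IP ranges with them. This is an illustrative model with invented names only; Hyper-V Network Virtualization actually uses packet encapsulation (NVGRE) and distributed policy, not a flat lookup table.

```python
# Hypothetical sketch: customer address (CA) to provider address (PA)
# mapping, the core bookkeeping behind "bring your own IP".
lookup = {}

def register(tenant, customer_ip, provider_ip):
    """Record which fabric address backs a tenant's own address."""
    lookup[(tenant, customer_ip)] = provider_ip

def resolve(tenant, customer_ip):
    """Find the fabric address for a tenant-scoped IP."""
    return lookup[(tenant, customer_ip)]

# Two tenants bring the identical 10.0.0.0/24 addressing with them:
register("contoso", "10.0.0.5", "192.168.1.10")
register("fabrikam", "10.0.0.5", "192.168.1.11")

# Same customer IP, no conflict -- the tenant is part of the key:
assert resolve("contoso", "10.0.0.5") != resolve("fabrikam", "10.0.0.5")
```

Keying the mapping by tenant rather than by address alone is what removes the renumbering step that has traditionally made moving VMs between clouds so painful.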

The setup, configuration and operation of a hybrid IT environment is, by its very nature, incredibly complex – and we have poured hundreds of thousands of hours into the development of R2 to solve this industry-wide problem.

With the R2 wave of products – specifically Windows Server 2012 R2 and System Center 2012 R2 – enterprises can now benefit from the highly-available and secure connection that enables the friction-free movement of VMs across those clouds. If you want or need to move a VM or application between clouds, the transition is seamless and the data is secure while it moves.

The functionality and scalability of our support for hybrid IT deployments has not been easy to build, and each feature has been methodically tested and refined in our own datacenters. For example, consider that within Azure there are over 50,000 network changes every day, and every single one of them is fully automated. If even 1/10 of 1% of those changes had to be done manually, it would require a small army of people working constantly to implement and then troubleshoot the human errors. With R2, the success of processes like these, and our learnings from Azure, come in the box.

Whether you’re a service provider or working in the IT department of an enterprise (which, in a sense, is like being a service provider to your company’s workforce), these hybrid networking features are going to remove a wide range of manual tasks, and allow you to focus on scaling, expanding and improving your infrastructure.

In this post, Vijay Tewari (Principal Program Manager for Windows Server & System Center) and Bala Rajagopalan (Principal Program Manager for Windows Server & System Center) provide a detailed overview of 2012 R2’s hybrid networking features, as well as solutions for common scenarios like enabling customers to create extended networks spanning clouds, and enabling access to virtualized networks.

Don’t forget to take a look at the “Next Steps” section at the bottom of this post, and check back tomorrow for the second half of this week’s hybrid IT content which will examine the topic of Disaster Recovery.

As business becomes more dependent on technology, business continuity becomes increasingly vital for IT. Learn how Microsoft is making it easier to build out business continuity plans. Learn more: http://www.microsoft.com/en-us/server-cloud/solutions/business-continuity.aspx

With Windows Server 2012 R2, with Hyper-V Replica, and with System Center 2012 R2 we have delivered a DR solution for the masses.

This DR solution is a perfect example of how the cloud changes everything.

Windows Azure offers a global, highly available cloud platform with an application architecture that takes full advantage of its HA capabilities, so you can build an app on Azure that will be available anytime and anywhere.  This kind of functionality is why we made the decision to build the control plane, or administrative console, for our DR solution on Azure. The control plane and all the metadata required to perform a test, planned, or unplanned recovery will always be available.  This means you don’t have to make the huge investments that have been required in the past to build a highly-available platform to host your DR solution – Azure provides this automatically.

(Let me make a plug here that you should be looking to Azure for all the new applications you are going to build – and we’ll start covering this specific topic in next week’s R2 post.)

With this R2 wave of products, organizations of all sizes and maturity, anywhere in the world, can now benefit from a simple and cost-effective DR solution.

There’s another thing that I am really proud of here: Like most organizations, we regularly benchmark ourselves against our competition.   We use a variety of metrics, like: ‘Are we easier to deploy and operate?’ and ‘Are we delivering more value and doing it at a lower price?’  Measurements like these have provided a really clear answer: Our competitors are not even in the same ballpark when it comes to DR.

During the development of R2, I watched a side-by-side comparison of what was required to set up DR for 500 VMs with our solution compared to a competitive offering, and the contrast was staggering. The difference in simplicity and the total amount of time required to set everything up was dramatic.  In a DR scenario, one interesting unit of measurement is total mouse clicks. It’s easy to get carried away with counting clicks (hey, we’re engineers after all!), but, in the side-by-side comparison, the difference was tens of mouse clicks compared to hundreds. It is literally a difference of minutes vs. days.

You can read some additional perspectives I’ve shared on DR here.

In yesterday’s post we looked at the new hybrid networking functionality in R2 (if you haven’t seen it yet, it is a must-read), and in this post Vijay Tewari (Principal Program Manager for Windows Server & System Center) goes deep into the architecture of this DR solution, as well as this solution’s deployment and operating principles.

As always in this 2012 R2 series, check out the “Next Steps” at the bottom of this post for links to a variety of engineering content with hyper-technical overviews of the concepts examined in this post.

A revolution is taking place, impacting the speed at which Business Apps need to be built, and the jaw dropping capabilities they need to deliver. Ignoring these trends isn’t an option and yet you have no time to hit the reset button. Learn how to deliver revolutionary benefits in an evolutionary way. Learn More: http://www.microsoft.com/en-us/server-cloud/cloud-os/modern-business-apps.aspx
Hear from Accenture and Hostway how Microsoft Windows Azure enables the development and deployment of modern business applications faster and more cost effectively through cloud computing. Find solutions and services from partners that span the entire stack of Microsoft Cloud OS products and technologies: http://www.microsoft.com/en-us/server-cloud/audience/partner.aspx#fbid=631zRfiT0WJ


Knowing how applications are built for the cloud, as well as the cloud infrastructures where these apps operate, is something every IT Pro needs in order to have a voice in the meetings that will define an organization’s cloud strategy. IT pros are also going to need to know how their team fits in this cloud-centric model, as well as how to proactively drive these discussions.

These R2 posts will get you what you need, and this “Enable Modern Business Apps” pillar will be particularly helpful.

Throughout the posts in this series we have spoken about the importance of consistency across private, hosted and public clouds, and we’ve examined how Microsoft is unique in its vision and execution of delivering consistent clouds. The Windows Azure Pack is a wonderful example of Microsoft innovating in the public cloud and then bringing the benefits of that innovation to your datacenter.

The Windows Azure Pack is – literally speaking – a set of capabilities that we have battle-hardened and proven in our public cloud. These capabilities are now made available for you to enhance your cloud and ensure that “consistency across clouds” that we believe is so important.

A major benefit of the Windows Azure Pack is the ability to build an application once and then deploy and operate it in any Microsoft Cloud – private, hosted or public.

This kind of flexibility means that you can build an application, initially deploy it in your private cloud, and then, if you want to move that app to a Service Provider or Azure in the future, you can do it without having to modify the application. Making tasks like this simple is a major part of our promise around cloud consistency, and it is something only Microsoft (not VMware, not AWS) can deliver.

This ability to migrate an app between these environments means that your apps and your data are never locked in to a single cloud. This allows you to easily adjust as your organization’s needs, regulatory requirements, or any operational conditions change.

A big part of this consistency and connection is the Windows Azure Service Bus which will be a major focus of today’s post.

The Windows Azure Service Bus has been a big part of Windows Azure since 2010. I don’t want to overstate this, but Service Bus has been battle-hardened in Azure for more than 3 years, and now we are delivering it to you to run in your datacenters. To give you a quick idea of how critical Service Bus is for Microsoft, consider this: Service Bus is used in all the billing for Windows Azure, and it is responsible for gathering and posting all the scoring and achievement data to the Halo 4 leaderboards (now that is really, really important – just ask my sons!). It goes without saying that the people in charge of Azure billing and the hardcore gamers are not going to tolerate any latency or downtime getting to their data.
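The value Service Bus delivers in scenarios like billing and leaderboards comes from brokered messaging: publishers and subscribers never talk to each other directly, so either side can scale, fail, or go offline independently. The toy below sketches the topics-and-subscriptions pattern in memory; it is illustrative only, with invented names, and the real Service Bus is a durable, distributed broker with its own SDK and delivery guarantees.

```python
from collections import defaultdict, deque

class Topic:
    """In-memory stand-in for a brokered topic with named subscriptions."""
    def __init__(self):
        self.subscriptions = defaultdict(deque)

    def subscribe(self, name):
        self.subscriptions[name]  # create an empty subscription queue

    def publish(self, message):
        for queue in self.subscriptions.values():
            queue.append(message)  # every subscription gets a copy

    def receive(self, name):
        return self.subscriptions[name].popleft()

# One game event fans out to two independent consumers:
scores = Topic()
scores.subscribe("leaderboard")
scores.subscribe("billing")
scores.publish({"player": "p1", "score": 9001})

assert scores.receive("leaderboard")["score"] == 9001
assert scores.receive("billing")["player"] == "p1"
```

Because messages sit in the broker until each subscriber pulls them, a slow billing pipeline never delays the leaderboard, which is exactly the decoupling that makes this pattern suitable for latency-sensitive workloads.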

With today’s topic, take the time to really appreciate the app development and app platform functionality in this R2 wave. I think you’ll be really excited about how you can plug into this process and lead your organization.

This post, written by Bradley Bartz (Principal Program Manager from Windows Azure) and Ziv Rafalovich (Senior Program Manager in Windows Azure), will get deep into these new features and the amazing scenarios that the Windows Azure Pack and Windows Azure Service Bus enable. As always in this 2012 R2 series, check out the “Next Steps” at the bottom of this post for links to additional information about the topics covered here.

A major promise underlying all of the 2012 R2 products is really simple: Consistency.

Consistency in the user experiences, consistency for IT professionals, consistency for developers and consistency across clouds. A major part of delivering this consistency is the Windows Azure Pack (WAP). Last week we discussed how Service Bus enables connections across clouds, and in this post we’ll examine more of the PaaS capabilities built and tested in Azure data centers and now offered for Windows Server. With the WAP, Windows Server 2012 R2, and System Center IT pros can make their data center even more scalable, flexible, and secure.

Throughout the development of this R2 wave, we looked closely at what organizations needed and wanted from the cloud. A major piece of feedback was the desire to build an app once and then have that app live in any data center or cloud. For the first time this kind of functionality is now available. Whether your app is in a private, public, or hosted cloud, the developers and IT Professionals in your organization will have consistency across clouds.

One of the elements that I’m sure will be especially popular is the flexibility and portability of this PaaS. I’ve had countless customers comment that they love the idea of PaaS, but don’t want to be locked-in or restricted to only running it in specific data centers. Now, our customers and partners can build a PaaS app and run it anywhere. This is huge! Over the last two years the market has really begun to grasp what PaaS has to offer, and now the benefits (auto-scale, agility, flexibility, etc.) are easily accessible and consistent across the private, hosted and public clouds Microsoft delivers.

This post will spend a lot of time talking about Web Sites for Windows Azure and how this high density web site hosting delivers a level of power, functionality, and consistency that is genuinely next-gen.

Microsoft is literally the only company offering these kinds of capabilities across clouds – and I am proud to say that we are the only ones with a sustained track record of enterprise-grade execution.

With the features added by the WAP [Windows Azure Pack], organizations can now take advantage of PaaS without being locked into a cloud. This is, at its core, the embodiment of Microsoft’s commitment to make consistency across clouds a workable, viable reality.

This is genuinely PaaS for the modern web.

Today’s post was written by Bradley Bartz, a Principal Program Manager from Windows Azure. For more information about the technology discussed here, or to see demos of these features in action, check out the “Next Steps” at the bottom of this post.

More information: in the Success with Hybrid Cloud series blog posts [Brad Anderson, Nov 12, Nov 14, Nov 20, Dec 2, Dec 5, and 21 upcoming blogs posts] which “will examine the building/deployment/operation of Hybrid Clouds, how they are used in various industries, how they manage and deliver different workloads, and the technical details of their operation.”


4.2 Unlock Insights from any Data – SQL Server 2014:

With growing demand for data, you need database scale with minimal cost increases. Learn how SQL Server 2014 provides speed and scalability with in-memory technologies to support your key data workloads, including OLTP, data warehousing, and BI. Learn more: http://www.microsoft.com/sqlserver2014
Hosting technical training overview.

Microsoft SQL Server 2014 CTP2 was announced by Quentin Clark during the Microsoft SQL PASS 2013 keynote.  This second public CTP is essentially feature complete and enables you to try and test all of the capabilities of the full SQL Server 2014 release. Below you will find an overview of SQL Server 2014 as well as key new capabilities added in CTP2:

SQL Server 2014 helps organizations by delivering:

  • Mission Critical Performance across all database workloads with In-Memory for online transaction processing (OLTP), data warehousing and business intelligence built-in as well as greater scale and availability
  • Platform for Hybrid Cloud enabling organizations to more easily build, deploy and manage database solutions that span on-premises and cloud
  • Faster Insights from Any Data with a complete BI solution using familiar tools like Excel

Thank you to those who have already downloaded SQL Server 2014 CTP1 and started seeing firsthand the performance gains that the in-memory capabilities deliver, along with better high availability from the AlwaysOn enhancements.  CTP2 introduces additional mission critical capabilities with further enhancements to the in-memory technologies along with new hybrid cloud capabilities.

What’s new in SQL Server 2014 CTP2?

New Mission Critical Capabilities and Enhancements

  • Enhanced In-Memory OLTP, including new tools that will help you identify and migrate the tables and stored procedures that will benefit most from In-Memory OLTP, as well as greater T-SQL compatibility and new indexes that enable more customers to take advantage of our solution.
  • High Availability for In-Memory OLTP Databases:  AlwaysOn Availability Groups are supported for In-Memory OLTP, giving you in-memory performance gains with high availability.
  • IO Resource Governance, enabling customers to more effectively manage IO across multiple databases and/or classes of databases to provide more predictable IO for your most critical workloads.  Customers can already manage CPU and memory today.
  • Improved resiliency with Windows Server 2012 R2 by taking advantage of Cluster Shared Volumes (CSVs).  CSVs provide improved fault detection and recovery in the case of downtime.
  • Delayed Durability, providing the option for increased transaction throughput and lower latency for OLTP applications where performance and latency needs outweigh the need for 100% durability.
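As a rough sketch of how Delayed Durability surfaces in T-SQL (the database, table, and column names below are hypothetical), the option can be allowed at the database level and then requested per commit:

```sql
-- Allow delayed durable transactions in a (hypothetical) database.
ALTER DATABASE SalesDB SET DELAYED_DURABILITY = ALLOWED;

BEGIN TRANSACTION;
    UPDATE dbo.Orders SET Status = 'Shipped' WHERE OrderId = 42;
-- The commit returns before the log records are hardened to disk,
-- trading a small window of potential data loss for lower latency.
COMMIT TRANSACTION WITH (DELAYED_DURABILITY = ON);
```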

New Hybrid Cloud Capabilities and Enhancements

By enabling the above in-memory performance capabilities for your SQL Server instances running in Windows Azure Virtual Machines, you will see significant transaction and query performance gains.  In addition there are new capabilities listed below that will allow you to unlock new hybrid scenarios for SQL Server.

  • Managed Backup to Windows Azure, enabling you to backup on-premises SQL Server databases to Windows Azure storage directly in SSMS.  Managed Backup also optimizes backup policy based on usage, an advantage over the manual Backup to Windows Azure.
  • Encrypted Backup, offering customers the ability to encrypt both on-premises backups and backups to Windows Azure for enhanced security.
  • Enhanced disaster recovery to Windows Azure with a simplified UI, enabling customers to more easily add Windows Azure Virtual Machines as AlwaysOn secondaries in SQL Server Management Studio for a more cost-effective data protection and disaster recovery solution.  Customers may also use the secondaries in Windows Azure to scale out and offload reporting and backups.
  • SQL Server Data Files in Windows Azure – New capability to store large databases (>16TB) in Windows Azure and the ability to stream the database as a backend for SQL Server applications running on-premises or in the cloud.
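To sketch how the backup capabilities above look in T-SQL (the storage account, container, credential, and certificate names here are hypothetical), a database can be backed up directly to Windows Azure blob storage with encryption:

```sql
-- Credential holding the (hypothetical) storage account name and access key.
CREATE CREDENTIAL AzureBackupCredential
    WITH IDENTITY = 'mystorageaccount',
         SECRET = '<storage account access key>';

-- Compressed, encrypted backup directly to Windows Azure blob storage.
BACKUP DATABASE SalesDB
    TO URL = 'https://mystorageaccount.blob.core.windows.net/backups/SalesDB.bak'
    WITH CREDENTIAL = 'AzureBackupCredential',
         COMPRESSION,
         ENCRYPTION (ALGORITHM = AES_256,
                     SERVER CERTIFICATE = BackupEncryptCert);
```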

Learn more and download SQL Server 2014 CTP2

SQL Server 2014 helps address key business challenges of ever growing data volumes, the need to transact and process data faster, the scalability and efficiency of cloud computing and an ever growing hunger for business insights.   With SQL Server 2014 you can now unlock real-time insights with mission critical and cloud performance and take advantage of one of the most comprehensive BI solutions in the marketplace today.

Many customers are already realizing the significant benefits of the new in-memory technologies in SQL Server 2014 including: Edgenet, Bwin, SBI Liquidity, TPP and Ferranti.  Stay tuned for an upcoming blog highlighting the impact in-memory had on each of their businesses.

Learn more about SQL Server 2014 and download the datasheet and whitepapers here.  Also if you would like to learn more about SQL Server In-Memory best practices, check out this SQL Server 2014 in-memory blog series compilation. There is also a SQL Server 2014 hybrid cloud scenarios blog compilation for learning best practices.

Also, if you haven’t already, download SQL Server 2014 CTP2 and see how much faster your SQL Server applications run!  The CTP2 image is also available on Windows Azure, so you can easily develop and test the new features of SQL Server 2014.

To ensure that its customers received timely, accurate product data, Edgenet decided to enhance its online selling guide with In-Memory OLTP in Microsoft SQL Server 2014.

At the SQL PASS conference last November, we announced the In-memory OLTP (project code-named Hekaton) database technology built into the next release of SQL Server. Microsoft Technical Fellow Dave Campbell’s blog provides a broad overview of the motivation and design principles behind the project.

In a nutshell – In-memory OLTP is a new database engine optimized for memory resident data and OLTP workloads. In-memory OLTP is fully integrated into SQL Server – not a separate system. To take advantage of In-memory OLTP, a user defines a heavily accessed table as memory optimized. In-memory OLTP tables are fully transactional, durable and accessed using T-SQL in the same way as regular SQL Server tables. A query can reference both In-memory OLTP tables and regular tables, and a transaction can update data in both types of tables. Expensive T-SQL stored procedures that reference only In-memory OLTP tables can be natively compiled into machine code for further performance improvements. The engine is designed for extremely high session concurrency for OLTP type of transactions driven from a highly scaled-out mid-tier. To achieve this it uses latch-free data structures and a new optimistic, multi-version concurrency control technique. The end result is a selective and incremental migration into In-memory OLTP to provide predictable sub-millisecond low latency and high throughput with linear scaling for DB transactions. The actual performance gain depends on many factors but we have typically seen 5X-20X in customer workloads.
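As an illustrative sketch of the paragraph above (all object names are hypothetical, and the database is assumed to already have a MEMORY_OPTIMIZED_DATA filegroup), a memory optimized table and a natively compiled stored procedure look like ordinary T-SQL with a few extra options:

```sql
-- Memory optimized table: fully durable, with a hash index sized for
-- the expected number of distinct keys.
CREATE TABLE dbo.ShoppingCart (
    CartId   INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    UserId   INT NOT NULL,
    Created  DATETIME2 NOT NULL
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
GO

-- Natively compiled stored procedure referencing only memory optimized
-- tables; the body is compiled into machine code at creation time.
CREATE PROCEDURE dbo.AddCart @CartId INT, @UserId INT
    WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT,
                   LANGUAGE = N'us_english')
    INSERT INTO dbo.ShoppingCart (CartId, UserId, Created)
    VALUES (@CartId, @UserId, SYSUTCDATETIME());
END;
```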

In the SQL Server product group, many years ago we started the investment of reinventing the architecture of the RDBMS engine to leverage modern hardware trends. This resulted in PowerPivot and In-memory ColumnStore Index in SQL2012, and In-memory OLTP is the new addition for OLTP workloads we are introducing for SQL2014 together with the updatable clustered ColumnStore index and (SSD) bufferpool extension. It has been a long and complex process to build this next generation relational engine, especially with our explicit decision of seamlessly integrating it into the existing SQL Server instead of releasing a separate product – in the belief that it provides the best customer value and onboarding experience.
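For the data warehousing side mentioned above, the updatable clustered ColumnStore index in SQL2014 is created with a single statement (the fact table name here is hypothetical):

```sql
-- Convert a (hypothetical) fact table to columnstore storage; unlike the
-- read-only nonclustered columnstore index in SQL2012, the table remains
-- fully updatable after the index is created.
CREATE CLUSTERED COLUMNSTORE INDEX CCI_FactSales
    ON dbo.FactSales;
```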

Now we are releasing SQL2014 CTP1 as a public preview, it’s a great opportunity for you to get hands-on experience with this new technology and we are eager to get your feedback and improve the product. In addition to BOL (Books Online) content, we will roll out a series of technical blogs on In-memory OLTP to help you understand and leverage this preview release effectively.

In the upcoming series of blogs, you will see the following in-depth topics on In-memory OLTP:

  • Getting started – to walk through a simple sample database application using In-memory OLTP so that you can start experimenting with the public CTP release.
  • Architecture – to understand at a high level how In-memory OLTP is designed and built into SQL Server, and how the different concepts like memory optimized tables, native compilation of SPs and query inter-op fit together under the hood.
  • Customer experiences so far – we have had many TAP customer engagements over the past two years and their feedback helped to shape the product, and we would like to share with you some of the learnings and customer experiences, such as typical application patterns and performance results.
  • Hardware guidance – it is apparent that memory size is a factor, but since most applications require full durability, In-memory OLTP still requires log and checkpointing IO, and with the much higher transactional throughput, it can actually put even higher demand on the IO subsystem as a result. We will also cover how Windows Azure VMs can be used with In-memory OLTP.
  • Application migration – how to get started with migrating to or building a new application with In-memory OLTP. You will see multiple blog posts covering the AMR tool, Table and SP migrations and pointers on how to work around some unsupported data types and T-SQL surface area, as well as the transactional model used. We will highlight the unique approach on SQL server integration which supports a partial database migration.
  • Managing In-memory OLTP – this will cover the DBA considerations, and you will see multiple posts ranging from tooling support (SSMS) to more advanced topics such as how memory and storage are managed.
  • Limitations and what’s coming – explain what limitations exist in CTP1 and new capabilities expected to be coming in CTP2 and RTM, so that you can plan your roadmap with clarity.

In addition, we will also have blog coverage on what’s new with In-memory ColumnStore and an introduction to the bufferpool extension.

SQL2014 CTP1 is available for download here, or you can read the complete blog series here.

bwin is the largest regulated online gaming company in the world, and their success depends on positive customer experiences. They had recently upgraded some of their systems to SQL Server 2012, gaining significant in-memory benefit using xVelocity Column Store. Here, bwin takes their systems one step further by using the technology preview of SQL Server 2014 In-memory OLTP (formerly known as Project “Hekaton”). Prior to using In-memory OLTP, their online gaming systems were handling about 15,000 requests per second. Using In-memory OLTP, the fastest tests so far have scaled to 250,000 transactions per second.

Recently I posted a video about how the SQL Server Community was looking into emerging trends in BI and Database technologies – one of the key technologies mentioned in that video was in-memory.

Many Microsoft customers have been using in-memory technologies as part of SQL Server since 2010 including xVelocity Analytics, xVelocity Column Store and Power Pivot, something we recently covered in a blog post following the ‘vaporware’ outburst from Oracle SVP of Communications, Bob Evans. Looking forward, Ted Kummert recently announced project codenamed “Hekaton,” available in the next major release of SQL Server. “Hekaton” will provide a full in-memory transactional engine, and is currently in private technology preview with a small set of customers. This technology will provide breakthrough performance gains of up to 50 times.

For those who are keen to get a first view of customers using the technology, below is the video of online gaming company bwin using “Hekaton”.

Bwin is the largest regulated online gaming company in the world, and their success depends on positive customer experiences. They had recently upgraded some of their systems to SQL Server 2012 – a story you can read here. Bwin had already gained significant in-memory benefit using xVelocity Column Store, for example – a large report that used to take 17 minutes to render now takes only three seconds.

Given the benefits they had seen with in-memory technologies, they were keen to trial the technology preview of “Hekaton”. Prior to using “Hekaton”, their online gaming systems were handling about 15,000 requests per second, a huge number for most companies. However, bwin needed to be agile and stay ahead of the competition, and so they wanted access to the speed of the latest technology.

Using “Hekaton”, bwin hoped to at least double the number of transactions. They were ‘pretty amazed’ to see that the fastest tests so far have scaled to 250,000 transactions per second.

So how fast is “Hekaton” – just ask Rick Kutschera, the Database Engineering Manager at bwin – in his words it’s ‘Wicked Fast’! However, this is not the only point that Rick highlights, he goes on to mention that “Hekaton” integrates seamlessly into the SQL Server engine, so if you know SQL Server, you know “Hekaton”.

— David Hobbs-Mallyon, Senior Product Marketing Manager

Quentin Clark
Corporate Vice President, Data Platform Group

This morning, during my keynote at the Professional Association of SQL Server (PASS) Summit 2013, I discussed how customers are pushing the boundaries of what’s possible for businesses today using the advanced technologies in our data platform. It was my pleasure to announce the second Community Technology Preview (CTP2) of SQL Server 2014 which features breakthrough performance with In-Memory OLTP and simplified backup and disaster recovery in Windows Azure.

Pushing the boundaries

We are pushing the boundaries of our data platform with breakthrough performance, cloud capabilities and the pace of delivery to our customers. Last year at PASS Summit, we announced our In-Memory OLTP project “Hekaton” and since then released SQL Server 2012 Parallel Data Warehouse and public previews of Windows Azure HDInsight and Power BI for Office 365. Today we have SQL Server 2014 CTP2, our public and production-ready release shipping a mere 18 months after SQL Server 2012. 

Our drive to push the boundaries comes from recognizing that the world around data is changing.

  • Our customers are demanding more from their data – higher levels of availability as their businesses scale and globalize, major advancements in performance to align to the more real-time nature of business, and more flexibility to keep up with the pace of their innovation. So we provide in-memory, cloud-scale, and hybrid solutions. 
  • Our customers are storing and collecting more data – machine signals, devices, services and data from outside even their organizations. So we invest in scaling the database and a Hadoop-based solution. 
  • Our customers are seeking the value of new insights for their business. So we offer them self-service BI in Office 365 delivering powerful analytics through a ubiquitous product and empowering users with new, more accessible ways of gaining insights.

In-memory in the box for breakthrough performance

A few weeks ago, one of our competitors announced plans to build an in-memory column store into their database product some day in the future. We shipped similar technology two years ago in SQL Server 2012, and have continued to advance that technology in SQL Server 2012 Parallel Data Warehouse and now with SQL Server 2014. In addition to our in-memory columnar support in SQL Server 2014, we are also pushing the boundaries of performance with in-memory online transaction processing (OLTP). A year ago we announced project “Hekaton,” and today we have customers realizing performance gains of up to 30x. This work, combined with our early investments in Analysis Services and Excel, means Microsoft is delivering the most complete in-memory capabilities for all data workloads – analytics, data warehousing and OLTP. 

We do this to allow our customers to make breakthroughs for their businesses. SQL Server is enabling them to rethink how they can accelerate and exceed the speed of their business.

image

  • TPP is a clinical software provider managing more than 30 million patient records – half the patients in England – including 200,000 active registered users from the UK’s National Health Service.  Their systems handle 640 million transactions per day, peaking at 34,700 transactions per second. They tested a next-generation version of their software with the SQL Server 2014 in-memory capabilities, which has enabled their application to run seven times faster than before – all of this done and running in half a day. 
  • Ferranti provides solutions for the energy market worldwide, collecting massive amounts of data using smart metering. With our in-memory technology they can now process a continuous data flow from up to 200 million measurement channels, making the system fully capable of meeting the demands of smart meter technology.
  • SBI Liquidity Market in Japan provides online services for foreign currency trading. By adopting SQL Server 2014, the company has increased throughput from 35,000 to 200,000 transactions per second. They now have a trading platform that is ready to take on the global marketplace.

A closer look into In-memory OLTP

Previously, I wrote about the journey of the in-memory OLTP project Hekaton, where a group of SQL Server database engineers collaborated with Microsoft Research. Changes in the ratios between CPU performance, IO latencies and bandwidth, cache and memory sizes as well as innovations in networking and storage were changing assumptions and design for the next generation of data processing products. This gave us the opening to push the boundaries of what we could engineer without the constraints that existed when relational databases were first built many years ago. 

Challenging those assumptions, we engineered for dramatically changing latencies and throughput for so-called “hot” transactional tables in the database. Lock-free, row-versioning data structures and compiling T-SQL and queries into native code, combined with making the programming semantics consistent with SQL Server means our customers can apply the performance benefits of extreme transaction processing without application rewrites or the adoption of entirely new products. 

image

The continuous data platform

Windows Azure fulfills new scenarios for our customers – transcending what is on-premises or in the cloud. Microsoft is providing a continuous platform from our traditional products that are run on-premises to our cloud offerings. 

With SQL Server 2014, we are bringing the cloud into the box. We are delivering high availability and disaster recovery on Windows Azure built right into the database. This enables customers to benefit from our global datacenters: AlwaysOn Availability Groups that span on-premises and Windows Azure Virtual Machines, database backups directly into Windows Azure storage, and even the ability to store and run database files directly in Windows Azure storage. That last scenario really does something interesting – now you can have an infinitely-sized hard drive with incredible disaster recovery properties with all the great local latency and performance of the on-premises database server. 
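As a hedged sketch of that last scenario (the storage account, container, and database names below are hypothetical), SQL Server Data Files in Windows Azure uses a credential named after the container URL and secured with a Shared Access Signature:

```sql
-- Credential whose name is the (hypothetical) container URL, secured
-- with a Shared Access Signature token.
CREATE CREDENTIAL [https://mystorageaccount.blob.core.windows.net/data]
    WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
         SECRET = '<shared access signature token>';

-- Database whose files live directly in Windows Azure blob storage
-- while the instance runs on-premises or in a Windows Azure VM.
CREATE DATABASE CloudDB
ON (NAME = CloudDB_data,
    FILENAME = 'https://mystorageaccount.blob.core.windows.net/data/CloudDB.mdf')
LOG ON (NAME = CloudDB_log,
    FILENAME = 'https://mystorageaccount.blob.core.windows.net/data/CloudDB.ldf');
```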

We’re not just providing easy backup in SQL Server 2014, today we announced backup to Windows Azure would be available for all our currently supported SQL Server releases. Together, the backup to Windows Azure capabilities in SQL Server 2014 and via the standalone tool offer customers a single, cost-effective backup strategy for secure off-site storage with encryption and compression across all supported versions of SQL Server.

By having a complete and continuous data platform we strive to empower billions of people to get value from their data. It’s why I am so excited to announce the availability of SQL Server 2014 CTP2, hot on the heels of the fastest-adopted release in SQL Server’s history, SQL Server 2012. Today, more businesses solve their data processing needs with SQL Server than any other database. It’s about empowering the world to push the boundaries.


4.3 Unlock Insights from any Data / Big Data – Microsoft SQL Server Parallel Data Warehouse (PDW) and Windows Azure HDInsights:

Data is being generated faster than ever before, so what can it do for your business? Learn how to unlock insights on any data by empowering people with BI and big data tools to go from raw data to business insights faster and easier. Learn more: http://www.microsoft.com/datainsights
With the abundance of information available today, BI shouldn’t be confined to analysts or IT. Learn how to empower all with analytics through familiar Office tools, and how to manage all your data needs with a powerful and scalable data platform. Learn more: http://www.microsoft.com/BI
With data volumes exploding by 10x every five years, and much of this growth coming from new data types, data warehousing is at a tipping point. Learn how to evolve your data warehouse infrastructure to support variety, volume, and velocity of data. Learn more: http://www.microsoft.com/datawarehousing
Hear from HP, Dell and Hortonworks how Microsoft SQL Server Parallel Data Warehouse and Windows Azure HDInsights can unlock data insights and respond to business opportunities through big data analytics. Find solutions and services from partners that span the entire stack of Microsoft Cloud OS products and technologies: http://www.microsoft.com/en-us/server-cloud/audience/partner.aspx#fbid=631zRfiT0WJ
The idea that big data will transform businesses and the world is indisputable, but are there enough resources to fully embrace this opportunity? Join Quentin Clark, Microsoft Corporate Vice President, who will share Microsoft’s bold goal to consumerize big data — simplifying the data science process and providing easy access to data with everyday tools. This keynote is sponsored by Microsoft
Quentin Clark discusses the ever-changing big data market and how Microsoft is meeting its demands.

Announcing Windows Azure HDInsight: Where big data meets the cloud [The Official Microsoft Blog, Oct 28, 2013]

This post is from Quentin Clark, Corporate Vice President of the Data Platform Group at Microsoft

I am pleased to announce that Windows Azure HDInsight – our cloud-based distribution of Hadoop – is now generally available on Windows Azure. The GA of HDInsight is an important milestone for Microsoft, as it’s part of our broader strategy to bring big data to a billion people.

On Tuesday at Strata + Hadoop World 2013, I will discuss the opportunity of big data in my keynote, “Can Big Data Reach One Billion People?” Microsoft’s perspective is that embracing the new value of data will lead to a major transformation as significant as when line of business applications matured to the point where they touched everyone inside an organization. But how do we realize this transformation? It happens when big data finds its way to everyone in business – when anyone with a question that can be answered by data, gets their answer. The impact of this is beyond just making businesses smarter and more efficient. It’s about changing how business works through both people and data-driven insights. Data will drive the kinds of changes that, for example, allow personalization to become truly prevalent. People will drive change by gaining insights into what impacts their business, enabling them to change the kinds of partnerships and products they offer.

Our goal to empower everyone with insights is the reason why Microsoft is investing, not just in technology like Hadoop, but the whole circuit required to get value from big data. Our customers are demanding more from the data they have – not just higher availability, global scale and longer histories of their business data, but that their data works with business in real time and can be leveraged in a flexible way to help them innovate. And they are collecting more signals – from machines and devices and sources outside their organizations.

Some of the biggest changes to businesses driven by big data are created by the ability to reason over data previously thought unmanageable, as well as data that comes from adjacent industries. Think about the use of equipment data to do better operational cost and maintenance management, or a loan company using shipping data as part of the loan evaluation. All of this data needs all forms of analytics and the ability to reach the people making decisions. Organizations that complete this circuit, thereby creating the capability to listen to what the data can tell them, will accelerate.

Bringing Hadoop to the enterprise

Hadoop is a cornerstone of how we will realize value from big data. That’s why we’ve engineered HDInsight as 100 percent Apache Hadoop offered as an Azure cloud service. The service has been in public production preview for a number of months now – the reception has been tremendous and we are excited to bring it to full GA status in Azure. 

Microsoft recognizes Hadoop as a standard and is investing to ensure that it’s an integral part of our enterprise offerings. We have invested through real contributions across the project – not just to make Hadoop work great on Windows, but even in projects like Tez, Stinger and Hive. We have put in thousands of engineering hours and tens of thousands of lines of code. We have been doing this in partnership with Hortonworks, who will make HDP (Hortonworks Data Platform) 2.0 for Windows Server generally available next month, giving the world access to a supported Apache-pure Hadoop v2 distribution for Windows Server. Working with Hortonworks, we will support Hadoop v2 in a future update to HDInsight.

Windows Azure HDInsight combines the best of Hadoop open source technology with the security, elasticity and manageability that enterprises require. We have built it to integrate with Excel and Power BI – our business intelligence offering that is part of Office 365 – allowing people to easily connect to data through HDInsight, then refine and do business analytics in a turnkey fashion. For the developer, HDInsight also supports choice of languages: .NET, Java and more.

We have key customers currently using HDInsight, including:

  • The City of Barcelona uses Windows Azure HDInsight to pull in data about traffic patterns, garbage collection, city festivals, social media buzz and more to make critical decisions about public transportation, security and overall spending.
  • A team of computer scientists at Virginia Tech developed an on-demand, cloud-computing model using the Windows Azure HDInsight Service, enabling easier, more cost-effective access to DNA sequencing tools and resources.
  • Christian Hansen, a developer of natural ingredients for several industries, collects electronic data from a variety of sources, including automated lab equipment, sensors and databases. With HDInsight in place, they are able to collect and process data from trials 100 times faster than before.

End-to-end solutions for big data

These kinds of uses of Hadoop are examples of how big data is changing what’s possible. Our Hadoop-based solution HDInsight is a building block – one important piece of the end-to-end solutions required to get value from data.

All this comes together in solutions where people can use Excel to pull data directly from a range of sources, including SQL Server (the most widely-deployed database product), HDInsight, external Hadoop clusters and publicly available datasets. They can then use our business intelligence tools in Power BI to refine that data, visualize it and just ask it questions. We believe that by putting widely accessible and easy-to-deploy tools in everyone’s hands, we are helping big data reach a billion people. 

I am looking forward to tomorrow. The Hadoop community is pushing what’s possible, and we could not be happier that we made the commitment to contribute to it in meaningful ways.

Quentin Clark, Microsoft, at Big Data NYC 2013 with John Furrier and Dave Vellante

“We’re here because we’re super committed to Hadoop,” Clark said, explaining that Microsoft is dedicated to helping its customers embrace the benefits Big Data can provide them with. “Hadoop is the cornerstone of Big Data but not the entire infrastructure,” he added. Microsoft is focusing on adding security and tool integration, with thousands of hours of development put into Hadoop, to make it ready for the enterprise. “There’s a foundational piece where customers are starting,” which they can build upon, and Microsoft focuses on helping them embrace Hadoop as part of the IT giant’s business goals.

Asked to compare the adoption of traditional Microsoft products with the company’s Hadoop products, Clark said, “a big part of our effort was to get to those enterprise expectations.” Security and tools integration, and getting Hadoop to work on Windows, are part of that effort. Microsoft aims to help people “have a conversation and dialogue with the data. We make sure we funnel all the data to help them get the BI and analytics” they need.

Commenting on Microsoft’s statement of bringing Big Data to its one billion Office users, Vellante asked if the company’s strategy was to put the power of Big Data into Excel. Clark explained it was about putting Big Data in the Office suite, going on to explain that there is already more than a billion people who are passively using Big Data. Microsoft focuses on those actively using it.

Clark mentions Microsoft has focused on the sports arena, helping major sports leagues use Big Data to power fantasy teams. “We actually have some models, use some data sets. I have a fantasy team that I’m doing pretty well with, partly because of my ability to really have a conversation with the data. On the business side, it’s transformational. Our ability to gain insight in real time and interact is very different using these tools,” Clark stated.

Why not build its own Hadoop distro?

Asked why Microsoft decided not to have its own Hadoop distribution, Clark explained that “primarily our focus has been in improving the Apache core, make Hadoop work on Windows and work great. Our partnership with Hortonworks just made sense. They are able to continue to push and have that cross platform capability, we are able to offer our customers a solution.”

Explaining there were great discrepancies in how different companies in the same industries made use of the benefits Big Data, he advised our viewers to “look at what the big companies are doing” embracing the data, and to look what they are achieving with it.

As far as the future of the Big Data industry is concerned, Clark stated: “There’s a consistent meme of how is this embraced by business for results. Sometimes with the evolution of technology, everyone is exploring what it’s capable of.” Now there’s a focus shift of the industry towards what greater purpose it leads to, what businesses can accomplish.

@thecube

#BigDataNYC


4.4 Empower people-centric IT – Microsoft Virtual Desktop Infrastructure (VDI):

Microsoft Virtual Desktop Infrastructure (VDI) enables IT to deliver desktops and applications that employees can access from anywhere, on both personal and corporate devices. Centralizing and controlling applications and data through a virtual desktop enables your people to get their work done on the devices they choose while helping maintain compliance. Learn more: http://www.microsoft.com/msvdi
With dramatic growth in the number of mobile users and personal devices at work, and mounting pressure to comply with governmental regulations, IT organizations are increasingly turning to Microsoft Virtual Desktop Infrastructure (VDI) solutions. This session will provide an overview of Microsoft’s VDI solutions and will drill into some of the new, exciting capabilities that Windows Server 2012 R2 offers for VDI solutions.

In October, we announced Windows Server 2012 R2 which delivers several exciting improvements for VDI solutions. Among the benefits, Windows Server 2012 R2 reduces the cost per seat for VDI as well as enhances your end user’s experience. The following are just some of the features and benefits of Windows Server 2012 R2 for VDI:

  • Online data deduplication on actively running VMs reduces storage capacity requirements by up to 90% on persistent desktops.
  • Tiered storage spaces manage your tiers of storage (fast SSDs vs. slower HDDs) intelligently so that the most frequently accessed data blocks are automatically moved onto faster-tier drives. Likewise, older or seldom-accessed files are moved onto the cheaper and slower SAS drives.
  • The Microsoft Remote Desktop App provides easy access from a variety of devices and platforms including Windows, Windows RT, iOS, Mac OS X and Android. This is good news for your end users and your mobility/BYOD strategy!
  • Your user experience is also enhanced due to improvements on several fronts including RemoteFX, DirectX 11.1 support, RemoteApp, quick reconnect, session shadowing, dynamic monitor and resolution changes.

If your VDI solutions run on Dell servers or if you are looking at deploying new VDI infrastructure, we are excited to let you know about the work we have been doing in partnership with Dell around VDI. Dell recently updated their Desktop Virtualization Solution (DVS) for Windows Server to support Windows Server 2012 R2, and DVS now delivers all of the benefits mentioned above. Dell is also delivering additional enhancements into Dell DVS for Windows Server so it will also support:

  • Windows 8.1 with touch screen devices and new Intel Haswell processors
  • Unified Communication with Lync 2013, via an endpoint plug-in that enables P2P audio and video. (Dell Wyse has certified selected Windows thin clients to this effect, such as the D90 and Z90.)
  • Virtualized shared graphics on NVidia GRID K1/K2 and AMD FirePro cards using Microsoft RemoteFX technology
  • Affordable persistent desktops
  • Highly-secure and dual/quad core Dell Wyse thin clients, for a true end-to-end capability, even when using high-end server graphics cards or running UC on Lync 2013
  • Optional Dell vWorkspace software, also supporting Windows Server 2012 R2, which brings scalability to tens of thousands of seats, advanced VM provisioning, IOPS efficiency to reduce storage requirements and improve performance, diagnostics and monitoring, flexible resource assignments, support for multi-tenancy and more.
  • Availability in more than 30 countries

Depending on where you stand in the VDI deployment cycle in your organization, Dell DVS for Windows Server is already supported today on multiple Dell PowerEdge server platforms:

  • The T110 for a pilot/POC up to 10 seats
  • The VRTX for implementation in a remote or branch office of up to about 500 users
  • The R720 for a traditional enterprise-like, flexible and scalable implementation to several thousand seats. It supports flexible deployments such as application virtualization, RDSH, pooled and persistent VMs.

This week, Microsoft and Dell will present a technology showcase at Dell World in Austin (TX), USA. If you happen to be at the show, you will be able to see for yourself how well Windows Server 2012 R2 and Windows 8.1 integrate into Dell DVS. We will show:

  • The single management console of Windows Server 2012 installed on a Dell PowerEdge VRTX, demonstrating how easy it can be for an IT administrator to manage VDI workloads based on Hyper-V in a remote or branch office environment
  • How users can chat, talk, share, meet, transfer files and conduct video conferencing within virtualized desktops set up for unified communication
  • That you can watch HD multimedia and 3D graphics files on multiple virtual desktops sharing a graphic card installed remotely in a server
  • How affordable it is to run persistent desktops with DVS and Windows Server 2012 R2

We are excited about the work that we are doing with Dell around VDI and hope you have a chance to come visit our joint VDI showcase in Austin. We will be located in the middle of the Dell booth in the show expo hall. Also, we will show a VDI demo as part of the Microsoft Cloud OS breakout session at noon on Thursday (December 12th) in room 9AB. Finally, we will show a longer VDI demo in the show expo theater (next to the Microsoft booth) at 10am on Friday (December 13th). We are looking forward to seeing you there.

With the Microsoft Remote Desktop app, you can connect to a remote PC and your work resources from almost anywhere. Experience the power of Windows with RemoteFX in a Remote Desktop client designed to help you get your work done wherever you are.

Post from Brad Anderson,
Corporate Vice President of Windows Server & System Center at Microsoft.

As of yesterday afternoon, the Microsoft Remote Desktop App is available in the Android, iOS, and Mac stores (see screen shots below). There was a time, in the very recent past, when many thought something like this would never happen.

If your company has users who work on iPads, Android, and Windows RT devices, you also likely have a strategy (or at least a point of view) for how you will deliver Windows applications to those devices. With the Remote Desktop App and the 2012 R2 platforms made available earlier today, you now have a great solution from Microsoft to deliver Windows applications to your users across all the devices they are using.

As I have written about before, one of the things I am actively encouraging organizations to do is to step back and look at their strategy for delivering applications and protecting data across all of their devices. Today, most enterprises are using different tools for enabling users on PCs, and then they deploy another tool for enabling users on their tablets and smart phones. This kind of overhead and the associated costs are unnecessary – but, even more important (or maybe I should say worse), your end users therefore have different and fragmented experiences as they transition across their various devices. A big part of an IT team’s job must be to radically simplify the experience end users have in accomplishing their work – and users are doing that work across all their devices.

I keep bolding “all” here because I am really trying to make a point:  Let’s stop thinking about PCs and devices in a fragmented way. What we are trying to accomplish is pretty straightforward: Enable users to access the apps and data they need to be productive in a way that ensures the corporate assets are secure. Notice that nowhere in that sentence did I mention devices. We should stop talking about PC Lifecycle management, Mobile Device Management and Mobile Application Management – and instead focus our conversation on how we are enabling users. We need a user-enablement Magic Quadrant!

OK – stepping off my soapbox. :)

Delivering Windows applications in a server-computing model, through solutions like Remote Desktop Services, is a key requirement in your strategy for application access management. But keep in mind that this is only one of many ways applications can be delivered – and we should consider and account for all of them.

For example, you also have to consider Win32 apps running in a distributed model, modern Windows apps, iOS native apps (side-loaded and deep-linked), Android native apps (side-loaded and deep-linked), SaaS applications, and web applications.

Things have really changed from just 5 years ago, when we only had to worry about Windows apps being delivered to Windows devices.

As you are rethinking your application access strategy, you need solutions that enable you to intelligently manage all these applications types across all the devices your workforce will use.

The Remote Desktop apps released yesterday are also proof of Microsoft’s commitment to enabling you to have a single solution to manage all the devices your users will use.

Microsoft describes itself as a “devices and services company.” Let me provide a little more insight into this.

Devices: We will do everything we can to earn your business on Windows devices.

Services: We will light up those Windows devices with the cloud services that we build, and these cloud services will also light up all (there’s that bold again) your other devices.

The funny thing about cloud services is that they want every device possible to connect to them – we are working to make sure the cloud services that we are building for the enterprise will bring value to all (again!) the devices your users will want to use – whether those are Windows, iOS, or Android.

The RDP clients that we released into the stores yesterday are not v1 apps. Back in June, we acquired IP assets from an organization in Austria (HLW Software Development GmbH) that had been building and delivering RDP clients for a number of years. In fact, there were more than 1 million downloads of their RDP clients from the Apple and Android stores. The team has done an incredible job using them as a base for development of our Remote Desktop App, creating a very simple and compelling experience on iOS, Mac OS X and Android. You should definitely give them a try!

Also: Did I mention they are free?

To start using the Microsoft Remote Desktop App for any of these platforms, simply follow these links:

Setup:

  • Windows 8.1 Pro running on a slow netbook, a BenQ Joybook Lite U101 with an Atom N270!
  • HTC One X running Android 4.2.2
  • HTC Flyer running Android 3.2.1

How to: http://android-er.blogspot.com/2013/10/basic-setup-for-microsoft-remote.html

Satya Nadella’s (?the next Microsoft CEO?) next ten years’ vision of “digitizing everything”, Microsoft opportunities and challenges seen by him with that, and the case of Big Data

… as one of the crucial issues for that (in addition to the cloud, mobility and Internet-of-Things), via the current tipping point as per Microsoft, and the upcoming revolution in that as per Intel

Satya Nadella, Cloud & Enterprise Group, Microsoft and Om Malik, Founder & Senior Writer, GigaOM [LeWeb YouTube channel, Dec 10, 2013]

Satya is responsible for building and running Microsoft’s computing platforms, developer tools and cloud services. He and his team deliver the “Cloud OS.” Rumored to be on the short list for CEO, he shares his views on the future. [Interviewed during the “Plenary I” devoted to “The Next 10 years” at Day 1 on Dec 10, 2013.]

And why will I present Big Data after that? For a very simple reason: IMHO it is exactly in Big Data that Microsoft’s innovations have come to a point at which its technology has the best chance to become dominant and subsequently define the standard for the IT industry – resulting in “winner-take-all” economies of scale and scope. Whatever Intel is going to add in terms of “technologies for the next Big Data revolution” will only strengthen Microsoft’s current innovative position even more. For this reason I will include the upcoming Intel innovations for Big Data here as well.

In this next-gen regard it is highly recommended to read also: Disaggregation in the next-generation datacenter and HP’s Moonshot approach for the upcoming HP CloudSystem “private cloud in-a-box” with the promised HP Cloud OS based on the 4 years old OpenStack effort with others [‘Experiencing the Cloud’. Dec 12, 2013] !

Now the detailed discussion of Big Data:

Microsoft® makes Big Data work for you! [HP Discover YouTube channel, recorded on Dec 11; published on Dec 12, 2013]

[Doug Smith, Director, Emerging Technologies, Microsoft] Come and join our Innovation Theatre session to hear how customers are solving Big Data challenges in big ways jointly with HP!

The Garage Series: Unleashing Power BI for Office 365 [Office 365 technology blog, Nov 20, 2013]

In this week’s show, host Jeremy Chapman is joined by Michael Tejedor from the SQL Server team to discuss Power BI and show it in action. Power BI for Office 365 is a cloud-based solution that reduces the barriers to deploying a self-service Business Intelligence environment for sharing live Excel-based reports and data queries, with new features and services that ease data discovery and information access from anywhere. Michael draws up the self-service approach to Power BI as well as how public data can be queried and combined in a unified view within Excel. Then they walk through an end-to-end demo of Excel and Power BI components – Power Query [formerly known as “Data Explorer”], Power Pivot, Power View, Power Map [formerly known as product codename “Geoflow”] and Q&A – as they optimize profitability of a bar and rein in bartenders with data.

Last week Mark Kashman and I went through the administrative controls of managing user access and mobile devices, but this week I’m joined by Michael Tejedor and we shift gears completely to talk data, databases and business intelligence. Back in July we announced Power BI for Office 365, a new service that, using the familiar tools within Excel, enables you to discover, analyze, visualize and share data in powerful ways. Power BI includes Power Query, Power Pivot, Power View, Power Map and Q&A.

  • Power Query [formerly known as “Data Explorer”] is a data search engine allowing you to query data from within your company and from external data sources on the Internet, all within Excel.
  • Power Pivot lets you create flexible models within Excel that can process large data sets quickly using SQL Server’s in-memory database.
  • Power View allows you to manipulate data and compile it into charts, graphs and other visualizations. It’s great for presentations and reports.
  • Power Map [formerly known as product codename “Geoflow”] is a 3D data visualization tool for mapping, exploring and interacting with geographic and temporal data.
  • Q&A is a natural language query engine that lets users easily query data using common terms and phrases.
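Under the hood, the “combine data sources” step these tools automate is essentially a join on a shared key. Here is a stdlib-Python sketch of that idea; the tables, column names and figures are invented purely for illustration:

```python
# Internal sales figures joined with a "public" population table on a shared
# key -- the kind of mash-up Power Query automates. All numbers are made up.
sales = [
    {"country": "US", "revenue": 120_000},
    {"country": "DE", "revenue": 45_000},
]
population_millions = {"US": 316, "DE": 81}

# Join on country and derive a per-capita metric in one pass.
combined = [
    {**row, "revenue_per_million": row["revenue"] / population_millions[row["country"]]}
    for row in sales
    if row["country"] in population_millions
]
for row in combined:
    print(row["country"], round(row["revenue_per_million"], 1))
```

Power Query adds discovery, credentials and refresh on top, but the core operation is this kind of keyed combination of an internal table with an external one.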

In many cases, the process to get custom reports and dashboards from the people running your databases, sales or operations systems involves submitting a request to your database administrator and a few phone calls or meetings to get what you want. Coming from a logistics and operations management background, I know it could easily take 2 or 3 weeks to make even minor tweaks to an operational dashboard. Now you can use something familiar – Excel – in a self-service way to hook into your local databases, Excel flat files, modern data sources like Hadoop, or public data sources via Power Query and the data catalogue. All of these data sources can be combined to create powerful insights and data visualizations, all of which can be easily and securely shared with the people you work with through the Power BI for Office 365 service.

Of course all of this sounds great, but you can’t really get a feel for it until you see it. Michael and team built out a great demo themed after a bar, using data to track alcohol profitability, pour precision per bartender, and Q&A to query all of this using normal query terms. You’ll want to watch the show to see how everything turns out and of course to see all of these power tools in action. And if you want to kick the tires and try Power BI for Office 365, you can register for the preview now.

Intel: technologies for the next Big Data revolution [HP Discover YouTube channel, recorded on Dec 11; published on Dec 12, 2013]

[Patrick Buddenbaum, Director, Enterprise Segment, Intel Corporation at HP Discover Barcelona 2013 on Dec 11, 11:40 AM – 12:00 PM] HP and Intel share the belief that every organization and individual should be able to unlock intelligence from the world’s ever increasing set of data sources—the Internet of Things.

 

Related “current tipping point” announcements from Microsoft:

From: Organizations Speed Business Results With New Appliances From HP and Microsoft [joint press release, Jan 18, 2011]

New solutions for business intelligence, data warehouse, messaging and database consolidation help increase employee productivity and reduce IT complexity.

… The HP Business Decision Appliance is available now to run business intelligence services ….

Delivering on the companies’ extended partnership announced a year ago, the new converged application appliances from HP and Microsoft are the industry’s first systems designed for IT, as well as end users. They deliver application services such as business intelligence, data warehousing, online transaction processing and messaging. The jointly engineered appliances, and related consulting and support services, enable IT to deliver critical business applications in as little as one hour, compared with potentially months needed for traditional systems.3 One of the solutions already offered by HP and Microsoft — the HP Enterprise Data Warehouse Appliance — delivers up to 200 times faster queries and 10 times the scalability of traditional Microsoft SQL Server deployments.4

With the HP Business Decision Appliance, HP and Microsoft have greatly reduced the time and effort it takes for IT to configure, deploy and manage a comprehensive business intelligence solution, compared with a traditional business intelligence solution where applications, infrastructure and productivity tools are not pre-integrated. This appliance is optimized for Microsoft SQL Server and Microsoft SharePoint and can be installed and configured by IT in less than one hour.

The solution enables end users to share data analyses built with Microsoft’s award-winning5 PowerPivot for Excel 2010 and collaborate with others in SharePoint 2010. It allows IT to centrally audit, monitor and manage solutions created by end users from a single dashboard.

Availability and Pricing6

  • The HP Business Decision Appliance with three years of HP 24×7 hardware and software support services is available today from HP and HP/Microsoft Frontline channel partners for less than $28,000 (ERP). Microsoft SQL Server 2008 R2 and Microsoft SharePoint 2010 are licensed separately.

  • The HP Enterprise Data Warehouse Appliance with services for site assessment, installation and startup, as well as three years of HP 24×7 hardware and software support services, is available today from HP and HP/Microsoft Frontline channel partners starting at less than $2 million. Microsoft SQL Server 2008 R2 Parallel Data Warehouse is licensed separately.

3 Based on HP’s experience with customers using HP Business Decision Appliance.
4 SQL Server Parallel Data Warehouse (PDW) has been evaluated by 16 early adopter customers in six different industries. Customers compared PDW with their existing environments and saw typically 40x and up to 200x improvement in query times.
5 Messaging and Online Collaboration Reviews, Nov. 30, 2010, eWEEK.com.
6 Estimated retail U.S. prices. Actual prices may vary.

From: HP Delivers Enterprise Agility with New Converged Infrastructure Solutions [press release, June 6, 2011]

HP today announced several industry-first Converged Infrastructure solutions that improve enterprise agility by simplifying deployment and speeding IT delivery.

Converged Systems accelerate time to application value

HP Converged Systems speed solution deployment by providing a common architecture, management and security model across virtualization, cloud and dedicated application environments. They include:

  • HP AppSystem maximizes performance while simplifying deployment and application management. These systems offer best practice operations with a standard architecture that lowers total cost of ownership. Among the new systems are HP Vertica Analytics System, as well as HP Database Consolidation Solution and HP Business Data Warehouse Appliance, which are both optimized for Microsoft SQL Server 2008 R2.

From: Microsoft Expands Data Platform With SQL Server 2012, New Investments for Managing Any Data, Any Size, Anywhere [press release, Oct 12, 2011]

New technologies will give businesses a universal platform for data management, access and collaboration.

… Kummert described how SQL Server 2012, formerly code-named “Denali,” addresses the growing challenges of data and device proliferation by enabling customers to rapidly unlock and extend business insights, both in traditional datacenters and through public and private clouds. Extending on this foundation, Kummert also announced new investments to help customers manage “big data,” including an Apache Hadoop-based distribution for Windows Server and Windows Azure and a strategic partnership with Hortonworks Inc. …

The company also made available final versions of the Hadoop Connectors for SQL Server and Parallel Data Warehouse. Customers can use these connectors to integrate Hadoop with their existing SQL Server environments to better manage data across all types and forms.

SQL Server 2012 delivers a powerful new set of capabilities for mission-critical workloads, business intelligence and hybrid IT across traditional datacenters and private and public clouds. Features such as Power View (formerly Project “Crescent,”) and SQL Server Data Tools (formerly “Juneau”) expand the self-service BI capabilities delivered with PowerPivot, and provide an integrated development environment for SQL Server developers.

From: Microsoft Releases SQL Server 2012 to Help Customers Manage “Any Data, Any Size, Anywhere” [press release, March 6, 2012]

Microsoft’s next-generation data platform releases to manufacturing today.

REDMOND, Wash. — March 6, 2012 — Microsoft Corp. today announced that the latest version of the world’s most widely deployed data platform, Microsoft SQL Server 2012, has released to manufacturing. SQL Server 2012 helps address the challenges of increasing data volumes by rapidly turning data into actionable business insights. Expanding on Microsoft’s commitment to help customers manage any data, regardless of size, both on-premises and in the cloud, the company today also disclosed additional details regarding its plans to release an Apache Hadoop-based service for Windows Azure.

Tackling Big Data

IT research firm Gartner estimates that the volume of global data is growing at a rate of 59 percent per year, with 70 to 85 percent in unstructured form.* Furthering its commitment to connect SQL Server and rich business intelligence tools, such as Microsoft Excel, PowerPivot for Excel 2010 and Power View, with unstructured data, Microsoft announced plans to release an additional limited preview of an Apache Hadoop-based service for Windows Azure in the first half of 2012.

To help customers more cost-effectively manage their enterprise-scale workloads, Microsoft will release several new data warehousing solutions in conjunction with the general availability of SQL Server 2012, slated to begin April 1. This includes a major software update and new half-rack form factors for Microsoft Parallel Data Warehouse appliances, as well as availability of SQL Server Fast Track Data Warehouse reference architectures for SQL Server 2012.

Microsoft Simplifies Big Data for the Enterprise [press release, Oct 24, 2012]

New Apache Hadoop-compatible solutions for Windows Azure and Windows Server enable customers to easily extract insights from big data.

NEW YORK — Oct. 24, 2012 — Today at the O’Reilly Strata Conference + Hadoop World, Microsoft Corp. announced new previews of Windows Azure HDInsight Service and Microsoft HDInsight Server for Windows, the company’s Apache Hadoop-based solutions for Windows Azure and Windows Server. The new previews, available today at http://www.microsoft.com/bigdata, deliver Apache Hadoop compatibility for the enterprise and simplify deployment of Hadoop-based solutions. In addition, delivering these capabilities on the Windows Server and Azure platforms enables customers to use the familiar tools of Excel, PowerPivot for Excel and Power View to easily extract actionable insights from the data.

“Big data should provide answers for business, not complexity for IT,” said David Campbell, technical fellow, Microsoft. “Providing Hadoop compatibility on Windows Server and Azure dramatically lowers the barriers to setup and deployment and enables customers to pull insights from any data, any size, on-premises or in the cloud.”

The company also announced today an expanded partnership with Hortonworks, a commercial vendor of Hadoop, to give customers access to an enterprise-ready distribution of Hadoop with the newly released solutions.

“Hortonworks is the only provider of Apache Hadoop that ensures a 100 percent open source platform,” said Rob Bearden, CEO of Hortonworks. “Our expanded partnership with Microsoft empowers customers to build and deploy on platforms that are fully compatible with Apache Hadoop.”

More information about today’s news and working with big data can be found at http://www.microsoft.com/bigdata.

Choose the Right Strategy to Reap Big Value From Big Data [feature article for the press, Nov 13, 2012]

From devices to storage to analytics, technologies that work together will be key for business’ next information age.

REDMOND, Wash. — Nov. 13, 2012 — It seems the gigabyte is going the way of the megabyte — another humble unit of computational measurement that is becoming less and less relevant. Long live the terabyte, impossibly large, increasingly common.
Consider this: Of all the data that’s been collected in the world, more than 90 percent has been gathered in the last two years alone. According to a June 2011 report from the McKinsey Global Institute, 15 out of 17 industry sectors of the U.S. have more data stored — per company — than the U.S. Library of Congress.
The explosion in data has been catalyzed by several factors. Social media sites such as Facebook and Twitter are creating huge streams of unstructured data in the form of opinions, comments, trends and demographics arising from a vast and growing worldwide conversation.
And then there’s the emerging world of machine-generated information. The rise of intelligent systems and the Internet of Things means that more and more specialized devices are connected to information technology — think of a national retail chain that is connected to every one of its point-of-sale terminals across thousands of locations or an automotive plant that can centrally monitor hundreds of robots on the shop floor.
Combine it all and some industry observers are predicting that the amount of data stored by organizations across industries will increase ten-fold every five years, much of it coming from new streams that haven’t yet been tapped.
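As a quick sanity check, ten-fold growth every five years is consistent with the roughly 59 percent annual growth rate that Gartner is quoted with earlier in this post:

```python
# 10x over 5 years implies an annual compound growth rate of 10**(1/5) - 1.
annual = 10 ** (1 / 5) - 1
print(f"annual rate: {annual:.1%}")          # prints: annual rate: 58.5%

# Compounding that rate for five years recovers the ten-fold figure:
print(round((1 + annual) ** 5, 2))           # prints: 10.0
```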
It truly is a new information age, and the opportunity is huge. The McKinsey Global Institute estimates that the U.S. health care system, for example, could save as much as $300 billion from more effective use of data. In Europe, public sector organizations alone stand to save 250 billion euros.
In the ever-competitive world of business, data strategy is becoming the next big competitive advantage. According to analyst firm Gartner Group,* “By tapping a continual stream of information from internal and external sources, businesses today have an endless array of new opportunities for: transforming decision-making; discovering new insights; optimizing the business; and innovating their industries.”
According to Microsoft’s Ted Kummert, corporate vice president of the Business Platforms Division, companies addressing this challenge today may wonder where to start. How do you know which data to store without knowing what you want to measure? But then again, how do you know what insights the data holds without having it in the first place?
“There is latent value in the data itself,” Kummert says. “The good news is storage costs are making it economical to store the data. But that still leaves the question of how to manage it and gain value from it to move your business forward.”
With new data services in the cloud such as Windows Azure HDInsight Service and Microsoft HDInsight Server for Windows – Microsoft’s Apache Hadoop-based solutions for Windows Azure and Windows Server – organizations can afford to capture valuable data streams now while they develop their strategy – without making a huge financial bet on a six-month, multimillion-dollar datacenter project.
Just having access to the data, says Kummert, can allow companies to start asking much more complicated questions, combining information sources such as geolocation or weather information with internal operational trends such as transaction volume.
“In the end, big data is not just about holding lots of information,” he says. “It’s about how you harness it. It’s about insight, allowing end users to get the answers they need and doing so with the tools they use every day, whether that’s desktop applications, devices at the network edge or something else.”
His point is often overlooked with all the abstract talk of big data. In the end, it’s still about people, so making it easier for information workers to shift to a new world in which data is paramount is just as important as the information itself. Information technology is great at providing answers, but it still doesn’t know how to ask the right questions, and that’s where having the right analytics tools and applications can help companies make the leap from simply storing mountains of data to actually working with it.
That’s why in the Windows 8 world, Kummert says, the platform is designed to extend from devices and phones to servers and services, allowing companies to build a cohesive data strategy from end to end with the ultimate goal of empowering workers.
“When we talk about the Microsoft big data platform, we have all of the components to achieve exactly that,” Kummert says. “From the Windows Embedded platform to the Microsoft SQL Server stack through to the Microsoft Office stack. We have all the components to collect the data, store it securely and make it easier for information workers to find it — and, more importantly, understand what it means.”
For more information on building intelligent systems to get the most out of business data, please visit the Windows Embedded home page.
* Gartner, “Gartner Says Big Data Creates Big Jobs: 4.4 Million IT Jobs Globally to Support Big Data By 2015,” October 2012

Which data management solution delivers against today’s top six requirements? [The HP Blog Hub, March 25, 2013]

By Manoj Suvarna – Director, Product Management, HP AppSystems

In my last post I talked about the six key requirements I believe a data management solution should deliver against today, namely:

1. High performance
2. Fast time to value
3. Built with Big Data as a priority
4. Low cost
5. Simplified management
6. Proven expertise

Today, 25th March 2013, HP has announced the HP AppSystem for Microsoft SQL Server 2012 Parallel Data Warehouse, a comprehensive data warehouse solution jointly engineered with Microsoft, with a wide array of complementary tools, to effectively manage, store, and unlock valuable business insights.

Let’s take a look at how the solution delivers against each of the key requirements in turn:

1  High performance

With its MPP (Massively Parallel Processing) engine and ‘shared nothing’ architecture, the HP AppSystem for Parallel Data Warehouse can deliver linear scale, starting from a configuration to support small terabyte requirements all the way up to configurations supporting six petabytes of data.

The solution features the latest HP ProLiant Gen8 servers, with InfiniBand FDR networking, and uses the xVelocity in-memory analytics engine and the xVelocity memory-optimized columnstore index feature in Microsoft SQL Server 2012 to greatly enhance query performance. 

The combination of Microsoft software with HP Converged Infrastructure means HP AppSystem for Parallel Data Warehouse offers leading performance for complex workloads, with up to 100x faster query performance and a 30% faster scan rate than previous generations.
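The performance claim rests on the shared-nothing MPP pattern: each node scans and aggregates only its own partition of the data, and only tiny per-node partial results travel to the control node for the final merge. Here is a toy single-machine Python sketch of that pattern (threads stand in for nodes; this is nothing like PDW’s real engine, just the shape of the idea):

```python
from concurrent.futures import ThreadPoolExecutor

def node_scan(partition):
    """Each 'node' aggregates only its local partition (shared-nothing)."""
    return sum(partition), len(partition)

rows = list(range(1_000))                      # the full "fact table"
partitions = [rows[i::4] for i in range(4)]    # distributed across 4 "nodes"

with ThreadPoolExecutor(max_workers=4) as pool:
    partials = list(pool.map(node_scan, partitions))

# The control node merges only the small per-node aggregates:
total = sum(t for t, _ in partials)
count = sum(c for _, c in partials)
print(total / count)                           # prints: 499.5
```

Because the heavy scan work never crosses node boundaries, adding nodes (up to 64 in this appliance) scales the scan rate roughly linearly.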

2  Fast time to value

HP AppSystem for Parallel Data Warehouse is a factory built, turn-key system, delivered complete from HP’s factory as an integrated set of hardware and software including servers, storage, networking, tools, software, services, and support.   Not only is the solution pre-integrated, but it’s backed by unique, collaborative HP and Microsoft support with onsite installation and deployment services to smooth implementation.  

3  Built with Big Data as a priority

Designed to integrate with Hadoop, HP AppSystem for Parallel Data Warehouse is ideally suited for “Big Data” environments. This integration allows customers to perform comprehensive analytics on unstructured, semi-structured and structured data, to effectively gain business insights and make better, faster decisions.
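The “comprehensive analytics on unstructured data” that Hadoop integration enables boils down to the map/reduce pattern: map each raw document to intermediate key/value pairs, then merge the partial results. A minimal stdlib-Python sketch, with invented sample documents standing in for unstructured input:

```python
from collections import Counter
from functools import reduce

# Unstructured "documents" -- e.g. free-text support tickets (made up here).
docs = ["late delivery refund", "refund request late", "delivery damaged"]

def mapper(doc):
    """Map one document to token counts (runs in parallel on a real cluster)."""
    return Counter(doc.split())

def reducer(a, b):
    """Merge two partial count tables."""
    a.update(b)
    return a

counts = reduce(reducer, map(mapper, docs), Counter())
print(counts["refund"], counts["late"])        # prints: 2 2
```

Hadoop scales exactly this pattern across many machines; the appliance’s job is to move the resulting aggregates into the structured warehouse where they can be joined with relational data.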

4  Simplified management

Providing the optimal management environment has been a critical element of the design, and is delivered through HP Support Pack Utility Suite.  This set of tools simplifies updates and several other maintenance tasks across the system to ensure that it is continually running at optimal performance.  Unique in the industry, HP Support Pack Utility Suite can deliver up to 2000 firmware updates with the click of a button.  In addition, the HP AppSystem for Parallel Data Warehouse is manageable via the Microsoft System Center console, leveraging deep integration with HP Insight Control.

5  Low cost

The HP AppSystem for Parallel Data Warehouse has been designed as part of an end to end stack for data management, integrating data warehousing seamlessly with BI solutions to minimize the cost of ownership.

It has also been re-designed with a new form factor to minimize space and maximize ease of expansion, which means the entry point for a quarter-rack system is approximately 35% less expensive than the previous generation solution. It is expandable in modular increments up to 64 nodes, which means no need for the type of fork-lift upgrade that might be needed with a proprietary solution, and it is targeted to be approximately half the cost per TB of comparable offerings in the market from Oracle, IBM, and EMC*.

6  Proven expertise

Together, HP and Microsoft have over 30 years’ experience delivering integrated solutions from desktop to datacenter. HP AppSystem for Parallel Data Warehouse completes the portfolio of HP Data Management solutions, which give customers the ability to deliver insights on any data, of any size, combining best-in-class Microsoft software with HP Converged Infrastructure.

For customers, our ability to deliver on the requirements above ultimately provides agility for faster, lower-risk deployment of data management in the enterprise, helping them make key business decisions more quickly and drive more value to the organization.

If you’d like to find out more, please go to www.hp.com/solutions/microsoft/pdw.

http://www.valueprism.com/resources/resources/Resources/PDW%20Compete%20Pricing%20FINAL.pdf

HP AppSystem for SQL 2012 Parallel Data Warehouse [HP product page, March 25, 2013]

Overview

Rapid time-to-value data warehouse solution

The HP AppSystem for Microsoft SQL Server 2012 Parallel Data Warehouse, jointly engineered, built and supported with Microsoft, is for customers who have run into the limitations and inefficiencies of their legacy data warehouse infrastructure. This converged system solution delivers significant advances over the previous-generation solution, including:

Enhanced performance and massive scalability

  • Up to 100x faster query performance and a 30% faster scan rate
  • Ability to start from small terabyte-scale requirements and linearly scale out to 6 petabytes for mission-critical needs

Minimize costs and management complexity

  • Redesigned form factor minimizes space and allows ease of expansion, with significant up-front acquisition savings as well as reduced OPEX for heating, cooling and real estate
  • Appliance solution is pre-built and tested as a complete, end-to-end stack — easy to deploy with minimal technical resources required
  • Extensive integration of Microsoft and 3rd-party tools allows users to work with familiar tools like Excel as well as within heterogeneous BI environments
  • Unique HP Support Pack Utility Suite set of tools significantly simplifies updates and other maintenance tasks to ensure the system is running at optimal performance

Reduce risks and manage change

  • Services delivered jointly under a unique collaborative support agreement, integrated across hardware and software, to help avoid IT disruptions and deliver faster resolution to issues
  • Backed by more than 48,000 Microsoft professionals—with more than 12,000 Microsoft Certified—one of the largest, most specialized forces of consultants and support professionals for Microsoft environments in the world

Solution Components

  • HP Products
  • HP Services
  • HP Software
  • Partner’s Software
  • HP Support

        [also available with Dell Parallel Data Warehouse Appliance]
        Appliance: Parallel Data Warehouse (PDW) [Microsoft PDW Software product page, Feb 27, 2013]

        PDW is a massively parallel processing data warehousing appliance built for any volume of relational data (with up to 100x performance gains) and provides the simplest integration to Hadoop.

        Unlike other vendors, who either offer their high-end appliances at a high price or provide a relational data warehouse appliance that is disconnected from their “Big Data” and/or BI offerings, Microsoft SQL Server Parallel Data Warehouse provides a high-end massively parallel processing appliance that can improve your query response times up to 100x over legacy solutions, as well as seamless integration with both Hadoop and familiar business intelligence solutions. What’s more, it was engineered to lower ongoing costs, resulting in a solution that has the lowest price per terabyte in the market.

        What’s New in SQL Server 2012 Parallel Data Warehouse

        Key Capabilities

        • Built For Big Data with PolyBase

          SQL Server 2012 Parallel Data Warehouse introduces PolyBase, a fundamental breakthrough in data processing used to enable seamless integration between traditional data warehouses and “Big Data” deployments.

          • Use standard SQL queries (instead of MapReduce) to access and join Hadoop data with relational data.
          • Query Hadoop data without IT having to pre-load data first into the warehouse.
          • Native Microsoft BI Integration allowing analysis of relational and non-relational data with familiar tools like Excel.
        • Next-Generation Performance at Scale

          Scale and perform beyond your traditional SQL Server deployment with PDW’s massively parallel processing (MPP) appliance, which can handle the extremes of your largest mission-critical performance and scale requirements.

          • Up to 100x faster than legacy warehouses with xVelocity updateable columnstore.
          • Massively Parallel Processing (MPP) architecture that parallelizes and distributes computing for high query concurrency and complexity.
          • Rest assured with built-in hardware redundancies for fault tolerance.
          • Rely on Microsoft as your single point of contact for hardware and software support.
        • Engineered For Optimal Value

          Unlike other vendors in the data warehousing space who deliver a high-end appliance at a high price, Microsoft engineered PDW for optimal value by lowering the cost of the appliance.

          • Resilient, scalable, and high-performance storage features built into software, lowering hardware costs.
          • Compress data up to 15x with the xVelocity updateable columnstore, saving up to 70% of storage requirements.
          • Start small with a quarter rack, allowing you to right-size the appliance rather than over-acquiring capacity.
          • Use the same tools and knowledge as SQL Server, without retraining on new tools or knowledge for scale-out DW or Big Data.
          • Co-engineered with hardware partners, offering the highest level of product integration, and shipped to your door for the fastest time to value.
          • The lowest price per terabyte in the overall appliance market (and 2.5x lower than SQL Server 2008 R2 PDW).
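The "up to 15x" columnstore compression claim above is easier to appreciate with a toy illustration: once values are stored column-wise, low-cardinality columns collapse into short run-length-encoded lists. The Python sketch below is purely illustrative and is not the actual xVelocity compression algorithm; all data in it is made up.

```python
# Toy illustration of why column stores compress well: repeated values in a
# column-wise layout collapse into short run-length-encoded (RLE) lists.
# NOT the real xVelocity algorithm; hypothetical data for illustration only.

def rle_encode(values):
    """Run-length encode a list into [value, count] pairs."""
    runs = []
    for v in values:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1
        else:
            runs.append([v, 1])
    return runs

# A fact table often repeats dimension keys heavily row after row.
rows = [("US", "2013", 100)] * 500 + [("DE", "2013", 100)] * 500

# Row storage keeps every repeated value; column storage groups them.
country_col = [r[0] for r in rows]
encoded = rle_encode(country_col)

print(len(country_col))  # 1000 stored values row-wise
print(encoded)           # [['US', 500], ['DE', 500]] column-wise
```

The same effect, applied per column segment with dictionary and bit-pack encodings on top, is what makes the large compression ratios plausible for real warehouse data.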

          PolyBase [Microsoft page, Feb 26, 2013]

          PolyBase is a fundamental breakthrough in data processing used in SQL Server 2012 Parallel Data Warehouse to enable truly integrated query across Hadoop and relational data.

          Complementing Microsoft’s overall Big Data strategy, PolyBase is a breakthrough new technology in the data processing engine of SQL Server 2012 Parallel Data Warehouse, designed as the simplest way to combine non-relational data and traditional relational data in your analysis. While customers would normally burden IT to pre-populate the warehouse with Hadoop data, or undergo extensive training on MapReduce in order to query non-relational data, PolyBase does all of this seamlessly, giving you the benefits of “Big Data” without the complexities.

          Key Capabilities

          • Unifies Relational and Non-relational Data

            PolyBase is one of the most exciting technologies to emerge in recent times because it unifies the relational and non-relational worlds at the query level. Instead of learning a new query model such as MapReduce, customers can leverage what they already know (T-SQL).

            • Integrated Query: Accepts a standard T-SQL query that joins tables containing a relational source with tables in a Hadoop cluster without needing to learn MapReduce.
            • Advanced query options: Apart from simple SELECT queries, users can perform JOINs and GROUP BYs on data in the Hadoop cluster.
          • Enables In-place Queries with Familiar BI Tools

            Microsoft Business Intelligence (BI) integration enables users to connect to PDW with familiar tools such as Microsoft Excel, to create compelling visualizations and make key business decisions from structured or unstructured data quickly.

            • Integrated BI tools: End users can connect to both relational and Hadoop data with Excel, which abstracts the complexities of both.
            • Interactive visualizations: Explore data residing in HDFS using Power View for immersive interactivity and visualizations.
            • Query in-place: IT doesn’t have to pre-load or pre-move data from Hadoop into the data warehouse and pre-join the data before end users do the analysis.
          • Part of an Overall Microsoft Big Data Story

            PolyBase is part of an overall Microsoft “Big Data” solution that already includes HDInsight (a 100% Apache Hadoop compatible distribution for Windows Server and Windows Azure), Microsoft Business Intelligence, and SQL Server 2012 Parallel Data Warehouse.

            • Integrated with HDInsight: PolyBase can source the non-relational analysis from Microsoft’s 100% Apache compatible Hadoop distribution, HDInsight.
            • Built into PDW: PolyBase is built into SQL Server 2012 Parallel Data Warehouse to bring “Big Data” benefits within the power of a traditional data warehouse.
            • Integrated BI tools: PolyBase has native integration with familiar BI tools like Excel (through Power View and PowerPivot).
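PolyBase itself is queried with T-SQL against external tables, but the core idea described above (joining relational rows with Hadoop-resident records at query time, without a pre-load step) can be sketched in plain Python. All table names, fields, and data below are hypothetical, chosen only to illustrate the concept.

```python
# Conceptual sketch of the PolyBase idea: join relational rows with records
# that live in HDFS at query time, with no ETL pre-load. Python stand-in for
# illustration only; real PolyBase queries are T-SQL over external tables.

relational_sales = [
    {"customer_id": 1, "amount": 250},
    {"customer_id": 2, "amount": 900},
]

# Pretend these lines were read straight from a file in a Hadoop cluster.
hdfs_clickstream = [
    "1,homepage",
    "1,checkout",
    "2,homepage",
]

def parse_hdfs_line(line):
    """Give the raw Hadoop text record a relational shape on the fly."""
    cid, page = line.split(",")
    return {"customer_id": int(cid), "page": page}

# The "join": combine both sources in one pass, no warehouse load beforehand.
clicks = [parse_hdfs_line(l) for l in hdfs_clickstream]
joined = [
    {**s, "page": c["page"]}
    for s in relational_sales
    for c in clicks
    if s["customer_id"] == c["customer_id"]
]

print(joined[0])  # {'customer_id': 1, 'amount': 250, 'page': 'homepage'}
```

In PolyBase the equivalent would be a single T-SQL SELECT joining a warehouse table to an external table that points at the Hadoop file.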

          Announcing Power BI for Office 365 [Office News, July 8, 2013]

          Today, at the Worldwide Partner Conference, we announced a new offering–Power BI for Office 365. Power BI for Office 365 is a cloud-based business intelligence (BI) solution that enables our customers to easily gain insights from their data, working within Excel to analyze and visualize the data in…

          Exciting new BI features in Excel [Excel Blog, July 9, 2013]

          Yesterday during Microsoft’s Worldwide Partner Conference we announced some exciting new Business Intelligence (BI) features available for Excel. Specifically, we announced the expansion of the BI offerings available as part of Power BI, a cloud-based BI solution that enables our customers to easily gain insights from their data, working within Excel to analyze and visualize the data in a self-service way.

          Power BI for Office 365 now includes:

          • Power Query, enabling customers to easily search and access public data and their organization’s data, all within Excel (formerly known as “Data Explorer”).  Download details here
          • Power Map, a 3D data visualization tool for mapping, exploring and interacting with geographic and temporal data (formerly known by the product codename “GeoFlow”).  Download details here.
          • Power Pivot for creating and customizing flexible data models within Excel. 
          • Power View for creating interactive charts, graphs and other visual representations of data.

          Head on over to the Office 365 Technology Blog, Office News Blog, and Power BI site to learn more.

          Clearing up some confusion around the Power BI “Release” [A.J. Mee’s Business Intelligence and Big Data Blog, Aug 13, 2013]

          Hey folks.  Thanks again for checking out my blog.
          Yesterday (8/12/2013), Power BI received some attention from the press.  Here’s one of the articles that I had seen talking about the “release” of Power BI:
          http://www.neowin.net/news/microsoft-releases-power-bi-office-365-for-windows-8rt

          Some of us inside Microsoft had to address all sorts of questions around this one.  For the most part, the questions revolved around the *scope* of what was actually released.  You have to remember that Power BI is a broad brand name that takes into account:

          * Power Pivot/View/Query/Map (which is available now, for the most part)

          * The Office 365 hosting of Power BI applications with cloud-to-on-premise data refresh, Natural Language query, data stewardship, etc.

          * The Mobile BI app for Windows and iOS devices

          Net-net: we announced the availability of the Mobile app (in preview form).  At present, it is only available on Windows 8 devices (x86 or ARM) – no iOS just yet.  The rest of the O365 / Power BI offering is yet to come.  Check out this article to find out how to sign up.
          http://blogs.msdn.com/b/ajmee/archive/2013/07/17/how-can-i-check-out-power-bi.aspx
          So, the headline story is really all around the Mobile app.  You can grab it today from the Store – just search on “Power BI” and it should be the first app that shows up.

          From: Power Map for Excel earns new name with significant updates to 3D visualizations and storytelling [Excel Blog, Sept 25, 2013]

          We are announcing a significant update to Power Map Preview for Excel (formerly Project codename “GeoFlow” Preview for Excel) on the Microsoft Download Center. Just over five months ago, we launched the preview of Project codename “GeoFlow” amidst a passionately announced “tour” of global song artists through the years by Amir Netz (see 1:17:00 in the keynote) at the first ever PASS Business Analytics conference in April. The 3D visualization add-in has now become a centerpiece visualization (along with Power View) within the business intelligence capabilities of Microsoft Power BI in Excel, earning the new name Power Map to align with other Excel features (Power Query, Power Pivot, and Power View).

          Information workers with their data in Excel have realized the potential of Power Map to identify insights in their geospatial and time-based data that traditional 2D charts cannot. Digital marketers can better target and time their campaigns while environmentally-conscious companies can fine-tune energy-saving programs across peak usage times. These are just a few examples of how location-based data is coming alive for customers using Power Map and distancing them from their competitors who are still staring blankly at a flat table, chart, or map. Feedback from customers like this led us to introduce Power Map with some new features across the experience of mapping data, discovering insights, and sharing stories.

          From: Microsoft unleashes fall wave of enterprise cloud solutions [press release, Oct 7, 2013]

          New Windows Server, System Center, Visual Studio, Windows Azure, Windows Intune, SQL Server, and Dynamics solutions will accelerate cloud benefits for customers.

          REDMOND, Wash. — Oct. 7, 2013 — Microsoft Corp. on Monday announced a wave of new enterprise products and services to help companies seize the opportunities of cloud computing and overcome today’s top IT challenges. Complementing Office 365 and other services, these new offerings deliver on Microsoft’s enterprise cloud strategy.

          Data platform and insights

          As part of its vision to help more people unlock actionable insights from big data, Microsoft next week will release a second preview of SQL Server 2014. The new version offers industry-leading in-memory technologies at no additional cost, giving customers 10 times to 30 times performance improvements without application rewrites or new hardware. SQL Server 2014 also works with Windows Azure to give customers built-in cloud backup and disaster recovery.

          For big data analytics, later this month Microsoft will release Windows Azure HDInsight Service, an Apache Hadoop-based service that works with SQL Server and widely used business intelligence tools, such as Microsoft Excel and Power BI for Office 365. With Power BI, people can combine private and public data in the cloud for rich visualizations and fast insights.

          How to take full advantage of Power BI in Excel 2013 [News from Microsoft Business UK, Oct 14, 2013]

          The launch of Power BI features in Excel 2013 gives users an added range of options for data analysis and gaining business intelligence (BI). Power Query, Power Pivot, Power View, and Power Map work seamlessly together, making it much simpler to discover and visualise data. And for small businesses looking to take advantage of self-service intelligence solutions, this is a major stride forward.

          Power Query

          With Power Query, users can search the entire cloud for data – both public and private. With access to multiple data sources, users can filter, shape, merge, and append the information, without the need to physically bring it into Excel.

          Once your query is shaped and filtered how you want it, you can download it into a worksheet in Excel, into the Data Model, or both. When you have the dataset you need, shaped and formed and properly merged, you can save the query that created it, and share it with other users.

          Power Pivot

          Power Pivot enables users to create their own data models from various sources, structured to meet individual needs. You can customise, extend with calculations and hierarchies, and manage the powerful Data Model that is part of Excel.

          The solution works seamlessly and automatically with Power Query, and with other features of Power BI, allowing you to manage and extend your own custom database in the familiar environment of Excel. The entire Data Model in Power Pivot – including tables, columns, calculations and hierarchies – exists as report-ready elements in Power View.

          Power View

          Power View allows users to create engaging, interactive, and insightful visualisations with just a few clicks of their mouse. The tool brings the Data Model alive, turning queries into visual analysis and answers. Data can be presented in a variety of different forms, with the reports easily shareable and open for interactive analysis.

          Power Map

          A relatively new addition to Excel, Power Map is a geocentric and temporal mapping feature of Power BI. It brings location data into powerful, engaging 3D map visualisations. This allows users to create location-based reports, visualised over a time continuum, that tour the available data.

          Using the features together

          Power BI offers a collection of services which are designed to make self-service BI intuitive and collaborative. The solution combines the power and familiarity of Excel with collaboration and cloud-based functionality. This vastly increases users’ capacity to gather, manage and draw insights from data, ensuring they can make the most of business intelligence.

          The various features of Power BI can add value independently, but the real value is in integration. When used in conjunction with one another – rather than in silos – the services become more than the sum of their parts. They are designed to work seamlessly together in Excel 2013, supporting users as they look to find data, process it and create visualisations which add value to the decision-making process.

          Posted by Alex Boardman

          Related upcoming technology announcements from Intel:

          GraphBuilder: Revealing hidden structure within Big Data [Intel Labs blog, Dec 6, 2012]

          By Ted Willke, Principal Engineer with Intel and the General Manager of the Graph Analytics Operation in Intel Labs.

          Big Data.  Big.  Data.  We hear the term frequently used to describe data of unusual size or generated at spectacular velocity, like the amount of social data that Facebook has amassed on us (30 PB in one cluster) or the rate at which sensors at the Large Hadron Collider collect information on subatomic particles (15 PB/year).  And it’s often deemed “unstructured or semi-structured” to describe its lack of apparent, well, structure.  What’s meant is that this data isn’t organized in a way that can directly answer questions, like a database can if you ask it how many widgets you sold last week.

          But Big Data does have structure; it just needs to be discovered from within the raw text, images, video, sensor data, etc., that comprise it.  And, companies, led by pioneers like Google, have been doing this for the better part of a decade, using applications that churn through the information using data-parallel processing and convenient frameworks for it, like Hadoop MapReduce.  Their systems chop the incoming data into slices, farm it out to masses of machines, which subsequently filter it, order it, sum it, transform it, and do just about anything you’d want to do with it, within the practical limits of the readily available frameworks.

          But until recently, only the wizards of Big Data were able to rapidly extract knowledge from a different type of structure within the data, a type that is best modeled by tree or graph structures.  Imagine the pattern of hyperlinks connecting Wikipedia pages or the connections between Tweeters and Followers on Twitter.  In these models, a line is drawn between two bits of information if they are related to each other in some way.  The nature of the connection can be less obvious than in these examples and made specifically to serve a particular algorithm.  For example, a popular form of machine learning called Latent Dirichlet Allocation (a mouthful, I know) can create “word clouds” of topics in a set of documents without being told the topics in advance. All it needs is a graph that connects word occurrences to the filenames.  Another algorithm can accurately guess the type of noun (i.e., person, place, or thing) if given a graph that connects noun phrases to surrounding context phrases.
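The LDA example above can be made concrete: the graph such an algorithm consumes is a bipartite one connecting word occurrences to the files they occur in. Here is a minimal Python sketch of that construction, with toy documents; it is illustrative only and not GraphBuilder or GraphLab code.

```python
# Sketch of the bipartite graph an LDA-style algorithm needs: vertices are
# words and document names, and an edge (word, doc, count) records how often
# the word occurs in that document. Toy data, for illustration only.

from collections import Counter

docs = {
    "physics.txt": "particle collider particle energy",
    "social.txt": "friend follower friend network",
}

edges = []
for filename, text in docs.items():
    for word, count in Counter(text.split()).items():
        edges.append((word, filename, count))

# Each edge connects a word vertex to a document vertex.
print(len(edges))
```

At scale, the same loop becomes a distributed map/reduce job over millions of documents, which is exactly the gap GraphBuilder fills.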

          Many of these graphs are very large, with tens of billions of vertices (i.e., things being related) and hundreds of billions of edges (i.e., the relationships).  And, many that model natural phenomena possess power-law degree distributions, meaning that many vertices connect to a handful of others, but a few may have edges to a substantial portion of the vertices.  For instance, a graph of Twitter relationships would show that many people only have a few dozen followers while only a handful of celebrities have millions. This is all very problematic for parallel computation in general and MapReduce in particular.  As a result, Carlos Guestrin and his crack team at the University of Washington in Seattle have developed a new framework, called GraphLab, that is specifically designed for graph-based parallel machine learning.  In many cases, GraphLab can process such graphs 20-50X faster than Hadoop MapReduce.  Learn more about their exciting work here.
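The power-law skew described above can be demonstrated even on a tiny synthetic graph: give one "celebrity" vertex a disproportionate share of the edges and its degree dwarfs the average vertex's. The sketch below uses made-up data purely to illustrate the imbalance that troubles data-parallel frameworks.

```python
# Toy demonstration of power-law-style degree skew: most vertices have a few
# edges while one "celebrity" vertex dominates. Synthetic data, not Twitter.

from collections import Counter
import random

random.seed(42)
vertices = list(range(1000))

# Half the edges point at celebrity vertex 0; the rest are uniform.
edges = [(random.choice(vertices), 0) for _ in range(500)]
edges += [(random.choice(vertices), random.choice(vertices)) for _ in range(500)]

degree = Counter()
for a, b in edges:
    degree[a] += 1
    degree[b] += 1

# The celebrity's degree dwarfs the average vertex's degree.
print(degree[0], sum(degree.values()) / len(degree))
```

A partitioner that splits work evenly by vertex still ends up with one machine owning most of the edges, which is why frameworks like GraphLab handle such graphs specially.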

          Carlos is a member of the Intel Science and Technology Center for Cloud Computing, and we started working with him on graph-based machine learning and data mining challenges in 2011.  Quickly it became clear that no one had a good story about how to construct large-scale graphs that frameworks like GraphLab could digest.  His team was constantly writing scripts to construct different graphs from various unstructured data sources.  These scripts ran on a single machine and would take a very long time to execute.  Essentially, they were using a labor-intensive, low-performance method to feed information to their elegant high-performance GraphLab framework.  This simply would not do.

          Scanning the environment, we identified a more general hole in the open source ecosystem: A number of systems were out there to process, store, visualize, and mine graphs but, surprisingly, not to construct them from unstructured sources.  So, we set out to develop a demo of a scalable graph construction library for Hadoop.  Yes, for Hadoop.  Hadoop is not good for graph-based machine learning but graph construction is another story.  This work became GraphBuilder, which was demonstrated in July at the First GraphLab Workshop on Large-Scale Machine Learning and open sourced this week at 01.org (under Apache 2.0 licensing).

          GraphBuilder not only constructs large-scale graphs fast but also offloads many of the complexities of graph construction, including graph formation, cleaning, compression, partitioning, and serialization.  This makes it easy for just about anyone to build graphs for interesting research and commercial applications.  In fact, GraphBuilder makes it possible for a Java programmer to build an internet-scale graph for PageRank in about 100 lines of code and a Wikipedia-sized graph for LDA in about 130.
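The "about 100 lines" figure above refers to constructing the graph; for context, the PageRank computation itself is also compact once an edge list exists. The following is a plain-Python sketch of textbook PageRank over a tiny hypothetical edge list, not GraphBuilder or GraphLab code, and it assumes every node has at least one out-link.

```python
# Textbook iterative PageRank over a list of (src, dst) edges.
# Plain-Python sketch for a tiny graph; assumes no dangling nodes.

def pagerank(edges, damping=0.85, iters=20):
    """Return a dict of node -> rank for a list of (src, dst) edges."""
    nodes = {n for e in edges for n in e}
    out_links = {n: [] for n in nodes}
    for src, dst in edges:
        out_links[src].append(dst)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        new = {n: (1 - damping) / len(nodes) for n in nodes}
        for src, targets in out_links.items():
            for dst in targets:
                new[dst] += damping * rank[src] / len(targets)
        rank = new
    return rank

ranks = pagerank([("a", "b"), ("b", "c"), ("c", "a"), ("a", "c")])
print(max(ranks, key=ranks.get))
```

The hard part at internet scale is not this loop but building and partitioning the edge list it consumes, which is GraphBuilder's job.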

          This is only the beginning for GraphBuilder but it has already made a lot of connections.  We will continually update it with new capabilities, so please try it out and let us know if you’d value something in particular.  And, let us know if you’ve got an interesting graph problem for us to grind through.  We are always looking for new revelations.

          Intel, Facebook Collaborate on Future Data Center Rack Technologies  [press release, Jan 16, 2013]

          New Photonic Architecture Promises to Dramatically Change Next Decade of Disaggregated, Rack-Scale Server Designs

          NEWS HIGHLIGHTS

          • Intel and Facebook* are collaborating to define the next generation of rack technologies that enables the disaggregation of compute, network and storage resources.
          • Quanta Computer* unveiled a mechanical prototype of the rack architecture to show the total cost, design and reliability improvement potential of disaggregation.
          • The mechanical prototype includes Intel Silicon Photonics Technology, distributed input/output using Intel Ethernet switch silicon, and supports the Intel® Xeon® processor and the next-generation system-on-chip Intel® Atom™ processor code named “Avoton.”
          • Intel has moved its silicon photonics efforts beyond research and development, and the company has produced engineering samples that run at speeds of up to 100 gigabits per second (Gbps).

          OPEN COMPUTE SUMMIT, Santa Clara, Calif., Jan. 16, 2013 – Intel Corporation announced a collaboration with Facebook* to define the next generation of rack technologies used to power the world’s largest data centers. As part of the collaboration, the companies also unveiled a mechanical prototype built by Quanta Computer* that includes Intel’s new, innovative photonic rack architecture to show the total cost, design and reliability improvement potential of a disaggregated rack environment.

          “Intel and Facebook are collaborating on a new disaggregated, rack-scale server architecture that enables independent upgrading of compute, network and storage subsystems that will define the future of mega-datacenter designs for the next decade,” said Justin Rattner, Intel’s chief technology officer, during his keynote address at Open Compute Summit in Santa Clara, Calif. “The disaggregated rack architecture [since renamed RSA (Rack Scale Architecture)] includes Intel’s new photonic architecture, based on high-bandwidth, 100Gbps Intel® Silicon Photonics Technology, that enables fewer cables, increased bandwidth, farther reach and extreme power efficiency compared to today’s copper-based interconnects.”

          Rattner explained that the new architecture is based on more than a decade’s worth of research to invent a family of silicon-based photonic devices, including lasers, modulators and detectors using low-cost silicon to fully integrate photonic devices of unprecedented speed and energy efficiency. Silicon photonics is a new approach to using light (photons) to move huge amounts of data at very high speeds with extremely low power over a thin optical fiber rather than using electrical signals over a copper cable. Intel has spent the past two years proving its silicon photonics technology was production-worthy, and has now produced engineering samples.

          Silicon photonics made with inexpensive silicon rather than expensive and exotic optical materials provides a distinct cost advantage over older optical technologies in addition to providing greater speed, reliability and scalability benefits. Businesses with server farms or massive data centers could eliminate performance bottlenecks and ensure long-term upgradability while saving significant operational costs in space and energy.

          Silicon Photonics and Disaggregation Efficiencies

          Businesses with large data centers can significantly reduce capital expenditure by disaggregating or separating compute and storage resources in a server rack. Rack disaggregation refers to the separation of those resources that currently exist in a rack, including compute, storage, networking and power distribution, into discrete modules. Traditionally, each server within a rack would have its own group of resources. When disaggregated, resource types can be grouped together and distributed throughout the rack, improving upgradability, flexibility and reliability while lowering costs.

          “We’re excited about the flexibility that these technologies can bring to hardware and how silicon photonics will enable us to interconnect these resources with less concern about their physical placement,” said Frank Frankovsky, chairman of the Open Compute Foundation and vice president of hardware design and supply chain at Facebook. “We’re confident that developing these technologies in the open and contributing them back to the Open Compute Project will yield an unprecedented pace of innovation, ultimately enabling the entire industry to close the utilization gap that exists with today’s systems designs.”

          By separating critical components from one another, each computer resource can be upgraded on its own cadence without being coupled to the others. This provides increased lifespan for each resource and enables IT managers to replace just that resource instead of the entire system. This increased serviceability and flexibility drives improved total-cost for infrastructure investments as well as higher levels of resiliency. There are also thermal efficiency opportunities by allowing more optimal component placement within a rack.

          The mechanical prototype is a demonstration of Intel’s photonic rack architecture for interconnecting the various resources, showing one of the ways compute, network and storage resources can be disaggregated within a rack. Intel will contribute a design for enabling a photonic receptacle to the Open Compute Project (OCP) and will work with Facebook*, Corning*, and others over time to standardize the design. The mechanical prototype includes distributed input/output (I/O) using Intel Ethernet switch silicon, and will support the Intel® Xeon® processor and the next generation, 22 nanometer system-on-chip (SoC) Intel® Atom™ processor, code named “Avoton” available this year.

          The mechanical prototype shown today is the next evolution of rack disaggregation with separate distributed switching functions.

          Intel and Facebook: A History of Collaboration and Contributions

          Intel and Facebook have long been technology collaboration partners on hardware and software optimizations to drive more efficiency and scale for Facebook data centers. Intel is also a founding board member of the OCP, along with Facebook. Intel has several OCP engagements in flight including working with the industry to design OCP boards for Intel Xeon and Intel Atom based processors, support for cold storage with the Intel Atom processor, and common hardware management as well as future rack definitions including enabling today’s photonics receptacle.

          Disruptive technologies to unlock the power of Big Data [Intel Labs blog, Feb 26, 2013]

          By Ted Willke, Principal Engineer with Intel and the General Manager of the Graph Analytics Operation in Intel Labs.

          This week’s announcement by Intel that it’s expanding the availability of the Intel® Distribution for Apache Hadoop* software to the US market is seriously exciting for the employees of this semiconductor giant, especially researchers like me.  Why?  Why would I say this given the amount of overexposure that Hadoop has received?  I mean, isn’t this technology nearly 10 years old already??!!  Well, because the only thing I hear more than people touting Hadoop’s promise are people venting frustration in implementing it.  Rest assured that Intel is listening.  We get that users don’t want to make a career out of configuring Hadoop… debugging it…  managing it… and trying to figure out why the “insight” it’s supposed to be delivering often looks like meaningless noise.

          Which brings me back to why this is a seriously exciting event for me.  With our product teams doing the heavy lifting of making the Hadoop framework less rigid and easier to use while keeping it inexpensive, Intel Labs gets a landing zone for some cool disruptive technologies. In December, I blogged about the launch of our open source scalable graph construction library for Hadoop, called Intel® Graph Builder for Apache Hadoop software (f.k.a. GraphBuilder), and explained how it makes it easy to construct large scale graphs for machine learning and data mining. These structures can yield insights from relationships hidden within a wide range of big data sources, from social media and business analytics to medicine and e-science. Today I’ll delve a bit more into Graph Builder technology and introduce the Intel® Active Tuner for Apache Hadoop software, an auto-tuner that uses Artificial Intelligence (AI) to configure Hadoop for optimal performance.  Both technologies will be available in the Intel Distribution.

          So, Intel® Graph Builder leverages Hadoop MapReduce to turn large unstructured (or semi-structured) datasets into structured output in graph form.  This kind of graph may be mined using graph search of the sort that Facebook recently announced.  Many companies would like to construct such graphs out of unstructured datasets, and Graph Builder makes it possible.  Beyond search, analysis may be applied to an entire graph to answer questions of the type shown in the figure below.  The analysis may be performed using distributed algorithms implemented in frameworks like GraphLab, which I also discussed in my previous post.

          image

          Intel® Graph Builder performs extract, transform, and load operations, terms borrowed from databases and data warehousing.  And, it does so at Hadoop MapReduce scale.  Text is parsed and tokenized to extract interesting features.  These operations are described in a short map-reduce program written by the data scientist.  This program also defines when two vertices (i.e., features) in the graph are related by an edge.  The rule is applied repeatedly to form the graph’s topology (i.e., the pattern of edge relationships between vertices), which is stored via the library.  In addition, most applications require that additional tabulated information, or “network information,” be associated with each vertex/edge and the library provides a number of distributed algorithms for these tabulations.
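          To make the flow concrete (parse and tokenize, apply an edge rule, then tabulate per-edge “network information”), here is a miniature in-process sketch. It is illustrative only: the function names are mine, and the real Graph Builder runs these phases as distributed Hadoop MapReduce jobs rather than in one Python process.

```python
from collections import defaultdict
from itertools import combinations

# Illustrative sketch of Graph Builder-style graph construction.
# The real library runs map/reduce phases on a Hadoop cluster; here
# we simulate them in-process on two tiny "documents".

def map_phase(doc_id, text):
    """Extract features (tokens) and apply the edge rule: two tokens
    appearing in the same document are related by an edge."""
    tokens = {t.lower() for t in text.split() if t.isalpha()}
    for a, b in combinations(sorted(tokens), 2):
        yield (a, b), doc_id          # edge key -> one piece of evidence

def reduce_phase(mapped_streams):
    """Aggregate evidence per edge: the co-occurrence count becomes
    the edge's tabulated "network information" (its weight)."""
    edges = defaultdict(int)
    for stream in mapped_streams:
        for edge, _doc in stream:
            edges[edge] += 1
    return dict(edges)

docs = {1: "intel builds graph tools", 2: "graph tools mine big data"}
mapped = [map_phase(d, t) for d, t in docs.items()]
graph = reduce_phase(mapped)
print(graph[("graph", "tools")])      # → 2 (the pair co-occurs in both docs)
```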

          At this point, we have a large-scale graph ready for HDFS, HBase, or another distributed store.  But we need to do a few more things to ensure that queries and computations on the graph will scale up nicely, like:

          • Cleaning the graph’s structure and checking that it is reasonable
          • Compressing the graph and network information to conserve cluster resources
          • Partitioning the graph in a way that will minimize cluster communications while load balancing computational effort

          The Intel Graph Builder library provides efficient distributed algorithms for all of the above, and more, so that data scientists can spend more of their time analyzing data and less of their time preparing it.  Enough said. The library will be included in the Intel Distribution shortly and we look forward to your feedback.  We are constantly on the hunt for new features as we look to the future of big data.
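          As a toy illustration of the partitioning goal above (my own sketch, not Graph Builder code): hash vertices across k machines and count the “edge cut”, i.e. the edges whose endpoints land on different machines and therefore force cluster communication at computation time.

```python
# Toy illustration (not Graph Builder code): a naive hash partitioner
# and the edge-cut cost it produces.

def partition(vertices, k):
    """Assign each vertex to one of k partitions by hashing its id."""
    return {v: hash(v) % k for v in vertices}

def edge_cut(edges, assignment):
    """Edges whose endpoints sit in different partitions require
    network traffic; a good partitioner minimizes this count while
    keeping the partitions balanced."""
    return sum(1 for u, v in edges if assignment[u] != assignment[v])

# Integer vertex ids keep hashing deterministic (hash(int) == int).
edges = [(0, 1), (1, 3), (2, 4), (4, 5)]
assignment = partition(range(6), k=2)   # vertex v lands on machine v % 2
print(edge_cut(edges, assignment))      # → 2 edges cross machines
```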

          Whereas Intel® Graph Builder was developed to simplify the programming of emerging applications, Intel® Active Tuner was developed to simplify the deployment of today’s applications by automating the selection of configuration settings that will result in optimal cluster performance. In fact, we initially codenamed this technology “Gunther,” after a well-known circus elephant trainer, because of its ability to train Hadoop to run faster :-).  It’s cruelty-free to boot, I promise.  Anyway, many Hadoop configuration parameters need to be tuned for the characteristics of each particular application, such as web search, medical image analysis, audio feature analysis, fraud detection, semantic analysis, etc.  This tuning significantly reduces both job execution and query time but is time-consuming and requires domain expertise. If you use Hadoop you know that the common practice is to tune it up using rule-of-thumb settings published by industry leaders.  But these recommendations are too general and fail to capture the specific requirements of a given application and cluster resource constraints.  Enter the Active Tuner.

          Intel® Active Tuner implements a search engine that uses a small number of representative jobs to identify the best configuration from among millions or billions of possible Hadoop configurations.  It uses a form of AI known as a genetic algorithm to search out the best settings for the number of maps, buffer sizes, compression settings, etc., constantly striving to derive better settings by combining those from pairs of trials that show the most promise (this is where the genetic part comes in) and deriving future trials from these new combinations.  And, the Active Tuner can do this faster and more effectively than a human can using the rules-of-thumb.  It can be controlled from a slick GUI in the new Intel Manager for Apache Hadoop, so take it for a test run when you pick up a copy of the Intel Distribution.  You may see your cluster performance improve by up to 30% without any hassle.
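          The genetic-algorithm idea can be sketched in a few lines. The parameter names and the runtime() cost model below are invented stand-ins for actually timing a representative Hadoop job; this is a toy model of the approach, not Active Tuner itself.

```python
import random

random.seed(42)  # make the toy run repeatable

# Toy model of the genetic-algorithm approach (not Active Tuner code):
# evolve configurations toward lower simulated job runtime.
SPACE = {"map_tasks": list(range(2, 65)),
         "sort_mb": list(range(50, 1001)),
         "compress": [0, 1]}

def random_config():
    return {k: random.choice(v) for k, v in SPACE.items()}

def runtime(cfg):
    # Stand-in for running a small representative job on the cluster
    # and measuring it; invented cost surface with a known optimum.
    return (abs(cfg["map_tasks"] - 32)
            + abs(cfg["sort_mb"] - 400) / 10
            + (5 if not cfg["compress"] else 0))

def crossover(a, b):
    # The "genetic" step: combine settings from two promising trials.
    return {k: random.choice([a[k], b[k]]) for k in SPACE}

pop = [random_config() for _ in range(20)]
initial_best = min(map(runtime, pop))
for _ in range(30):                       # generations
    pop.sort(key=runtime)                 # keep the fittest half ...
    elite = pop[:10]
    pop = elite + [crossover(*random.sample(elite, 2)) for _ in range(10)]
best = min(map(runtime, pop))
print(best <= initial_best)               # → True: elitism never regresses
```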

          To wrap, these are one-of-a-kind technologies that I think you’ll have fun playing with.  And, despite offering quite a lot, Intel® Graph Builder and Intel® Active Tuner are just the beginning.  I am very excited by what’s coming next.  Intel is moving to unlock the power of Big Data and Intel Labs is preparing to blow it wide open.

          *Other names and brands may be claimed as the property of others

          Intel Unveils New Technologies for Efficient Cloud Datacenters [press release, Sept 4, 2013]

          From New SoCs to Optical Fiber, Intel Delivers Cloud-Optimized Innovations Across Network, Storage, Microservers, and Rack Designs

          NEWS HIGHLIGHTS

          • The Intel® Atom™ C2000 processor family is the first based on Silvermont micro-architecture, has 13 customized configurations and is aimed at microservers, entry-level networking and cold storage.
          • New 64-bit, system-on-chip family for the datacenter delivers up to six times1 the energy efficiency and up to seven times2 the performance compared to previous generation.
          • The first live demonstration of a Rack Scale Architecture-based system with high-speed Intel® Silicon Photonics components including a new MXC connector and ClearCurve* optical fiber developed in collaboration with Corning*, enabling data transfer speeds up to 1.6 terabits4 per second at distances up to 300 meters5 for greater rack density.

          SAN FRANCISCO, Calif., September 4, 2013 – Intel Corporation today introduced a portfolio of datacenter products and technologies for cloud service providers looking to drive greater efficiency and flexibility into their infrastructure to support a growing demand for new services and future innovation.

          Server, network and storage infrastructure is evolving to better suit an increasingly diverse set of lightweight workloads, creating the emergence of microserver, cold storage and entry networking segments. By optimizing technologies for specific workloads, Intel will help cloud providers significantly increase utilization, drive down costs and provide compelling and consistent experiences to consumers and businesses.

          The portfolio includes the second generation 64-bit Intel® Atom™ C2000 product family of system-on-chip (SoC) designs for microservers and cold storage platforms (code named “Avoton”) and for entry networking platforms (code named “Rangeley”). These new SoCs are the company’s first products based on the Silvermont micro-architecture, the new design in its leading 22nm Tri-Gate SoC process delivering significant increases in performance and energy efficiency, and arrives only nine months after the previous generation.

          “As the world becomes more and more mobile, the pressure to support billions of devices and users is changing the very composition of datacenters,” said Diane Bryant, senior vice president and general manager of the Datacenter and Connected Systems Group at Intel. “From leadership in silicon and SoC design to rack architecture and software enabling, Intel is providing the key innovations that original equipment manufacturers, telecommunications equipment makers and cloud service providers require to build the datacenters of the future.”

          Intel also introduced the Intel® Ethernet Switch FM5224 silicon which, when combined with the Wind River* Open Network Software suite, brings Software Defined Networking (SDN) solutions to servers for improved density and lower power.

          Intel also demonstrated the first operational Intel Rack Scale Architecture (RSA)-based rack with Intel® Silicon Photonics Technology in combination with the disclosure of a new MXC connector and ClearCurve* optical fiber developed by Corning* with requirements from Intel. This demonstration highlights the speed with which Intel and the industry are moving from concept to functionality.

          Customized, Optimized Intel® Atom™ SoCs for New and Existing Market Segments
          Manufactured using Intel’s leading 22nm process technology, the new Intel Atom C2000 product family features up to eight cores, a TDP range of 6 to 20 watts, integrated Ethernet and support for up to 64 gigabytes (GB) of memory, eight times the previous generation. OVH* and 1&1*, leading global web-hosting services companies, have tested Intel Atom C2000 SoCs and plan to deploy them in their entry-level dedicated hosting services next quarter. The 22 nanometer process technology delivers superior performance and performance per watt.

          Intel is delivering 13 specific models with customized features and accelerators that are optimized for particular lightweight workloads such as entry dedicated hosting, distributed memory caching, static web serving and content delivery to ensure greater efficiency. The designs allow Intel to expand into new markets like cold storage and entry-level networking.

          For example, the new Intel Atom configurations for entry networking address the specialized needs for securing and routing Internet traffic more efficiently. The product features a set of hardware accelerators called Intel® QuickAssist Technology that improves cryptographic performance. They are ideally suited for routers and security appliances.

          By consolidating three communications workloads – application, control and packet processing – on a common platform, providers now have tremendous flexibility. They will be able to meet the changing network demands while adding performance, reducing costs and improving time-to-market.

          Ericsson, a world-leading provider of communications technology and services, announced that its blade-based switches used in the Ericsson Cloud System, a solution which enables service providers to add cloud capabilities to their existing networks, will soon include the Intel Atom C2000 SoC product family.

          Microserver-Optimized Switch for Software Defined Networking
          Network solutions that manage data traffic across microservers can significantly impact the performance and density of the system. The unique combination of the Intel Ethernet Switch FM5224 silicon and the Wind River* Open Network Software suite will enable the industry’s first 2.5GbE, high-density, low latency, SDN Ethernet switch solutions specifically developed for microservers. The solution enhances system level innovation, and complements the integrated Intel Ethernet controller within the Intel Atom C2000 processor. Together, they can be used to create SDN solutions for the datacenter.

          Switches using the new Intel Ethernet Switch FM5224 silicon can connect up to 64 microservers, providing up to 30 percent3 higher node density. They are based on the Intel Open Network Platform reference design announced earlier this year.

          First Demonstration of Silicon Photonics-Powered Rack
          Maximum datacenter efficiency requires innovation at the silicon, system and rack level. Intel’s RSA design helps industry partners to re-architect datacenters for modularity of components (storage, CPU, memory, network) at the rack level. It provides the ability to provision or logically compose resources based on application specific workload requirements. Intel RSA also will allow for the easier replacement and configuration of components when deploying cloud computing, storage and networking resources.

          Intel today demonstrated the first operational RSA-based rack equipped with the newly announced Intel Atom C2000 processors, Intel® Xeon® processors, a top-of-rack Intel SDN-enabled switch and Intel Silicon Photonics Technology. As part of the demonstration, Intel also disclosed the new MXC connector and ClearCurve* fiber technology developed by Corning* with requirements from Intel. The fiber connections are specifically designed to work with Intel Silicon Photonics components.

          The collaboration underscores the tremendous need for high-speed bandwidth within datacenters. By sending photons over a thin optical fiber instead of electrical signals over a copper cable, the new technologies are capable of transferring massive amounts of data at unprecedented speeds over greater distances. The transfers can be as fast as 1.6 terabits per second4 at lengths up to 300 meters5 throughout the datacenter.
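          As a quick sanity check on the headline figure: 1.6 terabits per second is consistent with a connector carrying 64 fibers at 25 gigabits per second each (the lane count here is my assumption for illustration, not an Intel specification).

```python
# Sanity check: 64 fibers x 25 Gb/s per fiber = 1.6 Tb/s.
# Lane count and per-lane rate are assumed values for illustration.
lanes = 64
gbps_per_lane = 25
total_gbps = lanes * gbps_per_lane
print(total_gbps / 1000)  # → 1.6 (terabits per second)
```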

          To highlight the growing range of Intel RSA implementations, Microsoft and Intel announced a collaboration to innovate on Microsoft’s next-generation RSA rack design. The goal is to bring even better utilization, economics and flexibility to Microsoft’s datacenters.

          The Intel Atom C2000 product family is shipping to customers now with more than 50 designs for microservers, cold storage and networking. The products are expected to be available in the coming months from vendors including Advantech*, Dell*, Ericsson*, HP*, NEC*, Newisys*, Penguin Computing*, Portwell*, Quanta*, Supermicro*, WiWynn*, ZNYX Networks*.

          Intel Brings Supercomputing Horsepower to Big Data Analytics [press release, Nov 19, 2013]

          NEWS HIGHLIGHTS

          • Intel discloses form factors and memory configuration details of the CPU version of the next generation Intel® Xeon Phi™ processor (code named “Knights Landing“), to ease programmability for developers while improving performance.
          • Intel® Xeon® processor-based systems power more than 82 percent of all supercomputers on the recently announced 42nd edition of the Top500 list.
          • New Intel® HPC Distribution for Apache Hadoop* and Intel® Cloud Edition for Lustre* software tools bring the benefits of Big Data analytics and HPC together.
          • Collaboration with HPC community designed to deliver customized products to meet the diverse needs of customers.

          SUPERCOMPUTING CONFERENCE, Denver, Nov. 19, 2013 – Intel Corporation unveiled innovations in HPC and announced new software tools that will help propel businesses and researchers to generate greater insights from their data and solve their most vital business and scientific challenges.

          “In the last decade, the high-performance computing community has created a vision of a parallel universe where the most vexing problems of society, industry, government and research are solved through modernized applications,” said Raj Hazra, Intel vice president and general manager of the Technical Computing Group. “Intel technology has helped HPC evolve from a technology reserved for an elite few to an essential and broadly available tool for discovery. The solutions we enable for ecosystem partners for the second half of this decade will drive the next level of insight from HPC. Innovations will include scale through standards, performance through application modernization, efficiency through integration and innovation through customized solutions.”

          Accelerating Adoption and Innovation
          From Intel® Parallel Computing Centers to Intel® Xeon Phi™ coprocessor developer kits, Intel provides a range of technologies and expertise to foster innovation and adoption in the HPC ecosystem. The company is collaborating with partners to take full advantage of technologies available today, as well as create the next generation of highly integrated solutions that are easier to program for and are more energy-efficient. As a part of this collaboration Intel also plans to deliver customized HPC products to meet the diverse needs of customers. This initiative is aimed to extend Intel’s continued value of standards-based scalable platforms to include optimizations that will accelerate the next wave of scientific, industrial, and academic breakthroughs.

          During the Supercomputing Conference (SC’13), Intel unveiled how the next generation Intel Xeon Phi product (codenamed “Knights Landing”), available as a host processor, will fit into standard rack architectures and run applications entirely natively instead of requiring data to be offloaded to the coprocessor. This will significantly reduce programming complexity and eliminate “offloading” of the data, thus improving performance and decreasing latencies caused by memory, PCIe and networking.

          Knights Landing will also offer developers three memory options to optimize performance. Unlike other Exascale concepts requiring programmers to develop code specific to one machine, new Intel Xeon Phi processors will provide the simplicity and elegance of standard memory programming models.

          In addition, Intel and Fujitsu recently announced an initiative that could potentially replace a computer’s electrical wiring with fiber optic links to carry Ethernet or PCI Express traffic over an Intel® Silicon Photonics link. This enables Intel Xeon Phi coprocessors to be installed in an expansion box, separated from host Intel Xeon processors, but function as if they were still located on the motherboard. This allows for much higher density of installed coprocessors and scaling the computer capacity without affecting host server operations.

          Several companies are already adopting Intel’s technology. For example, Fovia Medical*, a world leader in volume rendering technology, created high-definition, 3D models to help medical professionals better visualize a patient’s body without invasive surgery. A demonstration from the University of Oklahoma’s Center for Analysis and Prediction of Storms (CAPS) showed a 2D simulation of an F4 tornado, and addressed how a forecaster will be able to experience an immersive 3D simulation and “walk around a storm” to better pinpoint its path. Both applications use Intel® Xeon® technology.

          High Performance Computing for Data-Driven Discovery
          Data intensive applications including weather forecasting and seismic analysis have been part of the HPC industry from its earliest days, and the performance of today’s systems and parallel software tools have made it possible to create larger and more complex simulations. However, with unstructured data accounting for 80 percent of all data, and growing 15 times faster than other data1, the industry is looking to tap into all of this information to uncover valuable insight.

          Intel is addressing this need with the announcement of the Intel® HPC Distribution for Apache Hadoop* software (Intel® HPC Distribution) that combines the Intel® Distribution for Apache Hadoop software with Intel® Enterprise Edition of Lustre* software to deliver an enterprise-grade solution for storing and processing large data sets. This powerful combination allows users to run their MapReduce applications, without change, directly on shared, fast Lustre-powered storage, making it fast, scalable and easy to manage.

          The Intel® Cloud Edition for Lustre* software is a scalable, parallel file system that is available through the Amazon Web Services Marketplace* and allows users to pay-as-you go to maximize storage performance and cost effectiveness. The software is ideally suited for dynamic applications, including rapid simulation and prototyping. In the case of urgent or unplanned work that exceeds a user’s on-premise compute or storage performance, the software can be used for cloud bursting HPC workloads to quickly provision the infrastructure needed before moving the work into the cloud.

          With numerous vendors announcing pre-configured and validated hardware and software solutions featuring the Intel Enterprise Edition for Lustre at SC’13, Intel and its ecosystem partners are bringing turnkey solutions to market to make big data processing and storage more broadly available, cost effective and easier to deploy. Partners announcing these appliances include Advanced HPC*, Aeon Computing*, ATIPA*, Boston Ltd.*, Colfax International*, E4 Computer Engineering*, NOVATTE* and System Fabric Works*.

          Intel Tops Supercomputing Top 500 List
          Intel’s HPC technologies are once again featured throughout the 42nd edition of the Top500 list, demonstrating how the company’s parallel architecture continues to be the standard building block for the world’s most powerful supercomputers. Intel-based systems account for more than 82 percent of all supercomputers on the list and 92 percent of all new additions. Within a year after the introduction of Intel’s first Many Core Architecture product, Intel Xeon Phi coprocessor-based systems already make up 18 percent of the aggregated performance of all Top500 supercomputers. The complete Top500 list is available at www.top500.org.


          1 From IDC Digital Universe 2020 (2013)

          Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products.
          Optimization Notice
          Intel’s compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice.
          Intel does not control or audit the design or implementation of third party benchmark data or Web sites referenced in this document. Intel encourages all of its customers to visit the referenced Web sites or others where similar performance benchmark data are reported and confirm whether the referenced benchmark data are accurate and reflect performance of systems available for purchase.

          Fujitsu Lights up PCI Express with Intel Silicon Photonics [The Data Stack blog of Intel, Nov 5, 2013]

          Victor Krutul is the Director of Marketing for the Silicon Photonics Operation at Intel.  He shares the vision and passion of Mario Paniccia that Silicon Photonics will one day revolutionize the way we build computers and the way computers talk to each other.  His other passions are tennis and motorcycles (but not at the same time)!

          I am happy to report that Fujitsu announced at its annual Fujitsu Forum on November 5, 2013, that it has worked with Intel to build and demonstrate the world’s first Intel® Optical PCI Express (OPCIe) based server.  This OPCIe server was enabled by Intel® Silicon Photonics technology.  I think Fujitsu has done some good work in realizing that OPCIe-powered servers offer several advantages over non-OPCIe based servers.  Rack based servers, especially 1U and 2U servers, are space and power constrained.  Sometimes OEMs and end users want to add capabilities such as more storage and CPUs to these servers but are limited because there is simply not enough space for these components, or because packing too many components too close together increases the heat density and prevents the system from being able to cool them.

          Fujitsu found a way to fix these limitations!

          The solution to the power and space density problems is to locate the storage and compute components on a remote blade or tray in a way that they appear to the CPU to be on the main motherboard.  The other way to do this is to have a pool of hard drives managed by a second server – but this approach requires messages to be sent between the two servers, and this adds latency – which is bad.  It is possible to do this with copper cables; however, the distance the copper cables can span is limited due to electro-magnetic interference (EMI).  One could use amplifiers and signal conditioners, but these obviously add power and cost.  Additionally, PCI Express cables can be heavy and bulky.  I have one of these PCI Express Gen 3, 16-lane cables and it feels like it weighs 20 lbs.  Compare this to an MXC cable that carries 10x the bandwidth and weighs one to two pounds depending on length.

          Fujitsu took two standard Primergy RX200 servers and added an Intel® Silicon Photonics module into each, along with an Intel-designed FPGA.  The FPGA did the necessary signal conditioning to make PCI Express “optical friendly”.  Using Intel® Silicon Photonics they were able to send the PCI Express protocol optically through an MXC connector to an expansion box (see picture below).  In this expansion box were several solid-state disks (SSDs) and Xeon Phi co-processors, and of course a Silicon Photonics module along with the FPGA to make PCI Express optical friendly.  The beauty of this approach was that the SSDs and Xeon Phis appeared to the RX200 server as if they were on the motherboard.  With photons traveling at 186,000 miles per second, the extra latency of traveling down a few meters of cable cannot reliably be measured (it can be calculated to be ~5 ns/meter, or 5 billionths of a second per meter).

          So what are the benefits of this approach?  Basically there are four.  First, Fujitsu was able to increase the storage capacity of the server, because it could now utilize the additional disk drives in the expansion box; the number of drives is determined only by the physical size of the box.  The second benefit is that they were able to increase the effective CPU capacity of the Xeon E5s in the RX200 server, because the Xeon E5s could now utilize the CPU capacity of the Xeon Phi co-processors; in a standard 1U rack it would be hard, if not impossible, to incorporate Xeon Phis.  The third benefit is cooling: putting the SSDs in an expansion box allows one to burn more power, because the cooling is divided between the fans in the 1U server and those in the expansion box.  The fourth benefit concerns what is called cooling density, or how much heat needs to be cooled per cubic centimeter.  Let me make up an example.  For simplicity’s sake, let’s say the volume of a 1U server is 1 cubic meter, and that there are 3 fans cooling it, each able to cool 333 watts, for a total capacity of about 1,000 watts of cooling.  If I space the components evenly, each fan does its share and I can cool 1,000 watts.  Now assume I place all the components so that just one fan is cooling them, because there is no room in front of the other two fans.  If those components dissipate more than 333 watts, they can’t be cooled.  That’s cooling density.  The Fujitsu approach solves the SSD expansion problem, the CPU expansion problem, and the total cooling and cooling density problems.
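          The ~5 ns/meter figure quoted above follows directly from the speed of light in glass: light in fiber travels at roughly c/n, where n ≈ 1.5 is a typical refractive index for optical fiber (an assumed textbook value, not a Fujitsu or Intel specification).

```python
# Back-of-envelope check of the ~5 ns per meter fiber latency figure.
C_VACUUM = 299_792_458           # speed of light in vacuum, m/s
N_FIBER = 1.5                    # typical refractive index of glass fiber
delay_ns_per_m = N_FIBER / C_VACUUM * 1e9
print(round(delay_ns_per_m, 1))  # → 5.0 ns of delay per meter of fiber
```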

          image

          Go to: https://www-ssl.intel.com/content/dam/www/public/us/en/images/research/pci-express-and-mxc-2.jpg if you want to see the PCI Express copper cable vs. the MXC optical cable (you will also see we had a little fun with the whole optical vs. copper thing.)

          Besides Intel® Silicon Photonics the Fujitsu demo also included Xeon E5 microprocessors and Xeon Phi co-processors.

          Why does Intel want to put lasers in and around computers?

          Photonic signaling (aka fiber optics) has two fundamental advantages over copper signaling.  First, when electrical signals go down a wire or PCB trace they emit electromagnetic radiation (EMI), and when this EMI from one wire or trace couples into an adjacent wire it causes noise, which limits the bandwidth-distance product.  For example, 10G Ethernet copper cables have a practical limit of 10 meters.  Yes, you can put amplifiers or signal conditioners on the cables and make an “active copper cable”, but these add power and cost.  Active copper cables are made for 10G Ethernet, and they have a practical limit of 20 meters.

          Photons don’t emit EMI like electrons do, thus fiber-based cables can go much longer.  For example, with the lower-cost lasers used in data centers today, at 10G you can build 500-meter cables.  You can go as far as 80 km if you use a more expensive laser, but these are only needed a fraction of the time in the data center (usually when you are connecting the data center to the outside world).

          The other benefit of optical communication is lighter cables.  Optical fibers are thin, typically 120 microns, and light.  I have heard of situations where large data centers had to reinforce the raised floors because, with all the copper cable, the floor loading limits would have been exceeded.

          So how come optical communications is not used more in the data center today? The answer is cost!

          Optical devices made for data centers are expensive.  They are made out of expensive and exotic materials like lithium niobate or gallium arsenide.  Difficult to pronounce, even more difficult to manufacture.  The state of the art for these exotic materials is 3-inch wafers with very low yields.  Manufacturing these optical devices is expensive.  They are built inside gold-lined cans, and sometimes manual assembly is required as technicians “light up” the lasers and align them to the thin fibers.  A special index-matching epoxy is used that can cost as much per ounce as gold.  The bottom line is that while optical communications can go further and uses lighter fiber cables, it costs a lot more.

          Enter Silicon Photonics!  Silicon photonics is the science of making photonic devices out of silicon in a CMOS fab.  (We say “photonics” rather than “optics” because the word “optical” is also used to describe eyeglasses and telescopes.)  Silicon is one of the most common elements in the Earth’s crust, so it’s not expensive.  Intel has 40+ years of CMOS manufacturing experience and has worked over those 40 years to drive costs down and manufacturing speed up.  In fact, Intel currently has over $65 billion of capital investment in CMOS fabs around the world.  In short, the vision of Intel® Silicon Photonics is to combine the natural advantages of optical communications with the low-cost advantages of making devices out of silicon in a CMOS fab.

          Intel has been working on Intel® Silicon Photonics (SiPh) for over ten years and has begun the process of productizing SiPh.  Earlier this year, at the OCP summit Intel announced that we have begun the long process of building up our manufacturing abilities for Silicon Photonics.  We also announced we had sampled customers with early parts.

          People often ask me when we will ship our products and how much they will cost.  They also ask me for all sorts of technical details about our SiPh modules.  I tell them that Intel is focusing on a full line of solutions – not a single component technology. What our customers want are complete Silicon Photonics based solutions that will make computing easier, faster or less costly.  Let me cite our record of delivering end-to-end solutions:

          Summary of Intel Solution Announcements

          January 2013:  We did a joint announcement with Facebook at the Open Compute Project (OCP) meeting that we worked together to design a disaggregated rack architecture (since renamed RSA [Rack Scale Architecture]).  This architecture used Intel® Silicon Photonics and allowed storage and networking to be disaggregated, or moved away from, the CPU motherboard.  The benefit is that users can now choose which components they want to upgrade and are not forced to upgrade everything at the same time.

          April 2013: At the Intel Developer Forum we demonstrated the first ever public demonstration of Intel® Silicon Photonics at 100G.

          September 2013: We demonstrated a live working Rack Scale Architecture solution using Intel® Silicon Photonics links carrying Ethernet protocol.

          September 2013: Joint announcement with Corning for new MXC and ClearCurve fiber solution capable of transmission of 300m with Intel® Silicon Photonics at 25G.  This reinforced our strategy of delivering a complete solution including cables and connectors that are optimized for Intel® Silicon Photonics.

          September 2013: Updated demonstration of a solution using Silicon Photonics to send data at 25G for more than 800 meters over multimode fibers – a new world record.

          Today: Intel has extended its Silicon Photonics solution leadership with a joint announcement with Fujitsu demonstrating the world’s first Intel® Silicon Photonics link carrying PCI Express protocol.

          I hope you will agree with me that Intel is focusing on more than just CPUs or optical modules and will deliver a complete Silicon Photonics solution!

          Disaggregation in the next-generation datacenter and HP’s Moonshot approach for the upcoming HP CloudSystem “private cloud in-a-box” with the promised HP Cloud OS based on the 4-year-old OpenStack effort with others

          My Software defined server without Microsoft: HP Moonshot [‘Experiencing the Cloud’, April 10, 2013 – updated Dec 6, 2013] post already introduced the HP Moonshot System. This post discusses Moonshot in a much wider context, and provides information that came after Dec 6, 2013, particularly from the HP Discover Barcelona 2013 event:
          1. The essence of IT industry’s state-of-the-art regarding the datacenter and the cloud
          2. Recent academic research: the disaggregated datacenter phenomenon
          3. Details about HP’s converged systems and next-gen cloud technology
          4. Latest details about HP’s Moonshot technology
          1. The essence of IT industry’s state-of-the-art regarding the datacenter and the cloud

          There is a new way of thinking in the IT industry which is best represented by No silo left behind: Convergence in the age of virtualization, cloud, and Big Data [HP Discover YouTube channel, recorded on Dec 10, 11:20 AM – 12:20 PM; published on Dec 11, 2013] presentation by HP on its HP Discover Barcelona 2013 event:

          Join Tom Joyce, Senior Vice President, Converged Systems, as he highlights the latest HP innovations and solutions that are leading the new end-to-end revolution.

          As far as the cloud is concerned today’s issue is Making hybrid real for IT and business success [HP Discover YouTube channel, recorded on Dec 10, 12:40 PM – 1:40 PM; published on Dec 11, 2013]

          Join Saar Gillai, senior vice president and general manager, HP Cloud, in this provocative, hype-busting session to learn what IT and business leaders can do today to set their organizations on a successful path to the cloud. [IMHO the part mentioning “traditional apps put into the cloud via virtualization” (typically via VMware) vs. modern “scale-out apps” is especially important. Starting at [10:45]!]

          Then one should at least briefly understand HP Cloud strategy and benefit of leveraging a portfolio of solutions [HP Discover YouTube channel, Dec 12, 2013]

          [Margaret Dawson, VP Marketing, HP Cloud] HP Cloud is designed to run and operate next gen web services at scale on a global basis. HP’s Converged Cloud strategy approach provides customers a common architecture allowing them to integrate private, managed and public cloud with traditional IT infrastructures.

          And HP is just about half a year from the point (in time) when it will have its final answer to the question: How open source will reinvent cloud computing – again [HP Discover YouTube channel, Dec 12, 2013], the presentation which was originally announced under the title “The Rise of Open Source Clouds” and finally delivered with the following slides (to whet your appetite for watching the recording of the presentation that follows next):

          image

          image

          image

          image

          image

          image

          image
          “Different delivery models being private, managed and public. … On the top you can see the six workload areas. These areas are basically what we’ll build our product portfolio against. So we’ll be moving away from just sort of a catalogue of SKUs and piece parts into building offers on a workload basis, things like dev/test, business continuity, technical computing or HPC, and of course things like analytics and infrastructure.”

          Bill Hilf, Vice President of Product Management for HP Cloud, will walk you through HP’s strategy and innovation with OpenStack and how it helps customers deploy, manage, and secure cloud environments.

          Now we can take a brief Tour of the Cloud Booth at HP Discover Barcelona [hpcloud YouTube channel, Dec 11, 2013] in order to understand the cloud-related announcements made by HP (some of these will be detailed later in this post, as they relate to its title)

          And Moonshot-specific announcements are briefly summarized in HP Moonshot latest innovations allow your business can embrace the new style of IT [HP Discover YouTube channel, Dec 12, 2013]

          HP has defined and led the industry standard server market for years. HP’s John Gromala and Janet Bartleson discuss how HP has taken HP Moonshot to the next level with the latest innovations and how they can benefit you. The idea is simple: Using energy efficient CPUs in architecture tailored for a specific application results in radical power, space, and cost savings when run at scale.

          Finally The future according to HP Labs [HP Discover YouTube channel, Dec 12, 2013]

          HP Discover is all about the future. HP Labs — HP’s central research arm — is all about the far future. Come and hear how three of HP’s most senior technologists see the IT landscape evolving and how it will transform all our lives.

          This is the essence of IT industry’s state-of-the-art regarding the datacenter and the cloud.


          2. On the other hand, recent academic research has just been awakening to what it calls the disaggregated datacenter phenomenon, already happening as the “next big thing” in the industry, as evidenced by the following excerpts from the Network Support for Resource Disaggregation in Next-Generation Datacenters [research paper* at HotNets-XII**, Nov 21-22, 2013]

          Datacenters have traditionally been architected as a collection of servers wherein each server aggregates a fixed amount of computing, memory, storage, and communication resources. In this paper, we advocate an alternative construction in which the resources within a server are disaggregated and the datacenter is instead architected as a collection of standalone resources.

          Disaggregation brings greater modularity to datacenter infrastructure, allowing operators to optimize their deployments for improved efficiency and performance. However, the key enabling or blocking factor for disaggregation will be the network since communication that was previously contained within a single server now traverses the datacenter fabric. This paper thus explores the question of whether we can build networks that enable disaggregation at datacenter scales.

          image

          image

          Figure 2: Architectural differences
          between server-centric and resource-centric datacenters***

          As illustrated in Figure 2, the high-level idea behind disaggregation is to develop standalone hardware “blades” for each resource type including CPUs, memory, storage, and network interfaces as well as specialized components (GPUs, various ASIC accelerators, etc.). Those resource blades are interconnected by a datacenter-wide network fabric. Understanding the specifications and nature of this network fabric is our focus in this paper.
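          The paper’s central worry, that traffic which used to stay inside a server now crosses the datacenter fabric, can be illustrated with a quick back-of-envelope calculation. The numbers below are my own illustrative orders of magnitude, not figures from the paper:

```python
# Sketch: why the fabric is the make-or-break factor for disaggregated memory.
# Illustrative latencies only (assumed, not taken from the HotNets paper):
LOCAL_DRAM_LATENCY_NS = 100     # on-board DRAM access, order of magnitude
FABRIC_LATENCY_NS = 3_000       # assumed fabric round trip incl. NIC overhead

def effective_miss_latency_ns(remote_fraction: float) -> float:
    """Average memory-access latency when a fraction of accesses go remote."""
    return ((1 - remote_fraction) * LOCAL_DRAM_LATENCY_NS
            + remote_fraction * FABRIC_LATENCY_NS)

# Even a small remote fraction dominates the average, which is why the
# paper asks whether a low-enough-latency network can be built at scale:
for frac in (0.0, 0.1, 0.5):
    print(f"{frac:.0%} remote -> {effective_miss_latency_ns(frac):.0f} ns avg")
```

With these assumed numbers, sending just 10% of memory accesses across the fabric roughly quadruples the average access latency, which is exactly the kind of constraint the authors set out to quantify.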

          Abbreviations used above for Figure 2. (in addition to “C” for CPU and “M” for Memory):

          Martin Fink, CTO and Director of HP Labs, speaks at NTH Generation’s 13th Annual Symposium.

          * Sangjin Han (U.C.Berkeley), Norbert Egi (Huawei Corp.), Aurojit Panda, Sylvia Ratnasamy (U.C.Berkeley), Guangyu Shi (Huawei Corp.), Scott Shenker (U.C.Berkeley and ICSI)
          ** Twelfth ACM Workshop on Hot Topics in Networks
          *** I should emphasize here that a disaggregated datacenter with shared disaggregated memory (as in part (b) of Figure 2 above) is NOT a kind of academic exaggeration but a relatively “near term reality” of the future. It became somewhat obvious from the recent The future according to HP Labs video included at the end of the first section above, especially when Moonshot was mentioned. To provide more evidence, watch the Tectonic shifts: Where the future of convergence is taking us [NTH Generation Computing, Inc. YouTube channel, recorded on Aug 1; published on Aug 20, 2013] keynote presentation above. In it, HP CTO Martin Fink said that a new type of device HP has been working on for years, called the memristor, could be made into a non-volatile and non-hierarchical, i.e. universal, memory system, replacing both DRAM and flash, as well as magnetic storage in the longer term. He also hinted at specialised Moonshot cartridges, possibly using memristor memory instead of DRAM, linked by terabit-class photonic connects to memristor storage arrays. He was already showing a prototype memristor wafer as well. There is no wonder therefore that, according to HP’s own Six IT technologies to watch [Enterprise 20/20 Blog, Sept 5, 2013] article:

          Such a device could store up to 1 petabit of information per square centimeter and could replace both memory and storage, speeding up access to data and allowing an order-of-magnitude increase in the amount of data stored. Since then, HP has been busy preparing production of these devices. First production units should be available towards the end of 2013 or early in 2014. It will transform our storage approaches completely.
          The Future of Big Data – an interview with John Sontag, VP and director of HP Labs’ Systems Research [HP Enterprise Business Community, Nov 14, 2013] is providing even bigger prospects as:
          If Moonshot is helping us make computers smaller and less energy-hungry, then our work on memristors will allow us to collapse the old processor/memory/storage hierarchy, and put processing right next to the data.
          Next, our work on photonics will help collapse the communication fabric and bring these very large scales into closer proximity. That lets us combine systems in new and interesting ways.
          On top of all that, we need to reduce costs: if we tried to process all the data that we’re predicting we’ll want to at today’s prices, we’d collapse the world economy. And we need to think about how we secure and manage that data, and how we deliver algorithms that let us transform it fast enough that you can conduct experiments on this data literally as fast as we can think them up.
          The combination of non-volatile, memristor-powered memory and very large scales is causing the people who think about storage and algorithms to realize that the tradeoff has changed. For the last 50 years, we’ve had to think of every bit of data that we process as something that eventually has to get put on a disk drive if you intend to keep it. That means you have to think about the time to fetch it, to re-sort it into whatever way you want it to rest in memory, and to put it back when you’re done as one of your costs of doing business.
          If you don’t have those issues to worry about, you can leave things in memory – graphs, for example, which are powerful expressions of complex data – that at present you have to spend a lot of compute time and effort pulling apart for storage. The same goes for processing. Right now we have to worry about how we break data up, what questions we ask it and how many of us are asking it at the same time. It makes experimentation hard because you don’t know whether the answer’s going to come immediately or an hour later.
          Our vision is that you can sit at your desk and know you’ll get your answer instantly. Today we can do that for small scale problems, but we want to make that happen for all of the problems that you care about. What’s great is that we can begin to do this with some questions that we have right now. We don’t have to wait for this to change all at once. We can go at it in an incremental way and have pieces at multiple stages of evolution concurrently – which is exactly what we’re doing.
          There are people who have given up on thinking about certain problems because there’s no way to compactly express them with the systems we have today. They’re going to be able to look at those problems again – it’s already happening with Moonshot and HAVEn [HP’s Big Data platform], and at each stage of this evolution we’re going to allow another set of people to realize that the problem they thought was impossible is now within reach.
          One example of where this already happened is aircraft design. When we moved to 64-bit processors that fit on your desktop and that could hold more than four gigabytes of memory, the people who built software that modeled the mechanical stresses on aircraft realized that they could write completely different algorithms. Instead of having to have a supercomputer to run just a part of their query, they could do it on their desktop. They could hold an entire problem in memory, and then they could look at it differently. From that we got the Airbus A380, the Boeing 777 and 787, and, jumping industries, most new cars.

          Now back to the academic research for Network Support for Resource Disaggregation in Next-Generation Datacenters [presentation slides at HotNets-XII*, Nov 21-22, 2013] to illustrate their understanding of the trends:

          The Trends: Disaggregation 

          HP MoonShot
          –  Shared cooling/casing/power/mgmt for server blades
          [Note that Moonshot is much more than that, as it was already presented in all detail in my Software defined server without Microsoft: HP Moonshot [‘Experiencing the Cloud’, April 10, 2013 – updated Dec 6, 2013] post.]

          image

          AMD SeaMicro
          –  Virtualized I/O
          image

          [from the research paper:]
          SeaMicro’s server architecture [6] uses a looser coupling of components within a single server … the network in SeaMicro’s architecture implements a 3D torus interconnect, which only disaggregate I/O and does not scale beyond the rack … [6] SeaMicro Technology Overview.

          Intel Rack Scale Architecture

          image

          [from the research paper: SeaMicro’s server architecture [6] uses a looser coupling of components within a single server,] while Intel’s Rack Scale
          Architecture (RSA) [15] extends this approach to rack scales. …
          [15] Intel Newsroom. Intel, Facebook Collaborate on Future Data Center Rack Technologies

          Open Compute Project

          image                

          image

          image

          Closing Remarks

          • Disaggregated datacenter will be “the next big thing”   
            – Already happening. We [i.e. the academic research] need to catch up!   


          3. And next continue with the details about HP’s converged systems and next-gen cloud technology:

          Why HP uses its own Converged Infrastructure solutions [Enterprise CIO Forum YouTube channel, Nov 11, 2013]

          HP’s CIO, Ramon Baez, tells us about the benefits HP has found in using its own Converged Infrastructure solutions, including Networking, Storage, and Moonshot servers. For more, see hp.com/ci

          From “Sharks” in the press at HP Discover, Barcelona – Day One coverage [HP Converged Infrastructure blog, Dec 10, 2013]

          … we were hosting a large press announcement that went out over the wire on Monday at 3 pm local time (CET).

          Here’s a brief summary of the announcement that was presented by Tom Joyce,  Senior Vice President and General Manager, HP Converged Systems. The HP ConvergedSystem is a new product line completely re-engineered from the ground up, based on 21st-century assets and architectures for the New Style of IT. This is an important point, as Tom emphasized: this is not a collection of piece parts; this is a completely new engineered solution, built on core building blocks that are workload-optimized systems which are easy to buy, manage, and support – order to operations in as few as 20 days, with ONE tool to manage and, most importantly, ONE point of accountability.

          Built using HP Converged Infrastructure’s best-in-class servers, storage, networking, software and services, the new HP ConvergedSystem family of products delivers a total systems experience “out of the box.”

          • HP ConvergedSystem for Virtualization helps clients easily scale computing resources to meet business needs with preconfigured, modular virtualization systems supporting 50 to 1,000 virtual machines at twice the performance, and at an entry price 25 percent lower than competitive offerings.
          • HP ConvergedSystem 300 for Vertica speeds big data analytics, helping organizations turn data into actionable insights at 50 to 1,000 times faster performance and 70 percent lower cost per terabyte than legacy data warehouses.
          • HP ConvergedSystem 100 for Hosted Desktops, based on the award-winning HP Moonshot server, delivers a superior desktop experience compared to traditional virtual desktop infrastructure. This first PC on a chip for the data center delivers six times faster graphics performance and 44 percent lower total cost of ownership.

          The physical press event, in my opinion, was pretty cool, and one of the better ones I have attended. The new HP ConvergedSystem for Virtualization 300 and 700 debuted on stage to the theme from Jaws, with much snapping of camera flashes. Tom explained why the shark theme was so integral to this particular system, with core attributes of most “efficient”, “best in class”, extremely “fast”, very “agile”, and that it “never sleeps”!!

          The best one-liner from Tom Joyce during the session was “If I were VCE [VMware/Cisco/EMC combination] I would be getting out of the water!!”, which was captured on the HP live streaming video found here.  Check it out, as it is worth watching.  I have also included the full “HP Shark” press release HP Introduces Innovations Built for the Data Center of the Future.

          Here is a detailed press report on that: HP Targets VCE With Converged System Lineup [Dec 10, 2013].

          HP ConvergedSystem: Innovation to reduce the complexity of technology integration [HP Discover YouTube channel, Dec 11, 2013]

          Tom Joyce, Senior Vice President of the HP ConvergedSystem business unit, talks about how over the last two decades IT has been forced to focus on too many products, too many tools, and overly complex processes, spending too many resources on maintenance and not enough on innovation. To break free, IT must move from infrastructure craftsmen to business service experts with workload-optimized, engineered systems that are easy to procure, manage, and support and that enable their business to quickly capitalize on new applications like big data and new delivery models such as cloud.

          The HP “Sharks” are in the Water [HP Converged Infrastructure blog, Dec 9, 2013]

          Written by guest blogger Tom Joyce, Senior Vice President and General Manager, HP Converged Systems

          Seven months ago HP announced the formation of our new Converged Systems business unit.  I was excited to be asked to lead this new team because so many of our customers had told us they needed truly converged platforms for their datacenters.  Over the last five years HP had developed Converged Infrastructure technologies for storage, networking and servers that enabled better and more cost effective solutions, but it was time to take it to the next level.  We needed to bring all those technologies together in a way that collapsed the cost of IT infrastructure and made everything faster and easier.

          Starting last summer, we built our team.  We hired the best of the best from within HP and from elsewhere.  We put in place an operating model and set of processes that allow us to do agile product development and deliver products to market rapidly and with high quality.  And we got really creative in our thinking.  We were also fortunate to get a lot of time with Meg [Whitman, HP CEO] and other top people throughout HP.  This was critical because to deliver a game changing set of new products, we had to break down or change a lot of established processes in development, manufacturing, support and go-to-market.  We had to break some glass, and Meg helped us do that by making this a high priority.

          Based on the customer input, there were some critical things I knew we needed to do. 

          • Move fast.  The IT market is changing quickly, and I wanted to get our first set of products out by the end of the calendar year. 
          • Do more than just combine existing server, storage, networking and software components.  We needed to engineer these new products to deliver more with less infrastructure, and to handle the most important customer workloads exceptionally well. 
          • Everything had to be simple – the ordering process, the system design, management, support, easy upgrades – everything.
          • Think about the “whole offer” and experience for the customer, not just the product itself.  This meant providing a better process from end to end.
          • Deliver exceptional economics.  The new product had to be priced to market with a clear return on investment for the customer. 
          • Most importantly, we needed to make sure that our channel partners could make money selling this product, and could provide specialized services around it.

          After developing our plan, we started “Project Sharks”.  We called it this because if you think about it, a shark is perfectly engineered to accomplish its mission – it is the ideal hunting machine.  When I was a kid I was fascinated by sharks.  People tend to think of sharks as primitive creatures, but they are actually extremely sophisticated.  Everything is designed with a purpose, and there is no waste.  Sharks have a unique hydroskeleton, musculature, and skin.  All these parts are connected to maximize thrust so that the animal can move fast, like a torpedo.  Sharks are noted for being able to sense blood in the water, but beyond that they have an amazingly complete set of sensors – perhaps the most sophisticated set of “sensors in the sea.” 🙂

          Our goal with “project sharks” was to build a perfectly designed virtual infrastructure machine.  This week at HP Discover, Barcelona, we announced the new HP ConvergedSystem for Virtualization.  Click here to find out more information.  The two models are designed to be core building blocks for constructing a converged data center.  They are very fast and efficient, delivering better raw IOPS for virtualization at a great cost point.  They can handle a lot more virtual machines than a traditional configuration.  They can also deliver about a 58% lower cost per VM over a 3-year period, as compared to our closest competitor.

          Perhaps more important, we redesigned our whole delivery process as part of “project sharks”.  The result is that HP or a channel partner can actually produce a configuration and quote for an HP ConvergedSystem in about 20 minutes, and the whole thing will be on one sheet of paper.  The HP ConvergedSystem 300 and 700 can be installed and in production in a customer data center in as few as 20 days.  We have also fully integrated the management, to make it simple, and the support.  If support is needed, only one call to HP is required; you don’t need to deal with a server vendor, a storage vendor, etc.  When it is time for firmware upgrades, the process for the whole system is integrated.  And when you need additional capacity, we can ship a module out from our factory in one day, and it will be up and running in about five days.

          These new “sharks” are not just for virtualization.  We also announced the HP ConvergedSystem 300 for Vertica as a new platform for big data analytics.  The HP ConvergedSystem 100 is based on HP Moonshot servers, and ships as a Citrix XenDesktop system.

          In the future the HP ConvergedSystem products will support additional workloads and ISV applications, and will be used as building blocks for HP CloudSystem private clouds, so stay tuned for more.

          Our new Converged Systems business unit team is very excited about the opportunity to unleash these new “sharks”, and put them in the water. We are looking forward to hearing from our customers and partners about what they want us to do next, because the spirit of innovation is alive and well at HP.

          On the Dec 10 HP Discover Barcelona 2013 keynote HP’s hybrid cloud strategy was presented with the following slides, with comments made by the presenter added only for the HP CloudSystem private clouds part:

          image

          image
          Bill Hilf
          Vice President, Converged Cloud Products and Services, is driving HP’s entire cloud roadmap
          (who came to HP 6 months ago from Microsoft where he was GM of Windows Azure Product Management): “HP Next Gen CloudSystem to be released in the 1st half of 2014” with the following major characteristics:
          Consistency, Choice, Confidence

          More information:
          HP Unveils Innovations in Cloud to help Customers Thrive in a Hybrid World [The HP Blogs Hub, Dec 11, 2013] in which it is stated “As the foundation of a hybrid cloud solution, HP CloudSystem bursts to multiple public cloud platforms, including three new ones: Microsoft® Windows® Azure, and platforms from Arsys, a European-based cloud computing provider, and SFR, a French telecommunications company. “
          – A press release of similar title with additional lead and closing “Pricing and availability” parts
          HP CloudSystems stand apart [HP Enterprise Business community blog, Dec 10, 2013]
          How HP CloudSystem stacks up against competitors [Porter Consulting, June 14, 2013] Comparison of offerings from HP, IBM [PureSystems], and VCE [formed as a joint venture by Cisco and EMC, with minor investments from VMware and Intel; resulting in Vblock products based on Cisco UCS servers, Cisco network components, EMC storage arrays, and the VMware virtualization suite]

          image“We created a killer interface. An easy-to-use, consumer-inspired interface that is consistent across multiple types of experiences (from classic PC, administration, to mobile experiences). We also designed and optimized the interface for the different types of roles in the organization (from the architect who might be designing a service, to the end user or consumer of that service, as well as for the IT operator and administrator).”

          More information: Empowering users and the new face of cloud [HP Enterprise Business community blog, Dec 11, 2013] written by Ken Spear, Senior Marketing Manager (HP CloudSystem and OneView)

          image“We spent considerable effort and energy on choice and the ability to really give customers the heterogeneous workload support they need. And now we are taking openness to an entirely new level. And so, for the first time with CloudSystem, we are shipping HP Cloud OS, which is our enterprise-class OpenStack**** platform, which gives customers the great innovation from OpenStack to build modern cloud workloads. But we are also supporting the power of Matrix, so that you can bridge today’s and tomorrow’s workloads on the same system.”

          **** OpenStack APIs are compatible with Amazon EC2 (see Nova/APIFeatureComparison) and Amazon S3 (see Swift/APIFeatureComparison) and thus client applications written for Amazon Web Services can be used with OpenStack with minimal porting effort. Note that HP nixes Amazon EC2 API support — at least in its public cloud [Gigaom, Dec 6, 2013] “based upon significant input from developers and customers” as “customers want to avoid getting locked in to what he called, ‘Amazon’s spider web’ ”. Tier 1 Research analyst Carl Brooks said via email: “HP doesn’t need to support AWS APIs — OpenStack will do that for them to the limited extent it already does”.
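          To make that “minimal porting effort” concrete: the EC2 query API consists of signed HTTP requests, so pointing an AWS-style client at an OpenStack Nova EC2 endpoint mostly means changing the hostname. The sketch below implements the AWS signature version 2 scheme with only the Python standard library; the OpenStack hostname and path here are hypothetical placeholders, not real endpoints:

```python
import base64
import hashlib
import hmac
import urllib.parse

def sign_ec2_query(host, path, params, secret_key):
    """Compute an AWS signature v2 for an EC2-style query request.
    An OpenStack EC2-compatibility endpoint accepts the same scheme,
    so switching clouds changes only the host/path, not the signing."""
    params = dict(params, SignatureMethod="HmacSHA256", SignatureVersion="2")
    # Canonical query string: params sorted by key, RFC 3986 percent-encoded
    canonical = "&".join(
        f"{urllib.parse.quote(k, safe='')}={urllib.parse.quote(v, safe='')}"
        for k, v in sorted(params.items()))
    string_to_sign = "\n".join(["GET", host, path, canonical])
    digest = hmac.new(secret_key.encode(), string_to_sign.encode(),
                      hashlib.sha256).digest()
    return base64.b64encode(digest).decode()

request = {"Action": "DescribeInstances", "Version": "2013-10-15"}
# Same request, two endpoints; the hostnames/paths are illustrative:
sig_aws = sign_ec2_query("ec2.amazonaws.com", "/", request, "secret")
sig_osk = sign_ec2_query("openstack.example.com", "/services/Cloud",
                         request, "secret")
```

Only the host and path differ between the two calls, which is why a client written against the EC2 query API can be repointed at an OpenStack deployment with little more than a configuration change.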

          image“And finally, we’re giving customers and partners more confidence than they’ve ever had before in this type of solution. … And that will be available both in a quick-ship, channel-ready fixed configuration as well as in a highly customizable solution. In addition, CloudSystem will ship with Cloud Service Automation (CSA), the industry-leading orchestration and hybrid cloud management software [read NEW! HP’s solution for managing private and hybrid clouds] that gives an easy experience and easy management of the next hybrid cloud environment. That could be clouds delivered on any physical infrastructure: public, managed or private. And lastly, when customers use CloudSystem to build a private cloud there is boundless growth, because you can extend CloudSystem with public cloud resources: from the HP public cloud, or Amazon, or Savvis. And this week we are also announcing support for Windows Azure, as well as two very important European partners: SFR and arsys, a service provider right here in Spain.”

          More information:
          HP Cloud Service Automation – See new, do new at HP Discover! [HP Enterprise Business community blog, Dec 11, 2013]

          Underlying core technologies:

          HP Converged Cloud delivers choice, confidence, and consistency. Learn how HP Cloud OS, as part of the HP Converged Cloud portfolio, leverages OpenStack to enable workload portability, simplified installation, and enhanced service lifecycle management. http://hp.com/cloud
          Live demonstration of the HP Moonshot server running HP Cloud OS based on OpenStack at HP Discover Barcelona 2013

          Open source has long been linked to innovation. With a history tracing back to the origins of the public web, the concept of open source relies on the assumption that shared knowledge produces more and better innovation, which is better for everyone—as well as the business world.

          Some pundits believe that it is the combination of cloud and the power of the open source community that has enabled such rapid cloud development, adoption, and innovation.

          OpenStack: cloud source code at the ready

          OpenStack® provides the building blocks for developing private and public cloud infrastructures. OpenStack comprises a series of interrelated projects, characterized by their powerful capabilities and massive scalability.

          Like all open source projects, OpenStack is a group collaboration, consisting of a global community of developers and cloud computing technologists. HP is a top contributor and a driving force behind OpenStack, helping it become a leading software platform for open clouds.

          In other words, there’s a bright future for OpenStack, which is why HP chose it as the foundation for its hybrid cloud solutions.

          HP Cloud OS

          HP Cloud OS is the world’s first OpenStack-based cloud technology platform for hybrid delivery. HP Cloud OS enables our existing cloud solutions portfolio and new innovative offerings by providing a common architecture that is flexible, scalable, and easy to build on.

          “We are in a new phase of cloud computing. Enterprises, government agencies, and industry are all placing demands on cloud computing technologies that exceed a singular, one-size-fits all delivery model,” says Bill Hilf, vice president of product management for HP Cloud. “HP Cloud OS, built on the power of OpenStack, is the foundation for the HP Cloud portfolio and a key part of the HP solutions that enable real customer choice and consistency.”

          Watch the HP Cloud OS story at HP Discover

          Attendees at HP Discover 2013 in Barcelona, don’t miss this opportunity to hear the inside story of HP’s development of HP Cloud OS. Join the Innovation Theater session:

          IT3261 – The rise of open source clouds

          In this session, Bill Hilf will walk you through his experiences working with large public cloud systems, the rise of open source clouds in the enterprise, and HP’s strategy and innovation with OpenStack, including a discussion of HP Cloud OS (Wednesday, 12/11/13, 4:30 pm).

          Highlights from the presentation include:

          • How open source has affected the development of the cloud
          • The requirements of enterprises related to cloud computing
          • How OpenStack enables HP’s cloud platform
          • Top ten lessons learned when building HP’s public cloud
          • HP’s overall cloud strategy
          William Franklin, VP of OpenStack & Technology Enablement, talks about HP Cloud and HP’s open source strategy with OpenStack at the OpenStack Summit Hong Kong 2013.
          Monty Taylor, Distinguished Technologist and OpenStack Guru, talks about the OpenStack community, HP’s contributions to Havana and OpenStack projects, and the future of OpenStack.

          Gartner analyst Alessandro Perilli’s latest observations about OpenStack (he focuses on private cloud computing in the Gartner for Technical Professionals (GTP) division):
          What I saw at the OpenStack Summit [Nov 12, 2013] in which he describes how OpenStack vendors are divided into two camps that he calls “purists” and “pragmatists”. He notes that purists tend to ignore the fact that many large enterprises are interested in OpenStack to reduce their dependence on VMware, and that those enterprises are frightened by the prospect of rewriting their traditional multi-tier LoB applications as the new cloud-aware applications advocated by the purists.
          Why vendors can’t sell OpenStack to enterprises [Nov 19, 2013] where he notes: “In fact, for the largest part, vendors don’t know how to articulate the OpenStack story to win enterprises. They simply don’t know how to sell it.” He then gives at least four reasons why vendors can’t tell a resonating story about OpenStack to enterprise prospects:
          1. “Lack of clarity about what OpenStack does and does not.”
          2. “Lack of transparency about the business model around OpenStack.”
          3. “Lack of vision and long term differentiation.”
          4. “Lack of pragmatism”, i.e. “purist” approach described in his previous post.

          J.R. Horton, HP CloudOS Sr. Product Manager details the HP Cloud OS technology preview allowing developers access to a complete enterprise-grade OpenStack package for fast installation and deployment.
          [Mark Perreira, Chief Architect of HP Cloud OS:] This video demonstrates how HP Cloud OS can help simplify delivery, enhance lifecycle management and optimize workloads for your cloud environment. It includes information on Cloud OS architecture, kernel and base services, and administrative tools.
          Mark Perreira, Chief Architect of HP Cloud OS, whiteboards the hybrid provisioning capabilities in HP Cloud OS.

          J.R. Horton, HP Cloud OS Sr. Product Manager presents the HP Cloud architecture at HP Discover in Barcelona 2013. [Note that in addition to HP other OpenStack Foundation Platinum Members (providing a significant portion of the funding) are: AT&T, Canonical, IBM, Nebula, Rackspace, Red Hat, Inc., SUSE. Just today the news came as well that Oracle raised its membership to Platinum level.]


          4. Finally latest details about HP’s Moonshot technology


          • Moonshot: one of the “INFRA” (see above in the “HP Cloud OS Whiteboard Demo” video) building blocks for the HP CloudOS, actually the most future-oriented one

          The Power of Moonshot [HP Discover YouTube channel, Dec 10, 2013]

          “Like many companies, HP was a victim of IT sprawl — with more than 85 data centers in 29 countries. We decided to make a change and took on a total network redesign, cutting our principal worldwide data centers down to six and housing all of them in the United States. With the addition of four EcoPODs and Moonshot servers, we are in the perfect position to build out our private cloud and provide our businesses with the speed and quality of innovation they need.”

          My Software defined server without Microsoft: HP Moonshot [‘Experiencing the Cloud’, April 10, 2013 – updated Dec 6, 2013] post introduced the HP Moonshot System as follows:

          On the right is the Moonshot System with the very first Moonshot servers (“microservers/server appliances” as the industry calls them) based on Intel® Atom S1200 processors and supporting web-hosting workloads (see also the right part of the image below). Currently there is also a storage cartridge (on the left of the image below) and a multinode cartridge for highly dense computing solutions (in the hands of the presenter on the image below). Many more are to come later on.

          image

          Also the Dec 6 update to the above post already provided significant roadmap information:

          With Martin Fink, CTO and Director of HP Labs, Hewlett-Packard Company [Oct 29, 2013] saying

          We’ve actually announced three ARM-based cartridges. These are available in our Discovery Labs now, and they’ll be shipping next year with new processor technology. [When talking about the slide shown above.]

          For the details about the ARM SoC technologies behind that go to the Software defined server without Microsoft: HP Moonshot [‘Experiencing the Cloud’, April 10, 2013 – updated Dec 6, 2013] post!

          But the initial Moonshot System launched in April’13 had support just for light workloads, such as website front ends and simple content delivery. This meant, nevertheless, a lot in the hosting space as evidenced by the serverCONDO Builds its Business on Moonshot [Janet Bartleson YouTube channel, Dec 9, 2013] video:

          serverCONDO President John Brown wanted to expand to offer dedicated hosting, and traditional 1U servers looked pretty good, until the team discovered HP Moonshot. Hear more about what he was looking for and the results serverCONDO achieved. http://www.servercondo.com http://www.hp.com/go/moonshot

          More information from the same source:
          Why serverCONDO is in the Dedicated Hosting Business
          Old School and New School Cloud Servers (serverCONDO)

          OR taking a true large-scale example watch this HP.com Takes 3M Hits on Moonshot [Janet Bartleson YouTube channel, Nov 26, 2013] video:

          Volker Otto talks about the results of using Moonshot for HP.com’s web site, caching, and ftp downloads. http://www.hp.com/go/moonshot

          According to Meg Whitman’s keynote at Discover 2013 on Dec 10 they would be able to go from 6 datacenters to 4 thanks to Moonshot, even considering the future needs and workloads. Something as dramatic as when HP moved previously (3 years ago) from 86 datacenters to 6 datacenters.

          So, to appreciate the full potential of Moonshot one should, on the other hand, understand the following system architecture information provided in the HP Moonshot System, the world’s first software defined servers [April 10, 2013] technical whitepaper:

          HP Moonshot System

          HP Moonshot System is the world’s first software defined server accelerating innovation while delivering breakthrough efficiency and scale with a unique federated environment, and processor-neutral architecture. Traditional servers rely on dedicated components, including management, networking, storage, power cords and cooling fans in a single enclosure. In contrast, the HP Moonshot System shares these enclosure components. The HP Moonshot 1500 Chassis has a maximum capacity of 1800 servers per 47U rack with quad server cartridges. This gives you more compute power in a smaller footprint, while significantly driving down complexity, energy use and costs.

          The first server available on HP Moonshot System is HP ProLiant Moonshot Server based on Intel® Atom™ processor S1260, and it provides an ideal solution for web serving, offline analytics and hosting.
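          The headline density figure is straightforward arithmetic. A quick sanity check of the quoted numbers; note that the ten-chassis-per-rack count below is my own assumption (consistent with the widely cited 4.3U chassis height), not a figure from the whitepaper:

```python
# Quoted figures from the whitepaper: 45 cartridge slots per chassis,
# quad-server cartridges, maximum 1800 servers per 47U rack.
cartridges_per_chassis = 45
servers_per_quad_cartridge = 4
chassis_per_47u_rack = 10  # assumption: ~4.3U chassis, ten per 47U rack

servers_per_chassis = cartridges_per_chassis * servers_per_quad_cartridge
servers_per_rack = servers_per_chassis * chassis_per_47u_rack
print(servers_per_chassis, servers_per_rack)  # → 180 1800
```

          With those assumptions the numbers reproduce the whitepaper’s 1800-servers-per-rack maximum exactly.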

          HP Moonshot 1500 Chassis design

          The HP Moonshot 1500 Chassis incorporates independent component design and hosts 45 cartridges, two network switches, and the infrastructure components within the chassis. Its electrically passive design makes this completely hot-pluggable architecture possible: the Moonshot 1500 Chassis uses no active electrical components, other than EEPROMs required for manufacturing and configuration control purposes.

          Figure 1 shows the elements of the Moonshot 1500 Chassis. HP controls the design of all elements of the chassis except for the server cartridge (initial cartridges contain a single server) and the network switch module, which may be designed by Moonshot server or network switch partners.

          Figure 1.

          image

          The HP Moonshot 1500 Chassis accommodates up to 45 individually serviceable hot plug cartridges. Two high-density, low-power HP Moonshot 45G Switch Modules, each with a six-port 10Gb HP Moonshot 6SFP Uplink Module, handle network communication for all cartridges in the chassis. These switches provide Layer 2/Layer 3 routing, QoS and management (CLI, SFLOW), and require no license keys. The dual network switches and I/O modules provide traffic isolation, or stacking capability for resiliency. Rack level stacking simplifies the management domain.

          The Moonshot System uses the HP Moonshot 1500 Chassis Management module (CM) module for complete chassis management, including power management with shared cooling. The server platform is powered by four 1200W Common Slot Power Supplies in an N+1 configuration and cooled by five hot pluggable fans also in an N+1 configuration. The CM uses component-based satellite controllers to communicate with and manage chassis elements. The modular faceplate design allows for future feature development.

          HP ProLiant Moonshot Server

          Each software defined server contains its own dedicated memory, storage, storage controller, and two NICs [Network Interface Controllers] (1Gb). For monitoring and management, each server contains management logic in the form of a Satellite Controller with a dedicated internal network connection (100 Mb). Figure 5 shows the HP ProLiant Moonshot Server with a single Intel® Atom™ processor S1260 and a single SFF drive.

          Figure 5. HP ProLiant Moonshot Server and functional block diagram

          image

          These servers provide the base hardware functionality of the system. Future software defined servers can take the following forms:

          • One or more discrete server with separate compute, storage, memory and I/O
          • One or more complete cartridge designs with integrated compute, storage, memory, and I/O
          • One or more forms of storage accessible to adjacent cartridges

          Future servers will incorporate these descriptions to provide a wide degree of flexibility for customizing and tuning based on the desired performance, cost, density, and power constraints.

          The available ProLiant Moonshot server design includes one processor and a single HDD or SSD. This server is ideal for application workloads such as website front ends and simple content delivery. Table 1 gives the current server component descriptions.

          image

          The Intel Atom is the world’s first 6-watt server-class processor. In addition to lower power requirements, it includes data-center-class features such as 64-bit support, error correcting code (ECC) memory, increased performance, and a broad software ecosystem. These features, coupled with the revolutionary HP Moonshot System design, are ideal for workloads where many extreme low-energy servers densely packed into a small footprint can be much more efficient than fewer standalone servers.

          Intel® Atom™ processor S1260 integrates two CPU cores, a single-channel memory controller, and a PCI Express 2.0 interface. Each CPU core has its own dedicated 32 KB instruction and 24 KB data L1 caches, and a 512 KB L2 cache. The processor incorporates Hyper-Threading, which allows it to run up to 4 threads simultaneously. Additionally, the chips have VT-x virtualization enabled.

          Each Moonshot server boots from a local hard drive, or from the network using PXE [Preboot eXecution Environment]. The Moonshot System uses HP BIOS and “headless” operation (no video or USB). No additional HP software is required to run the cartridge. NIC, storage, and other drivers are included in the compatible Linux distributions (described later in the OS management section).

          Fabrics and topology

          We designed the HP Moonshot System to provide application-specific processing for targeted workloads. Creating a fabric infrastructure capable of accommodating a wide range of application-specific workloads requires highly flexible fabric connectivity. This flexibility allows the Moonshot System fabric architecture to adapt to changing requirements of hyperscale workload interconnectivity.

          The Moonshot System design includes three physical production fabrics: the Radial Fabric, the Storage Fabric, and the 2D Torus Mesh Fabric. The fabrics are connected to 45 cartridge slots, two slots for the network switches, and two corresponding I/O modules.

          Figure 9 shows the eight 10Gb lanes routed from each of the cartridge slots to the pair of core network fabric slots in the center of the Moonshot 1500 chassis. Four lanes from each cartridge go to one core network fabric slot and four to the other (A and B). From each core fabric slot there are 16 10Gb lanes routed to the back of the chassis to attach to an I/O module.

          Figure 9.

          image

          Radial Fabric

          The Radial Fabric provides a high-speed interface between each cartridge and the two core fabric slots.

          The Radial fabric includes these links:
          • 2x GbE channels
          • One port to each network switch

          Figure 10 illustrates a torus topology interlinking cartridge to cartridge in combination with the radial topology linking to the network switches.

          Figure 10.

          image

          The Radial fabric handles all Ethernet-based traffic between the cartridge and external targets. The exception is iLO* management network traffic using the dedicated iLO port.

          *[iLO: Integrated Lights-Out]

          Storage fabric

          A Moonshot System Storage Fabric will use existing Moonshot 1500 Chassis connections to span each 3×3 cartridge slot subsection within the chassis baseboard (Figure 11). The Storage Fabric will be part of future HP Moonshot System releases. This fabric implementation will use the Storage Fabric as a connection between servers and local storage devices.

          Figure 11.

          image

          In this implementation, SAS/SATA is sent over lanes between each adjacent cartridge for primary storage, along with additional lanes to other cartridges in the subsection for redundancy or other storage requirements. Although the figure shows a specific configuration of compute and storage nodes, there is flexibility to configure the subsections in different ways as long as it does not violate the rules of the interface or storage technology. While the example in Figure 11 shows the proximal fabric being used for SAS/SATA, any type of communication is possible due to the dynamic nature of the fabric.

          2D Torus Mesh Fabric

          Like the Storage Fabric, future releases of the HP Moonshot System will use existing Moonshot 1500 Chassis connections to implement the 2D Torus Mesh Fabric, providing a high speed general purpose interface among the cartridges for those applications that benefit from high bandwidth node-to-node communication. The 2D Torus Mesh fabric can be used as Ethernet, PCIe, or any other interface protocol. At chassis power on, the CM [Chassis Management] ensures the compatibility on all interfaces before allowing the cartridges to power on.

          The 2D Torus Mesh fabric is routed in a torus ring configuration capable of providing four 10Gb lanes in each direction to its north, south, east and west neighbors. This allows the HP Moonshot System to serve many unique HPC [High-Performance Computing] applications where efficient localized traffic is needed.

          • 16 lanes from each cartridge
          • Four up, four down, four left, and four right
          • Can support speeds up to 10Gb

          Topologies

          Topologies utilize the physical fabric infrastructure to achieve a desired configuration. In this case, the Radial and 2D Torus Mesh fabrics are the desired Moonshot topologies. The Radial Fabric pathways are optimized for a network topology utilizing two Ethernet switches. The 2D Torus Mesh fabric pathways are passive copper connections negotiated with neighbors and optimized for topology protocols that change over time to accommodate future Moonshot System releases.

          Moonshot System network configurations

          Moonshot System network switches and uplink modules provide resiliency and efficiency when configured as stand-alone or stackable networks. This feature allows you to connect up to nine Moonshot 1500 Chassis and then to your core network, eliminating the need for a top of rack (TOR) switch.

          • Dual switches provide traffic isolation or can be stacked
          • Rack level stacking simplifies management domain
          • Redundant switch configurations provide a more resilient infrastructure
          • Layer 2, Layer 3 Routing & QoS, Management (CLI, SNMP, SFLOW). No license keys

          Moonshot 1500 Chassis stacking

          Stacking allows you to select a tradeoff between overall performance and cost of TOR switches. Stacking can eliminate the cost of TOR switches for workloads able to tolerate extra latency. The switch firmware architecture elects a master management processor to control all stacked switches. Stacking does not scale in a linear way; stacking size is constrained by the capability of a single management processor. The P2020 [switch management] processor is sized to reliably stack nine network switches (405 ports).

          We can create two stacked switches in a single rack with no performance issues. Up to nine modules can be stacked to form a single logical switch. A simple loop consumes two ports per I/O module in this Figure 12 layout.

          Figure 12.

          image

          Management

          The HP Moonshot System relies on a federated iLO system. Federation requires the physical or logical sharing of compute, storage or networking resources within the Moonshot 1500 Chassis. The chassis shares four individual iLO4 ASICs [Application-Specific Integrated Circuits] in the CM module with high-speed connections to the management network through a single management port uplink.

          The CM provides a single point of management for up to 45 cartridges, and all other components in the Moonshot 1500 Chassis, using Ethernet connections to the internal private network. Each hot pluggable component includes a resident satellite controller. The CM and satellite controllers use data structures embedded in non-volatile memory for discovery, monitoring, and control of each component.

          HP Moonshot 1500 Chassis Management module

          The CM includes four iLO processors sharing the management responsibility for 45 cartridges, the power and cooling processor, two network switches, and Moonshot 1500 Chassis management. We’ve federated the iLO system functionality by assigning certain iLO processors responsibility for managing certain hardware interfaces. We balanced the workload among the three cartridge zones in the chassis (physically separated by the network switches), and dedicated one iLO processor to manage the chassis hardware and the switches. Communication between the CM and the Satellite Controllers runs over an internal private Ethernet network. This eliminates the requirement for a large number of IP addresses on the production network.

          The iLO subsystem includes an intelligent microprocessor, separate memory, and a dedicated network interface. iLO uses the management logic on each cartridge and module, and up to 1,500 sensors within the Moonshot 1500 Chassis, to monitor component thermal conditions. This design makes iLO independent of the host servers and their operating systems.

          iLO monitors all key Moonshot components. The CM user interfaces and APIs include a Command-Line Interface (CLI) and Intelligent Platform Management Interface (IPMI) support. These provide the primary gateway for node management, aggregation and inventory. A text-based interface is available for power capping, firmware management and aggregation, asset management and deployment. Alerts are generated directly from iLO, regardless of the host operating system, or even if no host operating system is installed. Using iLO, you can do the following:

          • Securely and remotely control the power state of the Moonshot cartridges (text-based Remote Console)
          • Obtain access to each and all serial ports using a secure Virtual Serial Port (VSP) session
          • Obtain asset and hardware specific information (MAC Addresses, SN)
          • Control cartridge boot configuration

          OS deployment and support

          The Moonshot System hosts multiple individual servers and network switches. Unlike other HP ProLiant BladeSystem-class servers, Moonshot cartridges support OS installation only through network installation, with console access provided by an integrated Virtual Serial Port to each server. Network installation is performed in a manner similar to other HP ProLiant or standard x86 servers, with the only required modification being the specification of the serial console instead of a standard VGA display (described below).
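          A hypothetical PXELINUX menu entry illustrates the serial-console modification mentioned above; the kernel and initrd paths below are placeholders for whatever distribution is being installed, not an HP-supplied configuration:

```
# pxelinux.cfg/default: redirect console output to the first serial port
SERIAL 0 115200
DEFAULT install
LABEL install
  KERNEL ubuntu-1204/linux
  APPEND initrd=ubuntu-1204/initrd.gz console=ttyS0,115200n8 --
```

          The `console=ttyS0,115200n8` parameter replaces the VGA console, so installer output appears on the Virtual Serial Port session exposed through iLO.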

          Linux Distributions
          The initial release of the HP Moonshot System is compatible with these versions of Linux:
          • Red Hat Enterprise Linux 6.4
          • SuSE SLES 11SP2
          • Ubuntu 12.04
          HP Insight Cluster Management Utility
          The HP Insight Cluster Management Utility (CMU) is well suited for performing network installations, image capture and deploy, and ongoing management of large numbers of servers such as the density provided by the Moonshot 1500 Chassis. If you are using CMU, the directions included in the following “Setting up an installation server” section are not required, and you should instead refer to the CMU documentation.
          The CMU is optional and basic network installation of the OS may be performed using a standard PXE-based installation server.

          Conclusion

          The HP Moonshot System addresses the needs of data centers deploying servers at a massive scale for the new era of IoT. Industry sources estimate that lightweight web serving and analytics workloads will equal 14% of the x86 server market by 2015. The HP Moonshot System changes the current computing paradigm with an innovative completely hot pluggable architecture that increases the value of your investment and reduces TCO. You get a significant reduction in power usage, hardware costs, and use of space. You’ll see simplification in the areas of network switches, cabling, and management. Moonshot System’s use of shared hot pluggable infrastructure includes power supplies and fans. The HP Moonshot 1500 Chassis Management module, with proven HP iLO management processors, gives you detailed reporting on all platform components while the power and cooling controller manages the N+1 fan and power supply configurations. Dual network switches and I/O modules increase Moonshot’s resiliency and flexibility, allowing you to stack HP Moonshot Switch Modules. The Moonshot System is the first software defined, application-optimized server platform in the industry. Look for a growing library of software defined servers from multiple HP partners targeting specific IoT workloads compatible with emerging web, cloud, and massive scale environments, as well as analytics and telecommunications.

          Now we have two additional cartridges: the m300 and the m700

          Moonshot ProLiant m300 Server Cartridge Overview [Janet Bartleson YouTube channel, Nov 27, 2013]

          @SC13, HP Product Manager Thai Nguyen gives us a quick overview of the ProLiant m300 Server Cartridge.

          A new big little HP Moonshot server cartridge is shipping!! [The HP Blog Hub, Dec 10, 2013]

          Guest blog written by Nigel Church, HP Servers

          We call it the HP ProLiant m300 Server Cartridge for the HP Moonshot System. This is the “big brother” to the current HP ProLiant Moonshot server cartridge, sporting the new Intel Atom “Avoton”: an eight-core processor running at 2.4GHz with 32GB memory [with 1 TB disk storage on the cartridge], delivering up to six times the energy efficiency and up to seven times more performance.

          Now, in just one Moonshot System with 45 ProLiant m300 Servers you have 360 cores, 1,440GB memory and up to 45TB of storage. For the right workloads, you can accomplish the same work using just 19% of the power of a traditional server!

          What workloads can it support? If you have a growing web site serving dynamic content [note that for the first Atom based server cartridge static content was mentioned when describing the type of workload supported] currently running on ageing traditional servers you must take a look at Moonshot to save space, power and prepare yourself for the future.

          If you’re attending HP Discover in Barcelona, come to the show floor and see HP Moonshot in action–or visit the HP Discover News & Social Buzz page and get the latest updates!  Otherwise, visit the HP ProLiant m300 Server Cartridge web page for more details on the newest Moonshot Cartridge.

          HP ProLiant m300 Server Cartridge [HP product page, Dec 11, 2013]

          Overview

          Are traditional servers more than you need for your scale-out big data, Web and content delivery network workloads? Are you paying for underutilized servers that use more and more space and energy? Companies running scale-out big data applications, serving web pages, images, videos, or downloads over the Internet often need to carry out simultaneous lightweight computing tasks over and over, at widely distributed locations. The HP ProLiant m300 Server Cartridge based on the Intel® Atom™ System on a Chip (SOC) delivers breakthrough performance and scale with up to 360 processor cores, 1,440 GB of memory and 45 TB of storage in a single Moonshot System.       

          Features

          A Platform for Big Data with NoSQL/NewSQL

          NoSQL/NewSQL on HP ProLiant m300 Server Cartridges gives cost-effective scalable performance for online transactional processing and maintains the ACID (Atomicity, Consistency, Isolation, Durability) of traditional databases.
          NoSQL/NewSQL thrives in a distributed cluster of shared-nothing nodes like the HP ProLiant m300 Server Cartridges. SQL queries are split into query fragments and sent to the node that owns the data. These databases are able to scale linearly as nodes are added, without suffering from bottlenecks.
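          The scatter/gather pattern described above can be sketched in a few lines. This is a generic illustration of hash-sharded query fan-out, not HP or any NewSQL vendor’s code; the node names and in-memory “lookup” are made up:

```python
from collections import defaultdict

NODES = ["node-a", "node-b", "node-c"]  # hypothetical shard nodes

def owner(key):
    """Hash sharding: each key is owned by exactly one node."""
    return NODES[hash(key) % len(NODES)]

def scatter(keys):
    """Split a query on many keys into per-node query fragments."""
    fragments = defaultdict(list)
    for k in keys:
        fragments[owner(k)].append(k)
    return fragments

def gather(fragments, lookup):
    """Run each fragment on its owning node and merge the results."""
    results = {}
    for node, keys in fragments.items():
        for k in keys:
            results[k] = lookup(node, k)  # in reality: a remote call
    return results

# Toy 'remote' store: pretend each node holds value = key doubled.
data = {k: k * 2 for k in range(10)}
frags = scatter(data.keys())
out = gather(frags, lambda node, k: data[k])
print(out[7])  # → 14
```

          Because each key is owned by exactly one node, adding nodes only re-partitions the key space; no single node sees the whole query, which is where the near-linear scaling claimed above comes from.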

          Scale-out Platform for Your Web Needs

          Companies need the scalability of the HP ProLiant m300 Server Cartridge to serve web pages, including image and video downloads while carrying out simultaneous lightweight computing tasks over and over, at widely distributed locations.
          For Web workloads, a platform based on the HP ProLiant m300 Server Cartridge means you don’t waste energy, space, and money on a high-end server when a low-cost density-optimized server can handle the job.

          Content Delivery Anytime from Any Device

          The m300 Server Cartridge provides high-speed efficient transcoding of media streams to match specific user devices. This allows efficient management of content by reducing library size and transcoding on demand, for specific device characteristics.
          Using less energy and space at a lower cost compared to traditional servers, the compact m300 Server Cartridge has Intel Atom-based SOCs to quickly deliver Web content to a variety of mobile devices.

          System Features

          Compute: Intel® Atom™ Processor C2750, 2.4 GHz

          Memory: DDR3 PC3-12800 SDRAM (1600 MHz); Four (4) SODIMM slots; 32GB (4x8GB)

          Storage: (1) SFF 500GB HDD, 1TB HDD, and 240GB SSD

          Networking: (Internal) dual port 1GbE per CPU; HP Moonshot 45G Switch Module Kit; HP Moonshot 6SFP Uplink Module Kit

          Enclosure: Moonshot 1500 Chassis

          Warranty: 1 year

          Intel® Atom™ Processor C2750 (4M Cache, 2.40 GHz) [Intel product page, Dec 3, 2013]

          SPECIFICATIONS

          Essentials
          • Status: Launched
          • Launch Date: Q3’13
          • Processor Number: C2750
          • # of Cores: 8
          • # of Threads: 8
          • Clock Speed: 2.4 GHz
          • Max Turbo Frequency: 2.6 GHz
          • Cache: 4 MB
          • Instruction Set: 64-bit
          • Embedded Options Available: No
          • Lithography: 22 nm
          • Max TDP: 20 W
          • Recommended Customer Price: TRAY: $171.00

          Memory Specifications
          • Max Memory Size (dependent on memory type): 64 GB
          • Memory Types: DDR3/DDR3L 1600
          • # of Memory Channels: 2
          • Max Memory Bandwidth: 25.6 GB/s
          • Physical Address Extensions: 36-bit
          • ECC Memory Supported: Yes

          Expansion Options
          • PCI Express Revision: 2.0
          • PCI Express Configurations: x1, x2, x4, x8, x16
          • Max # of PCI Express Lanes: 16

          I/O Specifications
          • USB Revision: 2.0
          • # of USB Ports: 4
          • Total # of SATA Ports: 6
          • Integrated LAN: 4x 2.5 GbE
          • UART: 2
          • Max # of SATA 6.0 Gb/s Ports: 2

          Package Specifications
          • TCASE: 97°C
          • Package Size: 34 mm x 28 mm
          • Sockets Supported: FCBGA1283
          • Low Halogen Options Available: See MDDS

          Advanced Technologies
          • Intel® Turbo Boost Technology: 2.0
          • Intel® Virtualization Technology (VT-x): Yes
          • Intel® Data Protection Technology: AES New Instructions: Yes



          HP’s Moonshot and AMD are taking cloud computing to a whole new level [AMD YouTube channel, published on Dec 4, 2013]

Learn more about AMD and HP cloud computing: http://bit.ly/HP_and_AMD At APU13 HP’s Scott Herbel, World Wide Product Marketing Manager, shows off the Moonshot chassis which holds up to 45 AMD server cartridges inside. [His presentation was at AMD Developer Summit – APU13, Nov 13, Wed, 4:00 – 4:45, CC-4150, Scott Herbel, HP, HP Moonshot System + AMD’s Opteron X2150 = develop anything, anywhere with hosted desktops]

          ProLiant m700 Server Cartridge in HP Moonshot Overview [Janet Bartleson YouTube channel, Dec 9, 2013]

          Product manager Scott Herbel [http://www.linkedin.com/in/scottherbel: WorldWide Product Marketing Manager, Moonshot at Hewlett-Packard since May, 2010]

          HP ProLiant m700 Server Cartridge [HP product page, Dec 11, 2013]

          Overview

          Looking for a cost-effective solution for hosted desktop infrastructure, mobile gaming or cloud multi-media workloads? The HP ProLiant m700 Server Cartridge in a Moonshot 1500 Chassis offers lower cost (price per seat), simplified systems management and user support, vastly improved system/data security, and efficient systems resource use for your hosted desktop infrastructure (HDI) and cloud multi-media workloads. Each m700 Server Cartridge has four servers, each with an AMD Opteron™ X2150 APU with fully-integrated graphics processing and CPU. The m700 Server Cartridge delivers outstanding compute density and price/performance for cloud multi-media workloads.
You can power mobile games and other web content, objects, or applications, as well as live and on-demand streaming media.

          Features

          Hosted Desktop Infrastructure (HDI) Solution with Power and Scalability

          The centralized nature of hosting desktops on the HP ProLiant m700 Server Cartridge provides lower cost (price per seat), simplified system management and user support, vastly improved system/data security, and efficient system resource use.
          Each cartridge has four AMD-processor-based servers. Each server contains the AMD Opteron™ X2150 APU with graphics processing and CPU.
          The overall density means that you can cost-effectively have 180 servers in less than 5U of rack space.
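The density figure is simple arithmetic. The sketch below assumes the Moonshot 1500 chassis's 45 cartridge slots and HP's published 4.3U chassis height:

```python
# 45 m700 cartridges per Moonshot 1500 chassis, 4 SoC servers per cartridge.
cartridges_per_chassis = 45
servers_per_cartridge = 4
chassis_height_u = 4.3  # HP lists the Moonshot 1500 chassis at 4.3U

servers_per_chassis = cartridges_per_chassis * servers_per_cartridge
print(servers_per_chassis)  # 180 servers in less than 5U of rack space
```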

          Mobile Content and Gaming Any Time from Any Device

          The HP ProLiant m700 Server Cartridge excels at powering graphics-intensive content delivery such as hosted videos and mobile games.
          The cartridge provides high-speed, efficient transcoding of source media streams to match specific user devices. This allows efficient management of content by reducing library size and transcoding closer to the customer, on demand, for specific device characteristics.
Using less energy and space at a lower cost compared to traditional servers, the m700 Server Cartridge has four AMD Opteron™ X2150-based servers, each with integrated graphics processing capabilities to quickly deliver mobile games to your device, wherever you are.

          System features

          Compute: AMD Opteron™ X2150 APU, 1.5 GHz, with AMD Radeon™ HD 8000 graphics

          Memory: DDR3 PC3-12800 SDRAM (1600 MHz); Four (4) SODIMM slots; 32GB (8GB per SoC)

          Storage: 4 x 32 GB iSSD (1 per SoC)

Networking: (Internal) BCM5720 dual port 1GbE per CPU; HP Moonshot-180G Switch Module; HP Moonshot-4QSFP+ Uplink Module

          Enclosure: Moonshot 1500 Chassis

          Warranty: 1 year

          AMD Opteron™ X2150 APU [AMD product page, May 29, 2013]

          Introducing the World’s First Server-class x86 APU SoC

          Specification

          image

Features

Feature | Function | Benefit
4 Energy Efficient x86 Cores, Codenamed “Jaguar” | Optimizes x86 performance/watt for microservers | Helps enable low datacenter TCO
Flexible TDP | Allows users to control their own power profile by adjusting CPU and GPU frequencies in the BIOS to match their application needs (GPU integrated in X2150 only) | Gives users more control over their workload performance and power consumption
Integrated I/O | Integrates legacy Northbridge and Southbridge functionality directly on the processor | Smaller footprint enables dense microserver designs
Core, Northbridge and Memory P-states | Dynamically adjusts performance levels based on application requirements | Helps reduce power consumption

Server Infrastructure support

Feature | Function | Benefit
DDR3 Memory with ECC Support | High-speed, highly reliable server-class memory | Helps reduce server failures due to memory
Integrated I/O | Integrates PCIe Gen2, SATA 2/3, USB 2.0 and USB 3.0 functionality onto the processor | Enables enterprise-class functionality in a single-chip solution
Server Processor Reliability | Processor undergoes a back-end test flow to ensure proper quality | Ensures product quality matches that of other server-class products for greater reliability

Integrated Graphics

Feature | Function | Benefit
Graphics Core Next Architecture with AMD Radeon™ HD 8000 Series Graphics | Provides high-quality graphics capabilities in a server SoC | Outstanding performance in media-oriented workloads such as remote desktop, online gaming and imaging
Display Controller Engine | Allows for VGA and HDMI display capabilities | Helps reduce cost by eliminating the need for add-on display cards
Unified Video Decoder 4.2 | Dedicated hardware video decoding block | Helps enable a near-native experience in remote desktop applications
Video Compression Engine 2.0 | Hardware-assisted encoding of HD video streams | Helps enable a near-native experience in remote desktop applications

          Citrix hosted desktops–powered by HP Moonshot [The HP Blog Hub, Dec 10, 2013]

          Written by Citrix Guest Blogger Kevin Strohmeyer, Director Product Marketing, Citrix

Veterans of server-based computing and VDI are all too familiar with the complexities of buying and deploying desktop virtualization. Great strides have been made to simplify the sizing and configuration of desktop virtualization infrastructure, but ultimately, when you build and deliver shared resources, you should carefully consider how those resources will be used, and decide how much excess capacity you need to ensure peak usage can be supported.

          The distributed nature of PCs, coupled with management challenges of patching and updates plus the vulnerability of unsecured, sensitive data has left IT looking for a better answer. This brings us right back to centralized desktop virtualization.

The HP ConvergedSystem 100 for Hosted Desktops with Citrix XenDesktop is a new and unique type of desktop virtualization. Instead of just leveraging a hypervisor to abstract the OS from hardware, XenDesktop streams an OS directly to bare-metal, dedicated microsystems with dedicated CPU, memory and graphics, all neatly arranged in a rack-mount chassis. This eliminates the overhead and complexity of abstracting the hardware and managing VMs. It also eliminates the system overhead required to share those resources, leaving more power for the desktop. All in all, the solution presents a very interesting alternative to VDI.

          image

          The HP ConvergedSystem 100 for Hosted Desktops is an all-in-one compute, storage and networking system based on HP Moonshot, delivering 180 desktops for Citrix XenDesktop environments.  The system provides an independent, remote PC experience with business graphics and multimedia performance essential for mainstream knowledge workers, and all while delivering up to 44% improvement in TCO and 63% lower power requirements.  Other benefits include:

          • Predictable, fixed cost per user reduces OPEX
          • Independent compute and graphics delivers consistent end user performance 
          • Deploy with Citrix XenDesktop in approximately 2 hours

          At the same time, this solution is great example of the power of FlexCast technology from Citrix. And that power is reflected in the way the FlexCast management infrastructure is designed to promote these innovative solutions that leverage common image management, profile management and app virtualization in a common delivery architecture. The unique Citrix Provisioning Services (PVS) technology that enables bare metal and just in time OS provisioning provides all the benefits of VDI without hypervisor management.

What makes this solution most interesting is the ease of purchasing and deploying. There is no configuration work required to figure out how much hardware or storage to purchase; you simply buy as many systems as you need, then rack and stack as you grow from the first 180 desktops on up. This alone could make this solution very attractive to organizations desiring the security and management of centralized virtual desktops, but who want to avoid the management of virtual infrastructure.

          If you are attending HP Discover in Barcelona this week, come by to see the ConvergedSystem 100 for Hosted Desktops in the Discover Zone. 

          Learn more about the new HP ConvergedSystem 100 for Hosted Desktops.

          Offering a no compromise PC experience [The HP Blog Hub, Dec 9, 2013]

          By HP guest blogger Dan Nordhues, HP Client Virtualization Worldwide Manager

          Poor performance is one of the major reasons users reject VDI or remote desktop implementations. While all your workers may sit at PCs, each user population has unique needs that dictate requirements. For example, task workers need only a couple of applications to do their jobs, but workstation-class users require accelerated graphics capabilities to handle workloads like CAD/CAM and Oil and Gas applications.

          Right in the middle of the PC-user continuum sits the mainstream knowledge worker—the largest segment of the PC user population— with unique requirements of their own. Meeting the needs of these users is the goal of HP ConvergedSystem 100 for Hosted Desktops powered by HP Moonshot—a next-generation solution engineered specifically for meeting the needs of today’s knowledge workers, while also meeting your requirements for simplicity, lower deployment cost, and energy efficiency.

          image

          HP ConvergedSystem 100 for Hosted Desktops provides an all-in-one compute, storage, and networking system that delivers desktops for Citrix XenDesktop non-persistent users. Provide your mainstream users a dedicated PC experience with the business graphics and multimedia performance they need, while reducing TCO by up to 44 percent and lowering power requirements up to 63 percent.

          If you plan to attend HP Discover Barcelona 2013, you can take advantage of great hands-on experience with HP Converged Systems.  And check out these sessions for more information on HP’s client virtualization portfolio:

          • BB2391 – Architecting client virtualization for task worker to workstation-class users  10 December 10-11am
          • DT3108 – Moonshot-hosted desktop infrastructure: an innovative way for hosting end-user desktops  11 December, 11:30-12
          • DT3177 – Moonshot-hosted desktop infrastructure: an innovative way for hosting end-user desktops, Part II   12 December, 11:30-12

          Learn more about the new HP ConvergedSystem 100 for Hosted Desktops.

          Precedence for TD-LTE by Chinese government to benefit China Mobile to launch its China-originated 4G service as early as Dec 18, 2013

          … it looks like the government was waiting till China Mobile was ready to launch, meanwhile delaying FDD-LTE by declaring a necessity to “test a converged TD-LTE/LTE FDD network at a later date”.

          4G TD-LTE Licenses Officially Issued by MIIT [Global TD-LTE Initiative Updates, Dec 4, 2013]

          After months of waiting and dithering, China is moving into the 4G era.

          Today Chinese Ministry of Industry and Information Technology (MIIT) has finally issued the first batch of 4G licenses to China Mobile, China Unicom and China Telecom. China Mobile gets access to 130MHz of spectrum (1880-1900 MHz, 2320-2370 MHz, 2575-2635 MHz), China Unicom gets 40MHz (2300-2320 MHz, 2555-2575 MHz) and China Telecom has 40MHz (2370-2390 MHz, 2635-2655 MHz) for TD-LTE operation. The commercialization of TD-LTE in China by these three operators will certainly promote the TD-LTE scale deployment globally.

          China issues 4G licenses [Xinhua, Dec 4, 2013]

          China’s Ministry of Industry and Information Technology (MIIT) on Wednesday issued 4G licenses to three Chinese telecom operators, marking the beginning of a new era in China’s high-speed mobile network.

          China Mobile, China Telecom and China Unicom received permits to offer fourth-generation (4G) mobile network services employing homegrown TD-LTE technology.

          The ministry said the three companies have conducted large-scale tests of TD-LTE, or Time-Division Long-Term Evolution, one of two international standards, and their technology is ready for commercial service.

          Zhang Feng, the MIIT’s spokesman, said 4G technology will lower bandwidth costs and promise faster mobile broadband.

          The ministry’s figures showed that the Internet speed of 4G networks is 10 times that of 3G services, and allows mobile users to download a 7-megabyte music file in less than one second.
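The ministry's example is easy to sanity-check with back-of-envelope arithmetic: a 7-megabyte file in under one second implies a sustained rate of at least 56 Mbit/s, well within early LTE downlink rates and roughly ten times a typical 3G connection:

```python
# Throughput implied by the ministry's example: 7 megabytes in one second.
file_megabytes = 7
download_seconds = 1.0

implied_mbit_per_s = file_megabytes * 8 / download_seconds  # 8 bits per byte
print(implied_mbit_per_s)  # 56.0 Mbit/s
```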

          China Mobile said the rates for 4G services will be cheaper than those for 3G. In some cities where the company has launched the 4G network for trial commercial use, the tariff is 20 percent less than similar 3G network plans.

          Li Yue, president of China Mobile, said the price of 4G smartphones will go down quickly following the approval of the 4G network for commercial use.

At present only a small number of smartphone models in China are equipped with modules that support the home-grown 4G TD-LTE technology, with prices ranging from 350 U.S. dollars to 800 U.S. dollars.

          Li said 4G terminals for as little as 150 U.S. dollars will be available on the market by the end of this year.

          The MIIT also said Wednesday it will test a converged TD-LTE/LTE FDD network at a later date.

          China is the major promoter of the TD-LTE standard and is also a major owner of the standard’s core patents. LTE FDD is the other international 4G standard and is popular in Europe.

          The MIIT said the convergence of the two standards is gaining momentum in the global telecom industry. A total of 10 converged TD-LTE/LTE FDD commercial networks have been established so far worldwide.

“China will issue licenses for LTE FDD when the condition is ripe,” said the ministry.

          Experts believe the commercialization of TD-LTE will create a new impetus for China’s economic growth, as the country is home to the largest number of mobile phone users in the world.

          The ministry’s statistics showed that the 3G network contributed 211 billion yuan (34 billion U.S. dollars) to China’s GDP in its first three years of commercial use.

          “The 4G industry chain, which involves terminal manufacturing and the software sector, will further improve the services of China’s telecom sector,” said spokesman Zhang Feng.

          60% of phone users in China have no plans to upgrade to 4G: report [Want China Times, Dec 6, 2013, 14:46 (GMT+8)]

More than 60% of China’s cell phone users have no plans to switch to the latest 4G technology, the Guangzhou-based Southern Daily reported on Dec. 5.

Though the paper did not give detailed information on how its poll was conducted, it said more than 60% of respondents are happy with their 3G smartphones and do not feel the need to upgrade.

          Those polled said they have a greater choice of 3G smartphones at more competitive prices than the 4G options currently available.

          Southern Daily said 4G services, for which the government began to issue licenses this week, would be attractive for the younger generation in particular but telecom carriers may need to offer more promotions and incentives to persuade people to retire their current cell phones.

          3G vs. LTE Network Architecture – SixtySec [ExploreGate YouTube channel, May 4, 2012]

          Visit http://www.exploregate.com for more videos on this topic.

          What are the differences between TDD LTE (TD-LTE) and FDD LTE (FD-LTE)? [Global TD-LTE Initiative, Nov 4, 2013]

FDD LTE and TDD LTE are two different modes of LTE 4G technology. LTE is a high-speed wireless technology from the 3GPP standard. 3G growth reached its end at HSPA+, and mobile operators have already started deploying 4G networks to provide much more bandwidth for mobile users. 4G speeds give mobile handsets a LAN-like experience by offering very high-speed access to the Internet, enabling real triple-play services such as data, voice and video over a mobile network.

          LTE is defined to support both the paired spectrum for Frequency Division Duplex (FDD) and unpaired spectrum for Time Division Duplex (TDD). LTE FDD uses a paired spectrum that comes from a migration path of the 3G network, whereas TDD LTE uses an unpaired spectrum that evolved from TD-SCDMA.

TD-LTE does not require a paired spectrum, since transmission and reception occur in the same channel. FD-LTE requires a paired spectrum: two different frequencies separated by a guard band.

TD-LTE is cheaper than FD-LTE, since TD-LTE has no need for a duplexer to isolate transmission and reception.

In TD-LTE, it is possible to change the uplink/downlink capacity ratio dynamically according to need. In FD-LTE, capacity is fixed by the frequency allocation of regulatory authorities, making dynamic change difficult.

          In TD-LTE, a larger guard period is necessary to maintain the uplink and downlink separation that will affect the capacity. In FD-LTE, the same concept is referred to as a guard band for isolation of uplink and downlink, which will not affect capacity.

Cross-slot interference exists in TD-LTE but does not apply to FD-LTE.

          What are TD-LTE’s technical highlights? [Global TD-LTE Initiative, Nov 4, 2013]

          TD-LTE transmissions travel in both directions on the same frequency band, a methodology formally known as “unpaired spectrum.” It is distinct from “paired spectrum,” where two frequencies are allocated, one for the transmit channel and the other for the receive channel (formally called “Frequency Division”). “Time Division” means the receive channel and the transmit channel take turns (i.e., divide the time between them) on the same frequency band. The time divisions are asymmetric, meaning that more time-slots are allocated to data going from the tower to the phone than from the phone to the tower. The usage patterns of the future (fewer phone calls, more Internet) are asymmetric in this manner.
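This asymmetric time division is standardized: 3GPP defines seven TDD uplink-downlink configurations over a 10-subframe radio frame (D = downlink, U = uplink, S = special). A short sketch computing the downlink/uplink split per configuration; the patterns follow 3GPP TS 36.211 Table 4.2-2:

```python
# 3GPP TS 36.211 TDD uplink-downlink configurations: one 10 ms radio frame
# is split into 10 subframes (D = downlink, U = uplink, S = special).
TDD_CONFIGS = {
    0: "DSUUUDSUUU",
    1: "DSUUDDSUUD",
    2: "DSUDDDSUDD",
    3: "DSUUUDDDDD",
    4: "DSUUDDDDDD",
    5: "DSUDDDDDDD",
    6: "DSUUUDSUUD",
}

def dl_ul_split(config: int) -> tuple:
    """Count downlink vs uplink subframes for a TDD configuration."""
    frame = TDD_CONFIGS[config]
    return frame.count("D"), frame.count("U")

# Configuration 2 favours the downlink 3:1, matching the data-heavy
# usage pattern described above; configuration 0 favours the uplink.
print(dl_ul_split(2))  # (6, 2)
print(dl_ul_split(0))  # (2, 6)
```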

          The frequency bands used by TD-LTE are 3.4–3.6 GHz in Australia and the UK, 2.57−2.62 GHz in the US and China, 2.545-2.575 GHz in Japan, and 2.3–2.4 GHz in India and Australia. The technology supports scalable channel bandwidth, between 1.4 and 20 MHz. A typical range measures up to 200 meters indoors on a 2.57–2.62 GHz radio frequency link.

          China Telecommunications: Who says TD-LTE doesn’t work? [Global TD-LTE Initiative Updates, Nov 25, 2013]

          Our existing ‘counter consensus’ view on the outlook for Chinese Telecoms is based on the belief that LTE will cause a reversal of fortune among the key players. China Mobile will solve the biggest problem identified in our consumer research (slow data speeds) and will once again have the ‘best’ mobile network in China on all dimensions. China Unicom, having gained strong momentum on the basis of their superior 3G data speeds will face a slowing of momentum – at least among high value customers seeking the latest technology

Over the last few weeks we have heard many arguments from China Mobile Bears as to why our hypothesis will be wrong. The initial arguments are usually targeted at the technology itself – that TD-LTE is a Chinese standard and a poor cousin to the much better FD-LTE more popular in Europe (it isn’t), that it doesn’t handle voice calls well (irrelevant – no operator in the world has launched a new LTE network with voice over LTE; in all cases they use existing 2G or 3G networks for voice), that handsets will not be available (ever heard of the iPhone? Not to mention Samsung, Sony, HTC, Huawei…)

China Mobile launched its TD-LTE network in Shenzhen for ‘test’ operations in early November. We thought the best way to address the Bears’ technology concerns was to go test the network for ourselves. Nearly 120 speed tests conducted from different indoor and outdoor locations supported our hypothesis that TD-LTE will be demonstrably better than Unicom’s existing 3G network in data speeds. On average we experienced download speeds 10 times faster, upload speeds 7 times faster and a dramatic improvement in latency. We concur that service coverage for LTE is currently weaker, but locations meaningful to high value customers are already largely covered. Coverage will continue to improve as China Mobile rolls out new sites.

Over the last few years, China Mobile has underperformed the market while Unicom has outperformed – we attribute most of the difference in fortune of these two companies to the relative data speed of their respective 3G networks. We believe the launch of TD-LTE services by China Mobile will start the process of reversing this. Speed tests in Shenzhen affirm our belief that TD-LTE technology works and is demonstrably superior to W-CDMA in data speeds.

          Click to download:
          China Telecommunications: Who says TD-LTE doesn’t work?
          We experienced lightning speeds in Shenzhen

[a 10-page whitepaper by Bernstein Research, Nov 18, 2013]

          Some important excerpts from that:

China Mobile has been selling TD-LTE devices and rate plans in Shenzhen since November 1st. As 4G licenses are not yet issued, these sales are described as “trials” and are limited to a small number of devices and are only available in a few cities. The LTE rate plans are provisional: service contracts are signed under a 3G rate plan which will transfer to a 4G plan in January. We believe that selling 4G services in advance of an actual license is an aggressive move, and highlights how important 4G is for China Mobile’s management.

We conducted over 100 speed tests in Shenzhen to compare the new TD-LTE network versus Unicom’s existing 3G network. Unicom has benefited tremendously from China Mobile’s misfortune with TD-SCDMA and its own good fortune of being licensed with WCDMA. Unicom also stands to suffer the most if its leadership on speed is lost. Our proprietary customer research indicated this was a key buying factor for many of Unicom’s existing customers. We went to Shenzhen (one of the cities where China Mobile is already selling 4G services) to pit China Unicom and China Mobile’s networks head-to-head. We conducted ~120 tests across various locations (indoors, outdoors, in-transit, and underground) to reach robust conclusions on speed, latency and coverage. Our test approach and sampling criteria are shown in Exhibit 1; our 4G test equipment is shown in Exhibits 2 and 3.

          As expected, our test highlighted that TD-SCDMA lags Unicom’s WCDMA in 3G data speeds. First we wanted to confirm Unicom’s data speed superiority over China Mobile on 3G network. As expected we found Unicom’s WCDMA to download and upload around 3 times faster than China Mobile’s TD-SCDMA. TD-SCDMA clocked an average of 1.1MB/s on download and 0.2MB/s on upload, compared to 2.7MB/s and 0.7MB/s for WCDMA. These results were broadly similar to field tests done by the Chinese Ministry of Industry and Information Technology (MIIT) in 2010 (see Exhibits 4 and 5).

          However, China Mobile’s TD-LTE is everything it is promised to be: the new leader in data speed. We then moved on to test TD-LTE… We found it had 3 times less latency (Exhibit 6) which improves the browsing experience making the phone feel more responsive. Download speeds clocked an average of 26.2MB/s, which was ~10 times faster than Unicom’s 3G network (Exhibit 7). Upload speeds averaged 5MB/s, which was 7 times faster than Unicom’s 3G (Exhibit 8). These performance levels were consistently observed across all locations where there was a signal. Part of TD-LTE’s outperformance is due to a lack of users on the network, however, given the large amount of spectrum expected to be allocated for LTE services we believe there will continue to be a material performance advantage over WCDMA even as the subscriber base expands.
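The "~10 times" and "7 times" multiples follow directly from the averages quoted in the excerpt; a quick check using only those figures (units as quoted in the text):

```python
# Average speeds quoted in the Bernstein excerpt (units as quoted).
unicom_3g_down, unicom_3g_up = 2.7, 0.7  # Unicom WCDMA
tdlte_down, tdlte_up = 26.2, 5.0         # China Mobile TD-LTE

print(round(tdlte_down / unicom_3g_down, 1))  # 9.7 -> "~10 times faster"
print(round(tdlte_up / unicom_3g_up, 1))      # 7.1 -> "7 times faster"
```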

          The TD-LTE network had more coverage gaps but this will improve over time. China Mobile’s TD-LTE network did have some coverage issues, even within urban Shenzhen. However the problem was less significant than feared. All the outdoor sites tested received good signals, and high traffic indoor locations (e.g. shopping malls, cafes) are also covered. The only test site where we failed to receive a signal was the underground metro station (Refer back to Exhibit 1). We suspect there are many more ‘gaps’ around, but these will be progressively fixed over time.

          Anecdotally there appears to be pent-up demand for TD-LTE services; improving availability of handsets will be key to unlocking this. Currently there are only two LTE handsets available from China Mobile: a Samsung Galaxy Note II at 5299RMB [$871] and a cheaper Huawei model at 2888RMB [$475]. One clerk told us that since launching 4G “trials” 2 weeks ago, her store had only sold one TD-LTE phone. However many customers with TD-LTE compatible iPhones (5S/5C models bought in Hong Kong) are signing up to 4G plans. We are wary of making too much from this, but agree that improving handset availability will be key to a broader uptake of the service. With integrated 2G/3G/4G chipsets available and China now being the largest smartphone market, we believe it will not be long before a large number of mid to low end devices start to appear on the market.

          More than Half of Asian Population Will Be Covered by LTE-TDD by 2018 [ABI Research News, Nov 4, 2013]

          LTE network deployments will continue to grow rapidly globally. Time-division duplex (TDD) network is picking up the pace and gaining more market traction. In Asia-Pacific, LTE-TDD networks will cover more than 53% of the population by 2018 at a compound annual growth rate (CAGR) of 41.1% between 2012 and 2018, while frequency-division duplex (FDD) networks will reach 49% population coverage by the end of 2018.

          “The increase of LTE-TDD population coverage is mainly driven by wide deployment in some Asian countries with large populations, such as China, India, and Japan,” comments Marina Lu, research associate at ABI Research. “Due to its complementarity of using unpaired spectrum, a number of LTE-FDD operators will expand their networks with LTE-TDD in additional spectrum to improve network capacity.”

          Among Asia-Pacific’s recently completed, on-going, and upcoming 4G spectrum auctions, 25% concern 2,600 MHz, 25% 1,800 MHz, and 20% 800 MHz, which is consistent with the popularity of the 2,600 MHz band for LTE-TDD networks. “Asia-Pacific will be the region with the most LTE-TDD networks,” adds Jake Saunders, VP and practice director. “Of global LTE-TDD concluded contracts awarded to vendors so far, 47% come from Asia-Pacific and the second largest portion of 18% is contributed by the Middle East.”

          Considering spectrum efficiency, spectrum bandwidth, network capacity, etc., a number of operators are preparing to upgrade LTE networks to LTE-Advanced networks. In ABI Research’s latest survey, there have been 29 LTE Advanced network commitments worldwide by Q3 2013, of which 10 commitments come from Western Europe, 9 from Asia-Pacific, and 5 from North America.

          TD-LTE global market overview [Global TD-LTE Initiative Updates, Sept 13, 2013]

With the Long Term Evolution (LTE) standard continuing to develop, international differences in planning and frequency allocation timetables have resulted in different frequency bands being used in different countries. The TD-LTE standard’s greater efficiency in terms of frequency spectrum usage has attracted the attention of carriers in a number of other countries.

          21 TD-LTE commercial networks have been launched as of August, 2013, and 39 LTE TDD commercial networks are in progress or planned. (Source: GSA)

          image

TD-LTE’s unique features have also played an important part in the technology’s growing stature in the market. Because TD-LTE makes asymmetrical use of unpaired spectrum for both uplink and downlink, it is a spectrally efficient technology. Spectrum is a valuable commodity for mobile operators, especially those who operate in countries where there is a limited amount of available FDD spectrum, or where only a single unpaired frequency is available. Driven by its spectral efficiency, TD-LTE is now increasingly being viewed as an attractive proposition in many markets.

          GSA confirms 244 LTE networks are commercially launched, LTE1800 now mainstream [news article by GSA, Dec 5, 2013]

          The latest update of the Evolution to LTE report from GSA (Global mobile Suppliers Association) confirms that 244 operators have commercially launched LTE services in 92 countries.

          98 LTE networks have been commercially launched so far in 2013.

          The report confirms that 499 operators are investing in LTE in 143 countries. This is made up of 448 firm operator commitments to build LTE networks in 134 countries, plus 51 additional operators engaged in various trials, studies, etc. in a further 9 countries.

          From amongst the committed operators, 244 have commercially launched services, which is 78% more than a year ago.

          GSA forecasts there will be 260 LTE networks in commercial service by the end of this year.

          The majority of LTE operators have deployed the FDD mode of the standard. The most widely used band in network deployments continues to be 1800 MHz which is used in over 44% of commercially launched LTE networks. 108 operators worldwide have launched LTE1800 (band 3) systems, 157% more than a year ago, in 58 countries, either as a single band system, or as part of a multi-band deployment.
          1800 MHz spectrum is typically refarmed from its original use for 2G/GSM, facilitated by technology-neutral licensing policies.
          As 1800 MHz is the prime band for LTE deployments worldwide, it will greatly assist international roaming for mobile broadband. Mobile licences for 1800 MHz have been awarded to 350+ operators in nearly 150 countries.
          The number of LTE1800 terminals has tripled in each of the past 2 years. One third of all announced LTE user devices can operate in 1800 MHz band 3 spectrum. LTE1800 is a mature, mainstream technology.
          The next most popular contiguous bands are 2.6 GHz (band 7) as used in 29% of networks in commercial service today, followed by 800 MHz (band 20) in 12% of networks, and AWS (band 4) in 8% of networks.

          Interest in the TDD mode continues to strengthen globally ahead of the large-scale commercial deployments in China. Worldwide, 25 LTE TDD (TD-LTE) systems are commercially launched in 20 countries, of which 12 are deployed in combined LTE FDD & TDD operations.

          image
          The report includes a growing list of operators who have commercially launched, or are preparing to introduce, enhancements to their networks, including multicarrier support for Category 4 user devices (150 Mbps theoretical peak downlink speed) and LTE-Advanced features, especially carrier aggregation, which is a key trend.
          The report also confirms how voice service has moved up the agenda for many LTE operators as network coverage has improved (nationwide in many cases) and as the penetration and usage of LTE-capable smartphones has increased. VoLTE services have been launched by operators in Asia, Europe, and North America and several more operators have committed to VoLTE deployments and launches over the next few months.
          The Evolution to LTE report (December 5, 2013) is a free download for registered site users
          Registration page for new users: http://www.gsacom.com/user/register
          Numerous charts, maps etc. confirming the progress of mobile broadband developments including LTE are also available on the home page and at www.gsacom.com/news/statistics.

          GSA confirms 1,240 LTE user devices launched, support building for LTE-Advanced systems [news article by GSA, Nov 7, 2013]

          The latest update to the ‘Status of the LTE Ecosystem’ report published by the GSA (Global mobile Suppliers Association) confirms that 120 manufacturers have announced 1,240 LTE-enabled user devices, including frequency and carrier variants.
          680 new LTE user devices were announced in the past year. The number of manufacturers increased by 44% in this period. Smartphones continue to be the largest LTE device category with 455 products released, representing 36% share of all LTE device types. 99% of LTE smartphones also operate on 3G networks (HSPA/HSPA+ or EV-DO or TD-SCDMA technologies).

          The report covers devices that operate in the FDD and/or TDD modes of the LTE system. The majority of products are designed for operation in the FDD mode. However, 274 devices can operate in the LTE TDD (TD-LTE) mode, a figure 159 higher than a year ago.

          The largest LTE device ecosystems for the FDD bands are as follows:
          – 2600 MHz band 7 = 448 devices
          – 1800 MHz band 3 = 412 devices
          – 800 MHz band 20 = 314 devices
          – 2100 MHz band 1 = 305 devices
          – 700 MHz bands 12, 17 = 289 devices
          – AWS band 4 = 279 devices
          – 700 MHz band 13 = 250 devices
          – 850 MHz band 5 = 189 devices
          – 900 MHz band 8 = 174 devices
          – 1900 MHz band 2 = 134 devices
          TDD bands:
          – 2600 MHz band 38 = 197 devices
          – 2300 MHz band 40 = 184 devices
          – 1900 MHz band 39 = 71 devices
          – 2600 MHz band 41 = 63 devices
          – 2500 MHz bands 42, 43 = 15 devices
          (totals include carrier and operator variants)

          The Evolution to LTE report (October 17, 2013) is also available as a free download to registered site users via the link at http://www.gsacom.com/gsm3g/infopapers

          Note that by the time 4G based on TD-LTE arrives, the leading edge of LTE will be much further ahead, as SK Telecom Demonstrates 225 Mbps LTE-Advanced [press release, Nov 28, 2013]

          • Successfully demonstrates the upgraded LTE-Advanced: Aggregates 20MHz bandwidth in 1.8GHz band and 10MHz bandwidth in 800MHz band to offer up to 225Mbps of speed
          • Expects to launch the ‘20MHz+10MHz’ LTE-Advanced service in the second half of 2014 and plans to introduce 3 Band Carrier Aggregation at an early date
          SK Telecom (NYSE:SKM) today held a press conference to demonstrate the upgraded LTE-Advanced service that offers up to 225Mbps of speed by aggregating 20MHz bandwidth in 1.8GHz band and 10MHz bandwidth in 800MHz band.

          LTE can only offer up to 150Mbps of speeds using a maximum of 20MHz of continuous spectrum in one band, while LTE-Advanced can support speeds over 150Mbps by combining different bands through Carrier Aggregation (CA).
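A useful rule of thumb behind these figures: LTE’s theoretical peak downlink (with 2x2 MIMO and 64QAM) works out to roughly 7.5 Mbps per MHz, so carrier aggregation scales the peak linearly with total aggregated bandwidth. A minimal sketch of that scaling (the per-MHz figure is an assumption derived from the 150 Mbps / 20 MHz ratio quoted above):

```python
PEAK_MBPS_PER_MHZ = 7.5  # assumed: ~150 Mbps / 20 MHz (2x2 MIMO, 64QAM downlink)

def peak_downlink_mbps(*carrier_mhz):
    """Theoretical peak downlink rate for a set of aggregated component carriers."""
    return PEAK_MBPS_PER_MHZ * sum(carrier_mhz)

print(peak_downlink_mbps(20))          # single 20 MHz carrier: 150.0 Mbps
print(peak_downlink_mbps(20, 10))      # 20+10 MHz CA: 225.0 Mbps
print(peak_downlink_mbps(20, 10, 10))  # 20+10+10 MHz 3-band CA: 300.0 Mbps
```

The last line reproduces the 300 Mbps target SK Telecom cites for three-carrier aggregation later in the release.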

          Insert of mine:
          [WIS2013] SK텔레콤 LTE-Advanced [SK telecom YouTube channel, May 20, 2013]

          SK텔레콤도 World IT Show 2013에 ‘선을 넘다.’라는 테마로 함께 했습니다. LTE를 넘어서는 LTE-Advanced! WIS2013도 SKT와 함께 하세요! 🙂 (Translation: SK Telecom joined World IT Show 2013 under the theme ‘Crossing the line.’ LTE-Advanced, going beyond LTE! Come experience WIS2013 with SKT!)
          In June 2013, SK Telecom has commercialized, for the first time in the world, LTE-Advanced service using 10MHz bandwidth in 1.8GHz band and 10MHz bandwidth in 800MHz band. Backed by a wide range of mobile value added services specially designed for the LTE-Advanced network, and a rich lineup of LTE-Advanced capable devices (8 different smartphone models), SK Telecom’s LTE-Advanced service is attracting subscribers at a rapid pace.
          Moreover, on August 30, 2013, SK Telecom has gained authorization to operate the 35 MHz bandwidth (20 downlink + 15 uplink) in 1.8GHz band, and immediately launched diverse measures to strengthen both its LTE and LTE-Advanced services by utilizing the newly acquired bandwidth.

          Once SK Telecom commercializes the upgraded LTE-Advanced (20MHz+10MHz), customers will be able to download an 800MB movie in just 28 seconds, significantly faster than other networks. Measured at their maximum speeds, downloading the same movie file via 3G, LTE, and the existing LTE-Advanced (10MHz+10MHz) would take 7 minutes and 24 seconds, 1 minute and 25 seconds, and 43 seconds, respectively.
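The download times quoted above follow directly from the technologies’ theoretical peak rates; a quick back-of-envelope check (a sketch that ignores protocol overhead, and assumes 14.4 Mbps for 3G and 75 Mbps for the single-carrier 10 MHz LTE network, rates consistent with the release’s figures):

```python
def download_time(size_mb, peak_mbps):
    """Seconds to download size_mb megabytes at a theoretical peak of peak_mbps."""
    return size_mb * 8 / peak_mbps  # 1 megabyte = 8 megabits

# Assumed peak rates: 3G ~14.4 Mbps, LTE 75 Mbps (10 MHz carrier),
# LTE-A 10MHz+10MHz CA 150 Mbps, upgraded LTE-A 20MHz+10MHz CA 225 Mbps
for label, rate in [("3G", 14.4), ("LTE", 75),
                    ("LTE-A 10+10", 150), ("LTE-A 20+10", 225)]:
    t = download_time(800, rate)  # the 800MB movie from the release
    print(f"{label}: {int(t // 60)}m {round(t % 60):02d}s")
```

This reproduces the quoted 7m 24s, 1m 25s, 43s, and 28s figures for the 800MB movie.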

          The company said that it expects to launch the ‘20MHz+10MHz’ LTE-Advanced service nationwide through smartphones in the second half of 2014 as the smartphone chipset that supports 225 Mbps of speeds is currently being developed.

          Furthermore, by successfully demonstrating the ‘20MHz+10MHz’ CA, SK Telecom moves one step closer to realizing the next level of LTE-Advanced technology: Aggregating three component carriers (20MHz+10MHz+10MHz) to support up to 300Mbps of speed.

          Alex Jinsung Choi, Executive Vice President and Head of ICT R&D Division at SK Telecom, said, “SK Telecom has been leading the development of wireless networks since it commercialized CDMA (2G) technology for the first time in the world in 1996. Today’s successful demonstration of 225 Mbps LTE-Advanced will serve as a momentum for SK Telecom to realize more innovative network technologies, which will also lead to the growth of relevant industries, including device, content and convergence fields.”
          But already: SK Telecom, China Mobile agree on automatic LTE roaming service [Yonhap, Dec 5, 2013]
          SK Telecom Co., South Korea’s largest mobile operator, said Thursday that it has agreed to launch an automatic international Long Term Evolution (LTE) roaming service with China Mobile Ltd., as well as other LTE services.
          Under the deal, travelers and businesspeople will be able to use their regular LTE services offered by the two mobile carriers more easily between the two countries, according to SK Telecom.
          About 6.8 million Koreans and Chinese traveled between the two countries last year.
          Early this year, SK Telecom and CSL Ltd. of Hong Kong successfully demonstrated the compatibility of their two LTE networks. The international automatic LTE roaming service has been available since June this year.
          Since October, SK Telecom also has offered a similar roaming service with Saudi Arabia.
          image
          SK Telecom CEO Ha Sung-min (R) and China Mobile’s Chairman Xi Guohua
          pose for a photo at SK Telecom’s headquarters in Seoul.


          China Mobile:

          New era for mobiles as 4G licenses issued to carriers [Xinhuanet, Dec 5, 2013]

          China issued long-awaited 4G licenses to three telecommunications carriers yesterday, which would offer mobile Internet access 20 to 50 times faster than the current 3G network and create a new trillion-yuan market for devices and services.
          China, the world’s biggest mobile phone market, has now officially entered the 4G era five years after it issued 3G licenses. The technology is widely adopted in the United States, Europe, Japan, South Korea and other regional markets.
          The network, along with e-commerce and software businesses, is expected to boost information consumption and market demand, and encourage innovation in China, according to the Ministry of Industry and Information Technology.

          China Mobile will launch 4G services in Shanghai, Beijing and 11 other cities by the end of this year. The number of cities will expand to 340 by the end of 2014.

          Users can upgrade to the 4G network without changing phone numbers, China Mobile said yesterday. It has been testing 4G networks for two years.

          China Mobile, China Unicom and China Telecom all got 4G licenses based on TD-LTE (time division-long term evolution) technology. China Unicom and China Telecom also got approval to test another 4G technology FD-LTE (frequency division-LTE), which is mainly used in overseas markets.
          China will issue FD-LTE 4G licenses later, the ministry said.
          China Mobile also got the approval to operate fixed-line business including family broadband, which makes it possible to launch bundled services, the ministry added.
          “It’s a national strategy to boost commercial 4G development to boost consumption and fuel-related investment,” the ministry said on its website.
          The ministry said that 4G had become an engine for the development of the whole IT industry, fueling demand for the latest smartphones. With greatly improved speed and more powerful phones, new mobile Internet services will appear that will enrich people’s daily lives, the ministry said.
          With 4G, mobile users can download a film (700 megabytes) in two minutes and a high-quality song (7MB) in less than a second. More 4G-related services such as video on demand, conferencing, high-quality music streaming, multiplayer games and remote video monitoring for medical and security services are being tested, industry insiders said.
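Working backwards, those download claims imply sustained throughputs of roughly 47 to 56 Mbps, comfortably within TD-LTE’s theoretical peak; a back-of-envelope check:

```python
def implied_mbps(size_mb, seconds):
    """Average throughput needed to move size_mb megabytes in the given time."""
    return size_mb * 8 / seconds  # 1 megabyte = 8 megabits

print(implied_mbps(700, 120))  # 700 MB film in two minutes → ~46.7 Mbps
print(implied_mbps(7, 1))      # 7 MB song in one second → 56.0 Mbps
```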
          The initial investment for 4G will reach 500 billion yuan (US$82 billion) in a few years, and is expected to hit 1 trillion yuan with the industry’s development.
          “4G LTE is the fastest growing mobile technology since the inception of mobility some 25 years ago. And we know that mobile broadband will have a huge impact on people, business and society and be one of the most critical infrastructures for any country,” said Hans Vestberg, chief executive of Ericsson, the world’s largest telecommunications equipment vendor.
          By 2019, China will be home to 700 million mobile subscribers on 4G, making it the world’s biggest 4G market, according to Ericsson.
          Equipment makers including Ericsson, Huawei, ZTE and Alcatel-Lucent Shanghai Bell are going to benefit from the 4G wave.
          “We are fully prepared for providing handsets for China’s own 4G technology, from entry-level to high-end phones,” said Cher Wang, HTC’s chairman.

          China Mobile is going to launch 4G services with a new brand He, meaning harmony in Chinese, on December 17. The carrier may offer iPhones supporting TD-LTE then, according to industry sources.

          In cities such as Beijing and Shenzhen, China Mobile has allowed users to apply for trial commercial use of 4G services with their own devices. In Shanghai, more than 1,800 people have been invited to test 4G services.

          Its target is to cover 100 cities by the middle of next year and 340 by the end of 2014, when it plans to launch 4G phones that cost less than 1,000 yuan each. In the first half, it will launch 50 new 4G phones.

          In Shanghai, nine TD-LTE phones will be available by the end of this year. Users can apply for 4G services at China Mobile’s outlets on Madang Road and Minsheng Road initially, to be expanded to 20 outlets citywide.

          Shanghai Mobile also plans to establish an additional 3,000 4G base stations next year from the current 700, to cover the whole city including suburban and rural regions.

          (Source: Shanghai Daily)

          From 2013 Interim Results Presentation as of Aug 15, 2013

          image

          From China Mobile 2012 Annual Report [April 25, 2013]

          Business Overview

          … starting from 2013, we commenced investments in the development of TD-LTE network. We intend to use the TD-LTE network to primarily carry high bandwidth and high quality wireless broadband businesses. In 2012, the extended large scale trial of the TD-LTE network was carried out in 15 cities in Mainland China and approximately 20,000 base stations were built. The quality and scale of the TD-LTE networks in Hangzhou, Guangzhou and Shenzhen have reached pre-commercial standard. In addition, we started providing commercial 4G services in Hong Kong in 2012 with the LTE FDD and TD-LTE bandwidths we previously obtained from the Office of the Telecommunications Authority of Hong Kong in 2009 and 2012, respectively. We plan to construct more than 200,000 TD-LTE base stations in 2013. [Certain 3G base stations may also be upgraded to TD-LTE base stations.]

          China Mobile lifts hopes of Apple deal and 4G launch [Shanghai Daily via Xinhuanet, Oct 31, 2013]

          China Mobile is raising consumer hopes that the next-generation 4G mobile network will be launched soon and that a long-awaited deal between the world’s largest telco and Apple Inc may be unveiled as early as next week.

          The telco’s website displays a cartoon tornado advertisement that announces “the invasion of 4G” and “November 9-11.” The ad links to a page showing two images of smartphones that resemble iPhones and a caption that says “special discounts.”

          November 11, or Singles’ Day, is the busiest shopping day of the year in China. Last year, it generated 4 billion U.S. dollars in online sales alone, according to retail consultant McKinsey Global Institute.

          China Mobile declined to comment but its senior executives said earlier that it would distribute 4G phones, including Apple’s latest iPhone 5S, after China issues 4G licenses expected by the end of this year.

          Meanwhile, the Ministry of Industry and Information Technology has approved the sale of several 4G models made by Sony, ZTE and other vendors.

          China Mobile hopes the expected tie-up with Apple will boost revenue and profit, especially in the high-end market segment, after its net profit for the first three quarters of this year fell for the first time, by 1.9 percent, to 91.5 billion yuan (14.8 billion U.S. dollars).

          China Mobile’s Beijing branch jumps on 4G technology wave [China Daily USA, Nov 6, 2013]

          Carrier to begin sales of newest network-enabled smartphones
          Beijing has become the latest Chinese city to join the wave of tests for fourth generation, or 4G, mobile networks, despite the fact that the government has yet to issue 4G licenses to telecom carriers.
          On Tuesday, China Mobile Ltd’s Beijing branch said it would start sales of 4G smartphones on Wednesday. The first batch of 4G handsets includes two models – Sony Corp’s M35T and Samsung Electronics Co Ltd’s Galaxy Note 2.
          Customers do not need to change their phone numbers but just have to get a new SIM card for their 4G handsets, according to a statement from China Mobile. Fourth-generation wireless networks achieve data download speeds of up to 80 megabits per second, four times faster than 3G networks.
          However, the coverage of 4G networks in Beijing is limited, said Gao Shu, a spokeswoman for China Mobile’s Beijing branch. Only people in areas inside the capital’s Third Ring Road will be able to access the network.
          “Our 4G smartphones are aimed at high-end, white-collar workers in Beijing,” Gao said.
          Before Beijing, a handful of affluent Chinese cities, including Guangzhou and Hangzhou, have started offering 4G services on a trial basis.
          China Mobile – the only operator in the country currently testing 4G networks – has adopted the domestic Time Division-Long Term Evolution (TD-LTE) 4G technology.
          The number of applicants for 4G services is expected to surpass 100,000 in major cities, according to a China Mobile official, who asked not to be named.
          Meanwhile, the lack of mature 4G smartphones has long been seen as a major obstacle to the expansion of China Mobile’s 4G business. But the situation has improved in recent months. According to a report from Bank of China International Securities, as of Sept 11, a number of smartphone models had received permission from Chinese authorities to run on 4G networks. The new smartphones are being made by domestic and international companies, including Samsung, Sony, Huawei Technologies Co Ltd and ZTE Corp, the report said.
          “The planned 4G commercial rollout is very good news for China Mobile, as well as for smartphone companies and mobile Internet companies,” said Wang Jun, an analyst with Beijing-based research firm Analysys International.
          China Mobile’s net profit dropped 9 percent in the third quarter partly due to the increasing challenges posed by mobile Internet applications such as Tencent Holdings Ltd’s WeChat.
          “The 4G business can help the carrier to attract more high-end users from rivals,” Wang said.
          Apple Inc has also said that its latest iPhone 5S and iPhone 5C handsets may support TD-LTE technology.
          James Yan, an analyst with IDC China, pointed out that the timing for launching 4G services in China is right.
          “The environment could not be better. Customers favor smartphones, carriers have the motivation to do 4G services, and distributors know how to sell 4G products to people,” Yan said.
          The launch of 4G services in China will definitely be a new driver for the growth of the nation’s smartphone market, he added.
          “4G will be an important factor to make people buy new phones,” Yan said.
          Ryan Reith, program director at IDC’s Worldwide Quarterly Mobile Phone Tracker, said that China has become one of the fastest-growing smartphone markets in the world, accounting for more than one-third of total shipments in the third quarter of the year.

          China Mobile to launch all-service brand [China Daily, Nov 20, 2013]

          China Mobile Ltd, the nation’s biggest telecom carrier by subscriber numbers, revealed on Tuesday that it would officially launch a new brand “He” (And) on Dec 18, mainly targeting the upcoming fourth generation (4G) mobile business.

          The new brand’s logo features grass green and peach blossom colors. According to China Mobile officials, the company’s currently running brands – GoTone, EasyOwn, M-Zone and G3 for 3G mobile services – will be phased out after the launch of “He”.

          That means “He” will take the stage as an all-service brand for China Mobile and provide customers with integrated 2G, 3G and 4G mobile services.

          Commercial 4G to start December 18 [Shanghai Daily, Nov 25, 2013]

          China will start commercial 4G mobile communications services on December 18, bringing the most advanced telecommunications technology to the country’s more than 1 billion mobile users.
          China Mobile, the country’s No. 1 mobile operator with over 700 million users, will start 4G services on that date with a new brand He, meaning harmonious in the Chinese language.
          China is expected to issue licences for 4G before the telco’s new services start.

          “It will be a national event and users are allowed to apply for 4G services without changing numbers,” said a Shanghai Mobile official.

          Users in Beijing, Guangzhou and Chongqing will be the first to enjoy commercial 4G, or fourth generation, services. Shanghai, which is still building a citywide 4G network, will launch the services later.

          Though China is the world’s biggest mobile phone market with more than 1 billion users on its mainland, it lacks the 4G technology that is used in some other countries and regions including the United States, South Korea, Japan, Singapore and Hong Kong.

          The 4G phone will rapidly become popular on China’s mainland, thanks to the low cost of 4G handsets, according to Li Yue, China Mobile’s president, who expects some 4G phones priced below 1,000 yuan (US$162) to appear in the second half of next year.

          Apple Inc is also set to introduce iPhones supporting the 4G network in China, industry insiders said. The US giant and China Mobile are in negotiations over the 4G iPhone and they will launch it officially on December 18.
          China Telecom and China Unicom are now Apple’s carrier partners for its smartphone on the Chinese mainland.

          Apple will partner with China Mobile [CNN YouTube channel, Dec 5, 2013]

          Sanford Bernstein Senior Analyst Mark Newman discusses reported China Mobile iPhone deal.

          China Mobile still talking to Apple on iPhones [Reuters, Dec 5, 2013 9:27am EST]

          Earlier in the day, the Wall Street Journal reported that the two giants had signed a deal, citing an anonymous source familiar with the matter.

          “We are still negotiating with Apple, but for now we have nothing new to announce,” China Mobile spokeswoman Rainie Lei said, declining to elaborate. Apple also declined comment.

          Moody’s: TD-LTE License Is Credit Positive for China Mobile [Moody’s Global Credit Research announcement, Dec 6, 2013]

          Hong Kong, December 06, 2013 — Moody’s Investors Service says that the Chinese government’s decision to issue a Time-Division Long-Term Evolution (TD-LTE), or 4G, license, is credit positive for China Mobile Limited (Aa3 stable) as this will help strengthen its market position in the growing wireless data business.

          On 4 December, China Mobile announced that the Ministry of Industry and Information Technology had granted its parent, China Mobile Communications Corporation (CMCC, unrated), permission to operate the TD-LTE business and China Mobile will assist CMCC in the construction and operations of the TD-LTE network.

          China Mobile is likely to enjoy the first mover advantage in the TD-LTE business as it has been investing in the technology since early 2013, well ahead of its competitors.

          China Mobile targets to build over 200,000 commercial-ready base stations and expand its network coverage to 100 major cities by the end of this year. It has already started trials in some of the major cities, including Beijing.

          While its two major competitors, China United Network Communications Group Co Ltd (China Unicom, unrated) and China Telecom Corporation (unrated), also obtained TD-LTE licenses at the same time, we expect these companies to start major investments only in 2014.

          In fact, these companies plan to use Frequency Division Duplex (FDD)-LTE — an international standard used outside China — as their mainstream 4G technology. However, the FDD-LTE licenses have not yet been granted and any delay in the issuance of the licenses will be advantageous for China Mobile.

          Although TD-LTE is a home-grown technology, China Mobile is unlikely to be hampered by the lack of choice in 4G handsets, as was the case with its 3G indigenous technology platform (Time Division-Code Division Multiple Access, or TD-SCDMA).

          TD-LTE technology has been accepted internationally, with 59 operators and 54 manufacturers joining the global TD-LTE initiative as of H1 2013. In addition, 25 models of TD-LTE trial devices were launched and over 100 models are under development, of which 15 handsets are intended for commercial use.

          Moody’s believes that Apple’s new iPhones have also become technologically compatible with TD-LTE, as well as TD-SCDMA, although China Mobile has not yet started selling iPhones.

          The launch of TD-LTE is strategically important for China Mobile to strengthen its market position in the growing wireless data business.

          China Mobile had about 759 million customers as of October 2013, of which 176 million were 3G customers. Its 3G subscribers are growing rapidly with over 100% growth since May 2013 on a year-over-year basis.

          Moody’s expects its wireless data business to continue its solid growth. The wireless data revenue has grown 62% in H1 2013 on a year-over-year basis. In H1 2013 the business accounted for 17% of its telecommunications services revenue, up from 11% in H1 2012.

          However, China Mobile’s market share for 3G services has been much smaller than its overall mobile market share. As of October 2013, its 3G market share was 45% (China Unicom 30% and China Telecom 25%) while its overall mobile market share was 62% (China Unicom 23% and China Telecom 15%), largely because of the use of TD-SCDMA despite the recent improvement in its 3G market share.

          Moody’s expects the launch of TD-LTE will help China Mobile improve its market position in the wireless data segment and slow the pace of declines in average revenue per user (ARPU), as the ARPU of data users tends to be higher.

          The large investments in TD-LTE will continue to pressure China Mobile’s cash flow. Moody’s expects its adjusted free cash flow (FCF)/debt to fall to below 0% in 2013 and 2014 from over 60% in 2012.

          Moody’s expects that the company’s adjusted capital expenditure as a percentage of revenue from telecommunications services will increase to over 30% in 2013 and 2014, from below 25% of its revenue in 2012.

          Nevertheless, its overall credit profile will remain in line with its rating, supported by its solid overall operating and financial profiles, as well as its excellent liquidity. For example, Moody’s expects China Mobile’s adjusted debt/EBITDA to remain at approximately 0.3x.

          The principal methodology used in this rating was the Global Telecommunications Industry published in December 2010. Please see the Credit Policy page on http://www.moodys.com for a copy of this methodology.

          China Mobile is the leading provider of mobile telecommunications services in China, offering voice and data services in all 31 provinces and autonomous regions, as well as in Hong Kong. It is 74% owned by CMCC, which in turn is wholly owned by China’s State-owned Assets Supervision and Administration Commission.


          China Telecom:

          LTE/4G DIGITAL CELLULAR MOBILE SERVICE OPERATION PERMIT [China Telecom’s regulatory announcement for Hong Kong Exchange, Dec 4, 2013]

          This announcement is made pursuant to Rule 13.09 of the Rules Governing the Listing of the Securities on The Stock Exchange of Hong Kong Limited and Part XIVA of the Securities and Futures Ordinance (Cap. 571 of the Laws of Hong Kong).
           

          The Board (the “Board”) of directors of China Telecom Corporation Limited (the “Company”) announced that the Company was notified by China Telecommunications Corporation (the parent company of the Company) that China Telecom has been granted by the Ministry of Industry and Information Technology of the PRC the permit to operate the LTE/4G digital cellular mobile service (TD-LTE). Meanwhile, China Telecom will apply for the permit to operate the LTE/4G digital cellular mobile service (LTE FDD) as soon as practicable.

          In order to proactively implement national innovation strategy and leverage collaborated use of different spectrum resources to meet customers’ demand, the Company aims to adopt a flexible approach in deployment of LTE network with one hybrid network of integrated resources. The Company will flexibly deploy the LTE network with regard to data business growth and value chain development. In particular, the LTE deployment would only start from densely populated areas, overlaying on existing superior 3G network for long-term integrated operation. The Company would grasp the rapidly growing data business opportunities with an aim to better enhance customers experience and corporate return.
          The Company believes that the issue of 4G digital cellular mobile service operation permit will be beneficial to the sustainable development of the telecommunications industry. It will also foster the informatisation consumption and economic growth. However, it will simultaneously intensify market competition. The Company will proactively leverage its operation edge and strive to foster the sustainable development of its business.
          In the meantime, investors are advised to exercise caution in dealing in the securities of the Company.
          By Order of the Board
          China Telecom Corporation Limited
          Wang Xiaochu
          Chairman and Chief Executive Officer

          From Edited Transcript of 2013 Interim Results Investor Presentation and 2013 Interim Results Presentation of Aug 21, 2013:

          image

          Slide 10: To Deploy LTE Trial Network Timely & Appropriately
          To support national technology innovations and allow flexible use of spectrum resources to meet customer demand, we plan to deploy one hybrid LTE network of integrated resources, sharing the core network with wireless access through both TDD and FDD. Thus, most of the LTE network investments would support both TDD and FDD services, offering us flexibility in long term development and return enhancement.
          We will continue to fully leverage existing nationwide superior 3G and fibre broadband networks to serve our customers. LTE deployment would only start from densely populated areas.
          We plan to flexibly deploy LTE network with regard to future LTE licensing, data business growth & value chain development, overlaying on existing superior 3G network for long-term integrated operation to enhance customer experience & return.

          China Telecom to launch TD-LTE trial network construction [Global TD-LTE Initiative Updates, Oct 25, 2013]

According to informed sources, the Ministry of Industry and Information Technology has recently approved China Telecom’s launch of TD-LTE trial network construction and related pre-commercial services. This means that in the 4G era China Telecom will eventually get two licenses, for an integrated FDD LTE/TD-LTE network.

“China Telecom will build its 4G network using an integrated FDD LTE/TD-LTE approach,” China Telecom Chairman Mr. Wang had previously stated publicly. “Since spectrum is the core resource constraining operators in the 4G era, network integration is inevitable.”

A week ago, China Telecom completed its centralized procurement (“Jicai”) tender for LTE core network (EPC) master equipment. It is understood that although the amount tendered was not large, the tender covered all 31 of the country’s provinces, and mainstream domestic and international equipment manufacturers, including ZTE, Huawei, Shanghai Bell and Ericsson, each received a certain share, with ZTE, Huawei and Shanghai Bell winning relatively large shares.

It is understood that the winning vendors’ equipment supports FDD/TDD multi-mode networks, which also shows that China Telecom has begun preparations for deploying TD-LTE.

Late last year, China Telecom began 4G trials in Shanghai, Nanjing, cities in Guangdong and elsewhere; these were, however, dominated by FDD LTE trial networks. The Ministry’s approval indicates that China Telecom has decided to build its 4G trial network as a hybrid FDD LTE/TD-LTE network.

According to Mr. Wang’s earlier outline of China Telecom’s 4G network planning, the large-scale, wide-coverage 4G network will use the FDD standard, while densely populated urban areas will use the TDD system; this integrated scheme will be able to meet all user needs.

In addition, China Telecom’s terminal planning shows that its 4G handsets will mainly be standard FDD LTE multi-mode phones, while its data cards will mainly use TD-LTE network resources.


          China Unicom:

Announcement LTE/4G Digital Cellular Mobile Service Operation (TD-LTE) Permit [China Unicom’s regulatory announcement for Hong Kong Exchange, Dec 4, 2013]

          This announcement is made pursuant to Rule 13.09 of the Rules Governing the Listing of Securities on The Stock Exchange of Hong Kong Limited (the “Listing Rules”) and Part XIVA of the Securities and Futures Ordinance (Cap. 571).

On 4 December 2013, China Unicom (Hong Kong) Limited (the “Company”) was notified by its ultimate parent company, China United Network Communications Group Company Limited (中國聯合網絡通信集團有限公司) (“Unicom Parent”), that Unicom Parent has been granted the license to operate LTE/4G digital cellular mobile service (TD-LTE) by the Ministry of Industry and Information Technology of the People’s Republic of China (“MIIT”) on 4 December 2013. MIIT has also granted approval for Unicom Parent to license China United Network Communications Corporation Limited (中國聯合網絡通信有限公司), a wholly-owned subsidiary of the Company, to operate LTE/4G digital cellular mobile service (TD-LTE) nationwide in China.

          Meanwhile, the Company will continue to proactively apply for the launch of LTE FDD technology test run. It aims to leverage on the 3G network in order to provide users with mobile broadband data services with a higher speed.

          By Order of the Board
          CHINA UNICOM (HONG KONG) LIMITED
          CHU KA YEE
          Company Secretary

          From 2013 Interim Results Presentation as of Aug 8, 2013

          image

          From INTERIM REPORT 2013 as of August 8, 2013

          [p. 3]

          To support its sustainable growth in the future, the Company further enhanced its network capabilities with a focus on network architecture as well as mobile, broadband and transmission networks so as to strengthen its network advantages in broadband and mobile Internet. In the first half year, the Company added 33 thousand new 3G base stations, and opened HSPA+ 21Mbps services over the whole 3G network, with speed up to 42Mbps at some urban hot spot areas. The Company accelerated fiber optic deployment. Its broadband access ports increased by 19.9% year-on-year, and FTTH/B accounted for 63% of total access ports, representing an increase of 10 percentage points over the same period last year. In order to better meet the demand from HSPA+, LTE and integrated services, the Company optimised the structure and enhanced the coverage of its infrastructure and transmission networks.

          From China’s telecom firms reveal 4G strategies [Xinhuanet, June 27, 2013]

          … the other two smaller Chinese telecom operators – China Unicom (Hong Kong) Ltd and China Telecom Corp Ltd – have expressed their willingness to adopt the Frequency Division Duplex-Long Term Evolution, or FDD-LTE, technology, or at least to build a converged network under both standards.

          TD-LTE and FDD-LTE are the two major 4G international standards, but the latter has gained more popularity across the globe and has stronger industry support.

          Lu Yimin, general manager of China Unicom, said the company is conducting tests for 4G wireless networks with mixed technologies. It is the first time that China Unicom has admitted that it is actively preparing to launch 4G services.

          However, Lu added that because the Chinese government has not yet awarded the 4G licenses, China Unicom’s final strategy is still “uncertain.” Lu also made the remarks at Shanghai’s Mobile Asia Expo.

          Last weekend, Wang Xiaochu, China Telecom’s chairman, confirmed that the company is stepping up efforts for its LTE network trials.

          “It’s inevitable (for China Telecom) to adopt a converged network, since the spectrum is at the core of every carrier’s resources,” Wang said.

          China Unicom tests 4G network [China Daily via Xinhuanet, Aug 9, 2013]

          China United Network Communications Co Ltd, known as China Unicom, said on Thursday that it has started testing a TD-LTE 4G network, which it will use if the government doesn’t allow it to use its favored FDD-LTE technology in the upcoming 4G licensing process.

          China’s second-biggest mobile operator by subscribers is said to have taken the preemptive action because it expects the government to follow a similar strategy as in its 3G auction, when it first awarded licenses for TD-LTE networks, a technology which is mostly backed by its arch-rival China Mobile Ltd, which has the most subscribers in the country.
          The government is widely expected to award 4G licenses before the end of the year. And if it licenses TD-LTE networks first, it will give China Mobile a big edge in the 4G market over its competitors.
          After reporting a 55 percent jump in its first-half profit, Chang Xiaobing, the company’s chairman, said investment on TD-LTE technology has already started and testing will begin in major cities. Funds will come from Hong Kong-listed China Unicom, rather than from its controlling company China United Network Communications Corp Ltd, which previously funded some of China Unicom’s network tests.
          “I expect Beijing to license TD-LTE first, so we have to prepare,” Chang told a news conference in Hong Kong on Thursday.
          Beijing favors TD-LTE, or Time-Division Long-Term Evolution, because the network’s core technologies are developed by Chinese companies. The technology was developed specifically for the Chinese market and is expected to serve a quarter of the global market by 2016.
China Unicom’s infrastructure mainly supports FDD-LTE, or Frequency Division Duplexing Long-Term Evolution, which is the world’s dominant 4G technology. Out of the 156 commercial 4G networks operating around the world in March 2013, 142 were FDD-LTE and 14 were TD-LTE networks. China Mobile operates an FDD-LTE network in Hong Kong and is trying to integrate it with the mainland’s TD-LTE market.
          Chang said China Unicom’s capital expenditure will stay within the full-year budget of 80 billion yuan (12.96 billion U.S. dollars), despite the planned investment in TD-LTE networks.
          Media reports said that China Telecom Corp Ltd, the other major operator in China, will rent China Mobile’s TD-LTE 4G infrastructure. Chang refused to say if China Unicom will do the same.
          China Unicom’s first-half profit surged to 5.32 billion yuan compared with 3.43 billion yuan in the same period in 2012. Revenue was up 18.6 percent to 144.3 billion yuan, boosted by a 52 percent increase in income from 3G services to 40.9 billion yuan. The company’s 3G subscribers grew a stunning 74 percent to more than 100 million.
          China Unicom shares gained 2.67 percent on Thursday. Trading of the stocks was suspended in the afternoon, after the website of the State-owned Asset Supervision and Administration Commission published the company’s earnings before they were reported to the Hong Kong stock exchange. China Unicom shares surged after the disclosure at around 3:30 pm.
          A China Unicom spokesman apologized for the incident and promised it won’t happen again.

          China Unicom to procure TD-, FDD-LTE equipment, says report [DIGITIMES, Oct 24, 2013]

          China United Network Communications (China Unicom) has started an open-bid process for procuring 34,000 FDD-LTE base stations, 10,000 TD-LTE base stations and 8,000 FDD-LTE small cells, according to China-based tech.sina.com.

          Of the mobile telecom carriers in China, China Mobile has adopted TD-LTE only, while China Telecom and China Unicom have adopted FDD LTE as their main 4G standard and TD-LTE as an auxiliary in line with the China government’s policy promoting TD-LTE.

          China Telecom procured about 50,000 FDD-LTE base stations and about 20,000 TD-LTE ones in the third quarter of 2013.

          The future is here: Yes, it is Microsoft Surface 2 with modern apps only! (And ARM, not x86/x64!)

This video speaks for itself (and for the title): Why I Love my Microsoft Surface 2 : Tips and Tricks [Sean Ong YouTube channel, Nov 3, 2013]

          In this video I show off my favorite features in the Microsoft Surface 2, with windows 8.1 RT. I show off voice control (windows speech recognition), multiple monitor support, and a variety of accessories via USB hub (including external hard drive, mouse, keyboard, and Xbox 360 controller integration). I show how I connect the Surface 2 to my HDTV as well as wireless casting of music and video! I also go through some other features, such as Spotify web player, and icloud web. Also kid friendly applications and multiple accounts. There’s so much stuff this thing can do, it will blow your mind away

That is how Sean Ong, a senior consultant at Navigant (focussing there on “technical, economic, and policy analysis of energy efficiency and renewable energy systems”) and himself an energy analysis engineer, was able to present the above, truly incredible customer value, from a current and especially a future point of view, for Windows 8.1 in general and the (ARM-based) Surface 2 in particular. It is even more remarkable as nobody, I REPEAT NOBODY, from Microsoft worldwide could do that. I know even a highly professional, truly world-class Windows 8/Windows 8.1 expert who was not only fascinated by the above video himself, but honestly acknowledged that he was unaware of the speech recognition progress in Windows 8.1. And we are talking about an internal expert who has for years been part of the internal network of similarly devoted Microsoft specialists in Windows 8 and Windows 8.1.

          For me this video is incredibly important because:

          NOT ONLY FOR THE FUTURE OF MICROSOFT BUT FOR THE WHOLE STATE OF COMPUTING
AS THE MISSING COMMUNICATIONS FROM MICROSOFT, EVEN THE TOTAL INABILITY OF MICROSOFT TO COMMUNICATE THE INHERENT WINDOWS 8.1/SURFACE 2 VALUES, WERE CLEARLY POINTING TO A TOTAL LACK OF MARKETING COMPETENCY FOR ITS GAME-CHANGING, MICROSOFT-ONLY, POST-PC ERA INNOVATIONS INHERENT IN WINDOWS 8.1/SURFACE 2

          Although these signs (both the positive and negative ones) were coupled with a number of competitive positive changes for Microsoft, such as:

          But a number of competitive negative changes for Microsoft became even more worrisome (than any time before) lately, such as:

          Fortunately we already know:

          Board of directors initiates succession process; Ballmer remains CEO until successor is named.
          Microsoft Corp. today announced that Chief Executive Officer Steve Ballmer has decided to retire as CEO within the next 12 months, upon the completion of a process to choose his successor. In the meantime, Ballmer will continue as CEO and will lead Microsoft through the next steps of its transformation to a devices and services company that empowers people for the activities they value most.
          “There is never a perfect time for this type of transition, but now is the right time,” Ballmer said. “We have embarked on a new strategy with a new organization and we have an amazing Senior Leadership Team. My original thoughts on timing would have had my retirement happen in the middle of our company’s transformation to a devices and services company. We need a CEO who will be here longer term for this new direction.”
          The Board of Directors has appointed a special committee to direct the process. This committee is chaired by John Thompson, the board’s lead independent director, and includes Chairman of the Board Bill Gates, Chairman of the Audit Committee Chuck Noski and Chairman of the Compensation Committee Steve Luczo. The special committee is working with Heidrick & Struggles International Inc., a leading executive recruiting firm, and will consider both external and internal candidates.
“The board is committed to the effective transformation of Microsoft to a successful devices and services company,” Thompson said. “As this work continues, we are focused on selecting a new CEO to work with the company’s senior leadership team to chart the company’s course and execute on it in a highly competitive industry.”
          “As a member of the succession planning committee, I’ll work closely with the other members of the board to identify a great new CEO,” said Gates. “We’re fortunate to have Steve in his role until the new CEO assumes these duties.”
          Founded in 1975, Microsoft (Nasdaq “MSFT”) is the worldwide leader in software, services and solutions that help people and businesses realize their full potential.
          Outgoing Microsoft CEO Steve Ballmer has always been a speaker and performer like no other — his absolute enthusiasm for his company is electric in person, turning ordinary corporate events into raw displays of emotion that are often criticized but never forgotten. Read more at The Verge: http://www.theverge.com/2013/9/27/4779036/exclusive-video-steve-ballmers-intense-tearful-goodbye-to-microsoft
Steve Ballmer paced his corner office on a foggy January morning here, listening through loudspeakers to his directors’ voices on a call that would set in motion the end of his 13-year reign as Microsoft Corp.’s chief executive.
Microsoft lagged behind Apple Inc. and Google Inc. in important consumer markets, despite its formidable software revenue. Mr. Ballmer tried to spell out his plan to remake Microsoft, but a director cut him off, telling him he was moving too slowly.
          “Hey, dude, let’s get on with it,” lead director John Thompson says he told him. “We’re in suspended animation.” Mr. Ballmer says he replied that he could move faster.
          But the contentious call put him on a difficult journey toward his August decision to retire, sending Microsoft into further tumult as it began seeking a successor to a man who has been at its heart for 33 years.
          “Maybe I’m an emblem of an old era, and I have to move on,” the 57-year-old Mr. Ballmer says, pausing as his eyes well up. “As much as I love everything about what I’m doing,” he says, “the best way for Microsoft to enter a new era is a new leader who will accelerate change.”
          Mr. Ballmer, in a series of exclusive interviews tinged with his characteristic bluster and wistfulness, tells of how he came to believe that he couldn’t lead Microsoft forward—that, in fact, Microsoft would not be led by him because of the very corporate culture he had helped instill.
          Mr. Ballmer and his board have been in agreement: Microsoft, while maintaining its strong software business, must shake up its management structure and refocus on mobile devices and online services if it is to find future profit growth and reduce its dependence on the fading PC market.
          The board’s beef was speed. The directors “didn’t push Steve to step down,” says Mr. Thompson, a longtime technology executive who heads the board’s CEO-search committee, “but we were pushing him damn hard to go faster.”
          Investors, too, were pushing for transformation. “At this critical juncture, Wall Street wants new blood to bring fundamental change,” says Brent Thill, a longtime Microsoft analyst at UBS AG. “Steve was a phenomenal leader who racked up profits and market share in the commercial business, but the new CEO must innovate in areas Steve missed—phone, tablet, Internet services, even wearables.”
The Microsoft board’s list of possible successors includes, among others, former Nokia Corp. CEO Stephen Elop, Microsoft enterprise-software chief Satya Nadella and Ford Motor Co. CEO Alan Mulally, say people familiar with the search. In conjunction with Microsoft’s annual shareholder meeting Nov. 19, the board plans to meet and will discuss succession, says a person familiar with the schedule.
          Representatives for Mr. Elop and Mr. Nadella say the men have no comment on the search. A Ford spokesman says “nothing has changed” since November 2012, when Ford said Mr. Mulally would remain CEO through at least 2014, adding: “Alan remains absolutely focused on continuing to make progress on our One Ford plan. We do not engage in speculation.”
          Microsoft’s next chief will be only the third in its history. Mr. Ballmer joined in 1980 at the suggestion of his Harvard University pal, co-founder Bill Gates, and is its second-largest individual shareholder and a billionaire.
          After growing up in Detroit, where his father was a Ford manager, Mr. Ballmer roomed down the hall from Mr. Gates at Harvard. He dropped his Stanford M.B.A. studies to become Microsoft’s first business manager.
          He was Mr. Gates’s right-hand man, helping turn Microsoft into a force that redefined how the world used computers. He took the reins in 2000, further solidifying Microsoft’s position in software markets and keeping the profit engine humming. Revenue tripled during his tenure to almost $78 billion in the year ended this June, and profit grew 132% to nearly $22 billion.
          But while profit rolled in from Microsoft’s traditional markets, it missed epic changes, including Web-search advertising and the consumer shift to mobile devices and social media.
          Last year, Mr. Ballmer sought to reboot. In an October shareholder letter, he declared Microsoft would become a provider of “devices and services” for businesses and individuals.
          He told the board he wanted to lead the charge and remain until his youngest son graduated from high school in four years. He began his own succession planning by meeting potential candidates in what he calls “cloak-and-dagger” meetings.
Mr. Ballmer’s reboot plan required a corporate overhaul. For guidance, he called his longtime friend, Ford’s Mr. Mulally, once a top Boeing Co. executive. They met Christmas Eve at a Starbucks on Mercer Island near Seattle.
          Mr. Ballmer brought a messenger bag, pulling out onto a table an array of phones and tablets from Microsoft and competitors. He asked Mr. Mulally how he turned around Ford. For four hours, he says, Mr. Mulally detailed how teamwork and simplifying the Ford brand helped him reposition it.
          The Ford spokesman says: “Ford and Microsoft have a long-standing business partnership, and many of our leaders discuss business together frequently.”
          It was a wake-up call for Mr. Ballmer, who had run the software giant with bravado and concedes that “I’m big, I’m bald and I’m loud.”
          Microsoft’s culture included corporate silos where colleagues were often pitted against one another—a competitive milieu that spurred innovation during Microsoft’s heyday but now sometimes leaves groups focused on their own legacies and bottom lines rather than on the big technology picture and Microsoft as a whole.
          He recalls thinking: “I’ll remake my whole playbook. I’ll remake my whole brand.”
          The board liked his new plan. But as Mr. Ballmer prepared to implement it, his directors on the January conference call demanded he expedite it.
Pushing hardest, say participants, were Mr. Thompson, who had held top jobs at International Business Machines Corp. and Symantec Corp., and Stephen Luczo, CEO of Seagate Technology PLC. Mr. Luczo declines to comment.
          “But, I didn’t want to shift gears until I shipped Windows,” Mr. Ballmer says he told the directors on the call, explaining that he hadn’t moved faster in late 2012 because he was focused on releasing in October the next generation of Windows, Microsoft’s longtime cash cow.
          Mr. Ballmer swung into gear, drafting a management-reorganization plan to discuss during a March retreat at a Washington mountain resort. He invited Mr. Thompson and another director, to get board perspective on his plan.
          Instead, he got more pressure. Mr. Thompson says he told Mr. Ballmer and his executives: “Either get on the bus or get off.”
          Mr. Ballmer says he took that as an endorsement of his plan. That evening, some of them played poker, drank Scotch and gathered around the lodge’s fireplace.
          The next month, hedge fund ValueAct Capital disclosed a $2 billion Microsoft stake. ValueAct’s CEO Jeffrey Ubben at a conference said Microsoft’s stock was undervalued. Other shareholders were urging it to increase its dividend and shed noncore businesses. A ValueAct spokesman declines further comment. In September, Microsoft increased its dividend but hasn’t sold off businesses investors have urged it to, such as the Bing search engine.
          Mr. Ballmer hewed to Mr. Mulally’s recommendations. For years, he had consulted with Microsoft’s unit chiefs individually, often dispensing marching orders. Now, he began inviting them to sit together in a circle in his office to foster camaraderie.
          It was a lurching corporate-culture change. “It’s not the way we operated at all in Steve’s 30-plus years of leadership of the company,” says Mr. Nadella, an executive vice president.
          Mr. Ballmer says his senior team struggled with the New Steve. Some resisted on matters large—combining engineering teams—and small, such as weekly status reports.
          Qi Lu, an executive vice president, submitted a 56-page report on applications and services. Mr. Ballmer sent it back, insisting on just three pages—part of a new mandate to encourage the simplicity needed for collaboration. Mr. Lu says he retorted: “But you always want the data and detail!”
          Mr. Ballmer says he started to realize he had trained managers to see the trees, not the forest, and that many weren’t going to take his new mandates to heart.
          In May, he began wondering whether he could meet the pace the board demanded. “No matter how fast I want to change, there will be some hesitation from all constituents—employees, directors, investors, partners, vendors, customers, you name it—to believe I’m serious about it, maybe even myself,” he says.
          His personal turning point came on a London street. Winding down from a run one morning during a May trip, he had a few minutes to stroll, some rare spare time for recent months. For the first time, he began thinking Microsoft might change faster without him.
          “At the end of the day, we need to break a pattern,” he says. “Face it: I’m a pattern.”
          Mr. Ballmer says he secretly began drafting retirement letters—ultimately some 40 of them, ranging from maudlin to radical.
On a plane from Europe in late May, he told Microsoft General Counsel Brad Smith that it “might be the time for me to go.” The next day, Mr. Ballmer called Mr. Thompson, with the same message.
Mr. Thompson called two other directors, Mr. Luczo and Charles Noski, former Bank of America Corp. vice chairman, and says he told them: “If Steve’s ready to go, let’s see if we can get on with this.”
          At the board’s June meeting in Bellevue, Wash., Mr. Ballmer says he told the directors: “While I would like to stay here a few more years, it doesn’t make sense for me to start the transformation and for someone else to come in during the middle.”
          The board wasn’t “surprised or shocked,” says Mr. Noski, given directors’ conversations with Mr. Ballmer. Mr. Thompson says he and others indicated that “fresh eyes and ears might accelerate what we’re trying to do here.”
          Mr. Gates, Microsoft’s chairman, told Mr. Ballmer that he understood from experience how hard it was to leave when Microsoft was your “life,” says someone familiar with Mr. Gates’s thinking. Mr. Gates told the board he supported Mr. Ballmer’s departure if it ensured Microsoft “remains successful,” this person says.
          That night, after Mr. Ballmer watched his son sing at his high-school baccalaureate ceremony—a Coldplay song with the lyrics: “It’s such a shame for us to part; nobody said it was easy; no one ever said it would be this hard”—he says he told his wife and three sons he was probably leaving Microsoft. They all cried.
          On Aug. 21, the board held a conference call to accept Mr. Ballmer’s retirement. Mr. Gates and Mr. Thompson sat with Mr. Ballmer in his office. It was over in less than an hour.
          Mr. Ballmer vows not to be a lame duck.
          “Charge! Charge! Charge!” he bellows, jumping up from an interview and lunging forward while pumping his fist forward like a battering ram. “I’m not going to wimp away from anything!”
          He has remained active, shepherding a $7.5 billion deal to buy Nokia’s mobile businesses and fine-tuning holiday-marketing strategies for Microsoft’s Surface tablets and new Xbox game console. In October, Microsoft reported better-than-expected quarterly earnings.
          At his final annual employee meeting this September, Mr. Ballmer gave high-fives and ran off the stage to the song: “(I’ve Had) The Time of My Life” from the movie “Dirty Dancing.”
          Last month, walking along Lake Washington, Mr. Ballmer bumped into Seattle Seahawks coach Pete Carroll, who was fired from earlier jobs and now is thriving. Mr. Carroll says he told his neighbor he went through “something like this” and predicted it is “going to be great.”
          Mr. Ballmer says he is weighing casual offers as varied as university teaching and coaching his youngest son’s high-school basketball team. He plans no big decisions for at least six months—except that he won’t run another big company. He says he’s open to remaining a Microsoft director.
          At a recent executive meeting, he perched on a stool to review developments. His third slide was labeled “New CEO.”
          “Not a soul in this room doesn’t think we need to go through this transition,” he said. As he stood up, his voice started to crack: “As much as I wish I could stay your CEO, I still own a big chunk of Microsoft, and I’m going to keep it.”
          He walked back toward the stool, then turned around and said in a near-whisper: “Please take good care of Microsoft.”

          You could read also Reporter’s Notebook: Two Days With Steve Ballmer [The Wall Street Journal, Nov 15, 2013] ending this way: 

          … This summer when he was deciding whether to step down, Mr. Ballmer quietly met with big institutional investors in Boston and San Francisco. The head of one big institution told him, “Microsoft would be better served with you gone.” Mr. Ballmer, who’s the second largest individual shareholder, knew the investor might get his wish. Yet, he argued, “Who cares more about Microsoft than I do? I own a lot. It’s my life.”

          And that showed how his emotions alternate between bluster and wistfulness. The deed is done, the decision has been made, a new CEO is imminent. But Mr. Ballmer is struggling because Microsoft has been so much more than a job … as he said, “my life.”

          My closing remarks:

1. The next-CEO problem is definitely the #1 issue for the future of Microsoft.
2. The #2 issue is how successfully the unique Nokia assets (from factories to global device distribution & sales, the Asha sub-$100 smartphone platform etc.) will now empower the One Microsoft devices and services strategy [‘Experiencing the Cloud’, Sept 3, 2013]. For this, the Microsoft answers to the questions about Nokia devices and services acquisition: tablets, Windows downscaling, reorg effects, Windows Phone OEMs, cost rationalization, ‘One Microsoft’ empowerment, and supporting developers for an aggressive growth in market share [‘Experiencing the Cloud’, Sept 4, 2013] provide an interim answer, i.e. until the arrival of the new CEO.
3. The #3 issue is How the device play will unfold in the new Microsoft organization? [‘Experiencing the Cloud’, July 14, 2013]. If Stephen Elop, former CEO of Nokia and previously a senior executive of Microsoft, becomes the next CEO, then Minutes of a high-octane but also expert evangelist CEO: Stephen Elop, Nokia [‘Experiencing the Cloud’, July 13, 2013] could provide some clues about the changes to be expected as a strategic evolution of the current approach described in the already mentioned [‘Experiencing the Cloud’, July 14, 2013] post. Even if he is not selected by the Microsoft board as the next CEO, he will have a very strong influence on the device play during the initial first-year integration of the acquired Nokia businesses into Microsoft, for the very simple reason that nobody else could do this, and a successful integration is a higher priority (the #2 issue).
4. Strategically, however, the most important issue is the following:
5. Microsoft reorg for delivering/supporting high-value experiences/activities [‘Experiencing the Cloud’, July 11, 2013]

6. Everything else that might become a crucial issue during this process is highly controversial, without any official clues from Microsoft or any other stakeholder sources. The most controversial of all is the issue of the non-profitable and/or not necessarily integral Microsoft businesses: Bing and Xbox. The range of external opinions is extremely wide, with investment circles firmly believing that neither Bing nor Xbox is inherently integral to Microsoft, and most of the external development community holding the exactly opposite belief that those businesses are inherently integral.

          7. My personal opinion is that with a spin-off both extremes could be served sufficiently well; it could even open completely new business development opportunities for both Bing and Xbox to grow substantially faster and bigger than otherwise. I would be especially enthusiastic about an Xbox spin-off, as that business is already (with the upcoming Oct 22 introduction of Xbox One) not a gaming-console business but an entertainment-ecosystem one. As such it would get enormous growth opportunities from a spin-off from the tightly integrated Microsoft mother ship.

          8. The ultimate issue for me, however, is how the currently quite crippled and/or bureaucratic marketing machinery of Microsoft could be completely overhauled as part of the Nokia integration, and how fast that could be achieved, if at all. I mean a new marketing machinery that thrives on the huge number of opportunities provided by already delivered game-changing products and technologies, instead of not understanding them at all. I mean not simply an ability to produce videos like the one at the beginning of this post, but a competency to produce whole storyboards for the production of such videos and other communication materials. One might call it “high-octane marketing” for simplicity. Even more, I envisage such an integration of the marketing activities into the whole supply chain management (SCM) as is done at Samsung. See my Samsung has unbeatable supply chain management, it is incredibly good in everything which is consumer hardware, but vulnerability remains in software and M&A [‘Experiencing the Cloud’, Nov 11, 2013] post for that, from which I will copy the following illustration here as well:

          Xamarin: C# developers of native “business” and “mobile workforce” applications now can easily work cross-platform, for Android and iOS clients as well

          … while other cross-platform applications, i.e. “applications for consumers only”, remain out of reach for C# developers because of the still-high price of Xamarin, a barrier that essentially affects indie and start-up developers only

          The mobile application development technology behind this, from the cloud to the clients, was extensively covered in Windows Phone 8: getting much closer to a unified development platform with Windows 8 [‘Experiencing the Cloud’, Nov 8, 2012] post of mine (including the cross-platform possibilities with Xamarin already), and then continued in Windows Azure becoming an unbeatable offering on the cloud computing market [‘Experiencing the Cloud’, June 28, 2013] and Microsoft partners empowered with ‘cloud first’, high-value and next-gen experiences for big data, enterprise social, and mobility on wide variety of Windows devices and Windows Server + Windows Azure + Visual Studio as the platform [‘Experiencing the Cloud’, July 10, 2013] posts for the cloud part.

          Note: Decide for yourself how that exclusion of “consumers-only applications by indie and start-up developers” will affect cross-platform development needs, after you take a look at the current state of the evolution of the smartphone and tablet markets:

           

          Q3’13 smartphone and overall mobile phone markets: Android smartphones surpassed 80% of the market, with Samsung increasing its share to 32.1% against Apple’s 12.1% only; while Nokia achieved a strong niche market position both in “proper” (Lumia) and “de facto” (Asha Touch) smartphones 
          [‘Experiencing the Cloud’, Nov 14, 2013]

          The tablet market in Q1-Q3’13: It was mainly shaped by white-box vendors while Samsung was quite successfully attacking both Apple and the white-box vendors with triple digit growth both worldwide and in Mainland China 
          [‘Experiencing the Cloud’, Nov 14, 2013]


          Details

          For one of the problems solved now by Microsoft see my Obstacles for .NET on other platforms [‘Experiencing the Cloud’, Oct 15, 2013] post.

          To understand the current situation, I will start with:

          In: Cross-Platform .NET Just Got A Lot Better [Haacked blog, Nov 13, 2013]

          Not long ago I wrote a blog post about how platform restrictions harm .NET. This led to a lot of discussion online and on Twitter. At some point David Kean suggested a more productive approach would be to create a UserVoice issue. So I did and it quickly gathered a lot of votes.

          Phil Haack – Customer Feedback for Microsoft http://visualstudio.uservoice.com/users/40986152-phil-haack:

          Remove the platform restriction on Microsoft NuGet packages 4,929 votes
          Phil Haack shared this idea and gave it 3 votes  ·  Sep 26, 2013

          COMPLETED  ·  Visual Studio team (Product Team, Microsoft) responded
          Thanks a lot for this suggestion and all the votes.
          We’re happy to announce that we’ve removed the Windows-only restriction from our license. We’ve applied this new license to most of our packages and will continue to use this license moving forward.
          Here is our announcement:
          http://blogs.msdn.com/b/dotnet/archive/2013/11/13/pcl-and-net-nuget-libraries-are-now-enabled-for-xamarin.aspx
          For reference, the license for stable packages can be found here:
          http://go.microsoft.com/fwlink/?LinkId=329770
          Thanks,
          Immo Landwerth
          Program Manager, .NET Framework Team
          Phil Haack commented  ·  Nov 13, 2013
          Amazing! Thanks! This is great!

          Bravo!

          Serious Kudos to the .NET team for this. It looks like most of the interesting PCL packages are now licensed without platform restrictions. As an example of how this small change sends out ripples of goodness, we can now make Octokit.net depend on portable HttpClient and make Octokit.net itself more cross platform and portable without a huge amount of work.

          I’m also excited about the partnership between Microsoft and Xamarin this represents. I do believe C# is a great language for cross-platform development and it’s good to see Microsoft jumping back on board with this. This is a marked change from the situation I wrote about in 2012.

          • then I will go to S. Somasegar, Corporate Vice President of the Developer Division at Microsoft:

          In: Visual Studio 2013 Launch: Announcing Visual Studio Online [Somasegar’s blog, Nov 13, 2013]

          … Microsoft and Xamarin are collaborating to help .NET developers broaden the reach of their applications to additional devices, including iOS and Android …

          Partner News

          With today’s launch of Visual Studio 2013, we have 123 products from 74 partners available already as Visual Studio 2013 extensions.  As part of an ecosystem of developer tools experiences, Visual Studio continues to be a platform for delivering a great breadth of developer experiences.

          Xamarin

          The devices and services transformation is driving developers to think about how they will build applications that reach the greatest breadth of devices and end-user experiences.  We’ve offered great HTML-based cross platform development experiences in Visual Studio with ASP.NET and JavaScript.  But our .NET developers have also asked us how they can broaden the reach of their applications and skills. 

          Today, I am excited to announce a broad collaboration between Microsoft and Xamarin.  Xamarin’s solution enables developers to leverage Visual Studio, Windows Azure and .NET to further extend the reach of their business applications across multiple devices, including iOS and Android.

          The collaboration between Xamarin and Microsoft brings several benefits for developers today.  First, as an initial step in a technical partnership, Xamarin’s next release that is being announced today will support Portable Class Libraries, enabling developers to share libraries and components across a breadth of Microsoft and non-Microsoft platforms.  Second, Professional, Premium and Ultimate MSDN subscribers will have access to exclusive benefits for getting started with Xamarin, including new training resources, extended evaluation access to Xamarin’s Visual Studio integration and special pricing on Xamarin products.

          Xamarin, the company that empowers developers to build fully native apps for iOS, Android, Windows and Mac from a single shared code base, today announced a global collaboration with Microsoft that makes it easy for mobile developers to build native mobile apps for all major platforms in Visual Studio. Xamarin is the only solution that unifies native iOS, Android and Windows app development in Visual Studio—bridging one of the largest developer bases in the world to the most successful mobile device platforms.

          A highly competitive app marketplace and the consumerization of IT have put tremendous pressure on developers to deliver high quality mobile user experiences for both consumers and employees. A small bug or crash can lead to permanent app abandonment or poor reviews. Device fragmentation, with hundreds of devices on the market for iOS and Android alone, multiplies testing efforts resulting in a time-consuming and costly development process. This is further complicated by faster release cycles for mobile, necessitating more stringent and efficient regression testing.

          The collaboration spans three areas:

          • A technical collaboration to better integrate Xamarin technology with Microsoft developer tools and services.
            Aligned with this goal, Xamarin is a SimShip partner for Visual Studio 2013, releasing same-day support for Microsoft’s latest Visual Studio release that launched today. In addition, Xamarin has released today full integration for Microsoft’s Portable Library projects in iOS and Android apps, making it easier than ever for developers to share code across devices.
          • Xamarin’s recently launched Xamarin University is now free to MSDN subscribers. The training course helps developers become successful with native iOS and Android development over the course of 30 days. Classes for the $1,995 program kick off in January 2014, with a limited number of seats available at no cost for MSDN subscribers.
          • MSDN subscribers have exclusive trial and pricing options to Xamarin subscriptions for individuals and teams.

            Get a 90-day trial to Xamarin, sign up for Xamarin University for free (normally $1,995), and save 30-50% on Xamarin with special MSDN pricing.
            All the productivity you love in Visual Studio and C#,
            on iOS and Android.

          “The broad collaboration between Microsoft and Xamarin which we announced today is targeted at supporting developers interested in extending their applications across multiple devices,” said S. Somasegar, Corporate Vice President, Microsoft Corporation. “With Xamarin, developers combine all of the productivity benefits of C#, Visual Studio 2013 and Windows Azure with the flexibility to quickly build for multiple device targets.”

          According to Gartner, by 2016, 70 percent of the mobile workforce will have a smartphone, half of which will be purchased by the employee, and 90 percent of enterprises will have two or more platforms to support. Faced with high expectations for mobile user experiences and the pressures of BYOD, companies and developers alike are looking for scalable ways to migrate business practices and customer interactions to high-performance, native apps on multiple platforms.

          To meet this need to support heterogeneous mobile environments, Microsoft and Xamarin are making it easy for developers to mobilize their existing skills and code. By standardizing mobile app development with Xamarin and C#, developers are able to share on average 75 percent of their source code across device platforms, while still delivering fully native apps. Xamarin supports 100 percent of both the iOS and Android APIs: anything that can be done in Objective-C or Java can be done in C# with Xamarin.

          In just two years, Xamarin has amassed a community of over 440,000 developers in 70 countries, more than 20,000 paying accounts and a network of over 120 consulting partners globally.

          “We live in a multi-platform world, and by embracing Xamarin, Microsoft is enabling its developer community to thrive as mobile developers,” said Nat Friedman, CEO and cofounder, Xamarin. “Our collaboration with Microsoft will accelerate enterprise mobility for millions of developers.”

          The groundbreaking partnership was announced as part of the Visual Studio Live 2013 launch event in New York City. In addition, Xamarin and Microsoft have teamed up with the popular podcast, .NET Rocks!, for a 20-city nationwide road show featuring live demos on how to use Visual Studio 2013, Xamarin and Windows Azure to build and scale mobile apps for iOS, Android and Windows. For a full list of cities and to sign up for an event, please visit: xamarin.com/modern-apps-roadshow

          About Xamarin
          Xamarin is the new standard for enterprise mobile development. No other platform enables businesses to reach all major devices—iOS, Android, Mac and Windows—with 100 percent fully native apps from a single code base. With Xamarin, businesses standardize mobile app development in C#, share on average 75 percent source code across platforms, and leverage their existing skills, teams, tools and code to rapidly deliver great apps with broad reach. Xamarin is used by over 430,000 developers from more than 100 Fortune 500 companies and over 20,000 paying customers including Clear Channel, Bosch, McKesson, Halliburton, Cognizant, GitHub, Rdio and WebMD, to accelerate the creation of mission-critical consumer and enterprise apps. For more information, please visit: xamarin.com, read our blog, and follow us on Twitter @xamarinhq.

          In: PCL and .NET NuGet Libraries are now enabled for Xamarin [.NET Framework Blog, Nov 13, 2013]

          Earlier today, Soma announced a collaboration between Microsoft and Xamarin. As you probably know, Xamarin’s Visual Studio extension enables developers to use VS and .NET to extend the reach of their apps across multiple devices, including iOS and Android. As part of that collaboration, today, we are announcing two releases around the .NET portable class libraries (PCLs) that support this collaboration:

          Microsoft .NET NuGet Libraries Released

          Today we released the following portable libraries with our new license, on NuGet.org:

          You can now start using these libraries with Xamarin tools, either directly or as the dependencies of portable libraries that you reference.

          We also took the opportunity to apply the same license to Microsoft .NET NuGet libraries, which aren’t fully portable today, like Entity Framework and all of the Microsoft AspNet packages. These libraries target the full .NET Framework, so they’re not intended to be used with Xamarin’s iOS and Android tools (just like they don’t target Windows Phone or Windows Store).

          These releases will enable significantly more use of these common libraries across Windows and non-Windows platforms, including in open source projects.

          Cross-platform app developers can now use PCL

          image

          Portable class libraries are a great option for app developers building for Microsoft platforms in Visual Studio, to share key business functionality across Microsoft platforms. Many developers use the PCL technology today, for example, to share app logic across Windows Store and Windows Phone. Today’s announcement enables developers using Xamarin’s tools to share these libraries as well.

          In Visual Studio, you’ll continue to use Portable Class Library projects but will be able to reference them from within Xamarin’s tools for VS. That means that you can write rich cross-platform libraries and take advantage of them from all of your .NET apps.

          The following image demonstrates an example set of .NET NuGet library references that you can use within one of your portable libraries. The .NET NuGet libraries will enable new scenarios and great new libraries built on top of them.
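To make the sharing model concrete, here is a minimal, hypothetical sketch (the namespace, type and member names are mine, not from the announcement) of the kind of pure .NET logic that belongs in a Portable Class Library: because it uses no platform-specific APIs, the one compiled assembly can be referenced from Windows Store, Windows Phone and, via Xamarin, iOS and Android app projects alike.

```csharp
using System;
using System.Globalization;

namespace SharedCore
{
    // Hypothetical shared business logic for a PCL: no platform-specific
    // APIs, so a single build of this assembly serves every target platform.
    public static class PriceFormatter
    {
        public static string Format(decimal amount, string currencyCode)
        {
            if (string.IsNullOrEmpty(currencyCode))
                throw new ArgumentException("currencyCode is required");
            // Invariant culture keeps the output identical on every platform.
            return currencyCode + " " +
                   amount.ToString("0.00", CultureInfo.InvariantCulture);
        }
    }
}

class Program
{
    static void Main()
    {
        // Each platform-specific app project just references the PCL and calls in:
        Console.WriteLine(SharedCore.PriceFormatter.Format(9.5m, "USD")); // USD 9.50
    }
}
```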

          You can build cross-platform libraries with .NET

          This announcement also benefits .NET developers writing reusable and open source libraries. You’ve probably used some of these libraries, for example Json.NET. These developers have been very vocal about wanting this change. This announcement greatly benefits those library developers, enabling them to leverage our portable libraries in their libraries.

          Getting started with portable libraries and Xamarin

          You can start by building portable libraries in Visual Studio, as you can see in the screenshot above. You can take advantage of the portable libraries that we released today. Write code!

          You’ll need an updated NuGet client, to take advantage of this new scenario. Make sure that you are using NuGet 2.7.2 or higher, or just download the latest NuGet for your VS version from the Installing NuGet page.

          We are working closely with Xamarin to ensure that our NuGet libraries work well with Xamarin tools, as well as PCL generally. Please tell us if you find any issues. We’ll get them resolved and post them to our known issues page.

          Thank You

          Thank you for the feedback on UserVoice. With today’s announcement, we can mark the request to Remove the platform restriction on Microsoft NuGet packages as complete. Thanks to Phil Haack for filing the issue. Coupled with our collaboration with Xamarin, .NET developers have some compelling tools, especially for targeting mobile devices.

          Both Microsoft and Xamarin want to see this scenario succeed. We’d love your feedback. Please tell us how the new features are working for you.

          This post was written by Rich Lander, a Program Manager on the .NET Team.

          [Some] Comments

          Immo Landwerth [MSFT] 13 Nov 2013 1:24 PM

          Thanks a lot for the kind words!

          @Curt: We absolutely understand that PCL support in Visual Studio express editions is super important to many of our developers. That’s why it’s on our list. However, I can’t promise that we actually end up delivering it in the VS 2013 time frame. As you’ve seen today, there is a lot of great stuff going on and resources are always more scarce than one would hope.

          Gz 14 Nov 2013 4:19 AM

          Xamarin is great but their pricing is insane! even with the MSDN discount. We’re a tiny start-up development house that has benefited from the MS BizSpark programme and we simply cannot stretch to paying out a thousand bucks per platform, per year, per developer – mobile isn’t even a revenue generator for us – it would merely be extending some functionality from our main apps to mobile and we’d give it to customers for free. I know they have a free & an indie edition blah blah blah but we wanna work in VS. The good news is that Xamarin will soon have a competitor in this space that could potentially blow them out of the water with full VS support and direct access to native APIs on each platform (iOS, Android & Mac) and their pricing will be less than 1/3rd of Xamarin’s. I’ve been sworn to secrecy about it but expect to have a cost-effective Xamarin alternative before the end of the year. (No I don’t work for the company, just got some info about it recently).

          Stilgar 14 Nov 2013 8:30 AM

          I second the need for PCLs in Express editions. Otherwise your company’s constant claims that the tooling for Windows 8 and Windows Phone development is free is pure hypocrisy.

          TL;DR: You can now (legally) use our .NET OData client and ODataLib on Android and iOS.

          Backstory

          For a while now we have been working with our legal team to improve the terms you agree to when you use one of our libraries (WCF Data Services, our OData client, or ODataLib). A year and a half ago, we announced that our EULA would include a redistribution clause. With the release of WCF Data Services 5.6.0, we introduced portable libraries for two primary reasons:

            1. Portable libraries reduce the amount of duplicate code and #ifdefs in our code base.

            2. Portable libraries increase our reach through third-party tooling like Xamarin (more on that later).

              It took some work to get there, and we had to make some sacrifices along the way, but we are now focused exclusively on portable libraries for client-side code. Unfortunately, our EULA still contained a clause that prevented the redistributable code from being legally used on a platform other than Windows.
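Point 1 above refers to the classic per-platform conditional-compilation pattern that portable libraries make unnecessary. The snippet below is a hypothetical illustration of that pattern (it is not taken from the WCF Data Services code base; only the compilation symbols are the commonly used ones): before PCLs, a shared source file was cross-compiled once per platform behind such guards, whereas a PCL compiles a single assembly against the common API surface.

```csharp
using System;

// Hypothetical example of the #ifdef-per-platform style that PCLs remove.
// Symbols like WINDOWS_PHONE and NETFX_CORE are defined by the respective
// platform project types; a plain desktop build takes the #else branch.
static class PlatformBanner
{
    public static string Describe()
    {
#if WINDOWS_PHONE
        return "Running on Windows Phone";
#elif NETFX_CORE
        return "Running on Windows Store";
#else
        return "Running on the full .NET Framework";
#endif
    }
}

class Program
{
    static void Main()
    {
        Console.WriteLine(PlatformBanner.Describe());
    }
}
```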

              OData and Xamarin: Extending developer reach to many platforms

              We are really excited about Microsoft’s new collaboration with Xamarin. As Soma says, this collaboration will allow .NET developers to broaden the reach of their applications and skills. This has long been the mantra of OData: a standardized ecosystem of services and consumers that enables consumers on any platform to easily consume services developed on any platform. This collaboration will make it much easier to write a shared code base that allows consumption of OData on Windows, Android or iOS.

              EULA change

              To fully enable this scenario, we needed to update our EULA. We, along with several other teams at Microsoft, are rolling out a new EULA today that has relaxed the distribution requirements. Most importantly, we removed the clause that prevented redistributable code from being used on Android and iOS.

              The new EULA is effective immediately for all of our NuGet packages. This means that (even though we already released 5.6.0) you can create a Xamarin project today, take a new dependency on our OData client, and legally run that application on any platform you wish.
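To give a feel for what such a cross-platform consumer actually sends over the wire, here is a rough, hypothetical sketch of the protocol side of OData (the service root and entity set names are invented for illustration; in practice the .NET OData client library composes requests like these for you rather than your building URLs by hand):

```csharp
using System;

// Hypothetical sketch: an OData consumer on any platform (Windows, Android
// or iOS) ultimately issues HTTP requests carrying standard system query
// options such as $filter and $top against a service root.
class ODataQuerySketch
{
    public static string BuildQuery(string serviceRoot, string entitySet,
                                    string filter, int top)
    {
        // Compose the request URI with percent-encoded $filter expression.
        return string.Format("{0}/{1}?$filter={2}&$top={3}&$format=json",
                             serviceRoot.TrimEnd('/'), entitySet,
                             Uri.EscapeDataString(filter), top);
    }

    static void Main()
    {
        // e.g. "first five products cheaper than 20", as OData query options:
        Console.WriteLine(BuildQuery("http://example.org/odata.svc",
                                     "Products", "Price lt 20", 5));
    }
}
```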

              Thanks

              As always, we really appreciate your feedback. It frequently takes us some time to react, but the credit for this change is due entirely to customer feedback. We hear you. Keep it coming.

              Thanks,
              The OData Team

              Q3’13 smartphone and overall mobile phone markets: Android smartphones surpassed 80% of the market, with Samsung increasing its share to 32.1% against Apple’s 12.1% only; while Nokia achieved a strong niche market position both in “proper” (Lumia) and “de facto” (Asha Touch) smartphones

              Details about Samsung’s strengths you can find inside the Samsung has unbeatable supply chain management, it is incredibly good in everything which is consumer hardware, but vulnerability remains in software and M&A [‘Experiencing the Cloud’, Nov 11, 2013] post of mine.

              My findings supporting the above title:

              • 205 million Android smartphones were delivered in Q3’13, representing 15.2% growth sequentially (Q/Q) and 67.3% growth relative to the same period of last year (Y/Y)
              • Meanwhile the number of Apple iPhones shipped increased only to 33.8 million, growing by 8.3% sequentially (Q/Q), but still representing a 25.65% growth relative to the same period of last year (Y/Y)
              • The shipment of “proper” smartphones from Nokia (S60/Symbian and Lumia/Windows Phone) increased to 8.8 million units, representing 18.9% growth sequentially (Q/Q) and 39.7% growth relative to the same period of last year (Y/Y)

              image

              Then, for the lead smartphone market, i.e. Mainland China, I will include here:

              There were 102.66 million handsets sold in the China market during the third quarter of 2013, growing 13.6% on quarter and 54.5% on year, of which 93.08 million units were smartphones, increasing 20.7% on quarter and 89.3% on year, according to China-based consulting company Analysys International.

              image

              While for the worldwide market:

              Lenovo, ZTE, Huawei and Yulong/Coolpad have taken advantage of the surging low-end smartphone market. According to IC Insights, the four major China-based handset companies are forecast to ship 168 million smartphones in 2013 and together hold a 17% share of the worldwide smartphone market.
              Lenovo, ZTE, Huawei and Yulong/Coolpad shipped a combined 98 million smartphones in 2012, a more than 300% surge from the 29 million units shipped in 2011, IC Insights disclosed. It should be noted that the China-based suppliers of smartphones are primarily serving the China and Asia-Pacific marketplace, and offer low-end models that typically sell for less than US$200.
              Low-end smartphones are expected to represent just under one-third (310 million) of the total 975 million smartphones shipped in 2013. IC Insights forecast that by 2017, low-end smartphone shipments will represent 46% of the total smartphone market with China and the Asia-Pacific region to remain the primary markets for these low-end models.
              Samsung Electronics and Apple are set to continue dominating the total smartphone market in 2013. The two vendors are forecast to ship 457 million units and together hold a 47% share of the total smartphone market in 2013, IC Insights said. In 2012, Samsung and Apple shipped 354 million smartphones and took a combined 50% share of the total smartphone market.
              Nokia was the third-largest supplier of smartphones behind Samsung and Apple in 2011, but has seen its share of the smartphone market fall. Nokia’s smartphone shipments are forecast to decline by another 4% and grab only a 3% share of the total smartphone market in 2013, IC Insights indicated.
              Other smartphone producers that have fallen on hard times include RIM and HTC. While each of these companies had about a 10% share of the smartphone market in 2011, IC Insights estimated they will have only about 2% shares of the 2013 smartphone market.

              image

              Worldwide mobile phone sales to end users totaled 455.6 million units in the third quarter of 2013, an increase of 5.7 percent from the same period last year, according to Gartner, Inc. Sales of smartphones accounted for 55 percent of overall mobile phone sales in the third quarter of 2013, and reached their highest share to date.

              Worldwide smartphone sales to end users reached 250.2 million units, up 45.8 percent from the third quarter of 2012. Asia/Pacific led the growth in both markets – the smartphone segment with 77.3 percent increase and the mobile phone segment with 11.9 percent growth. The other regions to show an increase in the overall mobile phone market were Western Europe, which returned to growth for the first time this year, and the Americas.

              “Sales of feature phones continued to decline and the decrease was more pronounced in markets where the average selling price (ASP) for feature phones was much closer to the ASP of affordable smartphones,” said Anshul Gupta, principal research analyst at Gartner. “In markets such as China and Latin America, demand for feature phones fell significantly as users rushed to replace their old models with smartphones.”

              Gartner analysts said global mobile phone sales are on pace to reach 1.81 billion units in 2013, a 3.4 percent increase from 2012. “We will see several new tablets enter the market for the holiday season, and we expect consumers in mature markets will favor the purchase of smaller-sized tablets over the replacement of their older smartphones,” said Mr. Gupta.

              While Samsung’s share was flat in the third quarter of 2013, Samsung increased its lead over Apple in the global smartphone market (see Table 1). The launch of the Samsung Note 3 helped reaffirm Samsung as the clear leader in the large display smartphone market, which it pioneered.
              Lenovo’s sales of smartphones grew to 12.9 million units, up 84.5 percent year-on-year. It constantly raised share in the Chinese smartphone market.
              Apple’s smartphone sales reached 30.3 million units in the third quarter of 2013, up 23.2 percent from a year ago. “While the arrival of the new iPhones 5s and 5c had a positive impact on overall sales, such impact could have been greater had they not started shipping late in the quarter. While we saw some inventory built up for the iPhone 5c, there was good demand for iPhone 5s with stock out in many markets,” said Mr. Gupta.

              image

              In the smartphone operating system (OS) market (see Table 2), Android surpassed 80 percent market share in the third quarter of 2013, which helped extend its leading position. “However, the winner of this quarter is Microsoft which grew 123 percent. Microsoft announced the intent to acquire Nokia’s devices and services business, which we believe will unify effort and help drive appeal of Windows ecosystem,” said Mr. Gupta. Forty-one per cent of all Android sales were in mainland China, compared to 34 percent a year ago. Samsung is the only non-Chinese vendor in the top 10 Android players ranking in China. Whitebox Yulong [Coolpad] is the third largest Android vendor in China with a 9.7 percent market share in the third quarter of 2013. Xiaomi represented 4.3 percent of Android sales in the third quarter of 2013, up from 1.4 percent a year ago.

              image

              Mobile Phone Vendor Perspective

              Samsung: Samsung extended its lead in the overall mobile phone market, as its market share totaled 25.7 percent in the third quarter of 2013 (see Table 3). “While Samsung has started to address its user experience, better design is another area where Samsung needs to focus,” said Mr. Gupta. “Samsung’s recent joint venture with carbon fiber company SGL Group could bring improvements in this area in future products.”
              Nokia: Nokia did better than anticipated in the third quarter of 2013, reaching 63 million mobile phones, thanks to sales of both Lumia and Asha series devices. Increased smartphone sales supported by an expanded Lumia portfolio, helped Nokia move up to the No. 8 spot in the global smartphone market. But regional and Chinese Android device manufacturers continued to beat market demand, taking larger share and creating a tough competitive environment for Lumia devices.
              Apple:  Gartner believes the price difference between the iPhone 5c and 5s is not enough in mature markets, where prices are skewed by operator subsidies, to drive users away from the top of the line model. In emerging markets, the iPhone 4S will continue to be the volume driver at the low end as the lack of subsidy in most markets leaves the iPhone 5c too highly priced to help drive further penetration.
              Lenovo: Lenovo moved to the No. 7 spot in the global mobile phone market, with sales reaching approximately 13 million units in the third quarter of 2013. “Lenovo continues to rely heavily on its home market, which represents more than 95 per cent of its overall mobile phone sales. This could limit its growth after 2014, when the Chinese market is expected to decelerate,” said Mr. Gupta.

              image

              The tablet market in Q1-Q3’13: It was mainly shaped by white-box vendors while Samsung was quite successfully attacking both Apple and the white-box vendors with triple digit growth both worldwide and in Mainland China

              Details about Samsung’s strengths you can find inside the Samsung has unbeatable supply chain management, it is incredibly good in everything which is consumer hardware, but vulnerability remains in software and M&A [‘Experiencing the Cloud’, Nov 11, 2013] post of mine.

              Note what was communicated in the 2013 global tablet forecast [Dec 11, 2012]:

              • DIGITIMES Research forecasts that global tablet shipments (including both branded and white box models) will overtake notebook shipments in 2013, growing by 38.3% on 2012 levels to hit 210 million units.
              • Shipments of branded tablets alone are forecast to reach 140 million units. That is the shipment of white box tablets is forecast to grow to more than 70 million units in 2013. [NS: Q1-Q3’: 62.6 million]
              • DIGITIMES Research also projects that global shipments of branded and white box tablets will top 300 million by 2015, with branded devices accounting for more than 200 million units and white box tablets for around 100 million.

              My findings behind the title statement:

• White-box vendors from Mainland China delivered 62.6 million tablets in Q1-Q3’13 vs. 35.4 million a year ago (76.8% growth) per DIGITIMES Research
  (the two latest sources used for that are included at the end)
• Apple delivered 48.2 million tablets in Q1-Q3’13 vs. 42.8 million a year ago (12.6% growth) per IDC
  (the IDC sources used are the corresponding quarterly press releases)
• Samsung delivered 27.3 million tablets in Q1-Q3’13 vs. 8.7 million a year ago (214% growth) per IDC (with an H1’13 correction from Samsung itself)
• IDC’s latest forecast couldn’t properly take into account the group of white-box vendors (44.6 million in the “Others” category vs. 62.6 million), even more so than a year ago (25.8 million in the “Others” category vs. 35.4 million)
• Even with such an error for Q1-Q3’13, there was a 142.6 million strong worldwide market by IDC vs. 76.4 million a year ago (86.7% growth)
• Together the white-box vendors, Apple and Samsung, as the market-changing vendors/vendor group, delivered 132.7 million tablets in Q1-Q3’13 vs. 86.9 million a year ago (52.7% growth)
• Meanwhile the “Others” group (with the improper inclusion of white-box vendors) by IDC delivered 49.8 million tablets in Q1-Q3’13 vs. 25.8 million a year ago (93% growth)
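The year-over-year growth percentages quoted in the bullets above can be reproduced with simple arithmetic. A minimal sketch (the `yoy_growth` helper is mine, for illustration; the shipment figures are the DIGITIMES Research and IDC numbers cited above, in million units):

```python
def yoy_growth(current, year_ago):
    """Year-over-year growth in percent."""
    return (current / year_ago - 1) * 100

# Q1-Q3'13 vs. Q1-Q3'12 tablet shipments in million units, as cited above
print(round(yoy_growth(62.6, 35.4), 1))  # white-box vendors: 76.8% growth
print(round(yoy_growth(48.2, 42.8), 1))  # Apple: 12.6% growth
print(round(yoy_growth(27.3, 8.7)))      # Samsung: 214% growth
```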

              image

• Mainland China had a 4.4 million strong tablet market in Q3’13 vs. the 44.6 million worldwide market as per IDC. Since white-box vendors sold 25 million tablets worldwide in Q3’13 (according to DIGITIMES Research) vs. only 16.8 million sales in the ‘Others’ category by IDC, we can safely raise the 49.8 million number by up to 10 million, to up to 60 million. This means that in the current quarter Mainland China constituted at least 8.8% of the worldwide tablet market.
              • The sequential (Q/Q) growth rate on the Mainland China market per Analysis Int. is:
                image
              • Meanwhile the sequential (Q/Q) growth rate on the worldwide market per IDC is:
                image
• This means that Mainland China has much less seasonality than the worldwide market, which is a sign of greater untapped tablet demand than in other markets of the world. Considering that an unusually large group of local tablet vendors are playing the local-brand game in China while playing the white-box game abroad, any global-brand tablet vendor should already be participating in the Mainland China market in order to succeed worldwide. Lenovo, Samsung and Microsoft have clearly recognised this:


              image
              (the two latest Analysis International sources used for that are indicated later)

              image

• Samsung dramatically increased its market penetration efforts in Q3’13 and succeeded quite well. In fact, it was able to somewhat push back the growth rate of the group of local brand vendors (from 170% Q/Q growth in Q2’13 to 150% in Q3’13) while significantly increasing its own growth rate (from 170% to a whopping 220%).

              image

• Therefore, if things stay as they are (see the above chart), Samsung will outgrow the local brand vendors on the Mainland China market within a year.
• Otherwise, if the group of local brand vendors is able to withstand Samsung’s local efforts and significantly improve the value of their own brands, then the outlook may return to a view which could have been forecast after Q2’13 (see the below chart):

              image

• Meanwhile two local brands, Teclast (台电) and Onda (昂达), were each able to beat two global brands, Asus and Acer, on the Mainland China market in the last two quarters.
• The group of ‘Others’, i.e. the other local brands taken together, was able to grow at a similar rate in the last two quarters. This shows that, with the ongoing consolidation of local brands (details omitted here), a few more local brands may join Teclast and Onda as the strongest local vendors, with an opportunity to shed their white-box vendor status abroad (and grow globally under their own brands as well).

              image

              image

              The Q3’13 and Q2’13 Analysys International sources:
              Nov 8, 2013: http://www.enfodesk.com/SMinisite/maininfo/articledetail-id-389539.html
              Aug 28, 2013: http://www.enfodesk.com/SMinisite/maininfo/articledetail-id-376953.html

              The Q3’13 and Q2’13 DIGITIMES Research sources:

              China white-box tablet shipments reached about 25 million units in the third quarter of 2013, up 56.3% sequentially and 40.4% on year thanks to strong overseas shipments, which accounted for 80% of the total volume. Among white-box tablet shipments, 7-inch models accounted for the largest share, while 8-inch models, which were originally expected to become new star products, were unable to do so because of high costs from the bezel design and limited supply of 8-inch panels.
              Although white-box tablets are expected to see extraordinary growth in 2013, they are also expected to face more obstacles and challenges in the future. First, they will see strong price competition from large brand vendors, which will offer Android-based products at price levels similar to those of white-box models. Second, the tablet market will gradually reach saturation and should no longer see demand as strong as before.
Third, white-box tablet costs have already hit the bottom margin, causing related assembly service providers and component suppliers to see limited profits. Several unhealthy players had already been eliminated from the market by the end of the second quarter, while the remaining players will need to rely on pumping up their shipments to support their profitability. However, such a strategy is unlikely to be sustainable for long, Digitimes Research noted.
              Digitimes Research also found that white-box tablets in Europe or North America are mostly used as gifts in product promotions or bundling deals and therefore specifications are not as high as those of regular tablets. As for emerging markets such as Eastern Europe, Southeast Asia and Latin America, most consumers are buying white-box tablets with a single-core processor, because of limited purchasing power.
              As for application processors (APs), 70% of white-box tablets with phone functions adopted solutions from MediaTek in the third quarter, replacing the solutions from China-based Allwinner, the original favorite. Digitimes Research estimates that the proportion of white-box Wi-Fi-only tablets using MediaTek’s solution will also increase dramatically starting the fourth quarter, further impacting China-based Allwinner and Rockchip’s AP shipments. In addition to low prices, China-based AP suppliers will also need to consider how to create additional value for their APs to survive the competition.
              White-box tablet shipments reached only 15.9 million units in the second quarter of 2013, down 26.3% sequentially due to weakening tablet demand in May and June. Many smaller white-box players were also forced to quit the market, according to Digitimes Research’s latest figures.
Although white-box tablet shipments peaked in April 2013, increasing component costs and the fact that consumers are becoming more sensitive over tablet pricing are impacting white-box players’ profitability.
On the component supply side, competition among China-based chipmakers is gradually becoming fierce for both single-core and dual-core processors. In August 2013, some single-core processor prices were as low as US$5. By the end of 2013, dual-core processors will become the basic specification for entry-level white-box tablets, while mid-range models will turn to quad-core processors completely, Digitimes Research noted.
              DRAM and NAND Flash remained at high price points in the second quarter of 2013, but as related players are increasing their supplies in the third quarter, prices are dropping.
              As for panels, an entry-level 7-inch TN panel was priced at about US$10-11 at the beginning of the third quarter, and the price has been rising. Although the industry is seeing tight panel supply, the issue is expected to be eased as more panel players will open up new production lines to manufacture small-to-medium size panels in the first half of 2014.
              White-box vendors’ over-optimism about demand in the first half created high tablet inventories for the vendors. Weak demand in Europe and North America has affected sales of both first-tier brand vendors and white-box players.
              As for China, local first-tier brand vendors’ increasing sales have impacted white-box models’ demand in the country. Emerging markets such as India, Russia, countries in Eastern Europe, Latin America and Southeast Asia, are only providing limited contributions to white-box tablet players because shipments to these countries have just recently started.
              Currently, strengthening their inventory management and expanding into overseas emerging markets will be important tasks for white-box tablet players to survive in the tablet market.

              Samsung has unbeatable supply chain management, it is incredibly good in everything which is consumer hardware, but vulnerability remains in software and M&A

Samsung has unbeatable supply chain management, it is incredibly good in everything which is consumer hardware, but vulnerability remains in software and M&A

This is what people with a software engineering background cannot understand at all, and therefore they significantly overestimate Microsoft’s chances of succeeding in the consumer device space.

              Previously I discussed on the ‘Experiencing the Cloud’:

              which clearly indicated quite a number of exceptional corporate qualities of Samsung.

Now I will discuss, with a heavy focus on Samsung’s extraordinary strengths (from SCM to the Samsung Memory business) as well as the company’s most pressing weaknesses (software and M&A), what was presented at Samsung Analyst Day 2013 (Nov 6, 2013), reflecting the below presentations and their reports in the worldwide media:
image
See as well: As It Happened: Samsung’s Analyst Day [live blog on The Wall Street Journal Asia, Nov 6, 2013] and an analytic reflection of it, Across Fonblets and Phablets Samsung Has 63% Share of all Android Mobile Devices [Localytics, Nov 7, 2013].

              Accordingly this post contains the following sections:

              1. Samsung Supply Chain Management (SCM) information
                1. Historic Samsung SCM information
              2. Market/Business-specific current and strategic information
                1. Smartphones
                2. Phablets (‘Fonblets’ per Samsung)
                3. Tablets
                4. Wearable devices
                5. New [mobile/device] Market: The Next Big Thing
                6. Samsung System LSI
                7. Samsung Display
                8. Samsung Memory Business
                9. Software
                10. Mergers and Acquisitions (M&As)

              1. Samsung Supply Chain Management (SCM) information

              image

              Supply Chain Management (SCM) [Samsung SDS, Aug 27, 2013]

Supply Chain Management (SCM) is a comprehensive and innovative activity, including process, system, and governance, which optimizes marketing, sales, development, manufacturing, purchasing, logistics, and service over the entire supply chain. We support the successful SCM innovation of your business by offering globally competitive services such as SCM diagnosis, Process Innovation (PI), integration establishment, and the Cello [Supply Chain Logistics: SCL] solution.
              image
              • Demand Satisfaction
                Increase in demand forecast accuracy and supply ability index
              • Increased Market Response Ability
                Improved adherence to deadlines and shortened lead time in setting up plans
              • Global SCM Establishment and Integration
                Setting up and carrying out Global Single Plan in the Governance system

              image

              image

              We are Samsung SDS! [SamsungSDSA (Samsung SDS America) YouTube channel, June 24, 2013]

              From Samsung SDS leads in ‘shared growth’ [The Korea Times, Oct 30, 2013]

In July this year, it realigned its structure into the following six areas for customized approaches to existing and future clients, according to the statement: smart town, smart manufacturing, smart convergence, smart security, smart logistics and smart ICT outsourcing.

Service Overview [Samsung SDS, March 29, 2013] (see also: Overview, Vision, History, Global Network >> Samsung Data System, established in May 1985)

image

              image

              1/A Historic Samsung SCM information:

              The Samsung Group of companies is recognized as a leading global manufacturing, financial, and services conglomerate. It was founded in 1938 and focused its businesses on areas such as textiles, shipbuilding, machinery, and chemicals. Since the 1980s, the group has made enormous efforts and investment in the electronics and semiconductor industry. As a result, the Samsung Group has experienced a dramatic growth in net profits since the 1990s. The flagship unit, Samsung Electronics Company (SEC), was one of only two manufacturing companies worldwide to post profits of more than $10bn in 2004 (Toyota Motors being the other). Many regard these successes as reflecting a continuous and relentless effort at Samsung to improve the way it conducts business. For the last few years, SCM and six sigma have been two pillars of business innovation at Samsung.
              The Samsung Group of companies has large, complex, global supply chains in most of the products it manufactures and makes extensive use of SCM solutions and process innovations to support and improve its operations. Most notably, at SEC, advanced planning and scheduling (APS) systems have been adopted since the 1990s and have brought the company many successes in terms of operational excellence. Recently, Samsung Electronics was ranked seventh in a respected analyst’s ranking of the global top 25 companies in supply chain excellence.
              Six sigma has been a key enabler for the group’s success. The Chairman of the Group proclaimed the adoption of a business innovation approach called “new management” in 1993. “New management” is the pursuit of quality-oriented management in business operations as well as in manufacturing. Along with the “quality movement” in industry, new management evolved from initial product quality assurance but later shifted its focus to include the quality of the entire business process, which is the rationale behind six sigma. The outcomes were high-quality, innovative product developments, and consequently an increase in customer satisfaction and profits, and are well demonstrated by many of the world’s best technological resources.

              Samsung’s SCM Business Team (SBT) researched six sigma approaches at General Electric (GE), DuPont and Honeywell to get perspectives on how other companies have innovatively applied six sigma to similar needs: … Each of the above approaches was analyzed and the following conclusions drawn, which fed into the subsequent development of the Samsung SCM six sigma methodology: …

              Future direction
              Today, there are various approaches and systems available for process innovation. Six sigma and supply chain management (SCM) are among those techniques aiming for process and quality improvement, and synchronization of a company’s value chain, from inbound logistics to sales and customer services.
At Samsung, SCM and six sigma have been two important enablers for the group’s management innovation and growth. However, Samsung realizes that there is significant room for improvement in its SCM operation. Thus, the effort has been to synthesize SCM and six sigma and develop a unique six-sigma-based methodology to improve its SCM operation.
Samsung’s effort and investment have turned out to be fruitful. Their SCM six sigma program has produced highly qualified and talented SCM specialists, who are currently teaching the methodology to other members of their organizations and leading SCM projects. SCM projects are being prepared and conducted in a more disciplined way and their outcomes are continuously monitored and shared through Samsung’s repository for six sigma. Samsung’s endeavour toward a global optimum continues, and SCM six sigma is expected to play an enabling role.
image
Samsung Electronics, a leading Korean company as well as a symbol of the IT industry, carried out an innovative project to strengthen its global Supply Chain Management (SCM) execution ability, gaining the industry’s interest. Samsung Electronics placed its emphasis on the business management scenario of predicting and preparing for future environmental changes and competitiveness, which is one of the survival strategies of an industry with an unpredictable future. The company has been aggressively establishing the foundation for enhancing business management speed and efficiency-oriented business management innovations since early this year. In accordance with this type of scenario, Microsoft’s Business Intelligence (BI) Platform provided life to Samsung Electronics’ SCM system. Samsung Electronics decided to implement an action-oriented BI solution that enables on-demand changes of business management plans and reflects these adjustments. As such, it decided to deploy SQL Server 2008, which can satisfy all three major requirements of a BI solution, including ‘performance and reliability’, ‘cube write-back’ and ‘user convenience’, and the company is thoroughly experiencing the benefits of this IT innovation. Ahead of enterprise-wide application, it has completed deployment in only its video display business division, so it is still too early to mention any fixed quantity of benefits. However, with this system implementation, Samsung Electronics expects to increase its forecast accuracy for product demands by more than 20%.

              2. Market/Business-specific current and strategic information

              2/A Smartphones:

image
Samsung executives said the biggest growth in smartphones would come in developing countries, where smartphone penetration remains lower. Worldwide, the company said, there are still three billion more basic “feature phones” in use.

              “We believe there is substantial room for smartphone demand to grow,” said J.K. Shin, head of Samsung’s mobile division.

              Mr. Shin said the company also intended to increase its market share in tablet computers, where it still trails Apple. Other executives painted a bullish picture even on televisions and home appliances, areas in which sales have been growing slowly or shrinking in recent years.
image
At a rare analyst day event held in Seoul today, Samsung’s JK Shin announced that the company had sold more than 100 million Galaxy smartphones and Note phablets this year alone. … While the industry is expecting the high-end smartphone segment to slow down, Samsung is anticipating that the premium smartphone segment will outgrow market forecasts and is also gearing up for ultra-premium smartphones. The company is rumored to launch a Galaxy F range of ultra-premium smartphones next year. … Overall, Shin believes that Samsung’s smartphone division still has room to grow with upcoming LTE deployments and the company’s innovations around bendable displays and companion devices.
              Samsung’s stock price plunged 15 percent in June after JPMorgan Chase and Morgan Stanley cut their profit outlooks, citing weaker-than-expected demand for its flagship smartphone, the Galaxy S4. However, the company is rebounding, having sold more than 40 million Galaxy S4s as of last month, according to executives. … It sold about 120 million handsets in the third quarter, researcher Strategy Analytics said on Oct. 29.
image
… “People say the growth of the premium smartphone market will slow, but we don’t think so,” said Shin. “There are lots of opportunities for growth in various areas.” Shin said the market for Long-Term Evolution (LTE) smartphones, the fastest broadband devices, will grow 30 percent on average through 2017. About 680 million smartphones will be shipped in 2017, half of them LTE enabled, he said. [correctly from ZDNet: “The expansion of new LTE services, including LTE Advanced, will be the key growth driver,” said Jong-Kyun Shin, president and CEO of Samsung IT & Mobile Communication at an analyst event in Seoul on Wednesday. “Until 2017, we expect an annual average growth of near 30 percent in the LTE smartphone market, reaching 680 million units.” Shin said that come 2017, half [45%] of all phones sold will be LTE phones.]
image
The craziest announcement was that 5.2-inch 560 PPI AMOLED smartphone displays are due in 2014, with 3840×2160 displays following in 2015. Assuming a screen size of around five inches, 3840×2160 (UHD, 4K) works out to be around 880 pixels per inch. By virtue of being based on OLED tech rather than LCD, Samsung says that the next few years will see lots of flexible displays being used in curved and bent devices, with foldable devices arriving around 2016. (Read: 8K UHDTV: How do you send a 48Gbps TV signal over terrestrial airwaves?)
              … Is it really beneficial to keep pushing pixel densities as quickly as Moore’s law allows? The higher the pixel count, the more energy a display consumes. Considering our eyes have a tough time seeing the difference between 200 and 300 PPI, let alone 441 (current 5-inch smartphones) and next year’s 560 PPI, it seems a little counterintuitive to intentionally reduce battery life for negligible gain. Yes, Samsung and its users get to wave their huge PPIs in the face of the Apple opposition — but is that really what the smartphone market has come to?
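The ~880 pixels-per-inch figure above follows from the standard PPI formula: the diagonal pixel count divided by the physical diagonal size. A quick sketch (the exactly-5-inch diagonal is the article’s assumption; the `ppi` helper is mine):

```python
import math

def ppi(width_px, height_px, diagonal_inches):
    """Pixels per inch: diagonal pixel count divided by the physical diagonal."""
    return math.hypot(width_px, height_px) / diagonal_inches

print(round(ppi(3840, 2160, 5.0)))  # UHD/4K at 5 inches: 881, i.e. ~880 PPI
print(round(ppi(1920, 1080, 5.0)))  # 1080p at 5 inches: 441 PPI, matching current 5-inch phones
```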
image
JK Shin, Samsung’s president and chief executive of IT & Mobile (the business segment of Samsung Electronics that compares closely with Apple), outlined his outlook for the smartphone and tablet markets, promising that the company would “play a key role in the premium smartphone market.” He stated that from Samsung’s perspective, the premium market will continue to outgrow market forecasts, an apparent reversal of the company’s warnings from the beginning of the year about increasing competition in the plateauing market for premium Android smartphones.
              That also seems to contradict Samsung’s sales results throughout the year. The company just stated that in its September quarter, premium smartphone sales “stayed about the same” rather than keeping pace with Apple’s growth, which comes entirely from premium smartphones.
image
JK Shin added that the global smartphone penetration rate is only at 21 percent so far, meaning there’s plenty of room for growth. Worldwide, about one billion smartphones will ship this year, with data from Strategy Analytics suggesting that’ll grow to 1.5 billion by 2015.

              2/B Phablets (‘Fonblets’ per Samsung):

image
By introducing its Galaxy Note product, Samsung highlighted its status as the creator of the ‘Fonblet’ market with large display, portability and handwriting technology. We believe that Samsung has high hopes for the big-sized smartphone market with over-5-inch displays, which we define as phablets. It also made us predict that Samsung may be working on a completely new type of ‘Fonblet’ to target both the smartphone and tablet segments at the same time, in around the 2015 or 2016 timeframe.

              2/C Tablets:

image
A top executive, Shin Jong-kyun, told analysts on Wednesday that Samsung’s tablet business is growing rapidly and the company will become the biggest maker of tablet computers. He didn’t give a timeframe. Shin said Samsung’s tablet sales will exceed 40 million units this year, more than double sales in 2012. “Samsung tablet shipments started to grow remarkably since the second half of last year,” he said.
image
Research group IDC estimates that Samsung sold 16.6 million tablets in 2012, lagging far behind Apple Inc. which sold 65.7 million iPads. But Samsung is on the rise, capturing 20 percent market share in the July-September quarter while Apple, which led the commercialization of tablet computing, fell to 30 percent. Apple previously had more than half of the global tablet market but its dominance has eroded as Samsung boosted sales with cheaper Galaxy Tab computers that offer many different screen sizes.
              Source: http://www.idc.com/getdoc.jsp?containerId=prUS24420613

according to which Samsung’s Q3’13 tablet sales were 9.7 million units, i.e. together with the 17.6 million of H1’13, Samsung’s Q1-Q3’13 tablet sales already amount to 27.3 million units.

              2/D Wearable devices:

image
Speaking at the company’s Analyst Day, Samsung Vice Chairman and CEO Kwon Oh-hyun said Wednesday that his company has been dedicating significant resources to several technologies, including “wearables,” according to the Wall Street Journal, which was in attendance at the event. The slide to accompany his comment showed the Galaxy Gear smartwatch and also eyeglasses that might compete with Google Glass.
              Rumors have been swirling that Samsung is at work on smart eyewear. Last month, a patent filing surfaced in Korea for Samsung eyewear. That application indicated that the device would be connected directly to a smartphone and feature built-in earphones.
              Samsung has not announced any plans to launch a Google Glass competitor, but Kwon’s comments seem to indicate such a device is coming.
Samsung surprised attendees at its analyst day by announcing it will be bringing fully foldable screens to the market “sometime in 2015” and even teased the product with a chintzy promo video. Although the video’s focus was on phone and tablet combinations, the real opportunity here is in wearable tech. Apple and Google should be on notice. Samsung could have a game changer with its foldable screen.
image
As the market for smartphones and tablets continues to become more contested, tech companies are increasingly looking at new growth opportunities. They may have found it in wearable tech: according to Juniper Research, worldwide spending on wearable tech will hit $1.4 billion this year and increase to $19 billion by 2018. Among these companies, Samsung has the most recent commercial product launch in this new generation of wearable tech products, with its Galaxy Gear smart watch. So far, the product has witnessed tepid demand and modest reviews—mostly due to the fact it must be tethered to other Galaxy products for full functionality.

              2/E New [mobile/device] Market: The Next Big Thing

image
It is interesting to note here that, in tandem with talk of shareholder-friendly dividend increases, Samsung is also talking up growth, growth, growth. Mr. Shin just ticked off wearable devices, flexible devices, big data, the Internet of things [, and convergence] – “and much more” – as growth opportunities for the mobile division. “Therefore, we expect another huge growth in the mobile market in the near future,” Mr. Shin says.

Mr. Shin touches on big data, saying that the company will incorporate big data technology in providing software features for its devices. He says the company aims for a “fully integrated” user experience across all Samsung devices.

              2/F Samsung System LSI:

image
Alluding to Apple’s custom 64-bit A7 Application Processor (which Samsung is manufacturing), [Dr. Namsung Stephen] Woo[, president of Samsung’s System LSI] said “many people were thinking ‘why do we need 64-bit for mobile devices?’ People were asking that question until three months ago, and now I think nobody is asking that question. Now people are asking ‘when can we have that? And will software run correctly on time?’”
Woo told his audience, “let me just tell you, we are… we have planned for it, we are marching on schedule. We will offer the first 64-bit AP based on ARM’s own core [reference design]. For the second product after that we will offer even more optimized 64-bit based on our own optimization. So we are marching ahead with the 64-bit offering, and even though it’s a little too early, I think we are at the leader group in terms of 64-bit offerings.” … Woo … offered no comment on how Samsung planned to support existing software on its planned 64-bit offerings, nor even whether such a chip would get custom Android support or use Samsung’s own Tizen or some other operating system.

              2/G Samsung Display:

              image

              According to ZDNet Korea, it looks like Samsung is going to focus on a particular type of tablets, AMOLED ones. So far, the tech giant has released only a handful of AMOLED display devices, so it will be pretty interesting to see what else gets produced.

              A patent of a foldable mobile device filed with authorities in South Korea last month gave some clues as to the future of Samsung mobile devices.
              But at an analyst day on Wednesday, some investors saw prototypes of a range of foldable mobile devices that Samsung is testing,  giving more details  on what they would actually do and look like. Reporters were banned from the conference and were not given access to see the prototypes, while the attendees were not permitted to take any photos inside the venue.
              “The first one they showed us was the size of a [Galaxy] S3 smartphone which can be folded in half from top to bottom. So like a compact powder used by women,” said Jae H. Lee, an analyst with Daiwa Securities who attended the event.
              “There was also one in the size of a lengthy wallet which can be unfolded on both sides into the size of a tablet computer,” Mr. Lee said, adding that both devices looked pretty good.
              Other analysts  also seemed to be impressed.
              Such devices “would further expand Samsung’s competitive advantage in premium smartphones,” Sundeep Bajikar, an analyst with Jefferies LLC who flew in to attend the event, wrote in a research note.
              A spokesman for Samsung Display Co., which makes screens for Galaxy smartphones, said that designs displayed yesterday were “concept versions,” that do not have all the components needed to make a working smartphone.
              The products are likely years away from commercialization; Samsung Chief Executive Kwon Oh-hyun said that “foldable displays” would be presented in 2015.

              2/H Samsung Memory Business:

              Samsung Electronics, the world’s largest memory chipmaker, vowed to take a solid lead in the global memory market with its advanced vertical NAND (V-NAND) flash memory technology, with plans to unveil 36-layer V-NAND flash memory chips next year.
              “Samsung will definitely, if we can, enjoy an 80 percent market share,” said Robert Myung Yi, senior vice president of Samsung Electronics’ investor relations team, on Wednesday at Samsung Analyst Day 2013, where the company laid out its mid- and long-term strategies to investors and analysts.
              A top executive from Samsung told The Korea Herald that “3-D NAND flash memory stacking 36 layers of memory cells will be mass produced by the latter half of next year.”
              Samsung is currently the sole producer of V-NAND flash memory chips with 24 layers of cells.
              This level of stacking is deemed sufficient to make the product profitable, according to Samsung.
              In terms of V-NAND market share, Yi said the firm would not just pursue higher market share, but also make efforts to secure a high profit margin as well as balance supply between the planar NAND flash memory and V-NAND flash memory. V-NAND chips’ 3-D structure gives them a higher density and capacity than their 2-D rivals.
              image
              The Korean electronics giant expects the 3-D NAND market to grow 105 percent every year until 2017, and its market size to exceed that for planar NAND flash chips next year.
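              The quoted growth rate compounds quickly. As a minimal sketch of what it implies, the snippet below (hypothetical, using a normalized index with 2013 = 1.0 rather than real market-size figures, since the article gives no absolute numbers) compounds 105 percent annual growth through 2017:

```python
# Compound the reported ~105% year-over-year growth of the 3-D NAND market.
# Values are a relative index (2013 = 1.0), not actual market-size figures.
growth_rate = 2.05  # +105% per year means multiplying by 2.05

index = 1.0
for year in range(2014, 2018):  # 2014 through 2017
    index *= growth_rate
    print(year, round(index, 2))
```

              At that pace the market would be roughly eighteen times its 2013 size by 2017, which is why the article expects it to overtake planar NAND as early as next year.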
              Stacking memory cells is a core technological issue for chipmakers, including Samsung’s local rival SK Hynix and U.S. chipmaker Micron Technology.
              Despite having V-NAND technology of their own, other chipmakers have yet to start mass producing 3-D memory chips, due in part to underachievement in cell stacking.
              SK Hynix CEO Park Sung-wook said in October that his firm, the world’s second-largest memory chipmaker, would be able to stack as many as 24 layers next year, adding, “We can do as well as Samsung.”
              In an earnings conference call later in the month, the firm announced that it would be able to start producing 3-D NAND flash memory either in the second half of next year or in 2015.
              Global competitors have also announced they would jump into the race for V-NAND production.
              Micron CEO Mark Durcan told tech news outlet CNET in August that his company would start providing samples of 3-D NAND to customers in the first quarter of 2014.
              Producers are competing to scale down planar NAND flash memory, still the top product in the chip market.
              As the technology scaled to the 10-nanometer class and beyond, chipmakers faced increasing cell-to-cell interference, which threatens the reliability of NAND flash memory.
              The 3-D NAND could be used for a wide range of equipment and devices including enterprise servers and solid-state drives.
              Samsung launched a V-NAND-based enterprise solid-state drive in August.

              2/I Software:

              Samsung today admitted it needs to work on software, an area it’s “not as good” at as hardware. Samsung vice chairman & CEO Kwon Oh-hyun compares the company’s software efforts to the World Series-winning Boston Red Sox’s pitching performance. Kwon notes the Red Sox led the pack in batting this year, but were only an average pitching team. His conclusion? “Even though we’re doing the software business, we’re not as good as we are in hardware.” The Red Sox still won the World Series, though, with the implication being that Samsung is “winning” at technology right now.
              It’s true that software imperfections have yet to hamper Samsung’s march to global dominance. 2013 has seen the Korean company post consecutive profit records and improve its marketshare in key areas, including strengthening its grip on the number-one spot in the smartphone market. That said, Samsung isn’t taking any chances; Kwon says that half of his Research and Development (R&D) workforce is focused on software, and the efforts to improve software are likely to grow moving forward. Given the company is currently spending over $3 billion per quarter on R&D, that represents a colossal investment in software.
              image
              Company president Lee Sang-hoon reaffirms Samsung’s focus on getting software right. “Industry-wide tech development is shifting from hardware to software.” Lee says the company’s recent efforts to acquire fresh talent from startups — including the establishment of overseas R&D centers — are an effort to “address region-specific needs.”
              … Samsung Electronics says that around 40,000 of its 326,000 employees worldwide are software developers – roughly half of them based in South Korea.
              Samsung customises the user experience on its Android-based phones and tablets like the Galaxy Note 3 with software called TouchWiz, which is often heavily criticised for being cluttered, confusing and detracting from the standard Android experience.
              Additional features in its handsets such as “air gesture” (to move pages without touching the screen), “air view” (to enlarge previews without touching the screen) and “smart scroll” (to scroll through pages using eye movement) have been dismissed as gimmicks by some reviewers, who don’t see them bringing any value to users.
              “Industry-wide tech development is shifting from hardware to software,” said Lee Sang-hoon, Samsung’s president and chief financial officer.
              In response Samsung will aim to “reinforce our competitiveness in software platform, design and IT” through hiring more software experts, and through the use of overseas research and development centres “to address region-specific needs,” Lee said.
              South Korean Giant Weighs Software Deals to Better Compete With Apple, Google
              Samsung Electronics Co. is stepping up its hunt for acquisitions and building out its presence in Silicon Valley to try to overcome its key weakness: software.
              The South Korea-based company became the world’s largest maker of smartphones by manufacturing attractive devices that hit the market quickly and cheaply.
              But to thrive in a mobile-device market increasingly dominated by software specialists like Apple Inc., Google Inc. and Microsoft Corp., which acquired Nokia Corp.’s phone business last month, Samsung is aiming to become a software power in its own right.
              Earlier this year, Samsung was among the bidders for Israeli mobile-mapping service Waze Ltd., according to people familiar with the matter. Google eventually bought Waze for about $1.1 billion in July, a deal that is under review by the Federal Trade Commission. According to one person, Samsung had approached Waze in hopes of making a large investment and forming a partnership, before acquisition talks kicked off.
              image
              Samsung has plenty of other Silicon Valley software startups in its sights, particularly in games, mobile search, social media and mapping-related services, according to employees and an internal document reviewed by the Journal.

              The document, a mergers and acquisitions presentation prepared in February by Samsung’s Media Solution Center, the arm that works on software initiatives, lays out the company’s rationale for bulking up in each category and lists potential acquisition and investment targets.

              According to the document, Samsung has evaluated startups such as Unity Technologies, a San Francisco-based developer of gaming platforms, and Green Throttle Games Inc., a Santa Clara, Calif.-based company that makes game controllers and software that connects mobile devices to televisions. It has also considered gaming pioneer Atari Inc., which Samsung could have used to offer classic games like Asteroids and Pong exclusively on its mobile phones. Atari auctioned off some of its properties this year as part of a bankruptcy filing after rejecting preliminary bids from several companies for its portfolio of games.
              Samsung has also looked closely at Glympse, a Seattle-based company that allows users to share their location with their friends—a service that Samsung says could be integrated into their phones’ native calendar and contacts functions, differentiating it from competitors.
              Samsung first reached out to Glympse in early 2012, and has raised the prospect of an equity investment, though discussions remain ongoing, according to a person familiar with the matter. Last month, Glympse unveiled an app for Samsung’s Galaxy Gear smartwatch.
              Elsewhere in the document, Samsung named Tel Aviv-based mobile search engine Everything.me as a possible target. It has also looked at video-chat app Rounds, another Israeli startup, that would help Samsung compete with Apple’s FaceTime and Google’s Hangouts.
              Samsung declined to comment on its acquisition plans — but it has made no secret of what it calls “embracing the culture of Silicon Valley.”
              In recent months, the Suwon, South Korea-based company has broken ground on a major research facility near Apple’s offices and launched a software startup accelerator with locations in Palo Alto, Calif., and Manhattan’s Chelsea neighborhood. It will make early-stage investments in startups, especially developers of software for Samsung devices.
              Samsung, which has $1.1 billion set aside for early-stage startup and venture capital investments in the U.S., is also poaching software engineers from its U.S. rivals and, at a hotel in San Francisco later this month, will host its first ever developers’ conference, an important step toward creating an “ecosystem” of applications unique to its devices.
              “The kind of things that happen in the Valley are really exciting to Samsung,” said David Eun, the head of Samsung’s Open Innovation Center, which operates the software-startup accelerator.
              The aggressive move into its rivals’ backyard is unusual for Samsung, a company that has historically kept its operations heavily centralized and shied away from outside deals. The emphasis on self-reliance runs so deep that Samsung manufactures some 90% of its products within its own factories.
              Privately, company executives portray the recent shift not as a repudiation of its long-term strategy, but rather as a complement to its own research and development efforts, which remain substantial.
              The company spent $10.8 billion on R&D last year, with 67,000 employees devoted to helping Samsung maintain its edge in the global television, semiconductor and home-appliance markets.
              So far, though, its attempts at developing a proprietary-software hit for its mobile phones—which account for two-thirds of Samsung’s operating profits—have fallen flat.
              Among Samsung’s recent efforts are an abandoned mobile operating system, a mobile chat service that has struggled to gain traction and coolly received technologies that anticipate hand gestures and eye movements.
              In November 2009, Samsung launched Bada, an open-source mobile operating system that it hoped could challenge Google’s Android platform. But Bada’s unfriendly user interface and poor syncing with other devices proved unpopular with consumers.
              Earlier this year, Samsung pulled the plug on Bada, rolling those efforts into a new operating system known as Tizen. There too, Silicon Valley plays a key role: Samsung is codeveloping Tizen with Intel Corp. The company has yet to release a Tizen-powered smartphone.
              If Samsung’s new operating system catches on, it could relieve the company’s reliance on Android, which powers the vast majority of Samsung’s mobile devices, including its new smartwatch.
              Breaking through with a proprietary “must-have” software application could also bolster Samsung’s position at a time when the company is vulnerable to competition from Chinese hardware makers, including Lenovo Group Ltd., Huawei Technologies Co. and Xiaomi Inc. In the most recent quarter, Samsung’s mobile business saw its operating profit margin fall to 17.7%, from 19.8% in the previous quarter amid pricing pressure from rivals and increased spending on advertising.
              Meanwhile, Google’s tie-up with Motorola Mobility in 2011, and Microsoft’s move to acquire Nokia’s mobile-phone business last month, mean that Samsung will face heightened competition from companies that, like Apple, can compete in both hardware and software.
              Samsung’s software success is far from assured. Unlike Apple, Google and Microsoft, the Korean electronics giant doesn’t have a history of software achievements. Instead, Samsung cut its teeth in the world of hardware, where efficiency, flexibility and supply-chain management are paramount.
              Acquiring its way to software dominance is no easier than building up its software capabilities organically. While Samsung has about $50 billion in cash on hand, the company has struggled in the past with deal-making. Even today, some in Silicon Valley say, Samsung has developed a reputation for kicking the tires on a range of potential deals, only rarely pulling the trigger.
              One reason for such caution is Samsung’s purchase of AST Research Inc. in the mid-1990s, an experience that still weighs heavily on company executives.
              The two-part, $840 million acquisition of Irvine, Calif.-based AST, once the world’s fifth-largest computer maker, was conceived as an attempt to break into the U.S. personal-computer market.
              Samsung sustained heavy losses in AST before ultimately giving up on the deal, which remains Samsung’s largest overseas acquisition to date. Even now, upper management remains wary of big acquisitions, in large part because of AST, employees say.
              Samsung’s recent acquisitions have been small, and focused on software developers that can help distinguish Samsung’s phones from others built on the Android platform.
              Last May, Samsung—seeking to create a credible rival to Apple’s iTunes platform—snapped up mSpot Inc., a Palo Alto, Calif.-based mobile-software developer with hopes of creating a one-stop media platform that would allow users to stream and download music on their Samsung devices.
              In the process, Samsung hoped to rival not only iTunes, but also online music-streaming services such as those offered by Sweden’s Spotify AB and Oakland, Calif.-based Pandora Media Inc.
              Earlier this year, Samsung moved mSpot into a new office with plans to double its staff by the end of 2013. Since then, however, the company’s attempts to develop the product, initially called Samsung Music Hub, have foundered.

              2/J Mergers and Acquisitions (M&As):

              Vice Chairman Kwon Oh-hyun admitted that Samsung needs to work on software, an area in which it is currently investing heavily as it seeks to transform itself from a manufacturing firm into a solutions provider.
              Sources say Samsung prefers “Google style” expansion centered on small-sized mergers and acquisitions (M&As). It is interested in buying patents, marketing and human resources in target companies. “Samsung was passive in pursuing M&A deals. But we will become aggressive. Therefore, I don’t think our current cash-holdings are too high,” said the CFO Lee.
              Vice Chairman Kwon insisted that its edge in “vertical alignment” between components and parts will enable it to create over $400 billion in annual sales in 2020. … But what’s equally interesting is that Samsung is also eager to develop components. Sharpening components-related technologies is something that really matters to it because of its plan to share confidential data with software giants such as Google and others to develop innovative products and secure advanced chips and flat-screens.

              Samsung Electronics will push for more mergers and acquisitions and increase its presence in health care and smart car industries for future growth, top executives said on Wednesday. … “Convergence (among technologies in different industries) is occurring right now, but not enough. We can create new industries, for example, health care and smart cars,” said Kwon Oh-hyun, vice chairman and CEO of the electronics firm.
              “(By converging Samsung’s information technology with cars) there are a lot (of opportunities) for us to supply to our customers.” Samsung SDI, a battery maker and an affiliate of Samsung Group, has invested in electric car batteries since 2008. It has successfully developed the products and is supplying them to BMW and Chrysler’s Fiat.
              … The vice chairman noted, “Even though our health care business is small, within the coming decade we want to be a strong player in the area,” hinting that the electronics firm will roll out more advanced, small and easy-to-handle equipment such as high-resolution CT and MRI scanners.
              Samsung Electronics wants to invest more money for new growth technologies, and part of that will come from being more aggressive in mergers and acquisitions as well as R&D.
              M&A will aim to reinforce current businesses, secure talent and find new opportunities, said Lee Sang-hoon, president and CFO of Samsung Electronics. The company has already spent about US$1 billion investing in 14 companies since 2010, which has been “somewhat conservative”.

              Samsung currently has a cash pile of around US$50 billion, about 20 percent of its market capitalization, which has drawn complaints from investors that it is being held at too high a level at their expense. According to Lee, the war chest is now being prepared for “significant investment” in strategic technologies, mergers or acquisitions.
              “We plan to allocate a significant portion of our annual cash flow into capex and R&D to secure future growth and shareholder return,” Lee said.
              Lee said the $50 billion war chest was being prepared for “significant investment” in strategic technologies, mergers or acquisitions, suggesting the company could loosen its purse strings as it chases the next big thing in mobile technology.
              The change of tack is aimed at responding to an innovation shift in the information technology business from hardware, Samsung’s traditional speciality, to software. “I know we have been somewhat conservative in M&A but it may be different in the future. Based on this, I don’t believe the current level of net cash balance is excessive,” he said. “We plan to allocate a significant portion of our annual cash flow into capex and R&D to secure future growth and shareholder return.”