
Category Archives: Cloud Computing strategy

Intel’s HPC-like exascale approach to next-gen Big Data as well

We’ll need 1000x more compute (Exascale) than we have today, and we can get there via a proper exascale architecture for general purpose computing (i.e. without the special purpose computing approaches proposed by Intel’s competitors) – this is the latest message from Intel.

Just two recent headlines from the media:

Then two other headlines reflecting another aspect of Intel’s move:

Referring to: Chip Shot: Intel Reveals More Details of Its Next Generation Intel® Xeon Phi™ Processor at SC’13 [Intel Newsroom, Nov 19, 2013]

Today at the Supercomputing Conference in Denver, Intel discussed form factors and memory configuration details of the next generation Intel® Xeon Phi™ processor (code named “Knights Landing”). The new revolutionary design will be based on the leading edge 14nm manufacturing technology and will be available as a host CPU with high-bandwidth memory on a processor package. This first-of-a-kind, highly integrated, many-core CPU will be more easily programmable for developers and improve performance by removing “off-loading” to PCIe devices, and increase cost effectiveness by reducing the number of components compared to current solutions. The company has also announced collaboration with the HPC community designed to deliver customized products to meet the diverse needs of customers, and introduced new Intel® HPC Distribution for Apache Hadoop* and Intel® Cloud Edition for Lustre* software tools to bring the benefits of Big Data analytics and HPC together. View the tech briefing.


High-bandwidth In-Package Memory:
Performance for memory-bound workloads
Flexible memory usage models


From: Intel Brings Supercomputing Horsepower to Big Data Analytics [press release, Nov 19, 2013]

  • Intel discloses form factors and memory configuration details of the CPU version of the next generation Intel® Xeon Phi™ processor (code named “Knights Landing”), to ease programmability for developers while improving performance.

During the Supercomputing Conference (SC’13), Intel unveiled how the next generation Intel Xeon Phi product (codenamed “Knights Landing“), available as a host processor, will fit into standard rack architectures and run applications entirely natively instead of requiring data to be offloaded to the coprocessor. This will significantly reduce programming complexity and eliminate “offloading” of the data, thus improving performance and decreasing latencies caused by memory, PCIe and networking.

Knights Landing will also offer developers three memory options to optimize performance. Unlike other Exascale concepts requiring programmers to develop code specific to one machine, new Intel Xeon Phi processors will provide the simplicity and elegance of standard memory programming models.

In addition, Intel and Fujitsu recently announced an initiative that could potentially replace a computer’s electrical wiring with fiber optic links to carry Ethernet or PCI Express traffic over an Intel® Silicon Photonics link. This enables Intel Xeon Phi coprocessors to be installed in an expansion box, separated from host Intel Xeon processors, but function as if they were still located on the motherboard. This allows for much higher density of installed coprocessors and scaling the computer capacity without affecting host server operations.

Several companies are already adopting Intel’s technology. For example, Fovia Medical*, a world leader in volume rendering technology, created high-definition, 3D models to help medical professionals better visualize a patient’s body without invasive surgery. A demonstration from the University of Oklahoma’s Center for Analysis and Prediction of Storms (CAPS) showed a 2D simulation of an F4 tornado, and addressed how a forecaster will be able to experience an immersive 3D simulation and “walk around a storm” to better pinpoint its path. Both applications use Intel® Xeon® technology.

Intel @ SC13 [HPCwire YouTube channel, Nov 22, 2013]

Intel presents technical computing solutions from SC13 in Denver, CO. [The CAPS demo is from [4:00] on]

From: Exascale Challenges and General Purpose Processors [Intel presentation, Oct 24, 2013]

CERN Talk 2013 presentation by Avinash Sodani, Chief Architect, Knights Landing Processor, Intel Corporation

The demand for high performance computing will continue to grow exponentially, driving to Exascale in 2018/19. Among the many challenges that Exascale computing poses, power and memory are two important ones. There is a commonly held belief that we need special purpose computing to meet these challenges. This talk will dispel this myth and show how general purpose computing can reach Exascale efficiencies without sacrificing the benefits of general purpose programming. It will cover future architectural trends in Xeon-Phi and what they mean for programmers.
About the speaker
Avinash Sodani is the chief architect of the future Xeon-Phi processor from Intel, called Knights Landing. Previously, Avinash was one of the primary architects of the first Core i7/i5 processor (called Nehalem). He also worked as a server architect for the Xeon line of products. Avinash has a PhD in Computer Architecture from the University of Wisconsin-Madison and an MS in Computer Science from the same university. He has a B.Tech in Computer Science from the Indian Institute of Technology, Kharagpur, India.


Summary

  • Many challenges to reach Exascale – Power is one of them 
  • General purpose processors will achieve Exascale power efficiencies
    – Energy/op trends show a bridgeable gap of ~2x to Exascale (not 50x)
  • General purpose programming allows use of existing tools and programming methods. 
  • Effort needed to prepare SW to utilize Xeon-Phi’s full compute capability. But optimized code remains portable for general purpose processors.
  • More integration over time to reduce power and increase reliability
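The power bullet above can be made concrete with back-of-the-envelope arithmetic. The sketch below assumes the 20 MW machine power budget commonly cited in exascale planning discussions; that budget figure is an assumption here, not stated in the talk:

```python
# Back-of-the-envelope exascale efficiency target.
# Assumption (not from the talk): a 20 MW total machine power budget,
# the figure commonly used in exascale planning discussions.
EXAFLOPS = 1e18          # target sustained performance, FLOP/s
POWER_BUDGET_W = 20e6    # assumed 20 MW power budget

target_gflops_per_watt = EXAFLOPS / POWER_BUDGET_W / 1e9
print(f"Required efficiency: {target_gflops_per_watt:.0f} GFLOPS/W")  # -> 50
```

Against the single-digit GFLOPS/W of 2013-era chips, that target is roughly an order of magnitude away at the whole-machine level; the slide's "~2x (not 50x)" claim concerns the narrower energy-per-operation trend of the cores themselves.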

From: Intel Formally Introduces Next-Generation Xeon Phi “Knights Landing” [X-bit labs, Nov 19, 2013]

According to a slide from an Intel presentation that leaked to the web earlier this year, Intel Xeon Phi code-named Knights Landing will be released sometime in late 2014 or in 2015.


The most important aspect of the Xeon Phi “Knights Landing” product is its performance, which is expected to be around or above 3TFLOPS double precision, or 14 – 16GFLOPS/W; up significantly from ~1TFLOPS per current Knights Corner chip (4 – 6GFLOPS/W). Keeping in mind that Knights Landing is 1.5 – 2 years away, a three-fold performance increase seems significant and enough to compete against its rivals. For example, Nvidia Corp.’s Kepler offers 5.7GFLOPS/W DP performance, whereas its next-gen Maxwell (the competitor for Knights Landing) will offer something between 8GFLOPS/W and 16GFLOPS/W.
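Those figures can be cross-checked with simple arithmetic; the wattages derived below are implied by the article's numbers, not stated in it:

```python
# Cross-checking the quoted Knights Landing figures: a part delivering
# ~3 TFLOPS double precision at 14-16 GFLOPS/W would dissipate roughly
# 190-215 W, a plausible range for a socketed host CPU.
DP_GFLOPS = 3000.0  # ~3 TFLOPS double precision, in GFLOPS
for gflops_per_watt in (14.0, 16.0):
    watts = DP_GFLOPS / gflops_per_watt
    print(f"{gflops_per_watt:.0f} GFLOPS/W -> ~{watts:.0f} W")
```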

More from: Intel Brings Supercomputing Horsepower to Big Data Analytics [press release, Nov 19, 2013]

  • New Intel® HPC Distribution for Apache Hadoop* and Intel® Cloud Edition for Lustre* software tools bring the benefits of Big Data analytics and HPC together.
  • Collaboration with HPC community designed to deliver customized products to meet the diverse needs of customers.

High Performance Computing for Data-Driven Discovery
Data intensive applications including weather forecasting and seismic analysis have been part of the HPC industry from its earliest days, and the performance of today’s systems and parallel software tools have made it possible to create larger and more complex simulations. However, with unstructured data accounting for 80 percent of all data, and growing 15 times faster than other data1, the industry is looking to tap into all of this information to uncover valuable insight.

Intel is addressing this need with the announcement of the Intel® HPC Distribution for Apache Hadoop* software (Intel® HPC Distribution) that combines the Intel® Distribution for Apache Hadoop software with Intel® Enterprise Edition of Lustre* software to deliver an enterprise-grade solution for storing and processing large data sets. This powerful combination allows users to run their MapReduce applications, without change, directly on shared, fast Lustre-powered storage, making it fast, scalable and easy to manage.
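The "without change" claim rests on the MapReduce model being defined only by its map and reduce functions, independent of the storage layer underneath. A minimal, storage-agnostic word count in plain Python (deliberately not the Hadoop API) illustrates the shape of such an application:

```python
# A minimal word count in the MapReduce style. This is plain Python,
# not the Hadoop API: the point is that the application is defined only
# by its map and reduce functions, so the storage underneath (HDFS or a
# shared Lustre mount, as in the Intel HPC Distribution) can change freely.
from collections import Counter
from functools import reduce

def map_phase(line):
    """Map step: emit per-line word counts."""
    return Counter(line.split())

def reduce_phase(acc, partial):
    """Reduce step: merge partial counts into the accumulator."""
    acc.update(partial)
    return acc

lines = ["big data on lustre", "big compute big data"]
counts = reduce(reduce_phase, map(map_phase, lines), Counter())
print(counts["big"], counts["data"])  # -> 3 2
```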

The Intel® Cloud Edition for Lustre* software is a scalable, parallel file system that is available through the Amazon Web Services Marketplace* and allows users to pay-as-you go to maximize storage performance and cost effectiveness. The software is ideally suited for dynamic applications, including rapid simulation and prototyping. In the case of urgent or unplanned work that exceeds a user’s on-premise compute or storage performance, the software can be used for cloud bursting HPC workloads to quickly provision the infrastructure needed before moving the work into the cloud.
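The cloud-bursting workflow described above amounts to a capacity-overflow decision. The sketch below is purely illustrative; the function name and units are invented here and are not part of any Intel or AWS product API:

```python
# Hypothetical sketch of the cloud-bursting decision described above:
# provision cloud capacity only for demand that exceeds the on-premise
# system. Names, units and values are invented for illustration.
def node_hours_to_burst(queued_node_hours, onprem_capacity_node_hours):
    """Return the overflow (in node-hours) to provision in the cloud."""
    return max(0, queued_node_hours - onprem_capacity_node_hours)

print(node_hours_to_burst(1200, 1000))  # -> 200 (unplanned surge bursts out)
print(node_hours_to_burst(800, 1000))   # -> 0 (workload fits on-premise)
```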

With numerous vendors announcing pre-configured and validated hardware and software solutions featuring the Intel Enterprise Edition for Lustre, at SC’13, Intel and its ecosystem partners are bringing turnkey solutions to market to make big data processing and storage more broadly available, cost effective and easier to deploy. Partners announcing these appliances include Advanced HPC*, Aeon Computing*, ATIPA*, Boston Ltd.*, Colfax International*, E4 Computer Engineering*, NOVATTE* and System Fabric Works*.

* Other names and brands may be claimed as the property of others.

1 From IDC Digital Universe 2020 (2013)

Mark Seager: Approaching Big Data as a Technical Computing Usage Model [ieeeComputerSociety YouTube channel, recorded on October 29, published on November 12, 2013]

Mark Seager, CTO for technical computing at Intel, discusses the amazing new capabilities that are spreading across industries and reshaping the world. Watch him describe the hardware and software underlying much of the parallel processing that drives the big data revolution in his talk at the IEEE Computer Society’s “Rock Stars of Big Data” event, which was held 29 October 2013 at the Computer History Museum in Santa Clara, CA. Mark leads the HPC strategy for Intel’s High Performance Computing division. He is working on an ecosystem approach to develop and build HPC systems for Exascale and new storage paradigms for Big Data systems. Mark managed the Platforms portion of the Advanced Simulation and Computing (ASC) program at Lawrence Livermore National Laboratory (LLNL), where he successfully developed with industry partners and deployed five generations of TOP1 systems. In addition, Mark developed the LLNL Linux strategy and award-winning industry partnerships in storage and Linux systems development. He has won numerous awards, including the prestigious Edward Teller Award for “Major Contributions to the State-of-the-Art in High Performance Computing.”

From: Discover Your Parallel Universe [The Data Stack blog from Intel, Nov 18, 2013]

That’s Intel’s theme at SC’13 this week at the 25th anniversary of the Supercomputing Conference. We’re using it to emphasize the importance of modernizing codes and algorithms to take advantage of modern processors (think lots of cores and threads and wide vector units found in Intel Xeon processors and Intel Xeon Phi coprocessors). Or simply put, “going parallel” as we like to call it. We have a fantastic publication called Parallel Universe Magazine for more on the software and hardware side of going parallel.
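"Going parallel" ultimately means decomposing work into independent chunks, one per core or vector lane, plus a final reduction. A minimal sketch of that pattern (the chunks are processed serially here to keep the example portable; a real code would run them on separate cores and vectorize the inner loop):

```python
# A minimal sketch of "going parallel": split the work into independent
# chunks, compute partial results, then reduce. Exposing this structure
# is what modernizing codes and algorithms is about; actual speedups
# come from running the chunks on many cores with wide vector units.
def chunked(data, n_chunks):
    """Split data into n_chunks roughly equal, independent slices."""
    step = (len(data) + n_chunks - 1) // n_chunks
    return [data[i:i + step] for i in range(0, len(data), step)]

data = list(range(1, 101))
partials = [sum(chunk) for chunk in chunked(data, 4)]  # parallelizable step
total = sum(partials)                                  # reduction step
print(total)  # -> 5050
```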

But we’re also using it as inspiration for the researchers, scientists, and engineers that are changing the world every day. We’re asking them to envision the universe we’ll live in if the supercomputing community goes parallel. A few examples:

  1. In a parallel universe there is a cure
  2. In a parallel universe natural disasters are predicted
  3. In a parallel universe ideas become reality

Pretty lofty, huh? But also inevitable. We will find a 100% cure for all forms of cancer, according to the National Cancer Institute. We will be able to predict the weather 28 days in advance, according to the National Oceanic and Atmospheric Administration. And everyone will eventually use computing to turn their ideas into products.

The only problem is it’ll be the year 2190 before we have a cure for pancreatic cancer, we’ll need 1000x more compute (Exascale) than we have today to predict the weather 28 days in advance, and the cost and learning curve of technical computing will need to continue to drop before everyone has access.
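The "1000x" figure follows directly from the metric prefixes, given that the leading systems of 2013 are petascale machines:

```python
# Exascale vs. today's petascale leaders: the 1000x follows from the prefixes.
petascale = 1e15  # FLOP/s, the order of magnitude of 2013's top systems
exascale = 1e18   # FLOP/s, the exascale target
print(exascale / petascale)  # -> 1000.0
```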

That’s our work here at Intel. We solve these problems. We drive more performance at lower cost which gives people more compute. The more compute, the better cancer researchers will understand the disease. We’ll shift that 2190 timeline left. We’ll also solve the challenges to reaching Exascale levels of compute which will make weather forecasts more accurate. And we’ll continue to drive open standards. This will create a broad ecosystem of hardware and software partners which drives access on a broad scale.

From: Criteria for a Scalable Architecture 2013 OFA Developer Workshop, Monterey, CA [keynote on 2013 OpenFabrics International Developer Workshop, April 21-24, 2013]
By
Mark Seager, CTO for the HPC Ecosystem, Intel Technical Computing Group

In this video from the 2013 Open Fabrics Developer Workshop, Mark Seager from Intel presents: Criteria for a Scalable Architecture. Learn more at: https://www.openfabrics.org/press-room/2013-intl-developer-workshop.html

Exascale Systems Challenges are both Interconnect and SAN

• Design with system focus that enables end-user applications
• Scalable hardware
– Simple, Hierarchical
– New storage hierarchy with NVRAM
• Scalable Software
– Factor and solve
– Hierarchical with function shipping
• Scalable Apps
– Asynchronous comms and IO
– In-situ, in-transit and post processing/visualization

Summary

• Integration of memory and network into processor will help keep us on the path to Exascale
• Energy is the overwhelming challenge. We need a balanced attack that optimizes energy under real user conditions
• B:F and memory/core ratios, while they have their place, can also become impediments to progress
• Commodity interconnect can deliver scalability through improvements in Bandwidth, Latency and message rates

SAN: Storage Area Network     Ci: Compute nodes     NVRAM: Non-Volatile RAM
OSNj: ?Operating System and Network?    SNk: ?Storage Node?

Lustre: the dominant parallel file system for HPC and ‘Big Data’

Moving Lustre Forward: Status & Roadmap [RichReport YouTube channel, Dec 2, 2013]

In this video from the DDN User Meeting at SC13, Brent Gorda from the Intel High Performance Data Division presents: “Moving Lustre Forward: Status & Roadmap.” Learn more: http://www.whamcloud.com/about/ and http://ddn.com

Intel Expands Software Portfolio for Big Data Solutions [press release, June 12, 2013]

New Intel® Enterprise Edition for Lustre* Software Designed to Simplify Big Data Management, Storage

NEWS HIGHLIGHTS

  • Intel® Enterprise Edition for Lustre* software helps simplify configuration, monitoring, management and storage of high volumes of data.
  • With Intel® Manager for Lustre* software, Intel is able to extend the reach of Lustre into new markets such as financial services, data analytics, pharmaceuticals, and oil and gas.
  • When combined with the Intel® Distribution for Apache Hadoop* software, Hadoop users can access Lustre data files directly, saving time and resources.
  • New software offering furthers Intel’s commitment to drive new levels of performance and features through continuing contributions to the open source community.

SANTA CLARA, Calif., June. 12, 2013 – The amount of available data is growing at exponential rates and there is an ever-increasing need to move, process and store it to help solve the world’s most important and demanding problems. Accelerating the implementation of big data solutions, Intel Corporation announced the Intel® Enterprise Edition for Lustre* software to make performance-based storage solutions easier to deploy and manage.

Businesses and organizations of all sizes are increasingly turning to high-performance computing (HPC) technologies to store and process big data workloads due to their performance and scalability advantages. Lustre is an open source parallel distributed file system and key storage technology that ties together data and enables extremely fast access. Lustre has become the popular choice for storage in HPC environments for its ability to support tens of thousands of client systems and tens of petabytes of storage with access speeds well over 1 terabyte per second. That is equivalent to downloading all “Star Wars”* and all “Star Trek”* movies and television shows in Blu-Ray* format in one-quarter of a second.
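The quarter-second illustration is straightforward to check; whether ~250 GB really covers both franchises in Blu-ray quality is the press release's own estimate, not verified here:

```python
# At the claimed aggregate access speed of ~1 TB/s, a quarter of a
# second moves 250 GB; that is the volume the press release equates
# with the Star Wars / Star Trek Blu-ray catalog.
BYTES_PER_SECOND = 1e12          # 1 terabyte per second, in bytes
data_bytes = BYTES_PER_SECOND * 0.25
print(f"{data_bytes / 1e9:.0f} GB")  # -> 250 GB
```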

“Enterprise users are looking for cost-effective and scalable tools to efficiently manage and quickly access large volumes of data to turn valuable information into actionable insight,” said Boyd Davis, vice president and general manager of Intel’s Datacenter Software Division. “The addition of the Intel Enterprise Edition for Lustre to our big data software portfolio will help make it easier and more affordable for businesses to move, store and process data quickly and efficiently.”

The Intel Enterprise Edition for Lustre software is a validated and supported distribution of Lustre featuring management tools as well as a new adaptor for the Intel® Distribution for Apache Hadoop*. This new offering provides enterprise-class reliability and performance to take full advantage of storage environments, with worldwide service, support, training and development provided by experienced Lustre engineers at Intel.

The Intel® Manager for Lustre provides a consistent view of what is happening inside the storage system regardless of where the data is stored or what type of hardware is used. This tool enables IT administrators to easily manage tasks and reporting, provides real-time system monitoring as well as the ability to quickly troubleshoot. IT departments are also able to streamline management, shorten the learning curve and lower operational expenses resulting in time and resource savings, better risk mitigation and improved business decision-making.

When paired with the Intel® Distribution for Apache Hadoop, the Intel Enterprise Edition for Lustre software allows Hadoop to be run on top of Lustre, significantly improving the speed at which data can be accessed and analyzed. This allows users to access data files directly from the global file system at faster rates and speeds up analytics time, providing more productive use of storage assets as well as simpler storage management.

As part of the company’s commitment to drive innovation and enable the open source community, Intel will contribute development and support as well as community releases to the development of Lustre. With veteran Lustre engineers and developers working at Intel contributing to the code, Lustre will continue its growth in both high-performance computing and commercial environments and is poised to enter new enterprise markets including financial services, data analytics, pharmaceuticals, and oil and gas.

The Intel Enterprise Edition for Lustre will be available early in the third quarter of this year.

Microsoft products for the Cloud OS

Part of: Microsoft Cloud OS vision, delivery and ecosystem rollout

1. The Microsoft way
2. Microsoft Cloud OS vision
3. Microsoft Cloud OS delivery and ecosystem rollout
4. Microsoft products for the Cloud OS

4.1 Windows Server 2012 R2 & System Center 2012 R2
4.2 Unlock Insights from any Data – SQL Server 2014
4.3 Unlock Insights from any Data / Big Data – Microsoft SQL Server Parallel Data Warehouse (PDW) and Windows Azure HDInsights
4.4 Empower people-centric IT – Microsoft Virtual Desktop Infrastructure (VDI)
4.5 Microsoft talking about Cloud OS and private clouds: starting with Ray Ozzie in November, 2009 (separate post)

4.5.1 Tiny excerpts from official executive and/or corporate communications
4.5.2 More official communications in details from executives and/or corporate

4.1 Windows Server 2012 R2 & System Center 2012 R2 [MPNUK YouTube channel, Nov 18, 2013]

Hosting technical training overview.

Windows Server 2012 R2: 0:00
Server Virtualization: 4:40
Storage: 11:07
Networking: 17:37
Server Management and Automation: 23:14
Web and Application Platform: 27:05

System Center 2012 R2: 31:14
Infrastructure Provisioning: 36:15
Infrastructure Monitoring: 42:48
Automation and Self-service: 45:30
Application Performance Monitoring: 48:50
IT Service Management: 51:05

More information is in the What’s New in 2012 R2 [Windows Server 2012 R2, System Center 2012 R2] series of “In the Cloud” articles by Brad Anderson:

Over the last three weeks, Microsoft has made an exciting series of announcements about its next wave of products, including Windows Server 2012 R2, System Center 2012 R2, SQL Server 2014, Visual Studio 2013, Windows Intune and several new Windows Azure services. The preview bits are now available, and the customer feedback has been incredible!

The most common reaction I have heard from our customers and partners is that they cannot believe how much innovation has been packed into these releases – especially in such a short period of time. There is a truly amazing amount of new value in these releases and, with this in mind, we want to help jump-start your understanding of the key scenarios that we are enabling.

As I’ve discussed this new wave of products with customers, partners, and press, I’ve heard the same question over and over: “How exactly did Microsoft build and deliver so much in such a short period of time?” My answer is that we have modified our own internal processes in a very specific way: We build for the cloud first.

A cloud-first design principle manifests itself in every aspect of development; it means that at every step we architect and design for the scale, security and simplicity of a high-scale cloud service. As a part of this cloud-first approach, we assembled a ‘Scenario Focus Team’ that identified the key user scenarios we needed to support – this meant that our engineers knew exactly what needed to be built at every stage of development, thus there was no time wasted debating what happened next. We knew our customers, we knew our scenarios, and that allowed all of the groups and stakeholders to work quickly and efficiently.

The cloud-first design approach also means that we build and deploy these products within our own cloud services first and then deliver them to our customers and partners. This enables us to first prove-out and battle-harden new capabilities at cloud scale, and then deliver them for enterprise use. The Windows Azure Pack is a great example of this: In Azure we built high-density web hosting where we could literally host 5,000 web servers on a single Windows Server instance. We exhaustively battle-hardened that feature, and now you can run it in your datacenters.

At Microsoft we operate more than 200 cloud services, many of which service hundreds of millions of users every day. By architecting everything to deliver at that kind of scale, we are sure to meet the needs of enterprises anywhere and in any industry.

Our cloud-first approach was unique for another reason: It was the first time we had common/unified planning across Windows Client, Windows Server, System Center, Windows Azure, and Windows Intune. I know that may sound crazy, but it’s true – this is a first. We spent months planning and prioritizing the end-to-end scenarios together, with the goal of identifying and enabling all the dependencies and integration required for an effort this broad. Next we aligned on a common schedule with common engineering milestones.

The results have been fantastic. Last week, within 24 hours, we were able to release the preview bits of Windows Client 8.1, Windows Server 2012 R2, System Center 2012 R2, and SQL Server 2014.

By working together throughout the planning and build process, we established a common completion and Release to Manufacturing date, as well as a General Availability date. Because of these shared plans and development milestones, by the time we started the actual coding, the various teams were well aware of each dependency and the time to build the scenarios was much shorter.

The bottom-line impact of this Cloud-first approach is simple:  Better value, faster.

This wave of products shows that the changes we’ve made internally allow us to deliver more end-to-end scenarios out of the box, and those scenarios are all delivered at a higher quality.

This cloud-first approach also helps us deliver the Cloud OS vision that drives the STB business strategy.

The story behind the technologies that support the Cloud OS vision is an important part of how we enable customers to embrace cloud computing concepts. Over the next eight weeks, we’ll examine in great detail the three core pillars (see the table below) that support and inspire these R2 products: Empower People-centric IT, Transform the Datacenter, and Enable Modern Business Apps. The program managers who defined these scenarios and worked within each pillar throughout the product development process have authored in-depth overviews of these pillars and their specific scenarios, and we’ll release those on a weekly basis.

Pillar: Empower People-centric IT
Scenarios: People-centric IT (PCIT) empowers each person you support to work virtually anywhere on PCs and devices of their choice, while providing IT with an easy, consistent, and secure way to manage it all. Microsoft’s approach helps IT offer a consistent self-service experience for people, their PCs, and their devices while ensuring security. You can manage all your client devices in a single tool while reducing costs and simplifying management.

Pillar: Transform the Datacenter
Scenarios: Transforming the datacenter means driving your business with the power of a hybrid cloud infrastructure. Our goal is to help you leverage your investments, skills and people by providing a consistent datacenter and public cloud services platform, as well as products and technologies that work across your datacenter and service provider clouds.

Pillar: Enable Modern Business Apps
Scenarios: Modern business apps live and move wherever you want, and Microsoft offers the tools and resources that deliver industry-leading performance, high availability, and security. This means boosting the impact of both new and existing applications, and easily extending applications with new capabilities – including deploying across multiple devices.

The story behind these pillars and these products is an important part of our vision for the future of corporate computing and the modern datacenter, and in the following post, David B. Cross, the Partner Director of Test and Operations for Windows Server, shares some of the insights the Windows Server & System Center team have applied during every stage of our planning, build, and deployment of this awesome new wave of products.

People want access to information and applications on the devices of their choice. IT needs to keep data protected without breaking the budget. Learn how the Microsoft People-centric IT vision helps businesses address their consumerization of IT challenges. Learn More: http://www.microsoft.com/en-us/server-cloud/cloud-os/pcit.aspx
Hear from Dell and Accenture how Microsoft Windows Server 2012 R2 and System Center 2012 R2 enable a more flexible workstyle and people-centric IT through virtual desktop infrastructure (VDI). Find solutions and services from partners that span the entire stack of Microsoft Cloud OS products and technologies: http://www.microsoft.com/en-us/server-cloud/audience/partner.aspx#fbid=631zRfiT0WJ


The modern workforce isn’t just better connected and more mobile than ever before, it’s also more discerning (and demanding) about the hardware and software used on the job. While company leaders around the world are celebrating the increased productivity and accessibility of their workforce, the exponential increase in devices and platforms that the workforce wants to use can stretch a company’s infrastructure (and IT department!) to its limit.

If your IT team is grappling with the impact and sheer magnitude of this trend, let me reiterate a fact I’ve noted several times before on this blog: The “Bring Your Own Device” (BYOD) trend is here to stay.

Building products that address this need is a major facet of the first design pillar I noted last week: People-centric IT (PCIT).

In today’s post (and in each one that follows in this series), this overview of the architecture and critical components of the PCIT pillar will be followed by a “Next Steps” section at the bottom. The “Next Steps” will include a list of new posts (each one written specifically for that day’s topic) developed by our Windows Server & System Center engineers. Every week, these engineering blogs will provide deep technical detail on the various components discussed in this main post. Today, these blogs will systematically examine and discuss the technology used to power our PCIT solution.

The PCIT solution detailed below enables IT Professionals to set access policies to corporate applications and data based on three incredibly important criteria:

  1. The identity of the user
  2. The user’s specific device
  3. The network the user is working from
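Conceptually, those three criteria feed a single policy decision. The sketch below is hypothetical, illustrating the idea only; it is not the System Center Configuration Manager or Windows Intune API, and the names and access levels are invented:

```python
# Hypothetical sketch of a policy check over the three criteria above
# (identity, device, network). Purely illustrative: not the System
# Center Configuration Manager or Windows Intune API.
def access_level(user_is_employee, device_is_enrolled, on_corporate_network):
    """Map the (identity, device, network) triple to an access level."""
    if not user_is_employee:
        return "deny"
    if device_is_enrolled and on_corporate_network:
        return "full"      # managed device on a trusted network
    if device_is_enrolled:
        return "limited"   # managed device, off-network
    return "web-only"      # known user on an unmanaged device

print(access_level(True, True, True))    # -> full
print(access_level(True, False, False))  # -> web-only
```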

What’s required here is a single management solution that enables specific features where control is necessary and appropriate, and that also provides what I call “governance,” or light control when less administration is necessary. This means a single pane of glass for managing PCs and devices. Far too often I meet with companies that have two separate solutions running side-by-side – one to manage PCs and a second to manage devices. Not only is this more expensive and more complex, it creates two disjointed experiences for end users and a big headache for the IT pros responsible for managing them.

In today’s post, Paul Mayfield, the Partner Program Manager for the System Center Configuration Manager/Windows Intune team, discusses how everything that Microsoft has built with this solution is focused on enabling IT teams to use the same System Center Configuration Manager that they already have in place for managing their PCs and extend this management power to devices. This means double the management capabilities from within the same familiar console. This philosophy can be extended even further by using Windows Intune to manage devices where they live – i.e. cloud-based management for cloud-based devices. Cloud-based management is especially important for user-owned devices that need regular updates.

This is an incredible solution, and the benefit and ease of use for you, the consumer, is monumental.

People want access to corporate applications from anywhere, on whatever device they choose—laptop, smartphone, tablet, or PC. IT departments are challenged to provide consistent, rich experiences across all these device types, with access to native, web, and remote applications or desktops. In this video we take a look at how IT can enable people to choose their devices, reduce costs and complexity, as well as maintain security and compliance by protecting data and having comprehensive settings management across platforms.

In today’s post, we tackle a common question I get from customers: “Why move to the cloud right now?” Recently, however, this question has changed a bit to, “What should I move to the cloud first?”

An important thing to keep in mind with either of these questions is that every organization has their own unique journey to the cloud. There are a lot of different workloads that run on Windows Server, and the reality is that these various workloads are moving to the cloud at very different rates. Web servers, e-mail and collaboration are examples of workloads moving to the cloud very quickly. I believe that management, and the management of smart devices, will be one of the next workloads to make that move to the cloud – and, when the time comes, that move will happen fast.

Using a SaaS solution is a move to the cloud, and taking this approach is a game changer because of its ability to deliver an incredible amount of value and agility without an IT pro needing to manage any of the required infrastructure.

Cloud-based device management is a particularly interesting development because it allows IT pros to manage this rapidly growing population of smart, cloud-connected devices, and manage them “where they live.” Today’s smart phones and tablets were built to consume cloud services, and this is one of the reasons why I believe that a cloud-based management solution for them is so natural. As you contemplate your organization’s move to the cloud, I suggest that managing all of your smart devices from the cloud should be one of your top priorities.

I want to be clear, however, about the nature of this kind of management: We believe that there should be one consistent management experience across PCs and devices.

Achieving this single management experience was a major focus of these 2012 R2 releases, and I am incredibly proud to say we have successfully engineered products which do exactly that. The R2 releases deliver this consistent end-user experience through something we call the “Company Portal.” The Company Portal is already deployed here at Microsoft, and it is what we are currently using to upgrade our entire workforce to Windows 8.1. I’ve personally used it to upgrade my desktop, laptop, and Surface – and the process could not have been easier.

In this week’s post, Paul Mayfield, the Partner Program Manager for System Center Configuration Manager/Windows Intune, and his team return to discuss in deep technical detail some of the specific scenarios our PCIT [“People Centric IT”] team has enabled (cloud-based management, Company Portal, etc.).

Cloud computing is bringing new opportunities and new challenges to IT. Learn how Microsoft can help transform your datacenter to take advantage of the vast possibilities of the cloud while leveraging your existing resources. Learn more: http://www.microsoft.com/en-us/server-cloud/cloud-os/modern-data-center.aspx
  • Part 4, July 24, 2013: Enabling Open Source Software


There are a lot of great surprises in these new R2 releases – things that are going to make a big impact in a majority of IT departments around the world. Over the next four weeks, the 2012 R2 series will cover the 2nd pillar of this release: Transform the Datacenter. In these four posts (starting today) we’ll cover many of the investments we have made that better enable IT pros to transform their datacenter via a move to a cloud-computing model.

This discussion will outline the ambitious scale of the functionality and capability within the 2012 R2 products. As with any conversation about the cloud, however, there are key elements to consider as you read. Particularly, I believe it’s important in all these discussions – whether online or in person – to remember that cloud computing is a computing model, not a location. All too often when someone hears the term “cloud computing” they automatically think of a public cloud environment. Another important point to consider is that cloud computing is much more than just virtualization – it is something that involves change: Change in the tools you use (automation and management), change in processes, and a change in how your entire organization uses and consumes its IT infrastructure.

Microsoft is unique in this perspective, and it is leading the industry with its investments to deliver consistency across private, hosted and public clouds. Over the course of these next four posts, we will cover our innovations in the infrastructure (storage, network, compute), in both on-premises and hybrid scenarios, support for open source, the cloud service provider and tenant experience, and much, much more.

As I noted above, it simply makes logical sense that running the Microsoft workloads in the Microsoft Clouds will deliver the best overall solution. But what about Linux? And how well does Microsoft virtualize and manage non-Windows platforms, in particular Linux?  Today we’ll address these exact questions.

Our vision regarding other operating platforms is simple: Microsoft is committed to being your cloud partner. This means end-to-end support that is versatile, flexible, and interoperable for any industry, in any environment, with any guest OS. This vision ensures we remain realistic – we know that users are going to build applications on open source operating systems, so we have built a powerful set of tools for hosting and managing them.

A great deal of the responsibility to deliver the capabilities that enable the Microsoft Clouds (private, hosted, Azure) to effectively host Linux and the associated open source applications falls heavily on the shoulders of the Windows Server and System Center team. In today’s post Erin Chapple, a Partner Group Program Manager in the Windows Server & System Center team, will detail how building the R2 wave with an open source environment in mind has led to a suite of products that are more adaptable and more powerful than ever.

As always in this series, check out the “Next Steps” at the bottom of this post for links to a variety of engineering content with hyper-technical overviews of the concepts examined in this post.

Back during the planning phase of 2012 R2, we carefully considered where to focus our investments for this release wave, and we chose to concentrate our efforts on enabling Service Providers to build out a highly-available, highly-scalable IaaS infrastructure on cost-effective hardware. With the innovations we have driven in storage, networking, and compute, we believe Service Providers can now build-out an IaaS platform that enables them to deliver VMs at 50% of the cost of competitors. I repeat: 50%. The bulk of the savings comes from our storage innovations and the low costs of our licenses.

At the core of our investments in 2012 R2 is the belief that customers are going to be using multiple clouds, and they want those clouds to be consistent.

Consistency across clouds is key to enabling flexibility and the frictionless movement of applications across these clouds; if this consistency exists, applications can be developed once and then hosted in any cloud. That means consistency for the developer. If clouds are consistent, with the same management and operations tools easily used to operate these applications, that means consistency for the IT Pro.

It really all comes down to the friction-free movement of applications and VMs across clouds. Microsoft is unique in this regard; we are the only cloud vendor investing and innovating in public, private and hosted clouds – with a promise of consistency (and no lock-in!) across all of them.

We are taking what we learn from our innovations in Windows Azure and delivering them through Windows Server, System Center and the Windows Azure Pack for you to use in your data center. This enables us to do rapid innovation in the public cloud, battle harden the innovations, and then deliver them to you to deploy. This is one of the ways in which we have been able to quicken our cadence and deliver the kind of value you see in these R2 releases. You’ll be able to see a number of areas where we are driving consistency across clouds in today’s post.

And speaking of today’s post – this IaaS topic will be published in two parts, with the second half appearing tomorrow morning.

In this first half of our two-part overview of the 2012 R2’s IaaS capabilities, Erin Chapple, a Partner Group Program Manager in the Windows Server & System Center team, examines the amazing infrastructure innovations delivered by Windows Server 2012 R2, System Center 2012 R2, and the new features in the Windows Azure Pack.

As always in this series, check out the “Next Steps” at the bottom of this post for links to a wide range of engineering content with deep, technical overviews of the concepts examined in this post. Also, if you haven’t started your own evaluation of the 2012 R2 previews, visit the TechNet Evaluation Center and take a test drive today!

I recently had an opportunity to speak with a number of leaders from the former VMware User Group (VMUG), and it was an incredibly educational experience. I say “former” because many of the VMUG user group chapters are updating their focus/charter and are renaming themselves the Virtual Technology User Group (VTUG). This change is a direct result of how they see market share and industry momentum moving to solutions like the consistent clouds developed by Microsoft.

In a recent follow up conversation with these leaders, I asked them to describe some common topics they hear discussed in their meetings. One of the leaders commented that the community is saying something really specific: “If you want to have job security and a high paying job for the next 10 years, you better be on your way to becoming an expert in the Microsoft clouds. That is where this industry is going.” 

When I look at what is delivered in these R2 releases, the innovation is just staggering. This industry-leading innovation – the types of technical advances that VTUG groups are confidently betting on – is really exciting.

With this innovation in mind, in today’s post I want to discuss some of the work we are doing around the user experience for the teams creating the services that are offered, and I want to examine the experience that can be offered to the consumer of the cloud (i.e. the tenants). While we were developing R2, we spent a lot of time ensuring that we truly understood exactly who would be using our solutions. We exhaustively researched their needs, their motivations, and how various IT users and IT teams relate to each other. This process was incredibly important because these individuals and teams all have very different needs – and we were committed to supporting all of them.

The R2 wave of products has been built with this understanding. The IT teams actually building and operating a cloud (or clouds) have very different needs than the individuals consuming the cloud (tenants). The experience for the infrastructure teams focuses on just that – the infrastructure; the experience for tenants focuses on the applications/services and their seamless operation and maintenance.

In yesterday’s post we focused heavily on the innovations in these R2 releases in the infrastructure – storage, network, and compute – and, in this post, Erin Chapple, a Partner Group Program Manager in the Windows Server & System Center team, will provide an in-depth look at Service Provider and Tenant experience and innovations with Windows Server 2012 R2, System Center 2012 R2, and the new features in Windows Azure Pack.

As always in this series, check out the “Next Steps” at the bottom of this post for links to a variety of engineering content with hyper-technical overviews of the concepts examined in this post.  Also, if you haven’t started your own evaluation of the 2012 R2 previews, visit the TechNet Evaluation Center and take a test drive today!

Today, people want to work anywhere, on any device and have access to all the resources they need to do their job. How do you enable your users to be productive on the device of their choice, yet retain control of information and meet compliance requirements? In this video we take a look at how the Microsoft access and information protection solutions allow you to enable your users to be productive, provide them with a single identity to access all resources, and protect your data. Learn more: http://www.microsoft.com/aip

In the 13+ years since the original Active Directory product launched with Windows 2000, it has grown to become the default identity management and access-control solution for over 95% of organizations around the world. But, as organizations move to the cloud, their identity and access control also need to move to the cloud. As companies rely more and more on SaaS-based applications, as the range of cloud-connected devices being used to access corporate assets continues to grow, and as more hosted and public cloud capacity is used, companies must expand their identity solutions to the cloud.

Simply put, hybrid identity management is foundational for enterprise computing going forward.

With this in mind, we set out to build a solution in advance of these requirements to put our customers and partners at a competitive advantage.

To build this solution, we started with our “Cloud first” design principle. To meet the needs of enterprises working in the cloud, we built a solution that took the power and proven capabilities of Active Directory and combined it with the flexibility and scalability of Windows Azure. The outcome is the predictably named Windows Azure Active Directory.

By cloud optimizing Active Directory, enterprises can stretch their identity and access management to the cloud and better manage, govern, and ensure compliance throughout every corner of their organization, as well as across all their utilized resources.

This can take the form of seemingly simple processes (albeit very complex behind the scenes) like single sign-on, which is a massive time and energy saver for a workforce that uses multiple devices and multiple applications per person. It can also enable the scenario where a user’s customized and personalized experience follows them from device to device, regardless of when and where they’re working. Activities like these are simply impossible without a scalable, cloud-based identity management system.
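
The single sign-on scenario described above rests on a simple idea: a central identity provider signs a user’s claims once, and every relying application validates the same token instead of keeping its own password database. The sketch below is a deliberately simplified, self-contained illustration of that pattern (the key, claim names, and helper functions are all hypothetical; Windows Azure AD actually uses federation protocols with certificate-based signatures, not a shared HMAC key):

```python
import hashlib
import hmac
import json

SECRET = b"idp-signing-key"  # hypothetical shared key; real IdPs use certificates

def issue_token(user):
    """The identity provider signs a set of claims once, at sign-in."""
    claims = json.dumps({"user": user, "issuer": "contoso-idp"})
    signature = hmac.new(SECRET, claims.encode(), hashlib.sha256).hexdigest()
    return claims + "." + signature

def validate_token(token):
    """Any relying application can verify the same token independently."""
    claims, _, signature = token.rpartition(".")
    expected = hmac.new(SECRET, claims.encode(), hashlib.sha256).hexdigest()
    return json.loads(claims) if hmac.compare_digest(signature, expected) else None

# One sign-in, many applications: the user authenticates once...
token = issue_token("alice")
# ...and every app that trusts the identity provider accepts the same token.
assert validate_token(token)["user"] == "alice"
assert validate_token(token + "tampered") is None  # forgeries are rejected
```

The point of the sketch is the trust relationship: no application ever sees a password, only a verifiable statement from the identity provider.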

If anyone doubts how serious and enterprise-ready Windows Azure AD already is, consider these facts:

  • Since we released Windows Azure AD, we’ve had over 265 billion authentications.
  • Every two minutes Windows Azure AD services over 1,000,000 authentication requests for users and devices around the world (that’s roughly 8,300 requests per second).
  • There are currently more than 420,000 unique domains uploaded and now represented inside of Azure Active Directory.

Windows Azure AD is battle tested, battle hardened, and many other verbs preceded by the word “battle.”

But, perhaps even more importantly, Windows Azure AD is something Microsoft has bet its own business on: Both Office 365 (the fastest growing product in Microsoft history) and Windows Intune authenticate every user and device with Windows Azure AD.

In this post, Vijay Tewari (Principal Program Manager for Windows Server & System Center), Alex Simons (Director of Program Management for Active Directory), Sam Devasahayam (Principal Program Management Lead for Windows Azure AD), and Mark Wahl (Principal Program Manager for Active Directory) take a look at one of R2’s most innovative features, Hybrid Identity Management.

As always in this series, check out the “Next Steps” at the bottom of this post for links to a wide range of engineering content with deep, technical overviews of the concepts examined in this post.

One of the key elements in delivering hybrid cloud is networking. Learn how software-defined networking helps make hybrid real. Learn more: http://www.microsoft.com/en-us/server-cloud/solutions/software-defined-networking.aspx
[so-called Application Centric Infrastructure (ACI)] Microsoft and Cisco will deliver unique customer value through new integrated networking solutions that will combine software-enabled flexibility with hardware-enabled scale/performance. These solutions will keep apps and workloads front and center and have the network adapt to their needs. Learn more by visiting: http://www.cisco.com/web/learning/le21/onlineevts/acim/index.html

One of the foundational requirements we called out in the 2012 R2 vision document was our promise to help you transform the datacenter. A core part of delivering on that promise is enabling Hybrid IT.

By focusing on Hybrid IT we were specifically calling out the fact that almost every customer we interacted with during our planning process believed that in the future they would be using capacity from multiple clouds. That may take the form of multiple private clouds an organization had stood up, or utilizing cloud capacity from a service provider [i.e. managed cloud] or a public cloud like Azure, or using SaaS solutions running from the public cloud.

We assumed Hybrid IT would be the norm going forward, so we challenged ourselves to really understand and simplify the challenges associated with configuring and operating in a multi-cloud environment. Certainly one of the biggest challenges of operating in a hybrid cloud environment involves the network – everything from setting up the secure connection between clouds, to ensuring you can use your own IP addresses (BYOIP) in the hosted and public clouds you choose to use.
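
The BYOIP idea mentioned above can be pictured as a per-tenant lookup table: each tenant’s “customer addresses” map to “provider addresses” on the datacenter network, so two tenants can bring identical IP ranges without colliding. The toy sketch below (the tenant names and addresses are invented, and this is a conceptual model rather than the actual Hyper-V Network Virtualization implementation) shows why keying the lookup by tenant makes overlapping address spaces safe:

```python
# Conceptual lookup table: (tenant, customer address) -> provider address.
# Network virtualization keeps a similar mapping per virtual subnet.
mapping = {}

def register_vm(tenant, customer_ip, provider_ip):
    """Record where a tenant's VM actually lives on the provider network."""
    mapping[(tenant, customer_ip)] = provider_ip

def route(tenant, customer_ip):
    """Resolve a tenant-visible address to the physical (provider) address."""
    return mapping[(tenant, customer_ip)]

# Two tenants bring the *same* 10.0.0.0/24 addresses (BYOIP) without conflict,
# because the tenant is part of the lookup key.
register_vm("fabrikam", "10.0.0.4", "192.168.5.20")
register_vm("contoso",  "10.0.0.4", "192.168.7.31")

assert route("fabrikam", "10.0.0.4") != route("contoso", "10.0.0.4")
```

This separation is also what allows a VM to keep its customer address when it moves between clouds: only the provider-side entry changes.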

The setup, configuration and operation of a hybrid IT environment is, by its very nature, incredibly complex – and we have poured hundreds of thousands of hours into the development of R2 to solve this industry-wide problem.

With the R2 wave of products – specifically Windows Server 2012 R2 and System Center 2012 R2 – enterprises can now benefit from the highly-available and secure connection that enables the friction-free movement of VMs across those clouds. If you want or need to move a VM or application between clouds, the transition is seamless and the data is secure while it moves.

The functionality and scalability of our support for hybrid IT deployments has not been easy to build, and each feature has been methodically tested and refined in our own datacenters. For example, consider that within Azure there are over 50,000 network changes every day, and every single one of them is fully automated. If even 1/10 of 1% of those changes had to be done manually, it would require a small army of people working constantly to implement and then troubleshoot the human errors. With R2, the success of processes like these, and our learnings from Azure, come in the box.

Whether you’re a service provider or working in the IT department of an enterprise (which, in a sense, is like being a service provider to your company’s workforce), these hybrid networking features are going to remove a wide range of manual tasks, and allow you to focus on scaling, expanding and improving your infrastructure.

In this post, Vijay Tewari (Principal Program Manager for Windows Server & System Center) and Bala Rajagopalan (Principal Program Manager for Windows Server & System Center) provide a detailed overview of 2012 R2’s hybrid networking features, as well as solutions for common scenarios like enabling customers to create extended networks spanning clouds, and enabling access to virtualized networks.

Don’t forget to take a look at the “Next Steps” section at the bottom of this post, and check back tomorrow for the second half of this week’s hybrid IT content which will examine the topic of Disaster Recovery.

As business becomes more dependent on technology, business continuity becomes increasingly vital for IT. Learn how Microsoft is making it easier to build out business continuity plans. Learn more: http://www.microsoft.com/en-us/server-cloud/solutions/business-continuity.aspx

With Windows Server 2012 R2, with Hyper-V Replica, and with System Center 2012 R2 we have delivered a DR solution for the masses.

This DR solution is a perfect example of how the cloud changes everything.

Because Windows Azure offers a global, highly available cloud platform – with an application architecture that takes full advantage of those HA capabilities – you can build an app on Azure that will be available anytime and anywhere. This kind of functionality is why we made the decision to build the control plane, or administrative console, for our DR solution on Azure. The control plane and all the metadata required to perform a test, planned, or unplanned recovery will always be available. This means you don’t have to make the huge investments that have been required in the past to build a highly-available platform to host your DR solution – Azure automatically provides this.

(Let me make a plug here that you should be looking to Azure for all the new applications you are going to build – and we’ll start covering this specific topic in next week’s R2 post.)

With this R2 wave of products, organizations of all sizes and maturity, anywhere in the world, can now benefit from a simple and cost-effective DR solution.

There’s another thing that I am really proud of here: Like most organizations, we regularly benchmark ourselves against our competition. We use a variety of metrics, like: ‘Are we easier to deploy and operate?’ and ‘Are we delivering more value and doing it at a lower price?’ Measurements like these have provided a really clear answer: Our competitors are not even in the same ballpark when it comes to DR.

During the development of R2, I watched a side-by-side comparison of what was required to set up DR for 500 VMs with our solution compared to a competitive offering, and the contrast was staggering. The difference in simplicity and the total amount of time required to set everything up was dramatic. In a DR scenario, one interesting unit of measurement is total mouse clicks. It’s easy to get carried away with counting clicks (hey, we’re engineers after all!), but, in the side-by-side comparison, the difference was tens of mouse clicks compared to hundreds. It is literally a difference of minutes vs. days.

You can read some additional perspectives I’ve shared on DR here.

In yesterday’s post we looked at the new hybrid networking functionality in R2 (if you haven’t seen it yet, it is a must-read), and in this post Vijay Tewari (Principal Program Manager for Windows Server & System Center) goes deep into the architecture of this DR solution, as well this solution’s deployment and operating principles.

As always in this 2012 R2 series, check out the “Next Steps” at the bottom of this post for links to a variety of engineering content with hyper-technical overviews of the concepts examined in this post.

A revolution is taking place, impacting the speed at which Business Apps need to be built, and the jaw dropping capabilities they need to deliver. Ignoring these trends isn’t an option and yet you have no time to hit the reset button. Learn how to deliver revolutionary benefits in an evolutionary way. Learn More: http://www.microsoft.com/en-us/server-cloud/cloud-os/modern-business-apps.aspx
Hear from Accenture and Hostway how Microsoft Windows Azure enables the development and deployment of modern business applications faster and more cost effectively through cloud computing. Find solutions and services from partners that span the entire stack of Microsoft Cloud OS products and technologies: http://www.microsoft.com/en-us/server-cloud/audience/partner.aspx#fbid=631zRfiT0WJ


The future of the IT Pro role will require you to know how applications are built for the cloud, as well as the cloud infrastructures where these apps operate. This knowledge is something every IT Pro needs in order to be a voice in the meetings that will define an organization’s cloud strategy. IT Pros are also going to need to know how their team fits in this cloud-centric model, as well as how to proactively drive these discussions.

These R2 posts will get you what you need, and this “Enable Modern Business Apps” pillar will be particularly helpful.

Throughout the posts in this series we have spoken about the importance of consistency across private, hosted and public clouds, and we’ve examined how Microsoft is unique in its vision and execution of delivering consistent clouds. The Windows Azure Pack is a wonderful example of Microsoft innovating in the public cloud and then bringing the benefits of that innovation to your datacenter.

The Windows Azure Pack is – literally speaking – a set of capabilities that we have battle-hardened and proven in our public cloud. These capabilities are now made available for you to enhance your cloud and ensure that “consistency across clouds” that we believe is so important.

A major benefit of the Windows Azure Pack is the ability to build an application once and then deploy and operate it in any Microsoft Cloud – private, hosted or public.

This kind of flexibility means that you can build an application, initially deploy it in your private cloud, and then, if you want to move that app to a Service Provider or Azure in the future, you can do it without having to modify the application. Making tasks like this simple is a major part of our promise around cloud consistency, and it is something only Microsoft (not VMware, not AWS) can deliver.

This ability to migrate an app between these environments means that your apps and your data are never locked in to a single cloud. This allows you to easily adjust as your organization’s needs, regulatory requirements, or any operational conditions change.

A big part of this consistency and connection is the Windows Azure Service Bus which will be a major focus of today’s post.

The Windows Azure Service Bus has been a big part of Windows Azure since 2010. I don’t want to overstate this, but Service Bus has been battle-hardened in Azure for more than 3 years, and now we are delivering it to you to run in your datacenters. To give you a quick idea of how critical Service Bus is for Microsoft, consider this: Service Bus is used in all the billing for Windows Azure, and it is responsible for gathering and posting all the scoring and achievement data to the Halo 4 leaderboards (now that is really, really important – just ask my sons!). It goes without saying that the people in charge of Azure billing and the hardcore gamers are not going to tolerate any latency or downtime getting to their data.
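
What makes Service Bus so valuable in scenarios like billing and leaderboards is brokered messaging: producers hand messages to an intermediary and consumers drain them later, so neither side depends on the other being online at the same moment. Here is a minimal in-process sketch of that pattern (this is an illustration of the concept only, not the Windows Azure Service Bus API):

```python
from collections import deque

class BrokeredQueue:
    """In-process stand-in for a brokered queue: producers and consumers
    never talk directly, so either side can be offline without losing work."""
    def __init__(self):
        self._messages = deque()

    def send(self, body):
        self._messages.append(body)  # the broker accepts and holds the message

    def receive(self):
        """Return the oldest pending message, or None if the queue is empty."""
        return self._messages.popleft() if self._messages else None

# A billing pipeline enqueues usage records as they arrive...
queue = BrokeredQueue()
queue.send({"subscription": "sub-42", "cpu_hours": 17})

# ...and a worker drains them later, at its own pace.
record = queue.receive()
assert record["subscription"] == "sub-42"
assert queue.receive() is None  # nothing left once the backlog is drained
```

The real service adds what this sketch omits: durable storage, delivery guarantees, and topics/subscriptions for fan-out, which is precisely why it can sit under latency-sensitive workloads like Azure billing.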

With today’s topic, take the time to really appreciate the app development and app platform functionality in this R2 wave. I think you’ll be really excited about how you can plug into this process and lead your organization.

This post, written by Bradley Bartz (Principal Program Manager from Windows Azure) and Ziv Rafalovich (Senior Program Manager in Windows Azure), will get deep into these new features and the amazing scenarios that the Windows Azure Pack and Windows Azure Service Bus enable. As always in this 2012 R2 series, check out the “Next Steps” at the bottom of this post for links to additional information about the topics covered in this post.

A major promise underlying all of the 2012 R2 products is really simple: Consistency.

Consistency in the user experience, consistency for IT professionals, consistency for developers, and consistency across clouds. A major part of delivering this consistency is the Windows Azure Pack (WAP). Last week we discussed how Service Bus enables connections across clouds, and in this post we’ll examine more of the PaaS capabilities built and tested in Azure data centers and now offered for Windows Server. With the WAP, Windows Server 2012 R2, and System Center, IT pros can make their data center even more scalable, flexible, and secure.

Throughout the development of this R2 wave, we looked closely at what organizations needed and wanted from the cloud. A major piece of feedback was the desire to build an app once and then have that app live in any data center or cloud. For the first time, this kind of functionality is now available. Whether your app is in a private, public, or hosted cloud, the developers and IT Professionals in your organization will have consistency across clouds.

One of the elements that I’m sure will be especially popular is the flexibility and portability of this PaaS. I’ve had countless customers comment that they love the idea of PaaS, but don’t want to be locked in or restricted to only running it in specific data centers. Now, our customers and partners can build a PaaS app and run it anywhere. This is huge! Over the last two years the market has really begun to grasp what PaaS has to offer, and now the benefits (auto-scale, agility, flexibility, etc.) are easily accessible and consistent across the private, hosted and public clouds Microsoft delivers.
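
The auto-scale benefit mentioned above boils down to a feedback rule evaluated against load metrics: add capacity under pressure, release it when idle, and stay within configured bounds. A toy version of such a rule (the function name and the 75%/25% thresholds are illustrative assumptions, not the platform’s actual defaults):

```python
def autoscale(current_instances, avg_cpu, min_instances=2, max_instances=10):
    """Toy scale-out/scale-in decision of the kind a PaaS auto-scaler applies
    on each evaluation interval. Thresholds here are purely illustrative."""
    if avg_cpu > 75 and current_instances < max_instances:
        return current_instances + 1   # scale out under load
    if avg_cpu < 25 and current_instances > min_instances:
        return current_instances - 1   # scale in when idle
    return current_instances           # steady state: change nothing

assert autoscale(3, avg_cpu=90) == 4   # busy: add an instance
assert autoscale(3, avg_cpu=10) == 2   # idle: remove one
assert autoscale(2, avg_cpu=10) == 2   # but never drop below the floor
```

The value of getting this from the platform, rather than building it yourself, is that the metric collection, evaluation cadence, and instance provisioning are all handled for you.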

This post will spend a lot of time talking about Web Sites for Windows Azure and how this high density web site hosting delivers a level of power, functionality, and consistency that is genuinely next-gen.

Microsoft is literally the only company offering these kinds of capabilities across clouds – and I am proud to say that we are the only ones with a sustained track record of enterprise-grade execution.

With the features added by the WAP [Windows Azure Pack], organizations can now take advantage of PaaS without being locked into a cloud. This is, at its core, the embodiment of Microsoft’s commitment to make consistency across clouds a workable, viable reality.

This is genuinely PaaS for the modern web.

Today’s post was written by Bradley Bartz, a Principal Program Manager from Windows Azure. For more information about the technology discussed here, or to see demos of these features in action, check out the “Next Steps” at the bottom of this post.

More information: in the Success with Hybrid Cloud series blog posts [Brad Anderson, Nov 12, Nov 14, Nov 20, Dec 2, Dec 5, and 21 upcoming blogs posts] which “will examine the building/deployment/operation of Hybrid Clouds, how they are used in various industries, how they manage and deliver different workloads, and the technical details of their operation.”


4.2 Unlock Insights from any Data – SQL Server 2014:

With growing demand for data, you need database scale with minimal cost increases. Learn how SQL Server 2014 provides speed and scalability with in-memory technologies to support your key data workloads, including OLTP, data warehousing, and BI. Learn more: http://www.microsoft.com/sqlserver2014

Microsoft SQL Server 2014 CTP2 was announced by Quentin Clark during the Microsoft SQL PASS 2013 keynote.  This second public CTP is essentially feature complete and enables you to try and test all of the capabilities of the full SQL Server 2014 release. Below you will find an overview of SQL Server 2014 as well as key new capabilities added in CTP2:

SQL Server 2014 helps organizations by delivering:

  • Mission Critical Performance across all database workloads with In-Memory for online transaction processing (OLTP), data warehousing and business intelligence built-in as well as greater scale and availability
  • Platform for Hybrid Cloud enabling organizations to more easily build, deploy and manage database solutions that span on-premises and cloud
  • Faster Insights from Any Data with a complete BI solution using familiar tools like Excel

Thank you to those who have already downloaded SQL Server 2014 CTP1 and started seeing firsthand the performance gains that in-memory capabilities deliver, along with better high availability from the AlwaysOn enhancements. CTP2 introduces additional mission-critical capabilities with further enhancements to the in-memory technologies, along with new hybrid cloud capabilities.

What’s new in SQL Server 2014 CTP2?

New Mission Critical Capabilities and Enhancements

  • Enhanced In-Memory OLTP, including new tools which will help you identify and migrate the tables and stored procedures that will benefit most from In-Memory OLTP, as well as greater T-SQL compatibility and new indexes that enable more customers to take advantage of our solution.
  • High Availability for In-Memory OLTP Databases: AlwaysOn Availability Groups are supported for In-Memory OLTP, giving you in-memory performance gains with high availability.
  • IO Resource Governance, enabling customers to more effectively manage IO across multiple databases and/or classes of databases to provide more predictable IO for your most critical workloads. (Customers today can already manage CPU and memory.)
  • Improved resiliency with Windows Server 2012 R2 by taking advantage of Cluster Shared Volumes (CSVs). CSVs provide improved fault detection and recovery in the case of downtime.
  • Delayed Durability, providing the option for increased transaction throughput and lower latency for OLTP applications where performance and latency needs outweigh the need for 100% durability.
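The delayed durability option above can be sketched in T-SQL. This is an illustrative fragment, not the product documentation: the database name, table and key value are hypothetical placeholders.

```sql
-- Permit delayed durable transactions at the database level
-- (SalesDB and dbo.Orders are hypothetical names).
ALTER DATABASE SalesDB SET DELAYED_DURABILITY = ALLOWED;

-- Opt in per transaction: COMMIT returns before the log records are
-- hardened to disk, trading a small durability window for lower latency.
BEGIN TRANSACTION;
    UPDATE dbo.Orders SET Status = 'Shipped' WHERE OrderId = 42;
COMMIT TRANSACTION WITH (DELAYED_DURABILITY = ON);
```

Setting the database option to FORCED instead of ALLOWED would make all transactions delayed durable without per-commit opt-in.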

New Hybrid Cloud Capabilities and Enhancements

By enabling the above in-memory performance capabilities for your SQL Server instances running in Windows Azure Virtual Machines, you will see significant transaction and query performance gains.  In addition there are new capabilities listed below that will allow you to unlock new hybrid scenarios for SQL Server.

  • Managed Backup to Windows Azure, enabling you to backup on-premises SQL Server databases to Windows Azure storage directly in SSMS.  Managed Backup also optimizes backup policy based on usage, an advantage over the manual Backup to Windows Azure.
  • Encrypted Backup, offering customers the ability to encrypt both on-premises backups and backups to Windows Azure for enhanced security.
  • Enhanced disaster recovery to Windows Azure with simplified UI, enabling customers to more easily add Windows Azure Virtual Machines as AlwaysOn secondaries in SQL Server Management Studio for a more cost-effective data protection and disaster recovery solution.  Customers may also use the secondaries in Windows Azure to scale out and offload reporting and backups.
  • SQL Server Data Files in Windows Azure – New capability to store large databases (>16TB) in Windows Azure and the ability to stream the database as a backend for SQL Server applications running on-premises or in the cloud.
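As a sketch of the backup-to-Azure capability above, the following T-SQL backs up a database directly to Windows Azure blob storage. The storage account, container URL, credential name and database name are all placeholders:

```sql
-- Store the storage account name and access key in a server credential
-- (the secret below is a placeholder for the real access key).
CREATE CREDENTIAL AzureBackupCred
    WITH IDENTITY = 'mystorageaccount',
    SECRET = '<storage account access key>';

-- Back up directly to a blob container in Windows Azure storage.
BACKUP DATABASE SalesDB
    TO URL = 'https://mystorageaccount.blob.core.windows.net/backups/SalesDB.bak'
    WITH CREDENTIAL = 'AzureBackupCred', COMPRESSION;
```

The encrypted backup feature adds a WITH ENCRYPTION clause that references a server certificate or asymmetric key, so the backup is protected at rest whether it lands on-premises or in Azure.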

Learn more and download SQL Server 2014 CTP2

SQL Server 2014 helps address key business challenges: ever-growing data volumes, the need to transact and process data faster, the scalability and efficiency of cloud computing, and an ever-growing hunger for business insights.   With SQL Server 2014 you can now unlock real-time insights with mission-critical and cloud performance and take advantage of one of the most comprehensive BI solutions in the marketplace today.

Many customers are already realizing the significant benefits of the new in-memory technologies in SQL Server 2014, including Edgenet, bwin, SBI Liquidity, TPP and Ferranti.  Stay tuned for an upcoming blog highlighting the impact in-memory had on each of their businesses.

Learn more about SQL Server 2014 and download the datasheet and whitepapers here.  Also if you would like to learn more about SQL Server In-Memory best practices, check out this SQL Server 2014 in-memory blog series compilation. There is also a SQL Server 2014 hybrid cloud scenarios blog compilation for learning best practices.

Also, if you haven’t already, download SQL Server 2014 CTP2 and see how much faster your SQL Server applications run!  The CTP2 image is also available on Windows Azure, so you can easily develop and test the new features of SQL Server 2014.

To ensure that its customers received timely, accurate product data, Edgenet decided to enhance its online selling guide with In-Memory OLTP in Microsoft SQL Server 2014.

At the SQL PASS conference last November, we announced the In-memory OLTP (project code-named Hekaton) database technology built into the next release of SQL Server. Microsoft Technical Fellow Dave Campbell’s blog provides a broad overview of the motivation and design principles behind the project.

In a nutshell – In-memory OLTP is a new database engine optimized for memory resident data and OLTP workloads. In-memory OLTP is fully integrated into SQL Server – not a separate system. To take advantage of In-memory OLTP, a user defines a heavily accessed table as memory optimized. In-memory OLTP tables are fully transactional, durable and accessed using T-SQL in the same way as regular SQL Server tables. A query can reference both In-memory OLTP tables and regular tables, and a transaction can update data in both types of tables. Expensive T-SQL stored procedures that reference only In-memory OLTP tables can be natively compiled into machine code for further performance improvements. The engine is designed for extremely high session concurrency for OLTP type of transactions driven from a highly scaled-out mid-tier. To achieve this it uses latch-free data structures and a new optimistic, multi-version concurrency control technique. The end result is a selective and incremental migration into In-memory OLTP to provide predictable sub-millisecond low latency and high throughput with linear scaling for DB transactions. The actual performance gain depends on many factors but we have typically seen 5X-20X in customer workloads.
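A minimal sketch of defining a memory-optimized table, assuming a database that already has a MEMORY_OPTIMIZED_DATA filegroup; the table, column names and bucket count are illustrative, not taken from the product samples:

```sql
-- Fully durable memory-optimized table with a hash index on the key;
-- BUCKET_COUNT should be sized to the expected number of distinct keys.
CREATE TABLE dbo.ShoppingCart (
    CartId      INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    UserId      INT NOT NULL,
    CreatedDate DATETIME2 NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
```

DURABILITY = SCHEMA_AND_DATA keeps the rows fully transactional and durable; SCHEMA_ONLY trades row durability for even lower latency in scenarios such as session state.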

In the SQL Server product group, we started many years ago to reinvent the architecture of the RDBMS engine to leverage modern hardware trends. This resulted in PowerPivot and the In-memory ColumnStore Index in SQL2012, and In-memory OLTP is the new addition for OLTP workloads we are introducing for SQL2014, together with the updatable clustered ColumnStore index and the (SSD) bufferpool extension. It has been a long and complex process to build this next-generation relational engine, especially with our explicit decision to seamlessly integrate it into the existing SQL Server instead of releasing a separate product – in the belief that this provides the best customer value and onboarding experience.

Now that we are releasing SQL2014 CTP1 as a public preview, it’s a great opportunity for you to get hands-on experience with this new technology, and we are eager to get your feedback and improve the product. In addition to BOL (Books Online) content, we will roll out a series of technical blogs on In-memory OLTP to help you understand and leverage this preview release effectively.

In the upcoming series of blogs, you will see the following in-depth topics on In-memory OLTP:

  • Getting started – to walk through a simple sample database application using In-memory OLTP so that you can start experimenting with the public CTP release.
  • Architecture – to understand at a high level how In-memory OLTP is designed and built into SQL Server, and how the different concepts like memory optimized tables, native compilation of SPs and query inter-op fit together under the hood.
  • Customer experiences so far – we have had many TAP customer engagements over the past two years and their feedback helped to shape the product; we would like to share with you some of the learnings and customer experiences, such as typical application patterns and performance results.
  • Hardware guidance – it is apparent that memory size is a factor, but since most applications require full durability, In-memory OLTP still requires log and checkpointing IO, and with the much higher transactional throughput it can actually put even higher demand on the IO subsystem as a result. We will also cover how Windows Azure VMs can be used with In-memory OLTP.
  • Application migration – how to get started with migrating to or building a new application with In-memory OLTP. You will see multiple blog posts covering the AMR tool, table and SP migrations, and pointers on how to work around some unsupported data types and T-SQL surface area, as well as the transactional model used. We will highlight the unique approach to SQL Server integration, which supports a partial database migration.
  • Managing In-memory OLTP – this will cover the DBA considerations, and you will see multiple posts ranging from the tooling support (SSMS) to more advanced topics such as how memory and storage are managed.
  • Limitations and what’s coming – explaining what limitations exist in CTP1 and the new capabilities expected in CTP2 and RTM, so that you can plan your roadmap with clarity.

In addition – we will also have blog coverage on what’s new with In-memory ColumnStore and introduction to bufferpool extension. 

SQL2014 CTP1 is available for download here or you can read the complete blog series here:

bwin is the largest regulated online gaming company in the world, and their success depends on positive customer experiences. They had recently upgraded some of their systems to SQL Server 2012, gaining significant in-memory benefit using xVelocity Column Store. Here, bwin takes their systems one step further by using the technology preview of SQL Server 2014 In-memory OLTP (formerly known as Project “Hekaton”). Prior to using In-memory OLTP, their online gaming systems were handling about 15,000 requests per second. With In-memory OLTP, the fastest tests so far have scaled to 250,000 transactions per second.

Recently I posted a video about how the SQL Server Community was looking into emerging trends in BI and Database technologies – one of the key technologies mentioned in that video was in-memory.

Many Microsoft customers have been using in-memory technologies as part of SQL Server since 2010 including xVelocity Analytics, xVelocity Column Store and Power Pivot, something we recently covered in a blog post following the ‘vaporware’ outburst from Oracle SVP of Communications, Bob Evans. Looking forward, Ted Kummert recently announced project codenamed “Hekaton,” available in the next major release of SQL Server. “Hekaton” will provide a full in-memory transactional engine, and is currently in private technology preview with a small set of customers. This technology will provide breakthrough performance gains of up to 50 times.

For those who are keen to get a first view of customers using the technology, below is the video of online gaming company bwin using “Hekaton”.

Bwin is the largest regulated online gaming company in the world, and their success depends on positive customer experiences. They had recently upgraded some of their systems to SQL Server 2012 – a story you can read here. Bwin had already gained significant in-memory benefit using xVelocity Column Store, for example – a large report that used to take 17 minutes to render now takes only three seconds.

Given the benefits they had seen with in-memory technologies, they were keen to trial the technology preview of “Hekaton”. Prior to using “Hekaton”, their online gaming systems were handling about 15,000 requests per second, a huge number for most companies. However, bwin needed to be agile and stay ahead of the competition, so they wanted access to the latest technology.

Using “Hekaton”, bwin were hoping they could at least double the number of transactions. They were ‘pretty amazed’ to see that the fastest tests so far have scaled to 250,000 transactions per second.

So how fast is “Hekaton” – just ask Rick Kutschera, the Database Engineering Manager at bwin – in his words it’s ‘Wicked Fast’! However, this is not the only point that Rick highlights, he goes on to mention that “Hekaton” integrates seamlessly into the SQL Server engine, so if you know SQL Server, you know “Hekaton”.

— David Hobbs-Mallyon, Senior Product Marketing Manager

Quentin Clark
Corporate Vice President, Data Platform Group

This morning, during my keynote at the Professional Association of SQL Server (PASS) Summit 2013, I discussed how customers are pushing the boundaries of what’s possible for businesses today using the advanced technologies in our data platform. It was my pleasure to announce the second Community Technology Preview (CTP2) of SQL Server 2014 which features breakthrough performance with In-Memory OLTP and simplified backup and disaster recovery in Windows Azure.

Pushing the boundaries

We are pushing the boundaries of our data platform with breakthrough performance, cloud capabilities and the pace of delivery to our customers. Last year at PASS Summit, we announced our In-Memory OLTP project “Hekaton” and since then released SQL Server 2012 Parallel Data Warehouse and public previews of Windows Azure HDInsight and Power BI for Office 365. Today we have SQL Server 2014 CTP2, our public and production-ready release shipping a mere 18 months after SQL Server 2012. 

Our drive to push the boundaries comes from recognizing that the world around data is changing.

  • Our customers are demanding more from their data – higher levels of availability as their businesses scale and globalize, major advancements in performance to align to the more real-time nature of business, and more flexibility to keep up with the pace of their innovation. So we provide in-memory, cloud-scale, and hybrid solutions. 
  • Our customers are storing and collecting more data – machine signals, devices, services and data from outside even their organizations. So we invest in scaling the database and a Hadoop-based solution. 
  • Our customers are seeking the value of new insights for their business. So we offer them self-service BI in Office 365 delivering powerful analytics through a ubiquitous product and empowering users with new, more accessible ways of gaining insights.

In-memory in the box for breakthrough performance

A few weeks ago, one of our competitors announced plans to build an in-memory column store into their database product some day in the future. We shipped similar technology two years ago in SQL Server 2012, and have continued to advance that technology in SQL Server 2012 Parallel Data Warehouse and now with SQL Server 2014. In addition to our in-memory columnar support in SQL Server 2014, we are also pushing the boundaries of performance with in-memory online transaction processing (OLTP). A year ago we announced project “Hekaton,” and today we have customers realizing performance gains of up to 30x. This work, combined with our early investments in Analysis Services and Excel, means Microsoft is delivering the most complete in-memory capabilities for all data workloads – analytics, data warehousing and OLTP. 

We do this to allow our customers to make breakthroughs for their businesses. SQL Server is enabling them to rethink how they can accelerate and exceed the speed of their business.

image

  • TPP is a clinical software provider managing more than 30 million patient records – half the patients in England – including 200,000 active registered users from the UK’s National Health Service.  Their systems handle 640 million transactions per day, peaking at 34,700 transactions per second. They tested a next-generation version of their software with the SQL Server 2014 in-memory capabilities, which has enabled their application to run seven times faster than before – all of this done and running in half a day. 
  • Ferranti provides solutions for the energy market worldwide, collecting massive amounts of data using smart metering. With our in-memory technology they can now process a continuous data flow of up to 200 million measurement channels, making the system fully capable of meeting the demands of smart meter technology.
  • SBI Liquidity Market in Japan provides online services for foreign currency trading. By adopting SQL Server 2014, the company has increased throughput from 35,000 to 200,000 transactions per second. They now have a trading platform that is ready to take on the global marketplace.

A closer look into In-memory OLTP

Previously, I wrote about the journey of the in-memory OLTP project Hekaton, where a group of SQL Server database engineers collaborated with Microsoft Research. Changes in the ratios between CPU performance, IO latencies and bandwidth, cache and memory sizes as well as innovations in networking and storage were changing assumptions and design for the next generation of data processing products. This gave us the opening to push the boundaries of what we could engineer without the constraints that existed when relational databases were first built many years ago. 

Challenging those assumptions, we engineered for dramatically better latencies and throughput for so-called “hot” transactional tables in the database. Lock-free, row-versioning data structures and the compilation of T-SQL stored procedures and queries into native code, combined with programming semantics consistent with SQL Server, mean our customers can apply the performance benefits of extreme transaction processing without application rewrites or the adoption of entirely new products. 
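The native compilation described here surfaces in T-SQL roughly as follows; this is a sketch that assumes a memory-optimized table named dbo.ShoppingCart already exists, and all names are illustrative:

```sql
-- The procedure body is compiled to machine code at CREATE time and may
-- reference only memory-optimized tables; the ATOMIC block runs as a
-- transaction under optimistic snapshot isolation.
CREATE PROCEDURE dbo.usp_AddCartItem
    @CartId INT, @UserId INT
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH
    (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    INSERT INTO dbo.ShoppingCart (CartId, UserId, CreatedDate)
    VALUES (@CartId, @UserId, SYSDATETIME());
END;
```

Because the procedure is ordinary T-SQL, an existing application can call it unchanged; only the hot tables and procedures need to be migrated.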

image

The continuous data platform

Windows Azure fulfills new scenarios for our customers – transcending what is on-premises or in the cloud. Microsoft is providing a continuous platform from our traditional products that are run on-premises to our cloud offerings. 

With SQL Server 2014, we are bringing the cloud into the box. We are delivering high availability and disaster recovery on Windows Azure built right into the database. This enables customers to benefit from our global datacenters: AlwaysOn Availability Groups that span on-premises and Windows Azure Virtual Machines, database backups directly into Windows Azure storage, and even the ability to store and run database files directly in Windows Azure storage. That last scenario really does something interesting – now you can have an infinitely-sized hard drive with incredible disaster recovery properties with all the great local latency and performance of the on-premises database server. 
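The database-files-in-Azure scenario can be sketched as follows. It assumes a credential scoped to the container URL (holding a shared access signature) has already been created on the instance; the database name, storage account and URLs are placeholders:

```sql
-- Create a database whose data and log files live in Windows Azure
-- blob storage while the instance runs on-premises or in a VM.
CREATE DATABASE CloudDataDB
ON (NAME = CloudDataDB_data,
    FILENAME = 'https://mystorageaccount.blob.core.windows.net/data/CloudDataDB.mdf')
LOG ON (NAME = CloudDataDB_log,
    FILENAME = 'https://mystorageaccount.blob.core.windows.net/data/CloudDataDB.ldf');
```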

We’re not just providing easy backup in SQL Server 2014; today we announced that backup to Windows Azure will be available for all our currently supported SQL Server releases. Together, the backup to Windows Azure capabilities in SQL Server 2014 and via the standalone tool offer customers a single, cost-effective backup strategy for secure off-site storage with encryption and compression across all supported versions of SQL Server.

By having a complete and continuous data platform we strive to empower billions of people to get value from their data. It’s why I am so excited to announce the availability of SQL Server 2014 CTP2, hot on the heels of the fastest-adopted release in SQL Server’s history, SQL Server 2012. Today, more businesses solve their data processing needs with SQL Server than any other database. It’s about empowering the world to push the boundaries.


4.3 Unlock Insights from any Data / Big Data – Microsoft SQL Server Parallel Data Warehouse (PDW) and Windows Azure HDInsights:

Data is being generated faster than ever before, so what can it do for your business? Learn how to unlock insights on any data by empowering people with BI and big data tools to go from raw data to business insights faster and easier. Learn more: http://www.microsoft.com/datainsights
With the abundance of information available today, BI shouldn’t be confined to analysts or IT. Learn how to empower all with analytics through familiar Office tools, and how to manage all your data needs with a powerful and scalable data platform. Learn more: http://www.microsoft.com/BI
With data volumes exploding by 10x every five years, and much of this growth coming from new data types, data warehousing is at a tipping point. Learn how to evolve your data warehouse infrastructure to support variety, volume, and velocity of data. Learn more: http://www.microsoft.com/datawarehousing
Hear from HP, Dell and Hortonworks how Microsoft SQL Server Parallel Data Warehouse and Windows Azure HDInsights can unlock data insights and respond to business opportunities through big data analytics. Find solutions and services from partners that span the entire stack of Microsoft Cloud OS products and technologies: http://www.microsoft.com/en-us/server-cloud/audience/partner.aspx#fbid=631zRfiT0WJ
The idea that big data will transform businesses and the world is indisputable, but are there enough resources to fully embrace this opportunity? Join Quentin Clark, Microsoft Corporate Vice President, who will share Microsoft’s bold goal to consumerize big data — simplifying the data science process and providing easy access to data with everyday tools. This keynote is sponsored by Microsoft
Quentin Clark discusses the ever-changing big data market and how Microsoft is meeting its demands.

Announcing Windows Azure HDInsight: Where big data meets the cloud [The Official Microsoft Blog, Oct 28, 2013]

post is from Quentin Clark, Corporate Vice President of the Data Platform Group at Microsoft

I am pleased to announce that Windows Azure HDInsight – our cloud-based distribution of Hadoop – is now generally available on Windows Azure. The GA of HDInsight is an important milestone for Microsoft, as it’s part of our broader strategy to bring big data to a billion people.

On Tuesday at Strata + Hadoop World 2013, I will discuss the opportunity of big data in my keynote, “Can Big Data Reach One Billion People?” Microsoft’s perspective is that embracing the new value of data will lead to a major transformation as significant as when line of business applications matured to the point where they touched everyone inside an organization. But how do we realize this transformation? It happens when big data finds its way to everyone in business – when anyone with a question that can be answered by data, gets their answer. The impact of this is beyond just making businesses smarter and more efficient. It’s about changing how business works through both people and data-driven insights. Data will drive the kinds of changes that, for example, allow personalization to become truly prevalent. People will drive change by gaining insights into what impacts their business, enabling them to change the kinds of partnerships and products they offer.

Our goal to empower everyone with insights is the reason why Microsoft is investing, not just in technology like Hadoop, but the whole circuit required to get value from big data. Our customers are demanding more from the data they have – not just higher availability, global scale and longer histories of their business data, but that their data works with business in real time and can be leveraged in a flexible way to help them innovate. And they are collecting more signals – from machines and devices and sources outside their organizations.

Some of the biggest changes to businesses driven by big data are created by the ability to reason over data previously thought unmanageable, as well as data that comes from adjacent industries. Think about the use of equipment data to do better operational cost and maintenance management, or a loan company using shipping data as part of the loan evaluation. All of this data needs all forms of analytics and the ability to reach the people making decisions. Organizations that complete this circuit, thereby creating the capability to listen to what the data can tell them, will accelerate.

Bringing Hadoop to the enterprise

Hadoop is a cornerstone of how we will realize value from big data. That’s why we’ve engineered HDInsight as 100 percent Apache Hadoop offered as an Azure cloud service. The service has been in public production preview for a number of months now – the reception has been tremendous and we are excited to bring it to full GA status in Azure. 

Microsoft recognizes Hadoop as a standard and is investing to ensure that it’s an integral part of our enterprise offerings. We have invested through real contributions across the project – not just to make Hadoop work great on Windows, but even in projects like Tez, Stinger and Hive. We have put in thousands of engineering hours and tens of thousands of lines of code. We have been doing this in partnership with Hortonworks, who will make HDP (Hortonworks Data Platform) 2.0 for Windows Server generally available next month, giving the world access to a supported Apache-pure Hadoop v2 distribution for Windows Server. Working with Hortonworks, we will support Hadoop v2 in a future update to HDInsight.

Windows Azure HDInsight combines the best of Hadoop open source technology with the security, elasticity and manageability that enterprises require. We have built it to integrate with Excel and Power BI – our business intelligence offering that is part of Office 365 – allowing people to easily connect to data through HDInsight, then refine and do business analytics in a turnkey fashion. For the developer, HDInsight also supports choice of languages: .NET, Java and more.

We have key customers currently using HDInsight, including:

  • The City of Barcelona uses Windows Azure HDInsight to pull in data about traffic patterns, garbage collection, city festivals, social media buzz and more to make critical decisions about public transportation, security and overall spending.
  • A team of computer scientists at Virginia Tech developed an on-demand, cloud-computing model using the Windows Azure HDInsight Service, enabling  easier, more cost-effective access to DNA sequencing tools and resources.
  • Christian Hansen, a developer of natural ingredients for several industries, collects electronic data from a variety of sources, including automated lab equipment, sensors and databases. With HDInsight in place, they are able to collect and process data from trials 100 times faster than before.

End-to-end solutions for big data

These kinds of uses of Hadoop are examples of how big data is changing what’s possible. Our Hadoop-based solution HDInsight is a building block – one important piece of the end-to-end solutions required to get value from data.

All this comes together in solutions where people can use Excel to pull data directly from a range of sources, including SQL Server (the most widely-deployed database product), HDInsight, external Hadoop clusters and publicly available datasets. They can then use our business intelligence tools in Power BI to refine that data, visualize it and just ask it questions. We believe that by putting widely accessible and easy-to-deploy tools in everyone’s hands, we are helping big data reach a billion people. 

I am looking forward to tomorrow. The Hadoop community is pushing what’s possible, and we could not be happier that we made the commitment to contribute to it in meaningful ways.

Quentin Clark, Microsoft, at Big Data NYC 2013 with John Furrier and Dave Vellante

“We’re here because we’re super committed to Hadoop,” Clark said, explaining that Microsoft is dedicated to helping its customers embrace the benefits Big Data can provide them with. “Hadoop is the cornerstone of Big Data but not the entire infrastructure,” he added. Microsoft is focusing on adding security and tool integration, with thousands of hours of development put into Hadoop, to make it ready for the enterprise. “There’s a foundational piece where customers are starting,” which they can build upon, and Microsoft focuses on helping them embrace Hadoop as part of the IT giant’s business goals.

Asked to compare the adoption of traditional Microsoft products with the company’s Hadoop products, Clark said, “a big part of our effort was to get to that enterprise expectations.” Security and tools integration, getting Hadoop to work on Windows is part of that effort. Microsoft aims to help people “have a conversation and dialogue with the data. We make sure we funnel all the data to help them get the BI and analytics” they need.

Commenting on Microsoft’s statement of bringing Big Data to its one billion Office users, Vellante asked if the company’s strategy was to put the power of Big Data into Excel. Clark explained it was about putting Big Data in the Office suite, going on to explain that there is already more than a billion people who are passively using Big Data. Microsoft focuses on those actively using it.

Clark mentions Microsoft has focused on the sports arena, helping major sports leagues use Big Data to power fantasy teams. “We actually have some models, use some data sets. I have a fantasy team that I’m doing pretty well with, partly because of my ability to really have a conversation with the data. On the business side, it’s transformational. Our ability to gain insight in real time and interact is very different using these tools,” Clark stated.

Why not build its own Hadoop distro?

Asked why Microsoft decided not to have its own Hadoop distribution, Clark explained that “primarily our focus has been in improving the Apache core, make Hadoop work on Windows and work great. Our partnership with Hortonworks just made sense. They are able to continue to push and have that cross platform capability, we are able to offer our customers a solution.”

Explaining there were great discrepancies in how different companies in the same industries made use of the benefits Big Data, he advised our viewers to “look at what the big companies are doing” embracing the data, and to look what they are achieving with it.

As far as the future of the Big Data industry is concerned, Clark stated: “There’s a consistent meme of how is this embraced by business for results. Sometimes with the evolution of technology, everyone is exploring what it’s capable of.” Now there’s a focus shift of the industry towards what greater purpose it leads to, what businesses can accomplish.

@thecube

#BigDataNYC


4.4 Empower people-centric IT – Microsoft Virtual Desktop Infrastructure (VDI):

Microsoft Virtual Desktop Infrastructure (VDI) enables IT to deliver desktops and applications that employees can access from anywhere, on both personal and corporate devices. Centralizing and controlling applications and data through a virtual desktop enables your people to get their work done on the devices they choose while helping maintain compliance. Learn more: http://www.microsoft.com/msvdi
With dramatic growth in the number of mobile users and personal devices at work, and mounting pressure to comply with governmental regulations, IT organizations are increasingly turning to Microsoft Virtual Desktop Infrastructure (VDI) solutions. This session will provide an overview of Microsoft’s VDI solutions and will drill into some of the new, exciting capabilities that Windows Server 2012 R2 offers for VDI solutions.

In October, we announced Windows Server 2012 R2 which delivers several exciting improvements for VDI solutions. Among the benefits, Windows Server 2012 R2 reduces the cost per seat for VDI as well as enhances your end user’s experience. The following are just some of the features and benefits of Windows Server 2012 R2 for VDI:

  • Online data deduplication on actively running VMs reduces storage capacity requirements by up to 90% on persistent desktops.
  • Tiered storage spaces manage your tiers of storage (fast SSDs vs. slower HDDs) intelligently so that the most frequently accessed data blocks are automatically moved onto faster-tier drives. Likewise, older or seldom-accessed files are moved onto the cheaper and slower SAS drives.
  • The Microsoft Remote Desktop App provides easy access from a variety of devices and platforms including Windows, Windows RT, iOS, Mac OS X and Android. This is good news for your end users and your mobility/BYOD strategy!
  • Your user experience is also enhanced due to improvements on several fronts including RemoteFX, DirectX 11.1 support, RemoteApp, quick reconnect, session shadowing, dynamic monitor and resolution changes.

If your VDI solutions run on Dell servers or if you are looking at deploying new VDI infrastructure, we are excited to let you know about the work we have been doing in partnership with Dell around VDI. Dell recently updated their Desktop Virtualization Solution (DVS) for Windows Server to support Windows Server 2012 R2, and DVS now delivers all of the benefits mentioned above. Dell is also delivering additional enhancements into Dell DVS for Windows Server so it will also support:

  • Windows 8.1 with touch screen devices and new Intel Haswell processors
  • Unified Communication with Lync 2013, via an endpoint plug-in that enables P2P audio and video. (Dell Wyse has certified selected Windows thin clients to this effect, such as the D90 and Z90.)
  • Virtualized shared graphics on NVidia GRID K1/K2 and AMD FirePro cards using Microsoft RemoteFX technology
  • Affordable persistent desktops
  • Highly-secure and dual/quad core Dell Wyse thin clients, for a true end-to-end capability, even when using high-end server graphics cards or running UC on Lync 2013
  • Optional Dell vWorkspace software, also supporting Windows Server 2012 R2, that brings scalability to tens of thousands of seats, advanced VM provisioning, IOPS efficiency to reduce storage requirement and improve performance, diagnostics and monitoring, flexible resource assignments, support for multi-tenancy and more.
  • Availability in more than 30 countries

Depending on where you stand in the VDI deployment cycle in your organization, Dell DVS for Windows Server is already supported today on multiple Dell PowerEdge server platforms:

  • The T110 for a pilot/POC up to 10 seats
  • The VRTX for implementation in a remote or branch office of up to about 500 users
  • The R720 for a traditional enterprise-like, flexible and scalable implementation to several thousand seats. It supports flexible deployments such as application virtualization, RDSH, pooled and persistent VMs.

This week, Microsoft and Dell will present a technology showcase at Dell World in Austin (TX), USA. If you happen to be at the show, you will be able to see for yourself how well Windows Server 2012 R2 and Windows 8.1 integrate into Dell DVS. We will show:

  • The single management console of Windows Server 2012 installed on a Dell PowerEdge VRTX, demonstrating how easy it can be for an IT administrator to manage VDI workloads based on Hyper-V in a remote or branch office environment
  • How users can chat, talk, share, meet, transfer files and conduct video conferencing within virtualized desktops set up for unified communication
  • That you can watch HD multimedia and 3D graphics files on multiple virtual desktops sharing a graphic card installed remotely in a server
  • How affordable it is to run persistent desktops with DVS and Windows Server 2012 R2

We are excited about the work that we are doing with Dell around VDI and hope you have a chance to come visit our joint VDI showcase in Austin. We will be located in the middle of the Dell booth in the show expo hall. Also, we will show a VDI demo as part of the Microsoft Cloud OS breakout session at noon on Thursday (December 12) in room 9AB. Finally, we will show a longer VDI demo in the show expo theater (next to the Microsoft booth) at 10 a.m. on Friday (December 13). We are looking forward to seeing you there.

With the Microsoft Remote Desktop app, you can connect to a remote PC and your work resources from almost anywhere. Experience the power of Windows with RemoteFX in a Remote Desktop client designed to help you get your work done wherever you are.

Post from Brad Anderson,
Corporate Vice President of Windows Server & System Center at Microsoft.

As of yesterday afternoon, the Microsoft Remote Desktop App is available in the Android, iOS, and Mac stores (see screen shots below). There was a time, in the very recent past, when many thought something like this would never happen.

If your company has users who work on iPads, Android, and Windows RT devices, you also likely have a strategy (or at least a point of view) for how you will deliver Windows applications to those devices. With the Remote Desktop App and the 2012 R2 platforms made available earlier today, you now have a great solution from Microsoft to deliver Windows applications to your users across all the devices they are using.

As I have written about before, one of the things I am actively encouraging organizations to do is to step back and look at their strategy for delivering applications and protecting data across all of their devices. Today, most enterprises use one set of tools for enabling users on PCs, and then they deploy another tool for enabling users on their tablets and smartphones. This kind of overhead and the associated costs are unnecessary. Even more important (or maybe I should say worse), your end users therefore have different and fragmented experiences as they transition across their various devices. A big part of an IT team’s job must be to radically simplify the experience end users have in accomplishing their work, and users are doing that work across all their devices.

I keep bolding “all” here because I am really trying to make a point: Let’s stop thinking about PCs and devices in a fragmented way. What we are trying to accomplish is pretty straightforward: Enable users to access the apps and data they need to be productive in a way that ensures the corporate assets are secure. Notice that nowhere in that sentence did I mention devices. We should stop talking about PC lifecycle management, Mobile Device Management and Mobile Application Management, and instead focus our conversation on how we are enabling users. We need a user-enablement Magic Quadrant!

OK – stepping off my soapbox. Smile

Delivering Windows applications in a server-computing model, through solutions like Remote Desktop Services, is a key requirement in your strategy for application access management. But keep in mind that this is only one of many ways applications can be delivered – and we should consider and account for all of them.

For example, you also have to consider Win32 apps running in a distributed model, modern Windows apps, iOS native apps (side-loaded and deep-linked), Android native apps (side-loaded and deep-linked), SaaS applications, and web applications.

Things have really changed from just five years ago, when we only had to worry about Windows apps being delivered to Windows devices.

As you are rethinking your application access strategy, you need solutions that enable you to intelligently manage all these applications types across all the devices your workforce will use.

You should also consider that the Remote Desktop apps released yesterday are proof of Microsoft’s commitment to enabling you to have a single solution to manage all the devices your users will use.

Microsoft describes itself as a “devices and services company.” Let me provide a little more insight into this.

Devices: We will do everything we can to earn your business on Windows devices.

Services: We will light up those Windows devices with the cloud services that we build, and these cloud services will also light up all (there’s that bold again) your other devices.

The funny thing about cloud services is that they want every device possible to connect to them – we are working to make sure the cloud services that we are building for the enterprise will bring value to all (again!) the devices your users will want to use – whether those are Windows, iOS, or Android.

The RDP clients that we released into the stores yesterday are not v1 apps. Back in June, we acquired IP assets from an organization in Austria (HLW Software Development GmbH) that had been building and delivering RDP clients for a number of years. In fact, there were more than 1 million downloads of their RDP clients from the Apple and Android stores. The team has done an incredible job using them as a base for development of our Remote Desktop App, creating a very simple and compelling experience on iOS, Mac OS X and Android. You should definitely give them a try!

Also: Did I mention they are free?

To start using the Microsoft Remote Desktop App for any of these platforms, simply follow these links:

Setup used in the how-to:

  • Windows 8.1 Pro running on a slow netbook (BenQ Joybook Lite U101 with an Atom N270!)
  • HTC One X running Android 4.2.2
  • HTC Fly running Android 3.2.1

How to: http://android-er.blogspot.com/2013/10/basic-setup-for-microsoft-remote.html

Satya Nadella’s (?the next Microsoft CEO?) next ten years’ vision of “digitizing everything”, Microsoft opportunities and challenges seen by him with that, and the case of Big Data

… as one of the crucial issues for that (in addition to the cloud, mobility and Internet-of-Things), via the current tipping point as per Microsoft, and the upcoming revolution in that as per Intel

Satya Nadella, Cloud & Enterprise Group, Microsoft and Om Malik, Founder & Senior Writer, GigaOM [LeWeb YouTube channel, Dec 10, 2013]

Satya is responsible for building and running Microsoft’s computing platforms, developer tools and cloud services. He and his team deliver the “Cloud OS.” Rumored to be on the short list for CEO, he shares his views on the future. [Interviewed during the “Plenary I” devoted to “The Next 10 years” at Day 1 on Dec 10, 2013.]

And why will I present Big Data after that? For a very simple reason: IMHO, it is exactly in Big Data that Microsoft’s innovations have come to a point at which its technology has the best chance to become dominant and subsequently define the standard for the IT industry—resulting in “winner-take-all” economies of scale and scope. Whatever Intel is going to add to that in terms of “technologies for the next Big Data revolution” will only strengthen Microsoft’s currently achieved innovative position even more. For this reason I will include the upcoming Intel innovations for Big Data here as well.

In this next-gen regard it is also highly recommended to read: Disaggregation in the next-generation datacenter and HP’s Moonshot approach for the upcoming HP CloudSystem “private cloud in-a-box” with the promised HP Cloud OS based on the 4-year-old OpenStack effort with others [‘Experiencing the Cloud’, Dec 12, 2013]!

Now the detailed discussion of Big Data:

Microsoft® makes Big Data work for you! [HP Discover YouTube channel, recorded on Dec 11; published on Dec 12, 2013]

[Doug Smith, Director, Emerging Technologies, Microsoft] Come and join our Innovation Theatre session to hear how customers are solving Big Data challenges in big ways jointly with HP!

The Garage Series: Unleashing Power BI for Office 365 [Office 365 technology blog, Nov 20, 2013]

In this week’s show, host Jeremy Chapman is joined by Michael Tejedor from the SQL Server team to discuss Power BI and show it in action. Power BI for Office 365 is a cloud-based solution that reduces the barriers to deploying a self-service Business Intelligence environment for sharing live Excel-based reports and data queries, with new features and services that enable easy data discovery and information access from anywhere. Michael draws up the self-service approach to Power BI as well as how public data can be queried and combined in a unified view within Excel. Then they walk through an end-to-end demo of Excel and Power BI components (Power Query [formerly known as “Data Explorer”], Power Pivot, Power View, Power Map [formerly known as product codename “GeoFlow”] and Q&A) as they optimize profitability of a bar and rein in bartenders with data.

Last week Mark Kashman and I went through the administrative controls of managing user access and mobile devices, but this week I’m joined by Michael Tejedor and we shift gears completely to talk data, databases and business intelligence. Back in July we announced Power BI for Office 365, a new service that, using the familiar tools within Excel, enables you to discover, analyze, visualize and share data in powerful ways. Power BI includes Power Query, Power Pivot, Power View, Power Map and Q&A.

  • Power Query [formerly known as “Data Explorer”] is a data search engine allowing you to query data from within your company and from external data sources on the Internet, all within Excel.
  • Power Pivot lets you create flexible models within Excel that can process large data sets quickly using SQL Server’s in-memory database.
  • Power View allows you to manipulate data and compile it into charts, graphs and other visualizations. It’s great for presentations and reports.
  • Power Map [formerly known as product codename “GeoFlow”] is a 3D data visualization tool for mapping, exploring and interacting with geographic and temporal data.
  • Q&A is a natural language query engine that lets users easily query data using common terms and phrases.
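To give a feel for the kind of data mashup that Power Query and Power Pivot make self-service inside Excel, here is a conceptual sketch in plain Python with made-up data (the real tools do all of this without any code):

```python
# Conceptual sketch of a Power Query-style mashup: join an "internal"
# sales table with a "public" reference table, then aggregate the way
# Power Pivot would. All data below is made up for illustration.

internal_sales = [                      # e.g. rows from a company database
    {"city": "Seattle", "units": 120},
    {"city": "Denver",  "units": 80},
    {"city": "Seattle", "units": 60},
]
public_population = {                   # e.g. rows from a public data source
    "Seattle": 650_000,
    "Denver": 600_000,
}

# Combine the two sources into one view (the "merge" step).
combined = [
    {**row, "population": public_population[row["city"]]}
    for row in internal_sales
]

# Aggregate units per city (the "pivot" step).
units_by_city = {}
for row in combined:
    units_by_city[row["city"]] = units_by_city.get(row["city"], 0) + row["units"]

print(units_by_city)  # {'Seattle': 180, 'Denver': 80}
```

The point of Power BI is that an analyst performs the merge and pivot steps above interactively in Excel, then shares the result through the Office 365 service.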

In many cases, the process to get custom reports and dashboards from the people running your databases, sales or operations systems is something like submitting a request to your database administrator, followed by a few phone calls or meetings to get what you want. Coming from a logistics and operations management background, I know it could easily take two or three weeks to make even minor tweaks to an operational dashboard. Now you can use something familiar (Excel) in a self-service way to hook into your local databases, Excel flat files, modern data sources like Hadoop, or public data sources via Power Query and the data catalogue. All of these data sources can be combined to create powerful insights and data visualizations, and all can be easily and securely shared with the people you work with through the Power BI for Office 365 service.

Of course all of this sounds great, but you can’t really get a feel for it until you see it. Michael and team built out a great demo themed after a bar, using data to track alcohol profitability, pour precision per bartender, and Q&A to query all of this using normal query terms. You’ll want to watch the show to see how everything turns out and to see all of these power tools in action. And if you want to kick the tires and try Power BI for Office 365, you can register for the preview now.

Intel: technologies for the next Big Data revolution [HP Discover YouTube channel, recorded on Dec 11; published on Dec 12, 2013]

[Patrick Buddenbaum, Director, Enterprise Segment, Intel Corporation at HP Discover Barcelona 2013 on Dec 11, 11:40 AM – 12:00 PM] HP and Intel share the belief that every organization and individual should be able to unlock intelligence from the world’s ever increasing set of data sources—the Internet of Things.

 

Related “current tipping point” announcements from Microsoft:

From: Organizations Speed Business Results With New Appliances From HP and Microsoft [joint press release, Jan 18, 2011]

New solutions for business intelligence, data warehouse, messaging and database consolidation help increase employee productivity and reduce IT complexity.

… The HP Business Decision Appliance is available now to run business intelligence services ….

Delivering on the companies’ extended partnership announced a year ago, the new converged application appliances from HP and Microsoft are the industry’s first systems designed for IT, as well as end users. They deliver application services such as business intelligence, data warehousing, online transaction processing and messaging. The jointly engineered appliances, and related consulting and support services, enable IT to deliver critical business applications in as little as one hour, compared with potentially months needed for traditional systems.3 One of the solutions already offered by HP and Microsoft — the HP Enterprise Data Warehouse Appliance — delivers up to 200 times faster queries and 10 times the scalability of traditional Microsoft SQL Server deployments.4

With the HP Business Decision Appliance, HP and Microsoft have greatly reduced the time and effort it takes for IT to configure, deploy and manage a comprehensive business intelligence solution, compared with a traditional business intelligence solution where applications, infrastructure and productivity tools are not pre-integrated. This appliance is optimized for Microsoft SQL Server and Microsoft SharePoint and can be installed and configured by IT in less than one hour.

The solution enables end users to share data analyses built with Microsoft’s award-winning5 PowerPivot for Excel 2010 and collaborate with others in SharePoint 2010. It allows IT to centrally audit, monitor and manage solutions created by end users from a single dashboard.

Availability and Pricing6

  • The HP Business Decision Appliance with three years of HP 24×7 hardware and software support services is available today from HP and HP/Microsoft Frontline channel partners for less than $28,000 (ERP). Microsoft SQL Server 2008 R2 and Microsoft SharePoint 2010 are licensed separately.

  • The HP Enterprise Data Warehouse Appliance with services for site assessment, installation and startup, as well as three years of HP 24×7 hardware and software support services, is available today from HP and HP/Microsoft Frontline channel partners starting at less than $2 million. Microsoft SQL Server 2008 R2 Parallel Data Warehouse is licensed separately.

3 Based on HP’s experience with customers using HP Business Decision Appliance.
4 SQL Server Parallel Data Warehouse (PDW) has been evaluated by 16 early adopter customers in six different industries. Customers compared PDW with their existing environments and saw typically 40x and up to 200x improvement in query times.
5 Messaging and Online Collaboration Reviews, Nov. 30, 2010, eWEEK.com.
6 Estimated retail U.S. prices. Actual prices may vary.

From: HP Delivers Enterprise Agility with New Converged Infrastructure Solutions [press release, June 6, 2011]

HP today announced several industry-first Converged Infrastructure solutions that improve enterprise agility by simplifying deployment and speeding IT delivery.

Converged Systems accelerate time to application value

HP Converged Systems speed solution deployment by providing a common architecture, management and security model across virtualization, cloud and dedicated application environments. They include:

  • HP AppSystem maximizes performance while simplifying deployment and application management. These systems offer best practice operations with a standard architecture that lowers total cost of ownership. Among the new systems are HP Vertica Analytics System, as well as HP Database Consolidation Solution and HP Business Data Warehouse Appliance, which are both optimized for Microsoft SQL Server 2008 R2.

From: Microsoft Expands Data Platform With SQL Server 2012, New Investments for Managing Any Data, Any Size, Anywhere [press release, Oct 12, 2011]

New technologies will give businesses a universal platform for data management, access and collaboration.

… Kummert described how SQL Server 2012, formerly code-named “Denali,” addresses the growing challenges of data and device proliferation by enabling customers to rapidly unlock and extend business insights, both in traditional datacenters and through public and private clouds. Extending on this foundation, Kummert also announced new investments to help customers manage “big data,” including an Apache Hadoop-based distribution for Windows Server and Windows Azure and a strategic partnership with Hortonworks Inc. …

The company also made available final versions of the Hadoop Connectors for SQL Server and Parallel Data Warehouse. Customers can use these connectors to integrate Hadoop with their existing SQL Server environments to better manage data across all types and forms.

SQL Server 2012 delivers a powerful new set of capabilities for mission-critical workloads, business intelligence and hybrid IT across traditional datacenters and private and public clouds. Features such as Power View (formerly project “Crescent”) and SQL Server Data Tools (formerly “Juneau”) expand the self-service BI capabilities delivered with PowerPivot, and provide an integrated development environment for SQL Server developers.

From: Microsoft Releases SQL Server 2012 to Help Customers Manage “Any Data, Any Size, Anywhere” [press release, March 6, 2012]

Microsoft’s next-generation data platform releases to manufacturing today.

REDMOND, Wash. — March 6, 2012 — Microsoft Corp. today announced that the latest version of the world’s most widely deployed data platform, Microsoft SQL Server 2012, has released to manufacturing. SQL Server 2012 helps address the challenges of increasing data volumes by rapidly turning data into actionable business insights. Expanding on Microsoft’s commitment to help customers manage any data, regardless of size, both on-premises and in the cloud, the company today also disclosed additional details regarding its plans to release an Apache Hadoop-based service for Windows Azure.

Tackling Big Data

IT research firm Gartner estimates that the volume of global data is growing at a rate of 59 percent per year, with 70 to 85 percent in unstructured form.* Furthering its commitment to connect SQL Server and rich business intelligence tools, such as Microsoft Excel, PowerPivot for Excel 2010 and Power View, with unstructured data, Microsoft announced plans to release an additional limited preview of an Apache Hadoop-based service for Windows Azure in the first half of 2012.

To help customers more cost-effectively manage their enterprise-scale workloads, Microsoft will release several new data warehousing solutions in conjunction with the general availability of SQL Server 2012, slated to begin April 1. This includes a major software update and new half-rack form factors for Microsoft Parallel Data Warehouse appliances, as well as availability of SQL Server Fast Track Data Warehouse reference architectures for SQL Server 2012.

Microsoft Simplifies Big Data for the Enterprise [press release, Oct 24, 2012]

New Apache Hadoop-compatible solutions for Windows Azure and Windows Server enable customers to easily extract insights from big data.

NEW YORK — Oct. 24, 2012 — Today at the O’Reilly Strata Conference + Hadoop World, Microsoft Corp. announced new previews of Windows Azure HDInsight Service and Microsoft HDInsight Server for Windows, the company’s Apache Hadoop-based solutions for Windows Azure and Windows Server. The new previews, available today at http://www.microsoft.com/bigdata, deliver Apache Hadoop compatibility for the enterprise and simplify deployment of Hadoop-based solutions. In addition, delivering these capabilities on the Windows Server and Azure platforms enables customers to use the familiar tools of Excel, PowerPivot for Excel and Power View to easily extract actionable insights from the data.

“Big data should provide answers for business, not complexity for IT,” said David Campbell, technical fellow, Microsoft. “Providing Hadoop compatibility on Windows Server and Azure dramatically lowers the barriers to setup and deployment and enables customers to pull insights from any data, any size, on-premises or in the cloud.”

The company also announced today an expanded partnership with Hortonworks, a commercial vendor of Hadoop, to give customers access to an enterprise-ready distribution of Hadoop with the newly released solutions.

“Hortonworks is the only provider of Apache Hadoop that ensures a 100 percent open source platform,” said Rob Bearden, CEO of Hortonworks. “Our expanded partnership with Microsoft empowers customers to build and deploy on platforms that are fully compatible with Apache Hadoop.”

More information about today’s news and working with big data can be found at http://www.microsoft.com/bigdata.

Choose the Right Strategy to Reap Big Value From Big Data [feature article for the press, Nov 13, 2012]

From devices to storage to analytics, technologies that work together will be key for business’ next information age.

REDMOND, Wash. — Nov. 13, 2012 — It seems the gigabyte is going the way of the megabyte — another humble unit of computational measurement that is becoming less and less relevant. Long live the terabyte, impossibly large, increasingly common.
Consider this: Of all the data that’s been collected in the world, more than 90 percent has been gathered in the last two years alone. According to a June 2011 report from the McKinsey Global Institute, 15 out of 17 industry sectors of the U.S. have more data stored — per company — than the U.S. Library of Congress.
The explosion in data has been catalyzed by several factors. Social media sites such as Facebook and Twitter are creating huge streams of unstructured data in the form of opinions, comments, trends and demographics arising from a vast and growing worldwide conversation.
And then there’s the emerging world of machine-generated information. The rise of intelligent systems and the Internet of Things means that more and more specialized devices are connected to information technology — think of a national retail chain that is connected to every one of its point-of-sale terminals across thousands of locations or an automotive plant that can centrally monitor hundreds of robots on the shop floor.
Combine it all and some industry observers are predicting that the amount of data stored by organizations across industries will increase ten-fold every five years, much of it coming from new streams that haven’t yet been tapped.
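That ten-fold-every-five-years projection is, incidentally, consistent with the roughly 59 percent annual growth rate Gartner is quoted with elsewhere in this post; a quick check:

```python
# Compounding ~59% annual data growth over five years comes out close
# to the ten-fold increase some industry observers project.
annual_growth = 0.59
factor = (1 + annual_growth) ** 5
print(round(factor, 1))  # ~10.2x
```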
It truly is a new information age, and the opportunity is huge. The McKinsey Global Institute estimates that the U.S. health care system, for example, could save as much as $300 billion from more effective use of data. In Europe, public sector organizations alone stand to save 250 billion euros.
In the ever-competitive world of business, data strategy is becoming the next big competitive advantage. According to analyst firm Gartner Group,* “By tapping a continual stream of information from internal and external sources, businesses today have an endless array of new opportunities for: transforming decision-making; discovering new insights; optimizing the business; and innovating their industries.”
According to Microsoft’s Ted Kummert, corporate vice president of the Business Platforms Division, companies addressing this challenge today may wonder where to start. How do you know which data to store without knowing what you want to measure? But then again, how do you know what insights the data holds without having it in the first place?
“There is latent value in the data itself,” Kummert says. “The good news is storage costs are making it economical to store the data. But that still leaves the question of how to manage it and gain value from it to move your business forward.”
With new data services in the cloud such as Windows Azure HDInsight Service and Microsoft HDInsight Server for Windows, Microsoft’s Apache Hadoop-based solutions for Windows Azure and Windows Server, organizations can afford to capture valuable data streams now while they develop their strategy, without making a huge financial bet on a six-month, multimillion-dollar datacenter project.
Just having access to the data, says Kummert, can allow companies to start asking much more complicated questions, combining information sources such as geolocation or weather information with internal operational trends such as transaction volume.
“In the end, big data is not just about holding lots of information,” he says. “It’s about how you harness it. It’s about insight, allowing end users to get the answers they need and doing so with the tools they use every day, whether that’s desktop applications, devices at the network edge or something else.”
His point is often overlooked with all the abstract talk of big data. In the end, it’s still about people, so making it easier for information workers to shift to a new world in which data is paramount is just as important as the information itself. Information technology is great at providing answers, but it still doesn’t know how to ask the right questions, and that’s where having the right analytics tools and applications can help companies make the leap from simply storing mountains of data to actually working with it.
That’s why in the Windows 8 world, Kummert says, the platform is designed to extend from devices and phones to servers and services, allowing companies to build a cohesive data strategy from end to end with the ultimate goal of empowering workers.
“When we talk about the Microsoft big data platform, we have all of the components to achieve exactly that,” Kummert says. “From the Windows Embedded platform to the Microsoft SQL Server stack through to the Microsoft Office stack. We have all the components to collect the data, store it securely and make it easier for information workers to find it — and, more importantly, understand what it means.”
For more information on building intelligent systems to get the most out of business data, please visit the Windows Embedded home page.
* Gartner, “Gartner Says Big Data Creates Big Jobs: 4.4 Million IT Jobs Globally to Support Big Data By 2015,” October 2012

Which data management solution delivers against today’s top six requirements? [The HP Blog Hub, March 25, 2013]

By Manoj Suvarna – Director, Product Management, HP AppSystems

In my last post I talked about the six key requirements I believe a data management solution should deliver against today, namely:

1. High performance
2. Fast time to value
3. Built with Big Data as a priority
4. Simplified management
5. Low cost
6. Proven expertise

Today, 25th March 2013, HP has announced the HP AppSystem for Microsoft SQL Server 2012 Parallel Data Warehouse, a comprehensive data warehouse solution jointly engineered with Microsoft, with a wide array of complementary tools, to effectively manage, store, and unlock valuable business insights.

Let’s take a look at how the solution delivers against each of the key requirements in turn:

1  High performance

With its MPP (Massively Parallel Processing) engine and ‘shared nothing’ architecture, the HP AppSystem for Parallel Data Warehouse can deliver linear scale, starting from configurations supporting small terabyte requirements all the way up to configurations supporting six petabytes of data.

The solution features the latest HP ProLiant Gen8 servers, with InfiniBand FDR networking, and uses the xVelocity in-memory analytics engine and the xVelocity memory-optimized columnstore index feature in Microsoft SQL Server 2012 to greatly enhance query performance. 

The combination of Microsoft software with HP Converged Infrastructure means HP AppSystem for Parallel Data Warehouse offers leading performance for complex workloads, with up to 100x faster query performance and a 30% faster scan rate than previous generations.
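The ‘shared nothing’ MPP idea behind that linear-scale claim can be sketched in a few lines: rows are hash-distributed across nodes so each node scans and aggregates only its own slice. The following is a conceptual illustration only, not PDW’s actual distribution algorithm:

```python
# Conceptual 'shared nothing' MPP sketch: hash-distribute rows across
# nodes, then let each node aggregate its own slice in isolation.
from collections import defaultdict

def distribute(rows, num_nodes, key):
    """Assign each row to a node by hashing its distribution key."""
    nodes = defaultdict(list)
    for row in rows:
        nodes[hash(row[key]) % num_nodes].append(row)
    return nodes

def parallel_sum(nodes, column):
    """Each node computes a partial sum; a control node adds them up."""
    return sum(sum(r[column] for r in slice_) for slice_ in nodes.values())

rows = [{"order_id": i, "amount": i % 7} for i in range(1000)]
nodes = distribute(rows, num_nodes=8, key="order_id")
print(parallel_sum(nodes, "amount"))  # same total regardless of node count
```

Because no node ever touches another node’s slice, doubling the node count roughly halves the work per node, which is where the linear-scale claim comes from.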

2  Fast time to value

HP AppSystem for Parallel Data Warehouse is a factory-built, turnkey system, delivered complete from HP’s factory as an integrated set of hardware and software including servers, storage, networking, tools, services, and support. Not only is the solution pre-integrated, but it is backed by unique, collaborative HP and Microsoft support, with onsite installation and deployment services to smooth implementation.

3  Built with Big Data as a priority

Designed to integrate with Hadoop, HP AppSystem for Parallel Data Warehouse is ideally suited for “Big Data” environments. This integration allows customers to perform comprehensive analytics on unstructured, semi-structured and structured data, to effectively gain business insights and make better, faster decisions.

4  Simplified management

Providing the optimal management environment has been a critical element of the design, and is delivered through HP Support Pack Utility Suite.  This set of tools simplifies updates and several other maintenance tasks across the system to ensure that it is continually running at optimal performance.  Unique in the industry, HP Support Pack Utility Suite can deliver up to 2000 firmware updates with the click of a button.  In addition, the HP AppSystem for Parallel Data Warehouse is manageable via the Microsoft System Center console, leveraging deep integration with HP Insight Control.

5  Low cost

The HP AppSystem for Parallel Data Warehouse has been designed as part of an end to end stack for data management, integrating data warehousing seamlessly with BI solutions to minimize the cost of ownership.

It has also been re-designed with a new form factor to minimize space and maximize ease of expansion, which means the entry point for a quarter rack system is approximately 35% less expensive than the previous generation solution.    It is expandable in modular increments up to 64 nodes, which means no need for the type of fork-lift upgrade that might be needed with a proprietary solution, and is targeted to be approximately half the cost per TB of comparable offerings in the market from Oracle, IBM, and EMC*.

6 Proven expertise

Together, HP and Microsoft have over 30 years’ experience delivering integrated solutions from desktop to datacenter. HP AppSystem for Parallel Data Warehouse completes the portfolio of HP Data Management solutions, which give customers the ability to deliver insights on any data, of any size, combining best-in-class Microsoft software with HP Converged Infrastructure.

For customers, our ability to deliver on the requirements above ultimately provides agility for faster, lower risk deployment of data management in the enterprise, helping them make key business decisions more quickly and drive more value to the organization.

If you’d like to find out more, please go to www.hp.com/solutions/microsoft/pdw.

http://www.valueprism.com/resources/resources/Resources/PDW%20Compete%20Pricing%20FINAL.pdf

HP AppSystem for SQL 2012 Parallel Data Warehouse [HP product page, March 25, 2013]

Overview

Rapid time-to-value data warehouse solution

The HP AppSystem for Microsoft SQL Server 2012 Parallel Data Warehouse, jointly engineered, built and supported with Microsoft, is for customers who realize limitations and inefficiencies of their legacy data warehouse infrastructure. This converged system solution delivers significant advances over the previous generation solution including:

Enhanced performance and massive scalability

  • Up to 100x faster query performance and a 30% faster scan rate
  • Ability to start from small terabyte requirements that can  linearly scale out to 6 Petabytes for mission critical needs

Minimize costs and management complexity

  • Redesigned form factor minimizes space and allows ease of expansion, with significant up-front acquisition savings as well as reduced OPEX for heating, cooling and real estate requirements
  • Appliance  solution is pre-built and tested as a complete, end-to-end stack — easy to deploy and minimal technical resources required
  • Extensive integration of Microsoft and 3rd party tools  allow users to work with familiar tools like Excel as well as within heterogeneous BI environments
  • Unique HP Support Pack Utility Suite set of tools significantly simplifies updates and  other maintenance tasks to ensure system is running at optimal performance

Reduce risks and manage change

  • Services delivered jointly under a unique collaborative support agreement, integrated across hardware and software, to help avoid IT disruptions and deliver faster resolution to issues
  • Backed by more than 48,000 Microsoft professionals—with more than 12,000 Microsoft Certified—one of the largest, most specialized forces of consultants and support professionals for Microsoft environments in the world

Solution Components

  • HP Products
  • HP Services
  • HP Software
  • Partner’s Software
  • HP Support

[also available with Dell Parallel Data Warehouse Appliance]
        Appliance: Parallel Data Warehouse (PDW) [Microsoft PDW Software product page, Feb 27, 2013]

        PDW is a massively parallel processing data warehousing appliance built for any volume of relational data (with up to 100x performance gains) and provides the simplest integration to Hadoop.

        Unlike other vendors who opt to provide their high-end appliances for a high price or provide a relational data warehouse appliance that is disconnected from their “Big Data” and/or BI offerings, Microsoft SQL Server Parallel Data Warehouse provides both a high-end massively parallel processing appliance that can improve your query response times up to 100x over legacy solutions as well as seamless integration to both Hadoop and with familiar business intelligence solutions. What’s more, it was engineered to lower ongoing costs resulting in a solution that has the lowest price/terabyte in the market.

        What’s New in SQL Server 2012 Parallel Data Warehouse

        Key Capabilities

        • Built For Big Data with PolyBase

          SQL Server 2012 Parallel Data Warehouse introduces PolyBase, a fundamental breakthrough in data processing used to enable seamless integration between traditional data warehouses and “Big Data” deployments.

          • Use standard SQL queries (instead of MapReduce) to access and join Hadoop data with relational data.
          • Query Hadoop data without IT having to pre-load data first into the warehouse.
          • Native Microsoft BI Integration allowing analysis of relational and non-relational data with familiar tools like Excel.
        • Next-Generation Performance at Scale

          Scale and perform beyond your traditional SQL Server deployment with PDW’s massively parallel processing (MPP) appliance that can handle the extremes of your largest mission critical requirements of performance and scale.

          • Up to 100x faster than legacy warehouses with xVelocity updateable columnstore.
          • Massively Parallel Processing (MPP) architecture that parallelizes and distributes computing for high query concurrency and complexity.
          • Rest assured with built-in hardware redundancies for fault tolerance.
          • Rely on Microsoft as your single point of contact for hardware and software support.
        • Engineered For Optimal Value

          Unlike other vendors in the data warehousing space who deliver a high-end appliance at a high price, Microsoft engineered PDW for optimal value by lowering the cost of the appliance.

          • Resilient, scalable, and high performance storage features built into software lowering hardware costs.
          • Compress data up to 15x with the xVelocity updateable columnstore saving up to 70% of storage requirements.
          • Start small with a quarter rack allowing you to right-size the appliance rather than over-acquiring capacity.
          • Use the same tools and knowledge as SQL Server without having to learn new tools or skills for scale-out DW or Big Data.
          • Co-engineered with hardware partners for the highest level of product integration, and shipped to your door for the fastest time to value.
          • The lowest price per terabyte in the overall appliance market (and 2.5x lower than SQL Server 2008 R2 PDW).
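The MPP model described above — parallelizing and distributing computation across nodes — can be illustrated with a toy sketch. The following is plain Python with hypothetical data and function names (not PDW itself): rows are hash-distributed across nodes on a distribution key, each node aggregates its local shard, and the partial results are merged.

```python
from collections import defaultdict

def hash_distribute(rows, num_nodes):
    """Assign each row to a node by hashing its distribution key."""
    shards = [[] for _ in range(num_nodes)]
    for row in rows:
        shards[hash(row["customer_id"]) % num_nodes].append(row)
    return shards

def node_partial_sum(shard):
    """Each node computes a partial aggregate over its local shard only."""
    totals = defaultdict(float)
    for row in shard:
        totals[row["region"]] += row["amount"]
    return totals

def mpp_sum_by_region(rows, num_nodes=4):
    """Scatter rows, aggregate locally in parallel, then merge the partials."""
    merged = defaultdict(float)
    for shard in hash_distribute(rows, num_nodes):
        for region, amount in node_partial_sum(shard).items():
            merged[region] += amount
    return dict(merged)

rows = [
    {"customer_id": 1, "region": "East", "amount": 10.0},
    {"customer_id": 2, "region": "West", "amount": 5.0},
    {"customer_id": 3, "region": "East", "amount": 7.5},
]
print(mpp_sum_by_region(rows))  # {'East': 17.5, 'West': 5.0}
```

The point of the sketch is the shape of the computation: because each node only touches its own shard, query work scales out with the number of nodes.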

          PolyBase [Microsoft page, Feb 26, 2013]

          PolyBase is a fundamental breakthrough in data processing used in SQL Server 2012 Parallel Data Warehouse to enable truly integrated query across Hadoop and relational data.

          Complementing Microsoft’s overall Big Data strategy, PolyBase is a breakthrough new technology on the data processing engine in SQL Server 2012 Parallel Data Warehouse designed as the simplest way to combine non-relational data and traditional relational data in your analysis. While customers would normally burden IT to pre-populate the warehouse with Hadoop data or undergo extensive training on MapReduce in order to query non-relational data, PolyBase does all this seamlessly, giving you the benefits of “Big Data” without the complexities.

          Key Capabilities

          • Unifies Relational and Non-relational Data

            PolyBase is one of the most exciting technologies to emerge in recent times because it unifies the relational and non-relational worlds at the query level. Instead of learning a new query paradigm like MapReduce, customers can leverage what they already know (T-SQL).

            • Integrated Query: Accepts a standard T-SQL query that joins tables containing a relational source with tables in a Hadoop cluster without needing to learn MapReduce.
            • Advanced query options: Apart from simple SELECT queries, users can perform JOINs and GROUP BYs on data in the Hadoop cluster.
          • Enables In-place Queries with Familiar BI Tools

            Microsoft Business Intelligence (BI) integration enables users to connect to PDW with familiar tools such as Microsoft Excel, to create compelling visualizations and make key business decisions from structured or unstructured data quickly.

            • Integrated BI tools: End users can connect to both relational or Hadoop data with Excel abstracting the complexities of both.
            • Interactive visualizations: Explore data residing in HDFS using Power View for immersive interactivity and visualizations.
            • Query in-place: IT doesn’t have to pre-load or pre-move data from Hadoop into the data warehouse and pre-join the data before end users do the analysis.
          • Part of an Overall Microsoft Big Data Story

            PolyBase is part of an overall Microsoft “Big Data” solution that already includes HDInsight (a 100% Apache Hadoop compatible distribution for Windows Server and Windows Azure), Microsoft Business Intelligence, and SQL Server 2012 Parallel Data Warehouse.

            • Integrated with HDInsight: PolyBase can source the non-relational analysis from Microsoft’s 100% Apache compatible Hadoop distribution, HDInsight.
            • Built into PDW: PolyBase is built into SQL Server 2012 Parallel Data Warehouse to bring “Big Data” benefits within the power of a traditional data warehouse.
            • Integrated BI tools: PolyBase has native integration with familiar BI tools like Excel (through Power View and PowerPivot).
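To make the "query both worlds" idea concrete, here is a toy sketch of what PolyBase does conceptually — joining warehouse rows with records parsed from Hadoop-style delimited text, then grouping. This is plain Python with hypothetical tables and column names, not PolyBase itself (which accepts the equivalent as a T-SQL query):

```python
import csv
import io

# Hypothetical "relational" side: rows already in the warehouse.
customers = [
    {"customer_id": "1", "name": "Contoso"},
    {"customer_id": "2", "name": "Fabrikam"},
]

# Hypothetical "Hadoop" side: raw delimited text as it might sit in HDFS.
hdfs_text = "1,click,2013-11-19\n2,view,2013-11-19\n1,click,2013-11-20\n"

def parse_hdfs(text):
    """Parse delimited HDFS text into rows, as an external-table view would."""
    reader = csv.reader(io.StringIO(text))
    return [{"customer_id": cid, "event": ev, "date": d} for cid, ev, d in reader]

def join_and_group(relational, external):
    """JOIN on customer_id, then GROUP BY name with a COUNT."""
    names = {r["customer_id"]: r["name"] for r in relational}
    counts = {}
    for row in external:
        name = names.get(row["customer_id"])
        if name is not None:
            counts[name] = counts.get(name, 0) + 1
    return counts

print(join_and_group(customers, parse_hdfs(hdfs_text)))
# {'Contoso': 2, 'Fabrikam': 1}
```

The value PolyBase adds is that the parse-join-group above is expressed as one standard T-SQL statement, with no pre-loading of the Hadoop data into the warehouse.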

          Announcing Power BI for Office 365 [Office News, July 8, 2013]

          Today, at the Worldwide Partner Conference, we announced a new offering–Power BI for Office 365. Power BI for Office 365 is a cloud-based business intelligence (BI) solution that enables our customers to easily gain insights from their data, working within Excel to analyze and visualize the data in…

          Exciting new BI features in Excel [Excel Blog, July 9, 2013]

          Yesterday during Microsoft’s Worldwide Partner Conference we announced some exciting new Business Intelligence (BI) features available for Excel. Specifically, we announced the expansion of the BI offerings available as part of Power BI, a cloud-based BI solution that enables our customers to easily gain insights from their data, working within Excel to analyze and visualize the data in a self-service way.

          Power BI for Office 365 now includes:

          • Power Query, enabling customers to easily search and access public data and their organization’s data, all within Excel (formerly known as “Data Explorer”).  Download details here
          • Power Map, a 3D data visualization tool for mapping, exploring and interacting with geographic and temporal data (formerly known by the product codename “GeoFlow”).  Download details here.
          • Power Pivot for creating and customizing flexible data models within Excel. 
          • Power View for creating interactive charts, graphs and other visual representations of data.

          Head on over to the Office 365 Technology Blog, Office News Blog, and Power BI site to learn more.

          Clearing up some confusion around the Power BI “Release” [A.J. Mee’s Business Intelligence and Big Data Blog, Aug 13, 2013]

          Hey folks.  Thanks again for checking out my blog.
          Yesterday (8/12/2013), Power BI received some attention from the press.  Here’s one of the articles that I had seen talking about the “release” of Power BI:
          http://www.neowin.net/news/microsoft-releases-power-bi-office-365-for-windows-8rt

          Some of us inside Microsoft had to address all sorts of questions around this one.  For the most part, the questions revolved around the *scope* of what was actually released.  You have to remember that Power BI is a broad brand name that takes into account:

          * Power Pivot/View/Query/Map (which is available now, for the most part)

          * The Office 365 hosting of Power BI applications with cloud-to-on-premise data refresh, Natural Language query, data stewardship, etc..

          * The Mobile BI app for Windows and iOS devices

          Net-net: we announced the availability of the Mobile app (in preview form).  At present, it is only available on Windows 8 devices (x86 or ARM) – no iOS just yet.  The rest of the O365 / Power BI offering is yet to come.  Check out this article to find out how to sign up.
          http://blogs.msdn.com/b/ajmee/archive/2013/07/17/how-can-i-check-out-power-bi.aspx
          So, the headline story is really all around the Mobile app.  You can grab it today from the Store – just search on “Power BI” and it should be the first app that shows up.

          From: Power Map for Excel earns new name with significant updates to 3D visualizations and storytelling [Excel Blog, Sept 25, 2013]

          We are announcing a significant update to Power Map Preview for Excel (formerly Project codename “GeoFlow” Preview for Excel) on the Microsoft Download Center. Just over five months ago, we launched the preview of Project codename “GeoFlow” amidst a passionately announced “tour” of global song artists through the years by Amir Netz (see 1:17:00 in the keynote) at the first ever PASS Business Analytics conference in April. The 3D visualization add-in has now become a centerpiece visualization (along with Power View) within the business intelligence capabilities of Microsoft Power BI in Excel, earning the new name Power Map to align with other Excel features (Power Query, Power Pivot, and Power View).

          Information workers with their data in Excel have realized the potential of Power Map to identify insights in their geospatial and time-based data that traditional 2D charts cannot. Digital marketers can better target and time their campaigns while environmentally-conscious companies can fine-tune energy-saving programs across peak usage times. These are just a few of the examples of how location-based data is coming alive for customers using Power Map and distancing them from their competitors who are still staring blankly at a flat table, chart, or map. Feedback from customers like this led us to introduce Power Map with some new features across the experience of mapping data, discovering insights, and sharing stories.

          From: Microsoft unleashes fall wave of enterprise cloud solutions [press release, Oct 7, 2013]

          New Windows Server, System Center, Visual Studio, Windows Azure, Windows Intune, SQL Server, and Dynamics solutions will accelerate cloud benefits for customers.

          REDMOND, Wash. — Oct. 7, 2013 — Microsoft Corp. on Monday announced a wave of new enterprise products and services to help companies seize the opportunities of cloud computing and overcome today’s top IT challenges. Complementing Office 365 and other services, these new offerings deliver on Microsoft’s enterprise cloud strategy.

          Data platform and insights

          As part of its vision to help more people unlock actionable insights from big data, Microsoft next week will release a second preview of SQL Server 2014. The new version offers industry-leading in-memory technologies at no additional cost, giving customers 10 times to 30 times performance improvements without application rewrites or new hardware. SQL Server 2014 also works with Windows Azure to give customers built-in cloud backup and disaster recovery.

          For big data analytics, later this month Microsoft will release Windows Azure HDInsight Service, an Apache Hadoop-based service that works with SQL Server and widely used business intelligence tools, such as Microsoft Excel and Power BI for Office 365. With Power BI, people can combine private and public data in the cloud for rich visualizations and fast insights.

          How to take full advantage of Power BI in Excel 2013 [News from Microsoft Business UK, Oct 14, 2013]

          The launch of Power BI features in Excel 2013 gives users an added range of options for data analysis and gaining business intelligence (BI). Power Query, Power Pivot, Power View, and Power Map work seamlessly together, making it much simpler to discover and visualise data. And for small businesses looking to take advantage of self-service intelligence solutions, this is a major stride forwards.

          Power Query

          With Power Query, users can search the entire cloud for data – both public and private. With access to multiple data sources, users can filter, shape, merge, and append the information, without the need to physically bring it into Excel.

          Once your query is shaped and filtered how you want it, you can download it into a worksheet in Excel, into the Data Model, or both. When you have the dataset you need, shaped and formed and properly merged, you can save the query that created it, and share it with other users.
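Conceptually, the filter-shape-merge flow described above is a small data pipeline. The sketch below is plain Python over hypothetical record sets (not Power Query's actual "M" formula language): it filters one source, projects just the needed columns, and merges in a column from a second source before "loading" the result.

```python
# Two hypothetical sources, as a query tool might discover them.
orders = [
    {"id": 1, "region": "EMEA", "total": 120.0},
    {"id": 2, "region": "APAC", "total": 80.0},
    {"id": 3, "region": "EMEA", "total": 45.0},
]
regions = [{"region": "EMEA", "manager": "Ana"},
           {"region": "APAC", "manager": "Ken"}]

# Filter: keep only orders above a threshold.
filtered = [o for o in orders if o["total"] >= 50.0]

# Shape: project just the columns the report needs.
shaped = [{"region": o["region"], "total": o["total"]} for o in filtered]

# Merge: append a manager column looked up from the second source.
lookup = {r["region"]: r["manager"] for r in regions}
merged = [dict(o, manager=lookup[o["region"]]) for o in shaped]

print(merged)
```

The saved query in Power Query plays the role of this whole pipeline: rerunning it re-derives the result from the live sources rather than from a frozen copy.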

          Power Pivot

          Power Pivot enables users to create their own data models from various sources, structured to meet individual needs. You can customise, extend with calculations and hierarchies, and manage the powerful Data Model that is part of Excel.

          The solution works seamlessly and automatically with Power Query, and with other features of Power BI, allowing you to manage and extend your own custom database in the familiar environment of Excel. The entire Data Model in Power Pivot – including tables, columns, calculations and hierarchies – exists as report-ready elements in Power View.

          Power View

          Power View allows users to create engaging, interactive, and insightful visualisations with just a few clicks of their mouse. The tool brings the Data Model alive, turning queries into visual analysis and answers. Data can be presented in a variety of different forms, with the reports easily shareable and open for interactive analysis.

          Power Map

          A relatively new addition to Excel, Power Map is a geocentric and temporal mapping feature of Power BI. It brings location data into powerful, engaging 3D map visualisations. This allows users to create location-based reports, visualised over a time continuum, that tour the available data.

          Using the features together

          Power BI offers a collection of services which are designed to make self-service BI intuitive and collaborative. The solution combines the power and familiarity of Excel with collaboration and cloud-based functionality. This vastly increases users’ capacity to gather, manage and draw insights from data, ensuring they can make the most of business intelligence.

          The various features of Power BI can add value independently, but the real value is in integration. When used in conjunction with one another – rather than in silos – the services become more than the sum of their parts. They are designed to work seamlessly together in Excel 2013, supporting users as they look to find data, process it and create visualisations which add value to the decision making process.

          Posted by Alex Boardman

          Related upcoming technology announcements from Intel:

          GraphBuilder: Revealing hidden structure within Big Data [Intel Labs blog, Dec 6, 2012]

          By Ted Willke, Principal Engineer with Intel and the General Manager of the Graph Analytics Operation in Intel Labs.

          Big Data.  Big.  Data.  We hear the term frequently used to describe data of unusual size or generated at spectacular velocity, like the amount of social data that Facebook has amassed on us (30 PB in one cluster) or the rate at which sensors at the Large Hadron Collider collect information on subatomic particles (15 PB/year).  And it’s often deemed “unstructured or semi-structured” to describe its lack of apparent, well, structure.  What’s meant is that this data isn’t organized in a way that can directly answer questions, like a database can if you ask it how many widgets you sold last week.

          But Big Data does have structure; it just needs to be discovered from within the raw text, images, video, sensor data, etc., that comprise it.  And, companies, led by pioneers like Google, have been doing this for the better part of a decade, using applications that churn through the information using data-parallel processing and convenient frameworks for it, like Hadoop MapReduce.  Their systems chop the incoming data into slices, farm it out to masses of machines, which subsequently filter it, order it, sum it, transform it, and do just about anything you’d want to do with it, within the practical limits of the readily available frameworks.

          But until recently, only the wizards of Big Data were able to rapidly extract knowledge from a different type of structure within the data, a type that is best modeled by tree or graph structures.  Imagine the pattern of hyperlinks connecting Wikipedia pages or the connections between Tweeters and Followers on Twitter.  In these models, a line is drawn between two bits of information if they are related to each other in some way.  The nature of the connection can be less obvious than in these examples and made specifically to serve a particular algorithm.  For example, a popular form of machine learning called Latent Dirichlet Allocation (a mouthful, I know) can create “word clouds” of topics in a set of documents without being told the topics in advance. All it needs is a graph that connects word occurrences to the filenames.  Another algorithm can accurately guess the type of noun (i.e., person, place, or thing) if given a graph that connects noun phrases to surrounding context phrases.
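The word-occurrence-to-filename graph described above is easy to picture as a bipartite edge list. A minimal sketch of building one, in plain Python over a toy corpus (the documents and function names are hypothetical, for illustration only):

```python
import re
from collections import defaultdict

# Toy corpus: filename -> raw text (hypothetical documents).
docs = {
    "a.txt": "graphs model relationships between data",
    "b.txt": "machine learning on graphs finds structure",
}

def build_word_doc_graph(corpus):
    """Connect each word vertex to the filename vertices it occurs in."""
    edges = defaultdict(set)
    for filename, text in corpus.items():
        for word in re.findall(r"[a-z]+", text.lower()):
            edges[word].add(filename)
    return edges

graph = build_word_doc_graph(docs)
print(sorted(graph["graphs"]))  # ['a.txt', 'b.txt'] -- "graphs" occurs in both
```

An algorithm like LDA consumes exactly this kind of structure: the topology alone, with no topic labels attached, is enough for it to cluster words into topics.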

          Many of these graphs are very large, with tens of billions of vertices (i.e., things being related) and hundreds of billions of edges (i.e., the relationships).  And, many that model natural phenomena possess power-law degree distributions, meaning that many vertices connect to a handful of others, but a few may have edges to a substantial portion of the vertices.  For instance, a graph of Twitter relationships would show that many people only have a few dozen followers while only a handful of celebrities have millions. This is all very problematic for parallel computation in general and MapReduce in particular.  As a result, Carlos Guestrin and his crack team at the University of Washington in Seattle have developed a new framework, called GraphLab, that is specifically designed for graph-based parallel machine learning.  In many cases, GraphLab can process such graphs 20-50X faster than Hadoop MapReduce.  Learn more about their exciting work here.
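The power-law skew mentioned above is what you see when you tally vertex degrees. A toy sketch with hypothetical follower edges (plain Python, not GraphLab):

```python
from collections import Counter

# Hypothetical follower edges: (follower, followed).
edges = [
    ("u1", "celeb"), ("u2", "celeb"), ("u3", "celeb"), ("u4", "celeb"),
    ("u1", "u2"), ("u3", "u4"),
]

# In-degree per vertex: how many followers each account has.
in_degree = Counter(followed for _, followed in edges)
print(in_degree.most_common(2))  # one hub ('celeb', 4) dominates the tail
```

At Twitter scale this skew is the parallelization problem: the handful of celebrity-like hub vertices concentrate a huge share of the edges on a few machines, which is what GraphLab's graph-aware partitioning is designed to handle.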

          Carlos is a member of the Intel Science and Technology Center for Cloud Computing, and we started working with him on graph-based machine learning and data mining challenges in 2011.  Quickly it became clear that no one had a good story about how to construct large-scale graphs that frameworks like GraphLab could digest.  His team was constantly writing scripts to construct different graphs from various unstructured data sources.  These scripts ran on a single machine and would take a very long time to execute.  Essentially, they were using a labor-intensive, low-performance method to feed information to their elegant high-performance GraphLab framework.  This simply would not do.

          Scanning the environment, we identified a more general hole in the open source ecosystem: A number of systems were out there to process, store, visualize, and mine graphs but, surprisingly, not to construct them from unstructured sources.  So, we set out to develop a demo of a scalable graph construction library for Hadoop.  Yes, for Hadoop.  Hadoop is not good for graph-based machine learning but graph construction is another story.  This work became GraphBuilder, which was demonstrated in July at the First GraphLab Workshop on Large-Scale Machine Learning and open sourced this week at 01.org (under Apache 2.0 licensing).

          GraphBuilder not only constructs large-scale graphs fast but also offloads many of the complexities of graph construction, including graph formation, cleaning, compression, partitioning, and serialization.  This makes it easy for just about anyone to build graphs for interesting research and commercial applications.  In fact, GraphBuilder makes it possible for a Java programmer to build an internet-scale graph for PageRank in about 100 lines of code and a Wikipedia-sized graph for LDA in about 130.
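For a sense of proportion: once the graph exists, the core PageRank iteration itself is only a few lines. Here is a minimal sketch over a toy adjacency dict (plain Python; GraphBuilder's job is producing a `links` structure like this at internet scale):

```python
def pagerank(links, damping=0.85, iters=50):
    """Iterative PageRank over an adjacency dict {vertex: [out-neighbors]}."""
    n = len(links)
    ranks = {v: 1.0 / n for v in links}
    for _ in range(iters):
        new = {v: (1.0 - damping) / n for v in links}
        for v, outs in links.items():
            if outs:
                share = ranks[v] / len(outs)
                for u in outs:
                    new[u] += damping * share
            else:  # dangling vertex: spread its rank evenly
                for u in links:
                    new[u] += damping * ranks[v] / n
        ranks = new
    return ranks

toy = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
ranks = pagerank(toy)
print(max(ranks, key=ranks.get))  # 'c' attracts the most rank in this toy graph
```

The ~100 lines quoted for the internet-scale version go almost entirely into construction and partitioning, not into the ranking mathematics.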

          This is only the beginning for GraphBuilder but it has already made a lot of connections.  We will continually update it with new capabilities, so please try it out and let us know if you’d value something in particular.  And, let us know if you’ve got an interesting graph problem for us to grind through.  We are always looking for new revelations.

          Intel, Facebook Collaborate on Future Data Center Rack Technologies  [press release, Jan 16, 2013]

          New Photonic Architecture Promises to Dramatically Change Next Decade of Disaggregated, Rack-Scale Server Designs

          NEWS HIGHLIGHTS

          • Intel and Facebook* are collaborating to define the next generation of rack technologies that enables the disaggregation of compute, network and storage resources.
          • Quanta Computer* unveiled a mechanical prototype of the rack architecture to show the total cost, design and reliability improvement potential of disaggregation.
          • The mechanical prototype includes Intel Silicon Photonics Technology, distributed input/output using Intel Ethernet switch silicon, and supports the Intel® Xeon® processor and the next-generation system-on-chip Intel® Atom™ processor code named “Avoton.”
          • Intel has moved its silicon photonics efforts beyond research and development, and the company has produced engineering samples that run at speeds of up to 100 gigabits per second (Gbps).

          OPEN COMPUTE SUMMIT, Santa Clara, Calif., Jan. 16, 2013 – Intel Corporation announced a collaboration with Facebook* to define the next generation of rack technologies used to power the world’s largest data centers. As part of the collaboration, the companies also unveiled a mechanical prototype built by Quanta Computer* that includes Intel’s new, innovative photonic rack architecture to show the total cost, design and reliability improvement potential of a disaggregated rack environment.

          “Intel and Facebook are collaborating on a new disaggregated, rack-scale server architecture that enables independent upgrading of compute, network and storage subsystems that will define the future of mega-datacenter designs for the next decade,” said Justin Rattner, Intel’s chief technology officer during his keynote address at the Open Compute Summit in Santa Clara, Calif. “The disaggregated rack architecture [since renamed RSA (Rack Scale Architecture)] includes Intel’s new photonic architecture, based on high-bandwidth, 100Gbps Intel® Silicon Photonics Technology, that enables fewer cables, increased bandwidth, farther reach and extreme power efficiency compared to today’s copper based interconnects.”

          Rattner explained that the new architecture is based on more than a decade’s worth of research to invent a family of silicon-based photonic devices, including lasers, modulators and detectors using low-cost silicon to fully integrate photonic devices of unprecedented speed and energy efficiency. Silicon photonics is a new approach to using light (photons) to move huge amounts of data at very high speeds with extremely low power over a thin optical fiber rather than using electrical signals over a copper cable. Intel has spent the past two years proving its silicon photonics technology was production-worthy, and has now produced engineering samples.

          Silicon photonics made with inexpensive silicon rather than expensive and exotic optical materials provides a distinct cost advantage over older optical technologies in addition to providing greater speed, reliability and scalability benefits. Businesses with server farms or massive data centers could eliminate performance bottlenecks and ensure long-term upgradability while saving significant operational costs in space and energy.

          Silicon Photonics and Disaggregation Efficiencies

          Businesses with large data centers can significantly reduce capital expenditure by disaggregating or separating compute and storage resources in a server rack. Rack disaggregation refers to the separation of those resources that currently exist in a rack, including compute, storage, networking and power distribution into discrete modules. Traditionally, a server within a rack would each have its own group of resources. When disaggregated, resource types can be grouped together and distributed throughout the rack, improving upgradability, flexibility and reliability while lowering costs.

          “We’re excited about the flexibility that these technologies can bring to hardware and how silicon photonics will enable us to interconnect these resources with less concern about their physical placement,” said Frank Frankovsky, chairman of the Open Compute Foundation and vice president of hardware design and supply chain at Facebook. “We’re confident that developing these technologies in the open and contributing them back to the Open Compute Project will yield an unprecedented pace of innovation, ultimately enabling the entire industry to close the utilization gap that exists with today’s systems designs.”

          By separating critical components from one another, each computer resource can be upgraded on its own cadence without being coupled to the others. This provides increased lifespan for each resource and enables IT managers to replace just that resource instead of the entire system. This increased serviceability and flexibility drives improved total-cost for infrastructure investments as well as higher levels of resiliency. There are also thermal efficiency opportunities by allowing more optimal component placement within a rack.

          The mechanical prototype is a demonstration of Intel’s photonic rack architecture for interconnecting the various resources, showing one of the ways compute, network and storage resources can be disaggregated within a rack. Intel will contribute a design for enabling a photonic receptacle to the Open Compute Project (OCP) and will work with Facebook*, Corning*, and others over time to standardize the design. The mechanical prototype includes distributed input/output (I/O) using Intel Ethernet switch silicon, and will support the Intel® Xeon® processor and the next generation, 22 nanometer system-on-chip (SoC) Intel® Atom™ processor, code named “Avoton” available this year.

          The mechanical prototype shown today is the next evolution of rack disaggregation with separate distributed switching functions.

          Intel and Facebook: A History of Collaboration and Contributions

          Intel and Facebook have long been technology collaboration partners on hardware and software optimizations to drive more efficiency and scale for Facebook data centers. Intel is also a founding board member of the OCP, along with Facebook. Intel has several OCP engagements in flight including working with the industry to design OCP boards for Intel Xeon and Intel Atom based processors, support for cold storage with the Intel Atom processor, and common hardware management as well as future rack definitions including enabling today’s photonics receptacle.

          Disruptive technologies to unlock the power of Big Data [Intel Labs blog, Feb 26, 2013]

          By Ted Willke, Principal Engineer with Intel and the General Manager of the Graph Analytics Operation in Intel Labs.

          This week’s announcement by Intel that it’s expanding the availability of the Intel® Distribution for Apache Hadoop* software to the US market is seriously exciting for the employees of this semiconductor giant, especially researchers like me.  Why?  Why would I say this given the amount of overexposure that Hadoop has received?  I mean, isn’t this technology nearly 10 years old already??!!  Well, because the only thing I hear more than people touting Hadoop’s promise are people venting frustration in implementing it.  Rest assured that Intel is listening.  We get that users don’t want to make a career out of configuring Hadoop… debugging it…  managing it… and trying to figure out why the “insight” it’s supposed to be delivering often looks like meaningless noise.

          Which brings me back to why this is a seriously exciting event for me.  With our product teams doing the heavy lifting of making the Hadoop framework less rigid and easier to use while keeping it inexpensive, Intel Labs gets a landing zone for some cool disruptive technologies. In December, I blogged about the launch of our open source scalable graph construction library for Hadoop, called Intel® Graph Builder for Apache Hadoop software (f.k.a. GraphBuilder), and explained how it makes it easy to construct large scale graphs for machine learning and data mining. These structures can yield insights from relationships hidden within a wide range of big data sources, from social media and business analytics to medicine and e-science. Today I’ll delve a bit more into Graph Builder technology and introduce the Intel® Active Tuner for Apache Hadoop software, an auto-tuner that uses Artificial Intelligence (AI) to configure Hadoop for optimal performance.  Both technologies will be available in the Intel Distribution.

So, Intel® Graph Builder leverages Hadoop MapReduce to turn large unstructured (or semi-structured) datasets into structured output in graph form.  This kind of graph may be mined using graph search of the sort that Facebook recently announced.  Many companies would like to construct such graphs out of unstructured datasets, and Graph Builder makes it possible.  Beyond search, analysis may be applied to an entire graph to answer questions of the type shown in the figure below.  The analysis may be performed using distributed algorithms implemented in frameworks like GraphLab, which I also discussed in my previous post.

          image

          Intel® Graph Builder performs extract, transform, and load operations, terms borrowed from databases and data warehousing.  And, it does so at Hadoop MapReduce scale.  Text is parsed and tokenized to extract interesting features.  These operations are described in a short map-reduce program written by the data scientist.  This program also defines when two vertices (i.e., features) in the graph are related by an edge.  The rule is applied repeatedly to form the graph’s topology (i.e., the pattern of edge relationships between vertices), which is stored via the library.  In addition, most applications require that additional tabulated information, or “network information,” be associated with each vertex/edge and the library provides a number of distributed algorithms for these tabulations.
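To make that flow concrete, here is a hypothetical Python sketch of the MapReduce-style construction just described. The map phase tokenizes each document and emits candidate edges under a made-up co-occurrence rule, and the reduce phase groups them into a graph that carries per-edge "network information." None of this is the actual Graph Builder API; it only illustrates the pattern.

```python
import re
from collections import defaultdict

def map_phase(doc_id, text):
    """Tokenize a document and emit candidate edges. The (hypothetical)
    edge rule here relates two words that co-occur in a document."""
    tokens = set(re.findall(r"[a-z]+", text.lower()))
    for a in tokens:
        for b in tokens:
            if a < b:                    # emit each unordered pair once
                yield (a, b), doc_id     # key = edge, value = evidence

def reduce_phase(pairs):
    """Group emitted pairs by edge; the per-edge document list plays the
    role of the tabulated 'network information' mentioned above."""
    graph = defaultdict(list)
    for edge, doc_id in pairs:
        graph[edge].append(doc_id)
    return dict(graph)

docs = {1: "intel builds chips", 2: "intel builds fabs"}
pairs = [p for d, t in docs.items() for p in map_phase(d, t)]
graph = reduce_phase(pairs)
# the edge ("builds", "intel") is supported by both documents
```

In a real deployment the map and reduce phases run as distributed Hadoop jobs over HDFS, not in a single process as here.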

          At this point, we have a large-scale graph ready for HDFS, HBase, or another distributed store.  But we need to do a few more things to ensure that queries and computations on the graph will scale up nicely, like:

          • Cleaning the graph’s structure and checking that it is reasonable
          • Compressing the graph and network information to conserve cluster resources
          • Partitioning the graph in a way that will minimize cluster communications while load balancing computational effort

          The Intel Graph Builder library provides efficient distributed algorithms for all of the above, and more, so that data scientists can spend more of their time analyzing data and less of their time preparing it.  Enough said. The library will be included in the Intel Distribution shortly and we look forward to your feedback.  We are constantly on the hunt for new features as we look to the future of big data.
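As a toy illustration of the partitioning step in the list above (not Graph Builder's actual algorithm), the following greedy partitioner tries to place each edge on the machine that already holds its endpoints, which reduces cross-machine communication, while a hard capacity cap keeps computational load balanced:

```python
import math

def partition_edges(edges, k):
    """Greedy edge partitioning with a balance cap: assign each edge to
    the machine that already holds most of its endpoints, among machines
    that still have capacity. A crude stand-in for real partitioners."""
    cap = math.ceil(len(edges) / k)        # max edges per machine
    verts = [set() for _ in range(k)]      # vertices resident per machine
    loads = [0] * k
    assign = {}
    for u, v in edges:
        open_parts = [i for i in range(k) if loads[i] < cap]
        # prefer locality (shared endpoints), break ties by lighter load
        best = max(open_parts,
                   key=lambda i: (len(verts[i] & {u, v}), -loads[i]))
        assign[(u, v)] = best
        verts[best] |= {u, v}
        loads[best] += 1
    return assign, loads

ring = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")]
assignment, loads = partition_edges(ring, 2)   # loads come out balanced
```

Production systems use far more sophisticated schemes (vertex cuts, streaming heuristics), but the objective is the same trade-off between communication and balance.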

          Whereas Intel® Graph Builder was developed to simplify the programming of emerging applications, Intel® Active Tuner was developed to simplify the deployment of today’s applications by automating the selection of configuration settings that will result in optimal cluster performance. In fact, we initially codenamed this technology “Gunther,” after a well-known circus elephant trainer, because of its ability to train Hadoop to run faster :-) .  It’s cruelty-free to boot, I promise.  Anyway, many Hadoop configuration parameters need to be tuned for the characteristics of each particular application, such as web search, medical image analysis, audio feature analysis, fraud detection, semantic analysis, etc.  This tuning significantly reduces both job execution and query time but is time consuming and requires domain expertise. If you use Hadoop you know that the common practice is to tune it up using rule-of-thumb settings published by industry leaders.  But these recommendations are too general and fail to capture the specific requirements of a given application and cluster resource constraints.  Enter the Active Tuner.

          Intel® Active Tuner implements a search engine that uses a small number of representative jobs to identify the best configuration from among millions or billions of possible Hadoop configurations.  It uses a form of AI known as a genetic algorithm to search out the best settings for the number of maps, buffer sizes, compression settings, etc., constantly striving to derive better settings by combining those from pairs of trials that show the most promise (this is where the genetic part comes in) and deriving future trials from these new combinations.  And, the Active Tuner can do this faster and more effectively than a human can using the rules-of-thumb.  It can be controlled from a slick GUI in the new Intel Manager for Apache Hadoop, so take it for a test run when you pick up a copy of the Intel Distribution.  You may see your cluster performance improve by up to 30% without any hassle.

          To wrap, these are one-of-a-kind technologies that I think you’ll have fun playing with.  And, despite offering quite a lot, Intel® Graph Builder and Intel® Active Tuner are just the beginning.  I am very excited by what’s coming next.  Intel is moving to unlock the power of Big Data and Intel Labs is preparing to blow it wide open.

          *Other names and brands may be claimed as the property of others

          Intel Unveils New Technologies for Efficient Cloud Datacenters [press release, Sept 4, 2013]

          From New SoCs to Optical Fiber, Intel Delivers Cloud-Optimized Innovations Across Network, Storage, Microservers, and Rack Designs

          NEWS HIGHLIGHTS

• The Intel® Atom™ C2000 processor family is the first based on the Silvermont micro-architecture; it has 13 customized configurations and is aimed at microservers, entry-level networking and cold storage.
• The new 64-bit system-on-chip family for the datacenter delivers up to six times the energy efficiency and up to seven times the performance of the previous generation.
• The first live demonstration of a Rack Scale Architecture-based system with high-speed Intel® Silicon Photonics components, including a new MXC connector and ClearCurve* optical fiber developed in collaboration with Corning*, enabling data transfer speeds of up to 1.6 terabits per second at distances up to 300 meters for greater rack density.

          SAN FRANCISCO, Calif., September 4, 2013 – Intel Corporation today introduced a portfolio of datacenter products and technologies for cloud service providers looking to drive greater efficiency and flexibility into their infrastructure to support a growing demand for new services and future innovation.

          Server, network and storage infrastructure is evolving to better suit an increasingly diverse set of lightweight workloads, creating the emergence of microserver, cold storage and entry networking segments. By optimizing technologies for specific workloads, Intel will help cloud providers significantly increase utilization, drive down costs and provide compelling and consistent experiences to consumers and businesses.

The portfolio includes the second generation 64-bit Intel® Atom™ C2000 product family of system-on-chip (SoC) designs for microservers and cold storage platforms (code named “Avoton”) and for entry networking platforms (code named “Rangeley”). These new SoCs are the company’s first products based on the Silvermont micro-architecture, a new design built on its leading 22nm Tri-Gate SoC process that delivers significant increases in performance and energy efficiency, and they arrive only nine months after the previous generation.

          “As the world becomes more and more mobile, the pressure to support billions of devices and users is changing the very composition of datacenters,” said Diane Bryant, senior vice president and general manager of the Datacenter and Connected Systems Group at Intel. “From leadership in silicon and SoC design to rack architecture and software enabling, Intel is providing the key innovations that original equipment manufacturers, telecommunications equipment makers and cloud service providers require to build the datacenters of the future.”

          Intel also introduced the Intel® Ethernet Switch FM5224 silicon which, when combined with the WindRiver Open Network Software suite, brings Software Defined Networking (SDN) solutions to servers for improved density and lower power.

Intel also demonstrated the first operational Intel Rack Scale Architecture (RSA)-based rack with Intel® Silicon Photonics Technology, in combination with the disclosure of a new MXC connector and ClearCurve* optical fiber developed by Corning* with requirements from Intel. This demonstration highlights the speed with which Intel and the industry are moving from concept to functionality.

          Customized, Optimized Intel® Atom™ SoCs for New and Existing Market Segments
Manufactured using Intel’s leading 22nm process technology, the new Intel Atom C2000 product family features up to eight cores, a TDP range of 6 to 20 watts, integrated Ethernet and support for up to 64 gigabytes (GB) of memory, eight times the previous generation. OVH* and 1&1, leading global web-hosting services companies, have tested Intel Atom C2000 SoCs and plan to deploy them in their entry-level dedicated hosting services next quarter. The 22 nanometer process technology delivers superior performance and performance per watt.

          Intel is delivering 13 specific models with customized features and accelerators that are optimized for particular lightweight workloads such as entry dedicated hosting, distributed memory caching, static web serving and content delivery to ensure greater efficiency. The designs allow Intel to expand into new markets like cold storage and entry-level networking.

          For example, the new Intel Atom configurations for entry networking address the specialized needs for securing and routing Internet traffic more efficiently. The product features a set of hardware accelerators called Intel® QuickAssist Technology that improves cryptographic performance. They are ideally suited for routers and security appliances.

          By consolidating three communications workloads – application, control and packet processing – on a common platform, providers now have tremendous flexibility. They will be able to meet the changing network demands while adding performance, reducing costs and improving time-to-market.

Ericsson, a world-leading provider of communications technology and services, announced that its blade-based switches used in the Ericsson Cloud System, a solution which enables service providers to add cloud capabilities to their existing networks, will soon include the Intel Atom C2000 SoC product family.

          Microserver-Optimized Switch for Software Defined Networking
          Network solutions that manage data traffic across microservers can significantly impact the performance and density of the system. The unique combination of the Intel Ethernet Switch FM5224 silicon and the WindRiver Open Network Software suite will enable the industry’s first 2.5GbE, high-density, low latency, SDN Ethernet switch solutions specifically developed for microservers. The solution enhances system level innovation, and complements the integrated Intel Ethernet controller within the Intel Atom C2000 processor. Together, they can be used to create SDN solutions for the datacenter.

Switches using the new Intel Ethernet Switch FM5224 silicon can connect up to 64 microservers, providing up to 30 percent higher node density. They are based on the Intel Open Network Platform reference design announced earlier this year.

          First Demonstration of Silicon Photonics-Powered Rack
          Maximum datacenter efficiency requires innovation at the silicon, system and rack level. Intel’s RSA design helps industry partners to re-architect datacenters for modularity of components (storage, CPU, memory, network) at the rack level. It provides the ability to provision or logically compose resources based on application specific workload requirements. Intel RSA also will allow for the easier replacement and configuration of components when deploying cloud computing, storage and networking resources.

          Intel today demonstrated the first operational RSA-based rack equipped with the newly announced Intel Atom C2000 processors, Intel® Xeon® processors, a top-of-rack Intel SDN-enabled switch and Intel Silicon Photonics Technology. As part of the demonstration, Intel also disclosed the new MXC connector and ClearCurve* fiber technology developed by Corning* with requirements from Intel. The fiber connections are specifically designed to work with Intel Silicon Photonics components.

The collaboration underscores the tremendous need for high-speed bandwidth within datacenters. By sending photons over a thin optical fiber instead of electrical signals over a copper cable, the new technologies are capable of transferring massive amounts of data at unprecedented speeds over greater distances. The transfers can be as fast as 1.6 terabits per second at lengths up to 300 meters throughout the datacenter.

          To highlight the growing range of Intel RSA implementations, Microsoft and Intel announced a collaboration to innovate on Microsoft’s next-generation RSA rack design. The goal is to bring even better utilization, economics and flexibility to Microsoft’s datacenters.

          The Intel Atom C2000 product family is shipping to customers now with more than 50 designs for microservers, cold storage and networking. The products are expected to be available in the coming months from vendors including Advantech*, Dell*, Ericsson*, HP*, NEC*, Newisys*, Penguin Computing*, Portwell*, Quanta*, Supermicro*, WiWynn*, ZNYX Networks*.

          Intel Brings Supercomputing Horsepower to Big Data Analytics [press release, Nov 19, 2013]

NEWS HIGHLIGHTS

          • Intel discloses form factors and memory configuration details of the CPU version of the next generation Intel® Xeon Phi™ processor (code named “Knights Landing“), to ease programmability for developers while improving performance.
          • Intel® Xeon® processor-based systems power more than 82 percent of all supercomputers on the recently announced 42nd edition of the Top500 list.
          • New Intel® HPC Distribution for Apache Hadoop* and Intel® Cloud Edition for Lustre* software tools bring the benefits of Big Data analytics and HPC together.
          • Collaboration with HPC community designed to deliver customized products to meet the diverse needs of customers.

SUPERCOMPUTING CONFERENCE, Denver, Nov. 19, 2013 – Intel Corporation unveiled innovations in HPC and announced new software tools that will help propel businesses and researchers to generate greater insights from their data and solve their most vital business and scientific challenges.

          “In the last decade, the high-performance computing community has created a vision of a parallel universe where the most vexing problems of society, industry, government and research are solved through modernized applications,” said Raj Hazra, Intel vice president and general manager of the Technical Computing Group. “Intel technology has helped HPC evolve from a technology reserved for an elite few to an essential and broadly available tool for discovery. The solutions we enable for ecosystem partners for the second half of this decade will drive the next level of insight from HPC. Innovations will include scale through standards, performance through application modernization, efficiency through integration and innovation through customized solutions.”

          Accelerating Adoption and Innovation
          From Intel® Parallel Computing Centers to Intel® Xeon Phi™ coprocessor developer kits, Intel provides a range of technologies and expertise to foster innovation and adoption in the HPC ecosystem. The company is collaborating with partners to take full advantage of technologies available today, as well as create the next generation of highly integrated solutions that are easier to program for and are more energy-efficient. As a part of this collaboration Intel also plans to deliver customized HPC products to meet the diverse needs of customers. This initiative is aimed to extend Intel’s continued value of standards-based scalable platforms to include optimizations that will accelerate the next wave of scientific, industrial, and academic breakthroughs.

          During the Supercomputing Conference (SC’13), Intel unveiled how the next generation Intel Xeon Phi product (codenamed “Knights Landing”), available as a host processor, will fit into standard rack architectures and run applications entirely natively instead of requiring data to be offloaded to the coprocessor. This will significantly reduce programming complexity and eliminate “offloading” of the data, thus improving performance and decreasing latencies caused by memory, PCIe and networking.

          Knights Landing will also offer developers three memory options to optimize performance. Unlike other Exascale concepts requiring programmers to develop code specific to one machine, new Intel Xeon Phi processors will provide the simplicity and elegance of standard memory programming models.

          In addition, Intel and Fujitsu recently announced an initiative that could potentially replace a computer’s electrical wiring with fiber optic links to carry Ethernet or PCI Express traffic over an Intel® Silicon Photonics link. This enables Intel Xeon Phi coprocessors to be installed in an expansion box, separated from host Intel Xeon processors, but function as if they were still located on the motherboard. This allows for much higher density of installed coprocessors and scaling the computer capacity without affecting host server operations.

          Several companies are already adopting Intel’s technology. For example, Fovia Medical*, a world leader in volume rendering technology, created high-definition, 3D models to help medical professionals better visualize a patient’s body without invasive surgery. A demonstration from the University of Oklahoma’s Center for Analysis and Prediction of Storms (CAPS) showed a 2D simulation of an F4 tornado, and addressed how a forecaster will be able to experience an immersive 3D simulation and “walk around a storm” to better pinpoint its path. Both applications use Intel® Xeon® technology.

          High Performance Computing for Data-Driven Discovery
          Data intensive applications including weather forecasting and seismic analysis have been part of the HPC industry from its earliest days, and the performance of today’s systems and parallel software tools have made it possible to create larger and more complex simulations. However, with unstructured data accounting for 80 percent of all data, and growing 15 times faster than other data1, the industry is looking to tap into all of this information to uncover valuable insight.

          Intel is addressing this need with the announcement of the Intel® HPC Distribution for Apache Hadoop* software (Intel® HPC Distribution) that combines the Intel® Distribution for Apache Hadoop software with Intel® Enterprise Edition of Lustre* software to deliver an enterprise-grade solution for storing and processing large data sets. This powerful combination allows users to run their MapReduce applications, without change, directly on shared, fast Lustre-powered storage, making it fast, scalable and easy to manage.

          The Intel® Cloud Edition for Lustre* software is a scalable, parallel file system that is available through the Amazon Web Services Marketplace* and allows users to pay-as-you go to maximize storage performance and cost effectiveness. The software is ideally suited for dynamic applications, including rapid simulation and prototyping. In the case of urgent or unplanned work that exceeds a user’s on-premise compute or storage performance, the software can be used for cloud bursting HPC workloads to quickly provision the infrastructure needed before moving the work into the cloud.

          With numerous vendors announcing pre-configured and validated hardware and software solutions featuring the Intel Enterprise Edition for Lustre, at SC’13, Intel and its ecosystem partners are bringing turnkey solutions to market to make big data processing and storage more broadly available, cost effective and easier to deploy. Partners announcing these appliances include Advanced HPC*, Aeon Computing*, ATIPA*, Boston Ltd.*, Colfax International*, E4 Computer Engineering*, NOVATTE* and System Fabric Works*.

          Intel Tops Supercomputing Top 500 List
          Intel’s HPC technologies are once again featured throughout the 42nd edition of the Top500 list, demonstrating how the company’s parallel architecture continues to be the standard building block for the world’s most powerful supercomputers. Intel-based systems account for more than 82 percent of all supercomputers on the list and 92 percent of all new additions. Within a year after the introduction of Intel’s first Many Core Architecture product, Intel Xeon Phi coprocessor-based systems already make up 18 percent of the aggregated performance of all Top500 supercomputers. The complete Top500 list is available at www.top500.org.


          1 From IDC Digital Universe 2020 (2013)

          Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products.
          Optimization Notice
Intel’s compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice.
          Intel does not control or audit the design or implementation of third party benchmark data or Web sites referenced in this document. Intel encourages all of its customers to visit the referenced Web sites or others where similar performance benchmark data are reported and confirm whether the referenced benchmark data are accurate and reflect performance of systems available for purchase.

          Fujitsu Lights up PCI Express with Intel Silicon Photonics [The Data Stack blog of Intel, Nov 5, 2013]

          Victor Krutul is the Director of Marketing for the Silicon Photonics Operation at Intel.  He shares the vision and passion of Mario Paniccia that Silicon Photonics will one day revolutionize the way we build computers and the way computers talk to each other.  His other passions are tennis and motorcycles (but not at the same time)!

I am happy to report that Fujitsu announced at its annual Fujitsu Forum on November 5th, 2013, that it has worked with Intel to build and demonstrate the world’s first Intel® Optical PCI Express (OPCIe) based server.  This OPCIe server was enabled by Intel® Silicon Photonics technology.  I think Fujitsu has done some good work here: it realized that OPCIe-based servers offer several advantages over conventional ones.  Rack-based servers, especially 1U and 2U servers, are space and power constrained.  Sometimes OEMs and end users want to add additional capabilities such as more storage and CPUs to these servers but are limited because there is simply not enough space for these components, or because packing too many components too close together increases the heat density and prevents the system from being able to cool them.

          Fujitsu found a way to fix these limitations!

The solution to the power and space density problems is to locate the storage and compute components on a remote blade or tray in a way that makes them appear to the CPU to be on the main motherboard.  The other way to do this is to have a pool of hard drives managed by a second server, but this approach requires messages to be sent between the two servers, and this adds latency – which is bad.  It is possible to do this with copper cables; however, the distance the copper cables can span is limited due to electro-magnetic interference (EMI).  One could use amplifiers and signal conditioners, but these obviously add power and cost.  Additionally, PCI Express cables can be heavy and bulky.  I have one of these PCI Express Gen 3 16-lane cables and it feels like it weighs 20 lbs.  Compare this to an MXC cable that carries 10x the bandwidth and weighs one to two pounds depending on length.

Fujitsu took two standard Primergy RX200 servers and added an Intel® Silicon Photonics module into each, along with an Intel-designed FPGA.  The FPGA did the necessary signal conditioning to make PCI Express “optical friendly”.  Using Intel® Silicon Photonics, they were able to send the PCI Express protocol optically through an MXC connector to an expansion box (see picture below).  In this expansion box were several solid-state disks (SSDs) and Xeon Phi co-processors, and of course a Silicon Photonics module along with the FPGA to make PCI Express optical friendly.  The beauty of this approach is that the SSDs and Xeon Phis appear to the RX200 server as if they were on the motherboard.  Light travels through fiber at roughly two-thirds of its 186,000-miles-per-second vacuum speed, so the extra latency of traveling down a few meters of cable works out to about 5 ns (5 billionths of a second) per meter and cannot reliably be measured.

So what are the benefits of this approach?  Basically there are four.  First, Fujitsu was able to increase the storage capacity of the server, because it can now utilize the additional disk drives in the expansion box; the number of drives is determined only by the physical size of the box.  The second benefit is an increase in the effective CPU capacity of the Xeon E5s in the RX200 server, because the Xeon E5s can now utilize the CPU capacity of the Xeon Phi co-processors; in a standard 1U server it would be hard, if not impossible, to incorporate Xeon Phis.  The third benefit is total cooling: putting the SSDs in an expansion box allows the system to burn more power, because the cooling load is divided between the fans in the 1U server and those in the expansion box.  The fourth benefit is what is called cooling density, or how much heat needs to be cooled per cubic centimeter.  Let me make up an example.

For simplicity’s sake, say the volume of a 1U server is 1 cubic meter, and say there are 3 fans cooling it, each able to cool 333 watts, for a total capacity of roughly 1,000 watts of cooling.  If I space components evenly, each fan does its share and I can cool 1,000 watts.  Now assume I crowd all the components in front of just one fan, leaving nothing in front of the other two.  If those components dissipate more than 333 watts they can’t be cooled.  That’s cooling density.  The Fujitsu approach solves the SSD expansion problem, the CPU expansion problem, and both the total-cooling and cooling-density problems.
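The two back-of-the-envelope figures above, fiber latency and cooling density, can be checked with a few lines of arithmetic. The refractive-index value is an assumption on my part (light in glass travels at roughly c/1.5):

```python
C_VACUUM = 3.0e8                     # speed of light in vacuum, m/s

def fiber_latency_ns(meters, index=1.5):
    """Extra one-way latency added by a length of optical fiber."""
    return meters / (C_VACUUM / index) * 1e9

def usable_cooling_w(total_cooling_w, fans, fans_in_use):
    """Cooling-density toy model: each fan cools an equal share of the
    chassis, so crowding all components in front of one fan leaves the
    other fans' capacity unusable."""
    return total_cooling_w / fans * fans_in_use

# ~5 ns per meter, matching the figure quoted in the post
per_meter = fiber_latency_ns(1)

# ~1000 W of cooling from 3 fans: evenly spread components can use all
# of it, but crowding everything in front of one fan leaves ~333 W
crowded = usable_cooling_w(1000, 3, 1)
```

So a 3-meter optical cable adds roughly 15 ns each way, negligible next to storage access times measured in microseconds.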

          image

Go to: https://www-ssl.intel.com/content/dam/www/public/us/en/images/research/pci-express-and-mxc-2.jpg if you want to see the PCI Express copper cable vs. the MXC optical cable (you will also see we had a little fun with the whole optical-vs-copper thing).

          Besides Intel® Silicon Photonics the Fujitsu demo also included Xeon E5 microprocessors and Xeon Phi co-processors.

          Why does Intel want to put lasers in and around computers?

Photonic signaling (aka fiber optics) has two fundamental advantages over copper signaling.  First, when electrical signals go down a wire or PCB trace they emit electromagnetic radiation (EMI), and when the EMI from one wire or trace couples into an adjacent one it causes noise, which limits the bandwidth-distance product.  For example, 10G Ethernet copper cables have a practical limit of 10 meters.  Yes, you can put amplifiers or signal conditioners on the cables and make an “active copper cable,” but these add power and cost.  Active copper cables are made for 10G Ethernet and they have a practical limit of 20 meters.

Photons don’t emit EMI the way electrons do, so fiber-based cables can reach much farther.  For example, with the lower-cost lasers used in data centers today, at 10G you can build 500-meter cables.  You can go as far as 80 km with a more expensive laser, but such links are only needed a fraction of the time in the data center (usually when you are connecting the data center to the outside world).

The other benefit of optical communication is lighter cables.  Optical fibers are thin (typically 120 microns) and light.  I have heard of situations where large data centers had to reinforce their raised floors because, with all the copper cable, the floor loading limits would have been exceeded.

          So how come optical communications is not used more in the data center today? The answer is cost!

Optical devices made for data centers are expensive.  They are made out of expensive and exotic materials like lithium niobate or gallium arsenide: difficult to pronounce, even more difficult to manufacture.  The state of the art for these exotic materials is 3-inch wafers with very low yields.  Manufacturing these optical devices is costly: they are packaged inside gold-lined cans, and sometimes manual assembly is required as technicians “light up” the lasers and align them to the thin fibers.  A special index-matching epoxy is used that can cost as much per ounce as gold.  The bottom line is that while optical communications can go farther over lighter fiber cables, it costs a lot more.

Enter Silicon Photonics!  Silicon photonics is the science of making photonic devices out of silicon in a CMOS fab.  (We say “photonics” rather than “optics” because the word “optical” is also used to describe eyeglasses and telescopes.)  Silicon is one of the most common elements in the Earth’s crust, so it’s not expensive.  Intel has 40+ years of CMOS manufacturing experience and has worked over those 40 years to drive costs down and manufacturing speeds up.  In fact, Intel currently has over $65 billion of capital investment in CMOS fabs around the world.  In short, the vision of Intel® Silicon Photonics is to combine the natural advantages of optical communications with the low-cost advantage of making devices out of silicon in a CMOS fab.

          Intel has been working on Intel® Silicon Photonics (SiPh) for over ten years and has begun the process of productizing it.  Earlier this year, at the OCP summit, Intel announced that we had begun the long process of building up our manufacturing capabilities for Silicon Photonics.  We also announced that we had sampled early parts to customers.

          People often ask me when we will ship our products and how much they will cost.   They also ask me for all sorts of technical details about our SiPh modules.  I tell them that Intel is focusing on a full line of solutions – not a single component technology. What our customers want are complete Silicon Photonics-based solutions that will make computing easier, faster or less costly.  Let me cite our record of delivering end-to-end solutions:

          Summary of Intel Solution Announcements

          January 2013:  We made a joint announcement with Facebook at the Open Compute Project (OCP) meeting that we had worked together to design a disaggregated rack architecture (since renamed RSA [Rack Scale Architecture]).  This architecture used Intel® Silicon Photonics and allowed the storage and networking to be disaggregated, or moved away from, the CPU motherboard.  The benefit is that users can now choose which components they want to upgrade and are not forced to upgrade everything at the same time.

          April 2013: At the Intel Developer Forum we gave the first-ever public demonstration of Intel® Silicon Photonics running at 100G.

          September 2013: We demonstrated a live working Rack Scale Architecture solution using Intel® Silicon Photonics links carrying Ethernet protocol.

          September 2013: Joint announcement with Corning of the new MXC and ClearCurve fiber solution, capable of transmission over 300 m with Intel® Silicon Photonics at 25G.  This reinforced our strategy of delivering a complete solution, including cables and connectors that are optimized for Intel® Silicon Photonics.

          September 2013: Updated demonstration of a solution using Silicon Photonics to send data at 25G over more than 800 meters of multimode fiber – a new world record.

          Today: Intel has extended its Silicon Photonics solution leadership with a joint announcement with Fujitsu demonstrating the world’s first Intel® Silicon Photonics link carrying PCI Express protocol.

          I hope you will agree with me that Intel is focusing on more than just CPUs or optical modules and will deliver a complete Silicon Photonics solution!

          The future is here: Yes, it is Microsoft Surface 2 with modern apps only! (And ARM, not x86/x64!)

          This video speaks for itself (and for the title): Why I Love my Microsoft Surface 2 : Tips and Tricks [Sean Ong YouTube channel, Nov 3, 2013]

          In this video I show off my favorite features in the Microsoft Surface 2, with windows 8.1 RT. I show off voice control (windows speech recognition), multiple monitor support, and a variety of accessories via USB hub (including external hard drive, mouse, keyboard, and Xbox 360 controller integration). I show how I connect the Surface 2 to my HDTV as well as wireless casting of music and video! I also go through some other features, such as Spotify web player, and icloud web. Also kid friendly applications and multiple accounts. There’s so much stuff this thing can do, it will blow your mind away

          That is how Sean Ong, a senior consultant at Navigant (focusing there on “technical, economic, and policy analysis of energy efficiency and renewable energy systems”) and himself an energy-analysis engineer, was able to present the above: truly incredible customer value, from a current and especially a future point of view, for Windows 8.1 in general and the ARM-based Surface 2 in particular. It is even more remarkable because nobody, I REPEAT NOBODY, from Microsoft worldwide could do that. I know a highly professional, truly world-class Windows 8/Windows 8.1 expert who was not only fascinated by the above video himself, but honestly acknowledged that he was unaware of the speech-recognition progress in Windows 8.1. And we are talking about an internal expert who has for years been part of the internal network of similarly devoted Microsoft specialists in Windows 8 and Windows 8.1.

          For me this video is incredibly important because:

          NOT ONLY FOR THE FUTURE OF MICROSOFT BUT FOR THE WHOLE STATE OF COMPUTING
          AS THE MISSING COMMUNICATIONS FROM MICROSOFT, INDEED MICROSOFT’S TOTAL INABILITY TO COMMUNICATE THE INHERENT WINDOWS 8.1/SURFACE 2 VALUES, CLEARLY POINT TO A TOTAL LACK OF MARKETING COMPETENCY FOR ITS GAME-CHANGING, MICROSOFT-ONLY, POST-PC ERA INNOVATIONS INHERENT IN WINDOWS 8.1/SURFACE 2

          Although these signs (both the positive and negative ones) were coupled with a number of competitive positive changes for Microsoft, such as:

          But a number of competitive negative changes for Microsoft became even more worrisome (than any time before) lately, such as:

          Fortunately we already know:

          Board of directors initiates succession process; Ballmer remains CEO until successor is named.
          Microsoft Corp. today announced that Chief Executive Officer Steve Ballmer has decided to retire as CEO within the next 12 months, upon the completion of a process to choose his successor. In the meantime, Ballmer will continue as CEO and will lead Microsoft through the next steps of its transformation to a devices and services company that empowers people for the activities they value most.
          “There is never a perfect time for this type of transition, but now is the right time,” Ballmer said. “We have embarked on a new strategy with a new organization and we have an amazing Senior Leadership Team. My original thoughts on timing would have had my retirement happen in the middle of our company’s transformation to a devices and services company. We need a CEO who will be here longer term for this new direction.”
          The Board of Directors has appointed a special committee to direct the process. This committee is chaired by John Thompson, the board’s lead independent director, and includes Chairman of the Board Bill Gates, Chairman of the Audit Committee Chuck Noski and Chairman of the Compensation Committee Steve Luczo. The special committee is working with Heidrick & Struggles International Inc., a leading executive recruiting firm, and will consider both external and internal candidates.
          “The board is committed to the effective transformation of Microsoft to a successful devices and services company,” Thompson said. “As this work continues, we are focused on selecting a new CEO to work with the company’s senior leadership team to chart the company’s course and execute on it in a highly competitive industry.”
          “As a member of the succession planning committee, I’ll work closely with the other members of the board to identify a great new CEO,” said Gates. “We’re fortunate to have Steve in his role until the new CEO assumes these duties.”
          Founded in 1975, Microsoft (Nasdaq “MSFT”) is the worldwide leader in software, services and solutions that help people and businesses realize their full potential.
          Outgoing Microsoft CEO Steve Ballmer has always been a speaker and performer like no other — his absolute enthusiasm for his company is electric in person, turning ordinary corporate events into raw displays of emotion that are often criticized but never forgotten. Read more at The Verge: http://www.theverge.com/2013/9/27/4779036/exclusive-video-steve-ballmers-intense-tearful-goodbye-to-microsoft
          Steve Ballmer paced his corner office on a foggy January morning here, listening through loudspeakers to his directors’ voices on a call that would set in motion the end of his 13-year reign as Microsoft Corp.’s chief executive.
          Microsoft lagged behind Apple Inc. and Google Inc. in important consumer markets, despite its formidable software revenue. Mr. Ballmer tried to spell out his plan to remake Microsoft, but a director cut him off, telling him he was moving too slowly.
          “Hey, dude, let’s get on with it,” lead director John Thompson says he told him. “We’re in suspended animation.” Mr. Ballmer says he replied that he could move faster.
          But the contentious call put him on a difficult journey toward his August decision to retire, sending Microsoft into further tumult as it began seeking a successor to a man who has been at its heart for 33 years.
          “Maybe I’m an emblem of an old era, and I have to move on,” the 57-year-old Mr. Ballmer says, pausing as his eyes well up. “As much as I love everything about what I’m doing,” he says, “the best way for Microsoft to enter a new era is a new leader who will accelerate change.”
          Mr. Ballmer, in a series of exclusive interviews tinged with his characteristic bluster and wistfulness, tells of how he came to believe that he couldn’t lead Microsoft forward—that, in fact, Microsoft would not be led by him because of the very corporate culture he had helped instill.
          Mr. Ballmer and his board have been in agreement: Microsoft, while maintaining its strong software business, must shake up its management structure and refocus on mobile devices and online services if it is to find future profit growth and reduce its dependence on the fading PC market.
          The board’s beef was speed. The directors “didn’t push Steve to step down,” says Mr. Thompson, a longtime technology executive who heads the board’s CEO-search committee, “but we were pushing him damn hard to go faster.”
          Investors, too, were pushing for transformation. “At this critical juncture, Wall Street wants new blood to bring fundamental change,” says Brent Thill, a longtime Microsoft analyst at UBS AG. “Steve was a phenomenal leader who racked up profits and market share in the commercial business, but the new CEO must innovate in areas Steve missed—phone, tablet, Internet services, even wearables.”
          The Microsoft board’s list of possible successors includes, among others, former Nokia Corp. CEO Stephen Elop, Microsoft enterprise-software chief Satya Nadella and Ford Motor Co. CEO Alan Mulally, say people familiar with the search. In conjunction with Microsoft’s annual shareholder meeting Nov. 19, the board plans to meet and will discuss succession, says a person familiar with the schedule.
          Representatives for Mr. Elop and Mr. Nadella say the men have no comment on the search. A Ford spokesman says “nothing has changed” since November 2012, when Ford said Mr. Mulally would remain CEO through at least 2014, adding: “Alan remains absolutely focused on continuing to make progress on our One Ford plan. We do not engage in speculation.”
          Microsoft’s next chief will be only the third in its history. Mr. Ballmer joined in 1980 at the suggestion of his Harvard University pal, co-founder Bill Gates, and is its second-largest individual shareholder and a billionaire.
          After growing up in Detroit, where his father was a Ford manager, Mr. Ballmer roomed down the hall from Mr. Gates at Harvard. He dropped his Stanford M.B.A. studies to become Microsoft’s first business manager.
          He was Mr. Gates’s right-hand man, helping turn Microsoft into a force that redefined how the world used computers. He took the reins in 2000, further solidifying Microsoft’s position in software markets and keeping the profit engine humming. Revenue tripled during his tenure to almost $78 billion in the year ended this June, and profit grew 132% to nearly $22 billion.
          But while profit rolled in from Microsoft’s traditional markets, it missed epic changes, including Web-search advertising and the consumer shift to mobile devices and social media.
          Last year, Mr. Ballmer sought to reboot. In an October shareholder letter, he declared Microsoft would become a provider of “devices and services” for businesses and individuals.
          He told the board he wanted to lead the charge and remain until his youngest son graduated from high school in four years. He began his own succession planning by meeting potential candidates in what he calls “cloak-and-dagger” meetings.
          Mr. Ballmer’s reboot plan required a corporate overhaul. For guidance, he called his longtime friend, Ford’s Mr. Mulally, once a top Boeing Co. executive. They met Christmas Eve at a Starbucks on Mercer Island near Seattle.
          Mr. Ballmer brought a messenger bag, pulling out onto a table an array of phones and tablets from Microsoft and competitors. He asked Mr. Mulally how he turned around Ford. For four hours, he says, Mr. Mulally detailed how teamwork and simplifying the Ford brand helped him reposition it.
          The Ford spokesman says: “Ford and Microsoft have a long-standing business partnership, and many of our leaders discuss business together frequently.”
          It was a wake-up call for Mr. Ballmer, who had run the software giant with bravado and concedes that “I’m big, I’m bald and I’m loud.”
          Microsoft’s culture included corporate silos where colleagues were often pitted against one another—a competitive milieu that spurred innovation during Microsoft’s heyday but now sometimes leaves groups focused on their own legacies and bottom lines rather than on the big technology picture and Microsoft as a whole.
          He recalls thinking: “I’ll remake my whole playbook. I’ll remake my whole brand.”
          The board liked his new plan. But as Mr. Ballmer prepared to implement it, his directors on the January conference call demanded he expedite it.
          Pushing hardest, say participants, were Mr. Thompson, who had held top jobs at International Business Machines Corp. and Symantec Corp., and Stephen Luczo, CEO of Seagate Technology PLC. Mr. Luczo declines to comment.
          “But, I didn’t want to shift gears until I shipped Windows,” Mr. Ballmer says he told the directors on the call, explaining that he hadn’t moved faster in late 2012 because he was focused on releasing in October the next generation of Windows, Microsoft’s longtime cash cow.
          Mr. Ballmer swung into gear, drafting a management-reorganization plan to discuss during a March retreat at a Washington mountain resort. He invited Mr. Thompson and another director, to get board perspective on his plan.
          Instead, he got more pressure. Mr. Thompson says he told Mr. Ballmer and his executives: “Either get on the bus or get off.”
          Mr. Ballmer says he took that as an endorsement of his plan. That evening, some of them played poker, drank Scotch and gathered around the lodge’s fireplace.
          The next month, hedge fund ValueAct Capital disclosed a $2 billion Microsoft stake. ValueAct’s CEO Jeffrey Ubben at a conference said Microsoft’s stock was undervalued. Other shareholders were urging it to increase its dividend and shed noncore businesses. A ValueAct spokesman declines further comment. In September, Microsoft increased its dividend but hasn’t sold off businesses investors have urged it to, such as the Bing search engine.
          Mr. Ballmer hewed to Mr. Mulally’s recommendations. For years, he had consulted with Microsoft’s unit chiefs individually, often dispensing marching orders. Now, he began inviting them to sit together in a circle in his office to foster camaraderie.
          It was a lurching corporate-culture change. “It’s not the way we operated at all in Steve’s 30-plus years of leadership of the company,” says Mr. Nadella, an executive vice president.
          Mr. Ballmer says his senior team struggled with the New Steve. Some resisted on matters large—combining engineering teams—and small, such as weekly status reports.
          Qi Lu, an executive vice president, submitted a 56-page report on applications and services. Mr. Ballmer sent it back, insisting on just three pages—part of a new mandate to encourage the simplicity needed for collaboration. Mr. Lu says he retorted: “But you always want the data and detail!”
          Mr. Ballmer says he started to realize he had trained managers to see the trees, not the forest, and that many weren’t going to take his new mandates to heart.
          In May, he began wondering whether he could meet the pace the board demanded. “No matter how fast I want to change, there will be some hesitation from all constituents—employees, directors, investors, partners, vendors, customers, you name it—to believe I’m serious about it, maybe even myself,” he says.
          His personal turning point came on a London street. Winding down from a run one morning during a May trip, he had a few minutes to stroll, some rare spare time for recent months. For the first time, he began thinking Microsoft might change faster without him.
          “At the end of the day, we need to break a pattern,” he says. “Face it: I’m a pattern.”
          Mr. Ballmer says he secretly began drafting retirement letters—ultimately some 40 of them, ranging from maudlin to radical.
          On a plane from Europe in late May, he told Microsoft General Counsel Brad Smith that it “might be the time for me to go.” The next day, Mr. Ballmer called Mr. Thompson with the same message.
          Mr. Thompson called two other directors, Mr. Luczo and Charles Noski, former Bank of America Corp. vice chairman, and says he told them: “If Steve’s ready to go, let’s see if we can get on with this.”
          At the board’s June meeting in Bellevue, Wash., Mr. Ballmer says he told the directors: “While I would like to stay here a few more years, it doesn’t make sense for me to start the transformation and for someone else to come in during the middle.”
          The board wasn’t “surprised or shocked,” says Mr. Noski, given directors’ conversations with Mr. Ballmer. Mr. Thompson says he and others indicated that “fresh eyes and ears might accelerate what we’re trying to do here.”
          Mr. Gates, Microsoft’s chairman, told Mr. Ballmer that he understood from experience how hard it was to leave when Microsoft was your “life,” says someone familiar with Mr. Gates’s thinking. Mr. Gates told the board he supported Mr. Ballmer’s departure if it ensured Microsoft “remains successful,” this person says.
          That night, after Mr. Ballmer watched his son sing at his high-school baccalaureate ceremony—a Coldplay song with the lyrics: “It’s such a shame for us to part; nobody said it was easy; no one ever said it would be this hard”—he says he told his wife and three sons he was probably leaving Microsoft. They all cried.
          On Aug. 21, the board held a conference call to accept Mr. Ballmer’s retirement. Mr. Gates and Mr. Thompson sat with Mr. Ballmer in his office. It was over in less than an hour.
          Mr. Ballmer vows not to be a lame duck.
          “Charge! Charge! Charge!” he bellows, jumping up from an interview and lunging forward while pumping his fist forward like a battering ram. “I’m not going to wimp away from anything!”
          He has remained active, shepherding a $7.5 billion deal to buy Nokia’s mobile businesses and fine-tuning holiday-marketing strategies for Microsoft’s Surface tablets and new Xbox game console. In October, Microsoft reported better-than-expected quarterly earnings.
          At his final annual employee meeting this September, Mr. Ballmer gave high-fives and ran off the stage to the song: “(I’ve Had) The Time of My Life” from the movie “Dirty Dancing.”
          Last month, walking along Lake Washington, Mr. Ballmer bumped into Seattle Seahawks coach Pete Carroll, who was fired from earlier jobs and now is thriving. Mr. Carroll says he told his neighbor he went through “something like this” and predicted it is “going to be great.”
          Mr. Ballmer says he is weighing casual offers as varied as university teaching and coaching his youngest son’s high-school basketball team. He plans no big decisions for at least six months—except that he won’t run another big company. He says he’s open to remaining a Microsoft director.
          At a recent executive meeting, he perched on a stool to review developments. His third slide was labeled “New CEO.”
          “Not a soul in this room doesn’t think we need to go through this transition,” he said. As he stood up, his voice started to crack: “As much as I wish I could stay your CEO, I still own a big chunk of Microsoft, and I’m going to keep it.”
          He walked back toward the stool, then turned around and said in a near-whisper: “Please take good care of Microsoft.”

          You could also read Reporter’s Notebook: Two Days With Steve Ballmer [The Wall Street Journal, Nov 15, 2013], which ends this way:

          … This summer when he was deciding whether to step down, Mr. Ballmer quietly met with big institutional investors in Boston and San Francisco. The head of one big institution told him, “Microsoft would be better served with you gone.” Mr. Ballmer, who’s the second largest individual shareholder, knew the investor might get his wish. Yet, he argued, “Who cares more about Microsoft than I do? I own a lot. It’s my life.”

          And that showed how his emotions alternate between bluster and wistfulness. The deed is done, the decision has been made, a new CEO is imminent. But Mr. Ballmer is struggling because Microsoft has been so much more than a job … as he said, “my life.”

          My closing remarks:

          1. The next-CEO problem to be solved is definitely the #1 issue for the future of Microsoft
          2. The #2 issue is how successfully the unique Nokia assets (from factories to global device distribution & sales, the Asha sub-$100 smartphone platform, etc.) will now empower the One Microsoft devices and services strategy [‘Experiencing the Cloud’, Sept 3, 2013]. For this, the Microsoft answers to the questions about Nokia devices and services acquisition: tablets, Windows downscaling, reorg effects, Windows Phone OEMs, cost rationalization, ‘One Microsoft’ empowerment, and supporting developers for an aggressive growth in market share [‘Experiencing the Cloud’, Sept 4, 2013] provide an interim answer, i.e. until the arrival of the new CEO
          3. The #3 issue is How the device play will unfold in the new Microsoft organization? [‘Experiencing the Cloud’, July 14, 2013]. If Stephen Elop, former CEO of Nokia and a previous senior executive of Microsoft, becomes the next CEO, then Minutes of a high-octane but also expert evangelist CEO: Stephen Elop, Nokia [‘Experiencing the Cloud’, July 13, 2013] could provide some clues about the changes to be expected as a strategic evolution of the current approach described in the already mentioned [‘Experiencing the Cloud’, July 14, 2013] post. Even if he is not selected by the Microsoft board as the next CEO, he will have a very strong influence on the device play during the initial first-year integration of the acquired Nokia businesses into Microsoft, for the very simple reason that nobody else could do this, and a successful integration is the higher-priority #2 issue.
          4. Strategically, however, the most important issue is:
          5. The Microsoft reorg for delivering/supporting high-value experiences/activities [‘Experiencing the Cloud’, July 11, 2013]

          6. Everything else that might be a crucial issue during this process is highly controversial, without any official clues from Microsoft or any other stakeholder sources. The most controversial of all is the issue of non-profitable and/or not necessarily integral Microsoft businesses: Bing and Xbox. The range of external opinions is extremely wide, with investment circles firmly believing that neither Bing nor Xbox is inherently integral to Microsoft, and most of the external development community holding exactly the opposite belief, that those businesses are inherently integral.

          7. My personal opinion is that a spin-off could serve both extremes sufficiently well, and could even open completely new business development opportunities for both Bing and Xbox to grow substantially faster and bigger than they otherwise would. I would be especially enthusiastic about an Xbox spin-off, as that business is already (with the Nov 22 introduction of the Xbox One) not a gaming-console business but an entertainment-ecosystem type of business. As such it would get enormous growth opportunities from a spin-off from the tightly integrated Microsoft mother ship.

          8. The ultimate issue for me, however, is how the currently quite crippled and/or bureaucratic marketing machinery of Microsoft could be completely overhauled as part of the Nokia integration, and how fast that could be achieved, if at all. I mean a new marketing machinery that thrives on the huge number of opportunities provided by already-delivered game-changing products and technologies, instead of not understanding them at all. I mean not simply the ability to produce videos like the one at the beginning of this post, but the competency to produce whole storyboards for the production of such videos and other communication materials. One might call it “high-octane marketing” for simplicity. Even more, I envisage integrating the marketing activities into the whole supply chain management (SCM), as is done at Samsung. See my Samsung has unbeatable supply chain management, it is incredibly good in everything which is consumer hardware, but vulnerability remains in software and M&A [‘Experiencing the Cloud’, Nov 11, 2013] post for that, from which I will copy the following illustration here as well:

          Xamarin: C# developers of native “business” and “mobile workforce” applications now can easily work cross-platform, for Android and iOS clients as well

          … while other cross-platform applications, i.e. “applications for consumers only”, remain out of reach for C# developers because of the still-high price of Xamarin, which essentially hits indie and start-up developers only

          The mobile application development technology behind this, from the cloud to the clients, was extensively covered in Windows Phone 8: getting much closer to a unified development platform with Windows 8 [‘Experiencing the Cloud’, Nov 8, 2012] post of mine (including the cross-platform possibilities with Xamarin already), and then continued in Windows Azure becoming an unbeatable offering on the cloud computing market [‘Experiencing the Cloud’, June 28, 2013] and Microsoft partners empowered with ‘cloud first’, high-value and next-gen experiences for big data, enterprise social, and mobility on wide variety of Windows devices and Windows Server + Windows Azure + Visual Studio as the platform [‘Experiencing the Cloud’, July 10, 2013] posts for the cloud part.

          Note: Decide for yourself how that exclusion of “consumer-only applications by indie and start-up developers” will affect cross-platform development needs, after you take a look at the current state of the evolution of the smartphone and tablet markets:

           

          Q3’13 smartphone and overall mobile phone markets: Android smartphones surpassed 80% of the market, with Samsung increasing its share to 32.1% against Apple’s 12.1% only; while Nokia achieved a strong niche market position both in “proper” (Lumia) and “de facto” (Asha Touch) smartphones 
          [‘Experiencing the Cloud’, Nov 14, 2013]

          The tablet market in Q1-Q3’13: It was mainly shaped by white-box vendors while Samsung was quite successfully attacking both Apple and the white-box vendors with triple digit growth both worldwide and in Mainland China 
          [‘Experiencing the Cloud’, Nov 14, 2013]


          Details

          For one of the problems solved now by Microsoft see my Obstacles for .NET on other platforms [‘Experiencing the Cloud’, Oct 15, 2013] post.

          To understand the current situation, I will start with:

          In: Cross Platform .NET Just Got A Lot Better [Haacked blog, Nov 13, 2013]

          Not long ago I wrote a blog post about how platform restrictions harm .NET. This led to a lot of discussion online and on Twitter. At some point David Kean suggested a more productive approach would be to create a UserVoice issue. So I did and it quickly gathered a lot of votes.

          Phil Haack – Customer Feedback for Microsoft http://visualstudio.uservoice.com/users/40986152-phil-haack:

          Remove the platform restriction on Microsoft NuGet packages 4,929 votes
          Phil Haack shared this idea and gave it 3 votes  ·  Sep 26, 2013

          COMPLETED  ·  Visual Studio team (Product Team, Microsoft) responded
          Thanks a lot for this suggestion and all the votes.
          We’re happy to announce that we’ve removed the Windows-only restriction from our license. We’ve applied this new license to most of our packages and will continue to use this license moving forward.
          Here is our announcement:
          http://blogs.msdn.com/b/dotnet/archive/2013/11/13/pcl-and-net-nuget-libraries-are-now-enabled-for-xamarin.aspx
          For reference, the license for stable packages can be found here:
          http://go.microsoft.com/fwlink/?LinkId=329770
          Thanks,
          Immo Landwerth
          Program Manager, .NET Framework Team
          Phil Haack commented  ·  Nov 13, 2013
          Amazing! Thanks! This is great!

          Bravo!

          Serious Kudos to the .NET team for this. It looks like most of the interesting PCL packages are now licensed without platform restrictions. As an example of how this small change sends out ripples of goodness, we can now make Octokit.net depend on portable HttpClient and make Octokit.net itself more cross platform and portable without a huge amount of work.

          I’m also excited about the partnership between Microsoft and Xamarin this represents. I do believe C# is a great language for cross-platform development and it’s good to see Microsoft jumping back on board with this. This is a marked change from the situation I wrote about in 2012.
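
          As a small, concrete illustration of the dependency Haack describes, a project wanting the portable HttpClient would reference the Microsoft.Net.Http NuGet package. The packages.config sketch below is illustrative only – the exact version number and target framework moniker will vary by project:

```xml
<?xml version="1.0" encoding="utf-8"?>
<packages>
  <!-- Portable HttpClient: now licensed without the Windows-only
       restriction, so a PCL such as Octokit.net can depend on it. -->
  <package id="Microsoft.Net.Http"
           version="2.2.15"
           targetFramework="portable-net45+win8+wp8" />
</packages>
```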

          • Then I will go to S. Somasegar, Corporate Vice President of the Developer Division at Microsoft:

          In: Visual Studio 2013 Launch: Announcing Visual Studio Online [Somasegar’s blog, Nov 13, 2013]

          … Microsoft and Xamarin are collaborating to help .NET developers broaden the reach of their applications to additional devices, including iOS and Android …

          Partner News

          With today’s launch of Visual Studio 2013, we have 123 products from 74 partners available already as Visual Studio 2013 extensions.  As part of an ecosystem of developer tools experiences, Visual Studio continues to be a platform for delivering a great breadth of developer experiences.

          Xamarin

          The devices and services transformation is driving developers to think about how they will build applications that reach the greatest breadth of devices and end-user experiences.  We’ve offered great HTML-based cross platform development experiences in Visual Studio with ASP.NET and JavaScript.  But our .NET developers have also asked us how they can broaden the reach of their applications and skills. 

          Today, I am excited to announce a broad collaboration between Microsoft and Xamarin.  Xamarin’s solution enables developers to leverage Visual Studio, Windows Azure and .NET to further extend the reach of their business applications across multiple devices, including iOS and Android.

          The collaboration between Xamarin and Microsoft brings several benefits for developers today.  First, as an initial step in a technical partnership, Xamarin’s next release that is being announced today will support Portable Class Libraries, enabling developers to share libraries and components across a breadth of Microsoft and non-Microsoft platforms.  Second, Professional, Premium and Ultimate MSDN subscribers will have access to exclusive benefits for getting started with Xamarin, including new training resources, extended evaluation access to Xamarin’s Visual Studio integration and special pricing on Xamarin products.

          Xamarin, the company that empowers developers to build fully native apps for iOS, Android, Windows and Mac from a single shared code base, today announced a global collaboration with Microsoft that makes it easy for mobile developers to build native mobile apps for all major platforms in Visual Studio. Xamarin is the only solution that unifies native iOS, Android and Windows app development in Visual Studio—bridging one of the largest developer bases in the world to the most successful mobile device platforms.

          A highly competitive app marketplace and the consumerization of IT have put tremendous pressure on developers to deliver high quality mobile user experiences for both consumers and employees. A small bug or crash can lead to permanent app abandonment or poor reviews. Device fragmentation, with hundreds of devices on the market for iOS and Android alone, multiplies testing efforts resulting in a time-consuming and costly development process. This is further complicated by faster release cycles for mobile, necessitating more stringent and efficient regression testing.

          The collaboration spans three areas:

          • A technical collaboration to better integrate Xamarin technology with Microsoft developer tools and services.
            Aligned with this goal, Xamarin is a SimShip partner for Visual Studio 2013, releasing same-day support for Microsoft’s latest Visual Studio release that launched today. In addition, Xamarin has released today full integration for Microsoft’s Portable Library projects in iOS and Android apps, making it easier than ever for developers to share code across devices.
          • Xamarin’s recently launched Xamarin University is now free to MSDN subscribers. The training course helps developers become successful with native iOS and Android development over the course of 30 days. Classes for the $1,995 program kick off in January 2014, with a limited number of seats available at no cost for MSDN subscribers.
          • MSDN subscribers have exclusive trial and pricing options to Xamarin subscriptions for individuals and teams.

            Get a 90-day trial to Xamarin, sign up for Xamarin University for free (normally $1,995), and save 30-50% on Xamarin with special MSDN pricing.
            All the productivity you love in Visual Studio and C#,
            on iOS and Android.

          “The broad collaboration between Microsoft and Xamarin which we announced today is targeted at supporting developers interested in extending their applications across multiple devices,” said S. Somasegar, Corporate Vice President, Microsoft Corporation. “With Xamarin, developers combine all of the productivity benefits of C#, Visual Studio 2013 and Windows Azure with the flexibility to quickly build for multiple device targets.”

          According to Gartner, by 2016, 70 percent of the mobile workforce will have a smartphone, half of which will be purchased by the employee, and 90 percent of enterprises will have two or more platforms to support. Faced with high expectations for mobile user experiences and the pressures of BYOD, companies and developers alike are looking for scalable ways to migrate business practices and customer interactions to high-performance, native apps on multiple platforms.

          To meet this need to support heterogeneous mobile environments, Microsoft and Xamarin are making it easy for developers to mobilize their existing skills and code. By standardizing mobile app development with Xamarin and C#, developers are able to share on average 75 percent of their source code across device platforms, while still delivering fully native apps. Xamarin supports 100 percent of both iOS and Android APIs: anything that can be done in Objective-C or Java can be done in C# with Xamarin.

          In just two years, Xamarin has amassed a community of over 440,000 developers in 70 countries, more than 20,000 paying accounts and a network of over 120 consulting partners globally.

          “We live in a multi-platform world, and by embracing Xamarin, Microsoft is enabling its developer community to thrive as mobile developers,” said Nat Friedman, CEO and cofounder, Xamarin. “Our collaboration with Microsoft will accelerate enterprise mobility for millions of developers.”

          The groundbreaking partnership was announced as part of the Visual Studio Live 2013 launch event in New York City. In addition, Xamarin and Microsoft have teamed up with the popular podcast, .NET Rocks!, for a 20-city nationwide road show featuring live demos on how to use Visual Studio 2013, Xamarin and Windows Azure to build and scale mobile apps for iOS, Android and Windows. For a full list of cities and to sign up for an event, please visit: xamarin.com/modern-apps-roadshow

          About Xamarin
          Xamarin is the new standard for enterprise mobile development. No other platform enables businesses to reach all major devices—iOS, Android, Mac and Windows—with 100 percent fully native apps from a single code base. With Xamarin, businesses standardize mobile app development in C#, share on average 75 percent source code across platforms, and leverage their existing skills, teams, tools and code to rapidly deliver great apps with broad reach. Xamarin is used by over 430,000 developers from more than 100 Fortune 500 companies and over 20,000 paying customers including Clear Channel, Bosch, McKesson, Halliburton, Cognizant, GitHub, Rdio and WebMD, to accelerate the creation of mission-critical consumer and enterprise apps. For more information, please visit: xamarin.com, read our blog, and follow us on Twitter @xamarinhq.

          Earlier today, Soma announced a collaboration between Microsoft and Xamarin. As you probably know, Xamarin’s Visual Studio extension enables developers to use VS and .NET to extend the reach of their apps across multiple devices, including iOS and Android. As part of that collaboration, today, we are announcing two releases around the .NET portable class libraries (PCLs) that support this collaboration:

          Microsoft .NET NuGet Libraries Released

          Today we released the following portable libraries with our new license, on NuGet.org:

          You can now start using these libraries with Xamarin tools, either directly or as the dependencies of portable libraries that you reference.

          We also took the opportunity to apply the same license to Microsoft .NET NuGet libraries, which aren’t fully portable today, like Entity Framework and all of the Microsoft AspNet packages. These libraries target the full .NET Framework, so they’re not intended to be used with Xamarin’s iOS and Android tools (just like they don’t target Windows Phone or Windows Store).

          These releases will enable significantly more use of these common libraries across Windows and non-Windows platforms, including in open source projects.

          Cross-platform app developers can now use PCL

          Portable class libraries are a great option for app developers building for Microsoft platforms in Visual Studio, to share key business functionality across Microsoft platforms. Many developers use the PCL technology today, for example, to share app logic across Windows Store and Windows Phone. Today’s announcement enables developers using Xamarin’s tools to share these libraries as well.

          In Visual Studio, you’ll continue to use Portable Class Library projects but will be able to reference them from within Xamarin’s tools for VS. That means that you can write rich cross-platform libraries and take advantage of them from all of your .NET apps.
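          To make the sharing model concrete, here is a minimal sketch of the kind of code that lives in a Portable Class Library. The namespace and class names are hypothetical; the point is simply that pure .NET logic with no platform-specific API calls can be compiled once and referenced from Windows Store, Windows Phone and Xamarin iOS/Android projects alike:

          ```csharp
          // Shared logic in a Portable Class Library project.
          // Hypothetical example: the namespace and class are illustrative only.
          namespace MyCompany.Core
          {
              public class OrderValidator
              {
                  // Pure .NET code with no platform-specific APIs is portable by definition,
                  // so this method can run unchanged on every targeted platform.
                  public bool IsValid(decimal amount, string customerId)
                  {
                      return amount > 0 && !string.IsNullOrWhiteSpace(customerId);
                  }
              }
          }
          ```

          Each platform-specific app project then just adds a reference to the PCL and calls `new OrderValidator().IsValid(...)`; only the UI layer is written per platform.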

          The following image demonstrates an example set of .NET NuGet library references that you can use within one of your portable libraries. The .NET NuGet libraries will enable new scenarios and great new libraries built on top of them.

          You can build cross-platform libraries with .NET

          This announcement also benefits .NET developers writing reusable and open source libraries. You’ve probably used some of these libraries, for example Json.NET. These developers have been very vocal about wanting this change. This announcement greatly benefits those library developers, enabling them to leverage our portable libraries in their libraries.

          Getting started with portable libraries and Xamarin

          You can start by building portable libraries in Visual Studio, as you can see in the screenshot above. You can take advantage of the portable libraries that we released today. Write code!

          You’ll need an updated NuGet client, to take advantage of this new scenario. Make sure that you are using NuGet 2.7.2 or higher, or just download the latest NuGet for your VS version from the Installing NuGet page.

          We are working closely with Xamarin to ensure that our NuGet libraries work well with Xamarin tools, as well as PCL generally. Please tell us if you find any issues. We’ll get them resolved and post them to our known issues page.

          Thank You

          Thank you for the feedback on UserVoice. With today’s announcement, we can mark the request to Remove the platform restriction on Microsoft NuGet packages as complete. Thanks to Phil Haack for filing the issue. Coupled with our collaboration with Xamarin, .NET developers have some compelling tools, especially for targeting mobile devices.

          Both Microsoft and Xamarin want to see this scenario succeed. We’d love your feedback. Please tell us how the new features are working for you.

          This post was written by Rich Lander, a Program Manager on the .NET Team.

          [Some] Comments

          Immo Landwerth [MSFT] 13 Nov 2013 1:24 PM

          Thanks a lot for the kind words!

          @Curt: We absolutely understand that PCL support in Visual Studio express editions is super important to many of our developers. That’s why it’s on our list. However, I can’t promise that we actually end up delivering it in the VS 2013 time frame. As you’ve seen today, there is a lot of great stuff going on and resources are always more scarce than one would hope.

          Gz 14 Nov 2013 4:19 AM

          Xamarin is great but their pricing is insane! even with the MSDN discount. We’re a tiny start-up development house that has benefited from the MS BizSpark programme and we simply cannot stretch to paying out a thousand bucks per platform, per year, per developer – mobile isn’t even a revenue generator for us – it would merely be extending some functionality from our main apps to mobile and we’d give it to customers for free. I know they have a free & an indie edition blah blah blah but we wanna work in VS. The good news is that Xamarin will soon have a competitor in this space that could potentially blow them out of the water with full VS support and direct access to native APIs on each platform (iOS, Android & Mac) and their pricing will be less than 1/3rd of Xamarin’s. I’ve been sworn to secrecy about it but expect to have a cost-effective Xamarin alternative before the end of the year. (No I don’t work for the company, just got some info about it recently).

          Stilgar 14 Nov 2013 8:30 AM

          I second the need for PCLs in Express editions. Otherwise your company’s constant claims that the tooling for Windows 8 and Windows Phone development is free is pure hypocrisy.

          TL;DR: You can now (legally) use our .NET OData client and ODataLib on Android and iOS.

          Backstory

          For a while now we have been working with our legal team to improve the terms you agree to when you use one of our libraries (WCF Data Services, our OData client, or ODataLib). A year and a half ago, we announced that our EULA would include a redistribution clause. With the release of WCF Data Services 5.6.0, we introduced portable libraries for two primary reasons:

            1. Portable libraries reduce the amount of duplicate code and #ifdefs in our code base.

            2. Portable libraries increase our reach through third-party tooling like Xamarin (more on that later).

              It took some work to get there, and we had to make some sacrifices along the way, but we are now focused exclusively on portable libraries for client-side code. Unfortunately, our EULA still contained a clause that prevented the redistributable code from being legally used on a platform other than Windows.

              OData and Xamarin: Extending developer reach to many platforms

              We are really excited about Microsoft’s new collaboration with Xamarin. As Soma says, this collaboration will allow .NET developers to broaden the reach of their applications and skills. This has long been the mantra of OData: a standardized ecosystem of services and consumers that enables consumers on any platform to easily consume services developed on any platform. This collaboration will make it much easier to write a shared code base that allows consumption of OData on Windows, Android or iOS.

              EULA change

              To fully enable this scenario, we needed to update our EULA. We, along with several other teams at Microsoft, are rolling out a new EULA today that has relaxed the distribution requirements. Most importantly, we removed the clause that prevented redistributable code from being used on Android and iOS.

              The new EULA is effective immediately for all of our NuGet packages. This means that (even though we already released 5.6.0) you can create a Xamarin project today, take a new dependency on our OData client, and legally run that application on any platform you wish.
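              As a hedged sketch of what that scenario looks like, the following consumes an OData service from ordinary C# using the `DataServiceContext` LINQ client shipped in the NuGet packages mentioned above. The service URL and the `Product` entity are hypothetical placeholders for whatever your service's metadata actually exposes:

              ```csharp
              using System;
              using System.Data.Services.Client;  // WCF Data Services OData client
              using System.Linq;

              // Hypothetical entity type matching the service's metadata.
              public class Product
              {
                  public int ID { get; set; }
                  public string Name { get; set; }
              }

              class Demo
              {
                  static void Main()
                  {
                      // Point the context at any OData service; this URL is illustrative only.
                      var context = new DataServiceContext(
                          new Uri("http://example.com/odata/MyService.svc"));

                      // LINQ operators are translated into OData query options
                      // ($filter, $top, ...) and executed server-side.
                      var query = context.CreateQuery<Product>("Products")
                                         .Where(p => p.ID < 100)
                                         .Take(10);

                      foreach (var p in query)
                          Console.WriteLine(p.Name);
                  }
              }
              ```

              With the EULA change, the same client code can now sit in a shared library that is legally referenced from Xamarin iOS and Android projects as well as from Windows apps.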

              Thanks

              As always, we really appreciate your feedback. It frequently takes us some time to react, but the credit for this change is due entirely to customer feedback. We hear you. Keep it coming.

              Thanks,
              The OData Team

              Leading PC vendors of the past: Go enterprise or die!

              Nov 5, 2013: Acer Chairman and CEO J.T. Wang Tenders Resignation

              … J.T. Wang, chairman and CEO of Acer, said, “Acer encountered many complicated and harsh challenges in the past few years. With the consecutive poor financial results, it is time for me to hand over the responsibility to a new leadership team to pave the way for a new era.” …

              What I found after carefully analyzing the above outcome is summarized in the titles of the detailed sections of this post:

              1. To be great only for consumers was not enough to survive
              2. Taiwan is still confused
              3. How Acer’s “new strategy” that has been in place since April 1, 2011 came to an end
              4. The road which led to Acer’s downfall


              1. To be great only for consumers was not enough to survive

              THE LATEST EPISODES, showing what was great from a general consumer point of view but far from enough from an enterprise point of view:

              We would all like to be a touch smarter, a touch cooler, a touch classier, and a touch simpler… With Acer it is possible: explore beyond limits with our touch & type products.
              IFA BERLIN 2013. For those who missed the latest ‘Touch’ innovations designed by Acer. The Iconia A3: 10.1″ display with wide viewing angle and immersive sound. The Aspire R7: award-winning designed-for-touch notebook with active pen. The Liquid S2: Full HD 6″ display with 4K recording.
              Highlights from Acer’s Computex Global Press Conference, product booth, and Tiësto party
              See what happened during Acer’s Global Press Conference in New York City on May 3rd, 2013. Redefining the computing experience.

              AcerCloud™ – Be Free! [Acer YouTube channel, Oct 23, 2012]

              AcerCloud lets you access your photos, music, videos and documents wirelessly and simultaneously on all devices anytime and anywhere – it enriches your life with more freedom! See how AcerCloud saves Roy from major embarrassment! – http://bit.ly/AcerCloud

              AND BACK THEN: May 9, 2011: Interview [AllThingsD]: Ousted Acer CEO Gianfranco Lanci Talks About His Departure

              … Lanci said he was pushing the company to become more mobile-focused and more global. Acer, he said, needed to look beyond Taiwan as the world shifted to one in which Intel and Microsoft had less power and computer makers needed to do more work for themselves. … “The real major issue was doing that in Taiwan, this was not possible,” Lanci said. “We needed to go outside Taiwan, be it China or India or even the U.S. or Europe, wherever you can find software resources, software know-how.”

              What Lanci wanted to move beyond:

              Some highlights of the Acer Global Press Conference held in New York on November 23, 2010. Clear.fi, the Acer media sharing system, evolves with the introduction of some brand new products. Iconia, a dual-screen device, offers an entirely new touch experience; the new tablets ensure HD entertainment; and Alive, the next-generation content store, provides users with content tailored to their personal interests.
              Dual-screen, multi-touch: ICONIA is the new 14-inch tablet that incorporates the best features of any notebook or tablet device and much more! Thanks to its innovative concept ICONIA was the proud winner of the ‘Last Gadget Standing’ competition at CES 2011, Las Vegas. Welcome to a brand new computing and touch experience!
              Take a closer look at liquid mini, the compact and stylish Acer smartphone that packs maximum possibilities in a minimum size. Discover how many features are enclosed in this charming Android smartphone: multi-touch display, 5 megapixels camera, Acer’s exclusive Social Jogger app that integrates updates from your social network accounts into one feed… and much more!
              At Mobile World Congress in Barcelona the new Acer Iconia Tab family was officially introduced: see here the Iconia Tab A500, with a 10.1″ display and Android OS, the Iconia Tab A100, with a 7″ display and Android, and the Iconia Tab W500, with a 10.1″ display and Windows OS.
              NY Global Press Conference, November 23rd, 2010 – Iconia, Acer’s outstanding dual-screen device with all-point multi-touch functionality, and the 10.1″ Windows tablet, completely touch screen but also equipped with a docking device that includes a keyboard, introduced by Jim Wong, Senior Corporate Vice President, Acer Group, President ITGO, Acer Inc.
              NY Global Press Conference, November 23rd, 2010 – Enter into the world of Clear.fi: the smartest way to enjoy multimedia at home. Jim Wong, Senior Corporate Vice President, Acer Group, President ITGO, Acer Inc., explains that content stored on any Clear.fi-enabled device can be shared seamlessly with the other devices using the same interface. Also take a look at the new 10.1″ tablet that ensures powerful performance, the 7″ tablet, ideal on the go, and, finally, the 4.8″ phone that is a real mini tablet!

               

              The Acer Group is the culmination of years of innovation and change. We have become the global group we are today by adhering to the values and principles we established at our foundation. The language of these values may have changed, but our respect for and dedication to them has not.

              ACER GROUP CORE VALUES:

              The way we must act:
              (1) Innovative
              (2) Fast
              (3) Effective

              The pillars on which we must base our actions:

              (1) Value Creating
              (2) Customer-centric
              (3) Ethical
              (4) Caring

              THE ACER GROUP’S MISSION:

              “User-friendly technology makes all the difference in today’s world. Indeed, the innovation and breakthroughs that technology brings can change the course of history.” With this introduction, the Acer Chairman delivers a clear message of the responsibilities and opportunities that technology can provide. Breaking down the barriers between people and technology is not an isolated event. It’s an ongoing process that unlocks our potential to bring innovation to life and embrace the challenges of the future.

              As Acer continues to break down barriers, we have the real possibility to make a difference to the world we live in.
              J. T. Wang
              Chairman, Acer Inc.

               

              Lanci, who was replaced as CEO in March, said that the interests that control Acer were worried that his plan would lead to a de-Taiwanization of the company.

              “I said, ‘Look, it is not de-Taiwanization,’” he said. “It is just globalization. If we want to be in the top three (PC makers) in the next three to five years, we need to be a global company and we need to leverage resources wherever they are.”

              Although today’s tablets are a consumer phenomenon, Lanci said the push by Microsoft to deliver Windows on ARM-based chips will help the devices move solidly into the business domain.

              “You can easily think about a tablet thin and light, like the current iPad 2,” he said, but offering everything that the PC offers as well. However, he said that Acer needed to do more to prepare for that world. In addition to boosting its own software capabilities, he said the company needed a different relationship with chipmakers. The PC world, he said, was one of buying and selling components, with pricing and availability based solely on volume. The mobile world, he said, is based on close partnerships and strategic alliances.

              As for who is doing things right, Apple is clearly winning, but there are others also making moves to adjust for the shifting world.

              “I see Samsung is probably doing the right thing,” he said. “HP, maybe. It depends what they are going to do with software and with WebOS.”

              However, he said much of the PC industry is in a similar position where Acer was.

              So, with “Acer Chairman and CEO J.T. Wang Tenders Resignation” coming as the result of “Acer Q3’13 Financial Results: Consolidated Revenue NT$92.15B (US$3.11B), Operating Loss NT$2.57B (US$86.61M), Intangible Asset Impairment NT$9.94B (US$335.13M)”, leading PC vendors of the past should take advice from Dell Goes Private: 8 Things To Expect [InformationWeek, Nov 4, 2013]

              Dell CEO Michael Dell took the company private to gain more independence from Wall Street investors. Now that the buyout’s cleared, what moves can customers expect?

              After eight months of maneuvering, Dell CEO and founder Michael Dell has finally taken the company private. Dell executives remained tightlipped about the buyout as the process wore on, and as the flailing PC market continued to punish the company’s margins. But now that Dell has officially delisted, many of its enterprise customers no doubt are asking the question: How will this affect me?

              Many have probably been asking the question for months. Activist investors such as Carl Icahn at times appeared to have the upper hand against Michael Dell. It seemed plausible at points that the founder might be ousted from his own company, or that pieces of the company might be sold off.

              And even after it became clear Michael Dell would prevail, questions still remained. Observers widely interpreted that Dell didn’t want the burden of Wall Street’s quarterly scrutiny; after all, it’s hard to invest in new enterprise services when shareholders are howling about PC profits every three months. But now that Dell has rid himself of investor pressure, the question still remains: What will he do with his new flexibility, and how will it help customers?

              Dell North America President PH Ferrand spoke with InformationWeek about Dell’s strategy as a privately held company. Here are eight takeaways from the conversation.

              1. Dell can make investments as a private company that it couldn’t make as a public company.

              Ferrand affirmed one of the buyout process’s dominant narratives: that from Dell’s perspective, Wall Street was more trouble than it was worth. Ferrand said going private will give the company more flexibility. It “might not have been obvious to investors” when the company needed to double down on investments, he said.

              2. Dell sees no reason to make a smartphone but will continue to make PCs.

              “Very few players make money [selling smartphones],” Ferrand said. “We don’t feel we have to be in the space.”

              That’s consistent with what Michael Dell told InformationWeek last year at Dell World. But many of the device manufacturers with which Dell competes have started positioning smartphones as a gateway to consumer sales and BYOD business. Microsoft’s purchase of Nokia’s device business is a notable example. Execs at HP, another company struggling to adjust to the mobile world, have repeatedly indicated that a smartphone is coming.

              Why is Dell still resisting the trend? “The IT market is a $3 trillion business and we are about 2% of that,” Ferrand said. “We don’t need to have phones to get to 3% or 4%.”

              Even so, Ferrand said Dell remains committed to PCs and intends to become a leader in the commercial tablet space. He said he can’t rule out a Dell smartphone eventually but predicted that in the meantime, people will soon stop differentiating between tablets and computers; instead, they’ll simply talk broadly about mobile devices. If this revolution in user behavior happens, Dell hopes its Venue 11 Pro tablet will be one of the devices that gets it started; a “three-in-one” device, it attaches to a keyboard to become a laptop and docks to an external monitor to become a desktop replacement.

              3. Dell will focus on the hybrid cloud.

              Ferrand highlighted hybrid cloud services as a market on which Dell will focus, and which Dell sees as ripe for growth. “We want to dominate hybrid,” he said, explaining that customers want a company that will allow them to be flexible with their data. Customers want to move applications between private and public clouds as they see fit, and they want security from outages and data leaks, he said. He cited some of the investments Dell has already made to fulfill these needs, such as its acquisition of Gale, a company that makes cloud automation tools.

              But he said direct relationships with customers would be one of Dell’s defining traits as it builds its cloud business. With competitors such as HP, Microsoft, IBM and others occupying the same space, Dell hopes it can stand out not only with its products but also by serving as a “trusted advisor” for its customers.

              4. Dell wants to enable IT to manage BYOD and fragmented workplaces.

              Ferrand said device choice has become a smaller part of Dell’s conversations with customers. The reason? Dell’s cloud, virtualization and device management products allow companies to deploy applications to whoever needs them, regardless of what kind of device the person is using.

              “Connecting devices” will be one of Dell’s core competencies as a private company, Ferrand said, and it will involve a variety of products from the company’s existing portfolio, from Wyse technologies for thin clients, to KACE products for management and deployment, to Credant technologies for added security. Device management tools and virtual desktop products are fairly common, but Dell hopes the breadth of its offerings can help it to stand out. This “one-stop shop” mentality plays into the “trusted advisor” persona noted above. Ferrand said the attitude would apply to all of Dell’s businesses.

              5. Dell will invest in next-gen data center technologies and big-data products.

              Ferrand also said Dell would continue to focus on next-generation data center products and big-data applications. The company has already achieved some early momentum with its Active System line of converged infrastructure products, as well as its hyperscale servers built around energy-efficient ARM processors. But for both these data center products and its emerging analytics tools to stand out in the crowded market, Dell will need to continue showing that its software assets are starting to coalesce. The company spent several years acquiring software patents and expertise, but Dell’s success will rely on integrating all of the technologies at the right price and pace.

              6. Dell will increase its international sales coverage.

              U.S. customers currently account for an inordinate amount of Dell’s business but the company believes emerging markets will be central to its long-term success. Ferrand said the company will continue to participate heavily with channel partners but will also expand its fleet of direct sales representatives throughout the world.

              7. Dell will continue to focus on the middle market.

              As its enterprise portfolio has expanded, Dell has tried to carve out a niche by delivering enterprise-class resources to SMBs and mid-market customers. Ferrand said Dell will continue this strategy as a private company partly because the middle market contains the largest group of potential customers. But he said this focus also enables Dell to design more flexible products. It’s easier to scale up a mid-market architecture than to affordably repackage one designed for large companies, he said.

              8. Dell will execute moves more quickly than in the past.

              Ferrand didn’t offer any hints regarding big moves Dell might be planning — such as another major acquisition, or some kind of new product launch. But he said customers can expect Dell to quicken the pace of innovation. As a publicly-traded corporation, the company faced a variety of hurdles in making aggressive moves. But with Michael Dell now securely in the driver’s seat, Ferrand said changes will unroll much more quickly.


              2. Taiwan is still confused:

              China Times: China’s Internet phenomenon sends warning to Taiwan [Focus Taiwan, Nov 6, 2013]

              MomentCam, a mobile app that transforms pictures into cartoons, has quickly shot to popularity since its launch on Aug. 31, drawing 18.24 million users over the past two months.

              The Chinese company that developed the app, founded by Ren Xiaoqing, has obtained new investment of 30 million Chinese yuan since the app hit the market.

              The success story marks the rise of yet another Chinese Internet entrepreneur after Ma Huateng of Tencent Inc., Jack Ma of Alibaba Group, Yao Jinbo of 58.com Inc., and Zhuang Chenchao of qunar.com.

              China’s booming Internet sector stands in sharp contrast to the situation in Taiwan, where the country’s star ICT industry has been losing its luster and the economy remains sluggish.

              Taiwan’s ICT companies have hit a bottleneck because they have failed to reposition themselves from contract manufacturers to technology developers. In order to rescue the ICT industry, it is crucial for Taiwan to take part in the thriving Internet economy.

              Google Inc. has seen its share price soar from US$85 to over US$1,000 within the nine years since it was launched in 2004, and it currently has a market value of US$338 billion. The market capitalization of Facebook, meanwhile, has reached 1.3 times that of Taiwan Semiconductor Manufacturing Co. — the world’s largest contract chip maker.

              Taiwan’s ICT companies must not continue to confine themselves to the contract manufacturing market. The government should promote an alliance between the ICT industries of Taiwan and China and remove the current restrictions on the flows of information, talent and capital across the Taiwan Strait to salvage Taiwan’s dying economy. (Editorial abstract — Nov. 6, 2013)

              MomentCam app, China’s latest overnight sensation [WantChinaTimes.com, Nov 6, 2013]

              imageThree cartoon portraits made with the MomentCam app. (Internet Photo)
A smartphone application that converts pictures of the user into cute cartoon characters has become an overnight hit in China, with the number of subscribers topping 20 million in the four months after its launch.
The application, called MomentCam in English (a phonetic rendering of the Chinese name, which means “magic manga camera”), rose to the top of the free apps category on the Apple online store in China in just three days and notched a record 3.25 million subscribers a day. On the back of its rapid success, it recently attracted a 10 million yuan (US$1.64 million) loan.
              The software was created by two young people, Ren Xiaoqing and Huang Guangming, both members of the Dark Horse Development Camp, a platform dedicated to startups.
Ren Xiaoqing, a fine-arts major, conceived of the idea when working as a souvenir designer for Walmart in the US in 2006. “A popular [design] for Walmart back then was planting a human face on the body of a cartoon character, although the effect was quite ugly, as was the dark background. This gave me the idea to render photographs of people in a cartoony comic style, believing that it would be even more popular,” Ren said.
              In 2008, she encountered Huang Guangming, then a manager at Microsoft, and they decided to combine their respective expertise in the fields of fine art and computing by returning to China to found a startup.
The company initially dedicated itself to producing custom-made cartoon souvenirs for some major local companies before Ren decided to switch entirely to online business, due to the growth ceiling of offline products and her dislike of needing to entertain clients to drum up business.
              From a slow start, the MomentCam app suddenly became a hit overnight. “We were not mentally prepared for the phenomenal growth of subscribers,” Ren admitted. The number of downloads topped 1 million in one month and 10 million in three months as people became aware of the software, which converts a photograph of a human into a cartoon figure in the space of a few seconds.
              Ren said the challenge now is how to maintain the number of subscribers to avoid it becoming a short-lived fad, a fate that has befallen a great many applications in China.

              Windows 8.1 tablet sales 20-30% below expectations [DIGITIMES, Oct 31, 2013]

Channel retailers are seeing their Windows 8.1-based tablet sales in October come in 20-30% below their original expectations, despite strong price/performance ratios.

              Asustek Computer’s recently released Transformer Book T100 is priced at US$349 for a 32GB model and US$399 for 64GB and after bundling with telecom services, the 64GB model’s price drops from NT$12,900 (US$438) to NT$5,000-6,000 in Taiwan.

              Sources from channel retailers pointed out that the weakening Windows 8.1 tablet demand is due to competition from PC and Android-based tablets. Most of these products have received price cuts after the release of Windows 8.1-based 2-in-1 devices.

              Since Windows 8.1-based tablets are starting to face problems similar to those of previous Windows-based models, the sources are concerned that inventory issues may rise again in 2014.

              So far, channel retailers have not yet received any word about price cuts from brand vendors, but some retailers expect Windows 8.1 tablets to receive over 20% discounts in December for the year-end shopping season.

              Dell expected to overtake Acer to become third-largest notebook vendor in 2014, say Taiwan makers [DIGITIMES, Sept 17, 2013]

Microsoft’s ending of Windows XP technical support in April 2014 has triggered growing replacement of business-use notebooks, and this is expected to significantly benefit Dell because Dell has more focus on business-use models than other notebook vendors, according to Taiwan-based supply chain makers. Consequently, Dell is expected to surpass Acer to become the global third-largest notebook vendor in 2014.

              Notebook vendors normally do not rely on business-use models for volume shipments mainly because sales are subject to the government sector’s and enterprises’ procurement scheduling, the sources indicated. But while demand for consumer notebooks has been shrinking due to competition from tablets and smartphones, business-use models have become the main source of growth for notebook vendors, the sources said.

Dell is expected to continue to focus on the business-use market segment, especially after its privatization, the sources noted. Dell shipped 9.285 million notebooks globally in the first half of 2013, ranking fourth behind Acer’s shipments of 9.814 million units, the sources cited IDC statistics as indicating.

              Commentary: Suppliers need to prepare for Dell strategy change [DIGITIMES, Sept 27, 2013]

              As Dell is expected to become privatized, Taiwan’s upstream component suppliers may need to start preparing for the US vendor’s business reorganization.
              Michael Dell previously said that the company will accelerate its reorganization after becoming privatized and though the PC business will not be abandoned, it will surely no longer be the major focus of the US vendor.
              Dell’s financial report for the second quarter showed that the company still had about 33% of profits coming from computing-related product lines including desktops, notebooks and tablets. However, as the PC industry continues to decline, placing less emphasis on the PC business is a path Dell is likely to take in order to achieve growth in the future.
The PC industry has already been shrinking for two consecutive years and is expected to continue declining in 2014. Although Wintel has been aggressively releasing new products and cutting prices, it has been unable to stimulate PC demand. This is a clear indication that the industry has already entered the decline stage and users may only replace their PC products when they are no longer functional.
PCs still have low penetration in emerging markets, but as consumers in these markets also have a strong interest in smartphones and tablets, the PC industry is unlikely to return to a growth track through these markets.
              With the integration between software and hardware becoming a new trend of the IT market, upstream suppliers may also need to start preparing for Dell’s future strategy of combining software design with hardware products.

              Dell optimistic about Windows 8.1 for enterprise PCs [DIGITIMES, Aug 29, 2013]

              As Microsoft is ready to release Windows 8.1 on October 18, Jeff Clarke, Dell’s vice chairman and president of Global Operations and End User Computing Solutions, has expressed his optimism about the operating system. Compared to Android and iOS, Windows’ security and management abilities will allow the OS to become the top pick of the enterprise PC industry, Clarke noted.

              Although Clarke has mentioned that Dell is planning to release several Windows-based tablets in the second half, he has not provided much detail for the related plans.

However, sources from the upstream supply chain have revealed that Dell is currently planning to release an 8-inch Windows-based tablet in the second half, targeting mainly the enterprise market.

              In addition, Dell is also considering releasing a 10.6-inch Windows tablet, adopting either a Core i or an Atom processor, the sources added.

              Dell aims to strengthen software businesses in Greater China [DIGITIMES, June 24, 2013]

              Dell has set up four major departments, End-User Computing, Enterprise Solutions Group (ESG), Dell Software Group (DSG) and Services, and plans to strengthen businesses in Greater China in 2013.
              The DSG was established earlier in 2013, while the Service department was formed only three years ago. With the four departments, Dell is able to push complete solutions as well as increase service consulting for its clients in Greater China.
              Dell has been acquiring solution providers in the market since 2010 and has acquired players such as Kace, SonicWall and Quest. Thanks to the acquisitions, Dell Taiwan’s software solution business currently has over one thousand clients that are using its solution services including datacenter, cloud computing, information and data management, mobile office management and security and data protection.
              Currently, Dell has about 40-50 service consultants for the Greater China region and is currently hiring more to support demand from the information and data management service sectors.
Dell Taiwan president Terence Liao pointed out that Dell’s global revenues in 2012 were about US$50 billion, with the software segment contributing about US$1.5 billion. Since Dell Taiwan’s software business accounts for a similar proportion, this shows that the software business has already become a focus at Dell.
              In the future, Liao expects Dell Taiwan’s sales growth to be driven mainly by cloud computing and security and data protection services, and therefore will offer promotions to push the two services in the channel.


              3. How Acer’s “new strategy” that has been in place since April 1, 2011 came to an end:

              FOCUS TAIWAN – CNA ENGLISH NEWS:

              May 8, 2013: Acer forecasts shipment growth in Q2 (update)

              Taiwanese computer maker Acer Inc. said Wednesday that it is aiming for single-digit growth in shipments in the current quarter after returning to profitability in the first quarter.
              Acer Corporate President Jim Wong told an investor conference that he expects shipments of Acer’s notebooks, netbooks and tablets to remain flat or increase by up to 5 percent in the second quarter.
              The company said its total PC shipments fell 11 percent sequentially in the first quarter, but it did not disclose the actual number of units shipped.
According to data compiled by research firm International Data Corp. (IDC), Acer shipments plunged 31.3 percent year-on-year to 6.15 million units in the first quarter, a far steeper drop than the industry’s average decline of 13.9 percent.
              Wong said touch-enabled notebooks are expected to account for about 25 percent of Acer’s total notebook shipments in the second quarter, and that the ratio is likely to hit 30 to 35 percent by the end of the year.
J.T. Wang, Acer’s chairman and chief executive officer, said his company plans to break even in the second quarter, when the shipping quantity of its touch notebooks is expected to be double that of the first quarter.
              He said Acer will continue to make more efforts in customer-centric designs and marketing to help the company regain growth momentum in the next decade.
              “Our approach is to focus on driving valuable growth that is profitable and enhances Acer brand value,” Wang said.
              The company’s operating margin in the quarter was 0.03 percent, and it had consolidated revenue of NT$91.7 billion (US$3.08 billion), down 9.4 percent from the previous quarter due to seasonal factors.
              The company’s first quarter net income was NT$515 million, or NT$0.19 per share, derived mainly from non-operating income such as foreign exchange gains and the disposal of stock.
              Acer’s operating income was NT$29 million, compared with an operating loss of NT$3.37 billion in the fourth quarter of last year that included a NT$3.5 billion intangible asset impairment charge for the loss in value of its rights to four trademarks.
              Acer unveiled a series of Windows 8-based laptops and tablets in New York on May 3 in a bid to boost shipments and strengthen its bottom line, but the company is still struggling to cope with weak PC demand and strong competition from other brands.
              Kirk Yang, a Hong Kong-based analyst at British banking group Barclays Plc, said Acer’s operating margin of 0.03 percent was much lower than his forecast of 0.18 percent and a consensus estimate of 0.17 percent by Bloomberg.
              “We expect Acer to guide revenue to grow by single digits sequentially, after posting quarter-on-quarter revenue contraction for five quarters in a row,” Yang said in a note to clients before the investor meeting.
              “However, we estimate that Acer’s operating margin in the second quarter of 2013 will not see any meaningful recovery due to weakening global PC demand and more low-priced tablet PC shipments in the mix,” he wrote.
              Barclays forecast that Acer’s sales revenue will grow 4.8 percent for the whole of 2013, with its operating margin improved to 0.8 percent. It maintained an “equal-weight” rating and a target price of NT$24 on the stock.
              Acer shares closed up 2.26 percent at NT$24.85 before the announcement of the quarterly results.

              August 8, 2013: Acer aiming to break even in Q3

              Taiwanese computer maker Acer Inc. said Thursday it expects to break even or record a small operating loss in the third quarter of 2013, despite its disappointing results in the previous quarter.
              The company’s mobile PC shipments — including notebooks, netbooks and tablets — are forecast to grow by 0-5 percent sequentially in the third quarter, Acer Corporate President Jim Wong told investors in a conference call.
              However, Acer has lowered its annual tablet shipment target to between 5.5 million and 6.5 million units, from its projection in May of 5 million to 10 million units, Wong said.
              He said touch-enabled notebooks will account for 20-25 percent of Acer’s total laptop shipments this year, below its previous estimate of 30 percent, in light of weakening demand for such products.
              “I think applications are most important. Today, there are still no killer applications for touch (notebooks),” Wong said in the conference call.
              Asked about Acer’s full-year outlook, he said the company is trying to “sustain its market share while protecting its bottom line.”
              The company is aiming to stay profitable in 2013 after registering losses over the past two years, Wong indicated.
              J.T. Wang, Acer’s chairman and chief executive officer, said the company is expanding its non-Windows business, including Android-based tablets and smartphones, as well as the web-centric Chromebook laptops promoted by Google Inc.
              Non-Windows business is expected to make up 10-12 percent of Acer’s revenue this year and 20-30 percent next year, Wang said.
Acer reported an operating loss of NT$613 million (US$20.47 million) for the second quarter, following six consecutive quarters of operating profit, because of increasing investment and the rising cost of memory chips.
              For the first six months of 2013, the Taiwanese PC maker’s consolidated revenue fell 18.9 percent year-on-year to NT$181.35 billion, resulting in an operating loss of NT$585 million and earnings per share of NT$0.06.
              British bank Barclays Plc maintained its “equal-weight” rating on Acer shares and cut its earnings per share estimates by 5.4 percent for 2013, and by 5.3 percent for 2014, forecasting a contraction in Acer’s sales and more competition pressure.
              “We expect Acer’s sales to continue to be weak and do not expect any further momentum currently,” Kirk Yang, head of Asia ex-Japan Tech Hardware Research at Barclays, said in a research note dated Aug. 6.
              “We expect Acer will face a more competitive situation in the tablet and notebook segments in the near term and we don’t see it having an obvious plan in place to react,” said Yang, who reduced his price target on the stock from NT$24 to NT$23.
              Acer shares ended 3.97 percent lower at NT$20.55 Thursday on the Taiwan Stock Exchange.

              August 30, 2013: Talk of the Day — Will Acer be sold or merged?

              Acer Inc., Taiwan’s leading computer vendor, has seen its share price plunge to historically low levels in recent months.
              Market sources said earlier this week that investment banks are planning to broker a merger between Acer and one of two major rivals — Taiwan’s AsusTek Computer or China’s Lenovo Group.
              Acer founder Stan Shih said Thursday that he had an open mind toward such an overture.
“I would let nature take its course,” Shih said, but he added that no investment bankers have approached him for such talks.
              In charting the company’s future development strategy or direction, Shih said, the rights and interests of all stakeholders, including employees, shareholders and society at large, should be priority concerns.
              Shih has retired and is no longer involved in Acer’s management, but he remains the company’s largest shareholder, controlling 2.64 percent of its shares. His wife has a similar sized stake in the company.
              Shares of Acer gained 2.57 percent to close at NT$19.95 Friday.
              The following are excerpts from local media coverage of Shih’s views on Acer’s future:
              Economic Daily News:
              Acer spokesman Henry Wang said Thursday that the company has never thought about a merger with any other corporation.
“We are restructuring and streamlining our operations, and focusing more on innovating,” Wang said.
              While the company is tapping into the ever-expanding tablet market to help compensate for declining PC sales, it has also launched a new generation of laptops and desktops, including an ultra-thin laptop-tablet hybrid, he said.
              In the past, some foreign analysts have suggested that Taiwan’s two leading PC makers — Acer and AsusTek — should merge to expand their operating scale and enhance their international competitiveness.
              Acer Chairman J.T. Wang also said previously that Lenovo, which has emerged as the world’s second largest PC vendor and has a comprehensive portfolio of products, proposed a few years ago to buy out Acer, but Wang said he politely rejected such an offer.
              On Thursday, Stan Shih was asked to comment on reports that investment banks intend to mediate an Acer-AsusTek merger or an Acer-Lenovo merger.
              Shih said Acer is not a company that can be evaluated solely in financial terms.
              “Capitalists tend to assess things simply in monetary terms, but Acer has something invaluable,” Shih said.
              As one of Taiwan’s few international brands, Shih said, Acer has come a long way and overcome numerous challenges in building up its brand recognition.
              “I hope local people will give Acer more encouragement and support,” Shih said. (Aug. 30, 2013).
              China Times:
              Shih said a company’s share price is not the sole indicator used to assess a company’s value.
              “I have not been bothered by fluctuations in Acer’s share price,” Shih said Thursday when chairing an event marking the start of applications for this year’s Acer Digital Award.
              But he added that the PC industry is changing rapidly.
“We should let nature take its course. If somebody wants to take over Acer at a price beyond what anybody could imagine and create an even better brand based on it, why should we resist such a deal?” he said. (Aug. 30, 2013).

              May 11, 2013: Acer, Asustek upbeat about Windows 8 market reception

              Taiwan-based Acer Inc. and Asustek Computer Inc., two of the world’s leading personal computer vendors, are optimistic about the market reception of Microsoft Corp.’s latest operating system Windows 8, which is to be revamped, market sources said Saturday.
              Acer Chairman J.T. Wang said Microsoft is eager to communicate with hardware device providers like Acer in an attempt to improve the Windows 8 functions and make the platform more user-friendly.
              Amid lackluster market reception since the new Microsoft operating system was launched at the end of October 2012, the U.S.-based software giant said it is planning to revamp the OS so that consumers will learn how to use the new platform more quickly.
              The plan to launch a new version of Windows 8 was announced after Tami Reller, Microsoft’s chief marketing and financial officer, conceded that it was not easy for consumers to get used to the platform.
              Many business users have been urging Microsoft to restore the “Start” button in its latest OS. In the earlier Windows versions, the icon appears in the lower-left corner of the computer screen, but is not visible in the latest software.
              To stir up buying interest, Microsoft has lowered its royalties by US$20-US$30 (NT$600-NT$1,000) on touch notebook computers 11.6 inches or smaller, while offering incentives to distributors of Windows 8 tablet computers.
              Market sources said Microsoft is expected to cut its royalties on Windows 8 tablet computers so that they can be sold at around US$199-US$349 and thus make them more competitive in the market.
              Wang said the changes in Microsoft’s strategy will have a positive effect on market reception of the Windows 8 OS and also on the future development of the PC industry.
              Acer said that with touch devices becoming the mainstream in the PC market, it will continue to unveil tablets, touch ultrabook computers, and combination PCs and smartphones, all running either Windows 8 or Google’s Android operating system.
              Meanwhile, Asustek said Windows 8 is a good product, although some consumers have not gotten used to it. Once Microsoft revamps the OS, sales of Windows 8 mobile devices will pick up, Asustek said.

              June 3, 2013: COMPUTEX: Acer unveils new product lines

              imageAcer Chairman J.T. Wang (left) holds the 8-inch Iconia W3,
              and Chief Marketing Officer Michael Birkin holds the 5.7-inch Liquid S1.

              Taiwanese computer maker Acer Inc. unveiled a series of new products Monday, including an 8-inch Windows tablet and a 5.7-inch phablet.

              At an international press conference held under the theme of “Redefining Technology Through Touch,” Acer showcased a wide array of its latest products one day ahead of Asia’s largest computer trade show.

              The 8-inch Iconia W3, one of the first 8-inch Windows tablets on the market, weighs 500 grams and is less than half an inch thick. With a battery life of eight hours, the device can beam out 720p video playback on a 1,280 x 800 display. It also comes with an optional full-size keyboard.

              The company also displayed its first phone-tablet hybrid product, the Liquid S1, with the aim of gaining traction in the fast-growing hybrid market.

              The new quad-core phablet features a 5.7-inch 720p display, weighs 195 grams and runs on Google’s Android 4.2 operating system.

              Acer projected that the global phablet market will grow to about 10 million units in 2013, up from between 7 million and 8 million units last year.

              June 3, 2013: COMPUTEX: Acer unveils new product lines (update)

              Acer Chairman J.T. Wang said on the sidelines of the launch ceremony that touch technology applications have become all the rage, and this will continue in the future.

              “It’s all about touch,” he said, adding that the launch of the new products is expected to meet consumer demand.


4. The road which led to Acer’s downfall:

              Acer press release:

              March 31, 2011: Acer CEO and President Gianfranco Lanci resigns – With immediate effect

              Gianfranco Lanci is appointed President of Acer Inc., effective January 2005 … Current President, J.T. Wang, will step into the role of Chairman and Chief Executive Officer (CEO) as Stan Shih retires from Acer at the end of this year. The new positions are effective from 1 January 2005. Lanci’s designation marks Acer’s appreciation for his outstanding performance in the European market, including his management style and successful business model – which may now extend to the Acer group worldwide. …
              Acer’s Lanci Takes Over CEO Role [IDG News Service, June 13, 2008] … Gianfranco Lanci, who came to Acer from Texas Instruments (TI) when Acer bought the TravelMate laptop PC business from TI in 1997, will add the CEO position to his current role as president of Acer.
              The company’s laptop business has been a driving force in its double-digit growth over the past few years and helped catapult Acer into the number-three spot in the PC industry.
              J.T. Wang, the current chairman of Acer, relinquished his CEO title at Acer but took on the title of Acer Group CEO on Friday, Acer said in a statement.
              Wang took over as chairman at Acer from company founder Stan Shih several years ago, after Acer split itself into three distinct companies in order to separate its branded business from its contract manufacturing operations. Acer took over as the branded company, while Wistron took most of the PC-related contract manufacturing and BenQ took on mobile phone and PC-related work.
              Shih retired from Acer in December 2004.
              Acer CEO and President Gianfranco Lanci has resigned from the company, with immediate effect. Acer Chairman J.T. Wang takes acting role in the interim. The company has commenced with the planning of organizational and operational adjustments for the sustainable future of Acer.
              The resignation was approved at a meeting of Acer’s Board of Directors today, and the company has communicated internally with its worldwide employees.

On the company’s future development, Lanci held different views from a majority of the board members, and could not reach a consensus following several months of dialog. They placed different levels of importance on scale, growth, customer value creation, brand position enhancement, and on resource allocation and methods of implementation.

The change does not affect current operations, which are functioning as normal. Acer’s strong management team of multi-nationals has been well informed and is committed to overseeing and implementing the company strategies, and the company’s amicable relations with industry partners persist. Acer will continue to push for globalization, follow its multi-brand and channel business model, develop competitive products and services, and foster closer relations with key vendors and channel partners.

              Acer Chairman, J.T. Wang expresses, “The personal computer remains the core of our business. We have built up a strong foundation and will continue to expand within, especially in the commercial PC segment. In addition, we are stepping into the new mobile device market, where we will invest cautiously and aim to become one of the leading players.”

              “In this new ICT industry,” continued Wang, “Acer needs a period of time for adjustment. With the spirit of entrepreneurship, we will face new challenges and look to the future with confidence.”

              In his role as President and CEO, Lanci has contributed significantly toward Acer’s growth. The company expresses its true appreciation for Lanci’s efforts and wishes him all the best in his future endeavors.

              April 19, 2011: Acer appoints Jim Wong as Corporate President – Through teamwork, company to face challenges and embrace opportunities of the new ICT industry

              Acer Inc.’s board of directors has approved the candidacy of Jim Wong as the new Corporate President, with immediate effect. Wong previously held the positions of corporate senior vice president and president of IT Products Group (ITGO). Together with Chairman and CEO, J.T. Wang, they will lead the company forward to embrace new challenges and opportunities in the new ICT age.

              Acer Chairman and CEO, J.T. Wang … “As the ICT industry shifts from single to multiple operating system platforms, it opens up new challenges as well as new opportunities. Acer needs a leader who is familiar with technology, as well as understands the market. We reviewed Jim’s potential and agreed he would fit well in the role.”

              The rapid growth from data-creation to data-consumption devices is increasing the ICT market scale and opening up new prospects. Acer will aggressively yet cautiously develop data-consumption products, tablet PCs and smartphones based on the solid foundation of the main PC business.

              Jim Wong, new corporate president of Acer states, “The IT industry is encountering a profound change. I foresee many new opportunities and am ready to face the challenges ahead. I will encourage teamwork throughout the company and work closely with the new management team. We are ready with a clear set of goals and action plans.”

              In the PC business, Acer will continue to seek volume/shipment growth, but we must optimize our multi-brand strategy by having clear differentiation of the brands’ positioning and create value for our customers. Concurrently, Acer shall focus on developing selective models for mobile devices to lay a solid foundation for the future.
              Three key principles have been defined by Acer’s new management to ensure successful decision making:

              • Promote the spirit of teamwork to enhance company’s overall competitiveness, and encourage closer communication between front-end and back-end management teams for better mutual understanding.
              • Simplify operational systems and processes to boost effectiveness and speed.
              • Strengthen corporate governance and enhance company sustainability.

Wong joined Acer in 1986 and has experience in sales, product marketing and product development, with a keen understanding of ODM supplier operations and the brand business. Since taking charge of ITGO in 2001, he has been one of the core members of Acer’s top management team. In 2005 he was promoted to corporate senior vice president.

Born in 1958, Wong holds a bachelor’s degree, majoring in mathematics, from Soochow University in Taiwan, and an MBA from Emory University, Georgia, USA. In 1999 he received Taiwan’s 17th Annual Management of Excellency Award.

              Acer ICONIA [press release, Nov 23, 2011]

              Not so long ago mobile computing devices with touch screens were only found in science fiction. Now Acer presents ICONIA, a new concept device set to add a brand new tablet experience, combining the versatility of a conventional 14” form factor with a unique dual-screen layout and highly intuitive all-point multi-touch functionality, which means you can use all the fingers of your hands to navigate ICONIA.

              If you are looking for a different and innovative approach to personal computing, look no further. With its two all-point multi-touch displays Acer ICONIA offers an enhanced content consumption experience and brings the interaction with the tablet to a new level.

              Multimedia, entertainment, communication, web browsing and office productivity seamlessly flow across the dual screen, allowing users to set the best scenario for what they are doing. To improve readability of web sites or documents, the window can be spread across both screens. But the dual screen also means you can do one thing in one screen and something else entirely on the other: you can browse a website on the top screen and view the contents of your favourite folder on the bottom one or you can watch a video on the top screen and check out your multimedia library in the other.

              “We took this insight and created a range of easy-to-use devices with touch technology including Smartphones, Notebooks, AIO PCs, Tablets and our latest addition, the ICONIA Touchbook: this level of commitment to touch technology is something no other PC vendor can compete with,” states Jim Wong, Acer Inc. Vice President and ITGO President. “The Intel® Core™ i5 processor together with our experience with touch technology has allowed us to completely remap the user experience to create a far more natural interaction with our devices.”

              April 19, 2011: Acer establishes Touch Business Group to enhance development of new mobile devices

              • Acer Corporate President Jim Wong to lead Touch Business Group
              • Campbell Kan to lead PC Global Operations
              • Walter Deppeler to lead Chief Marketing Office

              Acer Inc. announces organizational adjustments separating the back-end product-line operations into two independent entities: Touch Business Group (Touch BG) and PC Global Operations (PCGO), led by new Acer corporate president Jim Wong and Campbell Kan, former VP of the smart handheld business unit, respectively. In addition, Acer announces new functions for mid- and long-term business planning and operation analysis.

              To make significant inroads in the mobile device business, Acer has reorganized the former IT product global operations into two independent entities. The newly founded Touch BG comprises the former tablet PC and smartphone teams, while the PCGO consists of the main PC product lines.

              The Touch BG shall be led by new Acer corporate president Jim Wong, with Simon Hwang, president of Eten Information Systems, concurrently appointed deputy president of Touch BG.

              Acer president, Jim Wong, states, “Touch/mobile devices open up a host of new opportunities. They form Acer’s new business and growth engine for the future. To focus on this market, we saw the need to allocate sufficient resources, and devise a new management structure different from the PC business specifically for this line of business.”

              New Functions
              Acer also creates three new functions deemed necessary for the company’s competitive development: the Chief Marketing Office (CMO), responsible for brand positioning and marketing strategy; the Chief Technology Office (CTO), responsible for mid- to long-term planning and integration of technologies; and the Operation Analysis Office (OAO), for studying and analyzing company business models and financial affairs.

              Senior corporate VP and EMEA president, Walter Deppeler, shall concurrently serve as CMO, while Tiffany Huang, AVP of supply chain operations will concurrently oversee the OAO. The CTO will be jointly led by former VP of quality and service, Jackson Lin, former CTO of products development, R.C. Chang, and former VP of technology center, Arif Maskatia.

              May 26, 2011: Acer’s manufacturing base in Chongqing commences operation – Ceremony to mark milestone achievement joined by Mayor Huang Qifan and Acer President Jim Wong; new base to raise competitiveness on a global scale.

              Acer’s new global IT manufacturing center in Chongqing has commenced production. Today a ceremony attended by Chongqing Mayor Huang Qifan and Acer President Jim Wong was held to mark this achievement. The city of Chongqing in western China offers excellent infrastructure including land and air transportation, and stable manpower supply. The newly operational manufacturing center is expected to enhance Acer’s worldwide business and logistics to boost overall competitiveness.

              Acer President, Jim Wong, remarked, “Our decision to go west in China is a global strategy. Since December last year, the steps in setting up this manufacturing base have been smooth, enabling our production start in May. Acer is extremely grateful for the support of the Chongqing government and our manufacturing partners to make this a possibility.”

              “Major OEM companies have already set foot in Chongqing and all will begin shipping by the second half of this year,” continued Wong. “Key component suppliers have also set up presence here to create a complete supply chain. To begin with, we will produce our notebook and netbook PCs in Chongqing and gradually expand our manufacturing volume. By the end of 2011, 30-40% of our total notebook and netbook PCs will be produced here.”

              June 1, 2011: Acer Chairman & CEO to relinquish his remuneration

              Acer Chairman and CEO J.T. Wang is taking responsibility for the one-time write-off totaling US$150 million by relinquishing his total remuneration as director of the company board, as well as his 2010 employee bonus.

              With Acer’s substantial write-off loss, Wang deeply regrets the current situation and will dedicate his efforts fully to investigating the reasons behind the loss and to improving internal management.

              July 18, 2011: Dave Chan appointed General Manager of China Operations, Acer Touch Business Group – Focus on penetration into China touch mobile device market

              Global IT industry veteran, Dave Chan, has been appointed General Manager of China Operations, Acer Touch Business Group. Under Chan’s leadership, Acer expects to accelerate penetration into China’s smartphone and tablet PC market.

              Chan has worked in the high-tech industry for more than 20 years, accumulating a wealth of experience in the consumer/retail business and operations, with geographic experience spanning global, regional (Asia) and country (China) levels. Prior to this, he served for eight years as a senior official at a first-tier IT company, responsible for notebooks, smartphones and tablet PCs in China.

              Acer Corporate President, Jim Wong, said, “Touch mobile devices are Acer’s new strategic business, and China’s huge IT market, with unique applications and customer segments, presents great business potential. To address these specific needs, we established a separate business group overseeing the China touch mobile device market and will allocate the needed human resources.”

              “Dave will lead Acer’s touch business development team in China,” continued Wong, “and cooperate with local telcos and operators on R&D, software, sales and services. His joining ensures that Acer has substantial leadership to steer this new business forward in China.”

              To make significant inroads in the mobile device business, Acer announced in April the newly founded Touch Business Group, comprising the former tablet PC and smartphone teams and directly overseen by Wong.
              Chan holds a B.S. in Mechanical Engineering from Oregon State University and MBA from Santa Clara University.

              July 18, 2011: Acer sets up global R&D center in Chongqing – Focus on smart handheld application software and services

              A new global R&D center, Acer Intellectual (Chongqing) Co. Ltd., was inaugurated today to enhance Acer’s development in smartphones and tablet PCs. The center shall cooperate with Chongqing municipal government and China Mobile Ltd. in researching and developing smart handheld devices as well as related software and services.

              An inaugural ceremony was held today and joined by Chongqing government officials, during which Acer also signed an agreement with Chongqing Economic and IT Commission (CQEIC) and China Mobile’s Chongqing subsidiary (Chongqing Mobile) to jointly research and develop smart handheld devices, including smartphones and tablet PCs, application software and services.

              To begin with, Acer will invest US$4 million in Acer Intellectual (Chongqing) Co. The center, led by Acer Corporate President Jim Wong, will also focus on the smart handheld user behavior study in the China market.

              Acer’s R&D taskforce has already begun collaborating with Chongqing Mobile and local IT companies, successfully developing software applications for Android-based TD (time division) smartphones; the applications are used by Chongqing civil servants. Further on, together with the Chongqing government, the center will develop smart handheld mobile terminals to provide more value-added services.

              January 8, 2012: Acer Unveils World’s Thinnest Ultrabook: Aspire S5

              “The Ultrabook is much more than just a product segment,” said Jim Wong, president of Acer Inc. “It’s a new trend that will become the mainstream for mobile PCs, and customers will see the unique features gradually extended across Acer’s notebook family.”

              January 8, 2012: AcerCloud Connects All Personal Devices Securely for Anytime, Anywhere Access to Digital Media and Data

              Acer today previewed its upcoming AcerCloud, which securely connects all personal smart devices for anytime, anywhere access. Featuring Acer Always Connect technology, users can retrieve multimedia and data files anytime, even when their main PC is in sleep (standby/hibernation) mode. Users can enjoy these advantages knowing that their information is stored and transferred securely via strong encryption and authentication. Bringing users tremendous functionality and value, Acer will include the AcerCloud, without additional cost, on all new Acer consumer PCs.

              Acer reduces the complexities of today’s fast-paced lifestyles by developing solutions that enable devices to communicate, simplifying the process of content sharing. With the ever-growing number of smart digital devices, users need to share and back up their multimedia and data files in a simple, smart way.

              Acer Inc. President Jim Wong stated, “AcerCloud not only provides the simplicity and efficiency when accessing and sharing data, but it’s also free with a new Acer PC and gives our users peace-of-mind, knowing that their data is safely transferred in a personal cloud space.”

              AcerCloud will be bundled on all Acer consumer PCs starting Q2 2012. It will support all Android devices, while future support is planned for Windows-based devices. The service will be available in America, Europe, Asia and China.

              August 30, 2012: Acer Steps Up Marketing, Engages Red Peak Group and Appoints Michael Birkin as Chief Marketing Officer

              To energize and strengthen Acer’s global marketing organization, Acer will engage Red Peak Group, a global marketing services firm, and appoint Red Peak Chairman Michael Birkin as Acer Chief Marketing Officer (CMO). This strategic move is aimed at strengthening Acer as a marketing-oriented company.

              Red Peak will assign Birkin and other Red Peak members to perform related marketing functions and services for Acer. And as CMO, Birkin will lead the global brand marketing team, and report directly to Acer’s chairman and CEO, commencing October 1, 2012.

              According to Acer Chairman and CEO, J.T. Wang, “Our key objectives for Red Peak are to enhance Acer’s marketing strengths and help steer the existing company mindset.”

              “In the product development stages, we will place marketing ahead of R&D and design,” said Wang. “Our precise understanding of customers’ needs will lead the way in products and services development. We will build an end-to-end marketing environment and enhance our marketing-oriented mindset.”

              Birkin is regarded as one of the world’s most respected brand strategists and marketing experts. During his career he served as the CEO of Interbrand Group, the brand consultancy, and worked in various capacities at Omnicom, the global advertising and marketing communications services group. In 2010, Birkin founded the Red Peak Group, a marketing services company with offices in New York, London and Los Angeles, offering a full range of services including brand consulting and design.

              In addition to the marketing organization and personnel changes, the incumbent CMO Walter Deppeler, has been assigned to lead a newly established marketing committee as Chairman, responsible for integrating Acer’s global branding and marketing strategy.

              June 4, 2012: Acer Unveils Windows® 8 PC Lineup: Ultrabooks™, Tablets, and AIO Desktops – Creating a world of explorers through transformational user experiences

              Acer today announces its series of Windows 8-based products, which includes the premium Aspire S7 Ultrabook™, ICONIA W Series tablets, and Aspire U Series all-in-one (AIO) desktops, all featuring innovative ergonomic designs and appealing beauty that deliver greater convenience and delight to the overall user experience.
              “It is a watershed moment for Acer,” says J.T. Wang, Chairman and CEO of Acer Inc. “Acer has always been committed to breaking the barriers between people and technology and the leading design of these products, when coupled with the Windows® 8 touch functions, will provide transformational experiences for users whether they are creating important output or simply being entertained.”
              Jim Wong, Corporate President of Acer Inc. comments, “Acer collaborated closely with Microsoft Corp. and has taken the lead to engineer new products that will be great with Windows® 8, demonstrating our product development efficiencies and taking advantage of our ability to provide an enhanced and satisfying computing experience. By focusing on ergonomics and style, we are addressing key consumer demands.”
              Wong continues: “Interaction between human beings and computers should be easy rather than complex. In our view, the touchscreen experience enabled by Windows® 8 is a massive step forward – simply because it makes computing more intuitive by offering users a breakthrough in interface. We understand Windows® 8 innovation and benefits and by utilizing Intel’s architecture and platform performance on our products, we believe we will provide users a better touch experience across devices for both consumer and commercial products.”
              “Microsoft and Acer have been working together on new devices for Windows® 8, and it’s great to see the progress Acer is making,” says Steven Guggenheimer, Corporate Vice President, OEM Division, Microsoft Corp. “We expect customers to have a great experience using the combination of Windows® 8 and the new hardware designs from Acer.”
              “Intel and Acer continue to focus on innovation and collaboration delivering engaging and secure user experiences,” says Kirk Skaugen, Vice President and General Manager, PC Client Group of Intel Corp. “Combined with the increased responsiveness of Intel’s 3rd generation Intel Core processor, new breakthrough capabilities possible with future Microsoft Windows® 8, and the added flexibility of touch, the Acer Aspire Ultrabook™ will provide a magnificent experience for users.”
              Wong says further, “At CES we announced Acer’s new brand positioning, the visible statement of which is to explore beyond limits. Today’s announcement is the most significant yet in our goal to create a modern day explorer in everyone. Our new products are 100% designed and created to enable anyone to accomplish more whether they be an individual or a business.”
              In the development of the new product lineup, Acer has been working even more closely with Microsoft and Intel.
              PRODUCT INFORMATION
              The Aspire S7 Ultrabook™ — the premium model in the S Series — boasts a sleek aluminum unibody design. The 13.3-inch model is currently the thinnest Full HD touch Ultrabook™ and features glossy tempered glass, while the 11.6-inch model is the smallest Full HD touch-enabled Ultrabook™. Both devices are kitted out with the innovative Acer Twin Air cooling system for best thermal comfort, as well as a light-sensing keyboard that adjusts its backlight to facilitate typing, even in low light.
              The ICONIA W510 and W700 tablets have raised touch functionality to the next level. The W510 is equipped with a 10.1-inch display and has tri-mode touch, which allows users to touch, type and view. It also delivers up to 18 hours of battery life and headlines Always On, Always Connect technology. The W700 is the best-performing Windows tablet with a versatile cradle that is adjustable for different viewing requirements while offering data storage expansion and an additional battery. Sporting an 11.6-inch Full HD touchscreen, this tablet stuns with high-quality 1080p images.
              Aspire U Series AIO desktops are also available in two sizes. The 27-inch 7600U has an ultra-slim 35 mm profile and a gorgeous Full HD edge-to-edge screen and Dolby® Surround Sound. This AIO features multi-user touch, and can be tilted from 0 to 90 degrees. Furthermore, the screen can swivel to all sides when laid flat. The 23-inch 5600U is the slimmest AIO PC that can tilt from 30 to 85 degrees, enhancing personal touch use. Both models have leading ergonomic designs, and a slim, stylish finish that complements interior decor.

              October 30, 2012: Acer Aspire S7 Series The Thinnest and Lightest “Touch & Type” Ultrabooks™

              First previewed at Computex Taipei, the Acer Aspire S7 Series, the thinnest and lightest Ultrabooks™, has been hailed as one of the most exciting Windows 8-based touch Ultrabooks to launch. It was also featured prominently in Microsoft’s launch event in New York and highlighted as one of the best PCs ever made. The positive reviews have been unparalleled.

              As thin as a smartphone, the S7 is an iconic combination of power and beauty. The use of straight lines, glossy white glass, electroluminescent lighting and anodized aluminum have culminated in an Ultrabook that champions cutting-edge technology and innovative design. The dual torque hinge and Acer Green Instant On / Always Connect features ensure the ultimate in control and seamless usability.

              “Acer took a fresh approach to the design and development of the Aspire S7, using premium construction methods and materials,” said Jim Wong, corporate president of Acer. “The high level of engineering and design quality we set for the S7 was achieved by placing the user experience as our top design priority, and by our ongoing commitment to introducing technologies into our products that truly complement human behavior, and stimulate curious and progressive thought and action.

              November 12, 2012: Acer America’s New C7 Chromebook: Secure, Speedy and Simple

              Editor’s Summary:

              • Available for purchase starting tomorrow in the U.S. through Google Play, Best Buy stores and BestBuy.com at an affordable $199
              • Provides hassle-free computing with automatic security and software updates
              • Great for use as an additional home computer
              • Includes built-in apps for productivity, collaboration and entertainment

              Acer America today debuts its new Acer C7 Chromebook, its next-generation mobile computer that runs Google’s Chrome operating system and is priced at a low $199.

              The new Acer C7 Chromebook is the ideal additional laptop for families, students and business people who need a fast, easy and secure way to get online and do their computing in the cloud, such as using Gmail, keeping up on social networks, shopping, and paying bills.

              “Today’s computer users are doing more online, heightening the need for enhanced security, quicker online access and an easy-to-use interface,” said Jim Wong, corporate president, Acer Inc. “The Acer C7 Chromebook provides all this at an affordable price, making it the right choice for families and students on a budget as well as anyone who wants a new or second mobile PC for web-based computing.”

              “The core of Google’s Chromebook vision is creating a better, more simple computing experience and making it available to everyone,” said Sundar Pichai, senior vice president, Chrome and Apps, Google. “We’re excited about the Acer C7 Chromebook, the newest addition to the Chromebook family. The Acer C7 delivers a hassle-free computing experience with the speed, security and simplicity that users expect of Chromebooks built in.”

              December 10, 2012: Acer Appoints Tiffany Huang President of PC Global Operations – Incumbent president, Campbell Kan, to serve as special assistant to Acer chairman

              Acer announces the appointment of Tiffany Huang to become the president of Personal Computer Global Operations (PCGO), reporting to the corporate president, Jim Wong. Huang shall replace Campbell Kan who will serve as special assistant to the chairman, J.T. Wang. Both appointments shall take effect from January 1, 2013.

              Kan has held key positions within Acer’s IT products global operations over the past twelve years, and is credited with excellent management and contributions to the mobile PC business. With his extensive know-how, Kan shall take charge of key projects assigned by the Acer chairman, where he can lend his expertise for the future of the company.
              With her latest appointment, Huang leaves her post as associate vice president of Supply Chain Operations Business Unit after twelve years in this field. In the past year, she has also held positions in the Operations Analysis Office responsible for analyzing and strategizing corporate operations, and the Strategic Demand Planning Business Unit for demand and material planning.
              During her career at Acer, Huang has demonstrated clear leadership, execution and communication skills, along with cross-cultural and cross-functional experience. Her business acumen, global insight, and accurate end-to-end projections on many occasions made her the ideal candidate for the position of president of PCGO as Kan assumes his new post.
              Huang joined Acer in 1988 in the legal division dealing with intellectual property rights. From 1997 to 2001 she served as director of operations management at Acer’s U.S. operations. In 2001 she returned to the Taipei headquarters and was later promoted to associate vice president of supply chain operations until the latest appointment.
              Born in 1964, Huang has a Bachelor of Science degree in Law from Taiwan’s Chung-Hsing University.

              January 7, 2013: Acer Extends AcerCloud to Top Three Operating Systems, Making it Easy to Share Files and Media among Windows, iOS and Android Devices

              Acer today announced cross-platform support for AcerCloud, the company’s file sharing and media management solution, free to Acer customers. Consumers can now share, retrieve and enjoy their multimedia and data files using a variety of computing devices, regardless of which operating system they are running – Windows, Android or iOS.

              AcerCloud uses the free space on a PC’s hard drive as cloud storage space. Users simply designate one of their PCs as their “Cloud PC,” enabling them to use the available hard drive space on their own PC, giving them security and full control over their storage needs. And unlike other cloud solutions, consumers won’t receive constant reminders about exceeding capacity with solicitations to pay for more storage.

              “With AcerCloud, Acer now supports free file sharing between all of the key mobile devices, adding tremendous value to Acer customers,” said Acer President, Jim Wong.  “AcerCloud greatly simplifies our customers’ ability to manage all of their digital assets across all of their devices, regardless of platform.”

              Acer, Asustek actively marketing cloud computing solutions [DIGITIMES, July 25, 2013]

              Acer and Asustek have been pushing forward in marketing hardware/software-integrated cloud computing solutions focusing on educational applications and web storage, respectively, according to the companies.

              Acer has integrated its servers with software used in eDC, its electronic information management center, into cloud computing solutions and promoted sales through cooperation of system integration providers, the company indicated. The cloud computing solutions are mainly used for educational purposes, with procurement by local governments being the major source of business, Acer noted. In addition to contracts from schools in Taiwan and Thailand, Acer has been marketing products in Nanjing City, eastern China, and Chongqing City, western China, and plans to tap the North America and Europe markets, Acer noted.

              Asustek has its subsidiary, Asus Cloud, responsible for operating its cloud computing business. In addition to Taiwan-based Cathay Financial Holdings and Taishin Financial Holding, Asus Cloud-developed storage solutions have been adopted by the National Center for High-performance Computing (NCHC) under the government-sponsored National Applied Research Laboratories, Asus Cloud CEO Peter Wu said. Asus Cloud will offer a storage solution of 1PB in total capacity for NCHC, with more than 10TB to come into use in the second half of 2013, Wu indicated. In addition, Asus Cloud has signed with the government of Chongqing City to develop cloud computing platforms for education, civic services and by small- to medium-size enterprises in the city, Wu said.

              June 3, 2013: Acer Enhances its Flagship Ultrabook™, the Aspire S7

              “We designed the S7 to be the best touch Ultrabook in the world, bar none,” said Jim Wong, Acer Corporate President. “We listened carefully to users to find substantial ways to make it even better.” The re-engineered S7 delivers improved battery life of up to 7 hours, a 33% increase from its predecessor. Its new light-sensing EL backlit keyboard is also refined, with a deeper keystroke for more natural and comfortable typing. Plus, thanks to 2nd generation Acer TwinAir cooling technology, the noise at maximum load is more than 20% lower than the previous S7, keeping the system quiet and cool. … The new Aspire S7 will be available in Q3 2013.

              The end of the road announcements:

              Acer Chairman and CEO J.T. Wang Tenders Resignation; Corporate President Jim Wong to Succeed as CEO – Wang to remain in chairmanship to fulfill tenure as Acer begins a comprehensive restructuring and transformation [press release, Nov 5, 2013]

              Acer announces that the resignation of J.T. Wang, Chairman and CEO, has been approved by its board of directors. Wang shall remain in chairmanship until the end of his tenure next June. The Board and The Search Committee also agreed that Corporate President Jim Wong will succeed Wang as the new CEO from January 1, 2014. A comprehensive restructuring plan has been formulated by the Acer management team, and without delay, the Board will commence with its corporate transformation.
              J.T. Wang, chairman and CEO of Acer, said, “Acer encountered many complicated and harsh challenges in the past few years. With the consecutive poor financial results, it is time for me to hand over the responsibility to a new leadership team to pave the way for a new era.”
              Acer’s board of directors stated, “We are very grateful for Wang’s contribution and hard work. The past two to three years have been extremely tough for Acer due to the rapidly changing industry and market conditions. We fully respect Wang’s decision to step down; however, in the interest of ensuring company stability and a smooth transition during this latest restructuring and transformation, we have asked J.T. to remain to complete his tenure as Chairman which ends in June 2014.”
              Wang elaborated, “Together with the management team, we have crafted a far-reaching plan for Acer’s transformation. I wish to thank the board members for their support and to Jim for assuming the CEO duties. I feel optimistic toward Acer’s future. The management team promises to carry out the internal restructuring and will work closely with the Board on the corporate transformation.”
              Acer’s Board has set up a Transformation Advisory Committee with board member [founder] Stan Shih as Chairman and Acer co-founder George Huang as executive secretary. The committee will propose changes in the company vision, strategy, and execution plans for the Board’s approval. They will work with the management team to carry out the transformation to increase shareholder value. To support new development needs, the Board has approved the issue of 136 million new common shares for a capital increase in cash (approximately 4.8% of total shares).
              Stan Shih stated, “After I retired from Acer I shifted my attention to promoting public interests. But when J.T. tendered his resignation, the Board turned to me for help. In consideration of personal social responsibility and for Acer’s onward sustainability, I agreed to take on the duty to help the management team with a smooth handover during this transition period.”
              Shih added, “After making structural adjustments, we will introduce more competitive products within the existing PC, tablet, and smartphone business and stabilize our market share. This will be the basis of our transformation and for developing new business opportunities.”
              Acer’s personnel and business restructuring plans include reducing manpower, product plan termination with related product tooling and legal fees, resulting in a one-time cost of US$150M which is expected to be reported in the Q4’13 financial results. Acer will cut its worldwide employees by 7% resulting in OPEX savings of US$100M annually from 2014.

              Acer Q3’13 Financial Results: Consolidated Revenue NT$92.15B (US$3.11B), Operating Loss NT$2.57B (US$86.61M), Intangible Asset Impairment NT$9.94B (US$335.13M), PAT NT$-13.12B (US$-442.19M), EPS NT$-4.82 [press release, Nov 5, 2013]

              Acer’s financial results for Q3 2013, approved by its Board of Directors, are: Consolidated Revenue of NT$92.15B (US$3.11B), up 3.1% quarter-over-quarter and down 11.8% year-over-year; an Operating Loss of NT$2.57B (US$86.61M). In addition, due to a non-cash related intangible asset impairment of NT$9.94B (US$335.12M), profit after tax was NT$-13.12B (US$-442.19M), and earnings per share was NT$-4.82.
              Q3’s operating loss was mainly due to the gross margin impact of gearing up for the Windows 8.1 sell-in and the related management of inventory. In addition, in Q3 there were one-time compensation payments related to the long-standing eMachines consumer litigation, which is now settled.
              The intangible asset impairment loss, which includes trademarks and goodwill, is NT$9.94B (US$335.13M). This impairment, which covers the Gateway, Packard Bell, Founder, iGware and ETen brands, is made in accordance with IFRS (International Financial Reporting Standards) and reflects changes in business strategy. The impairment is a non-cash charge and has no impact on Acer’s business operations and working capital.
              Acer’s consolidated revenue for the first three quarters is NT$273.50B (US$9.22B), down 16.6% year-over-year; operating losses for this period are NT$3.15B (US$106.3M). Due to the impact of the intangible asset impairment of NT$9.94B (US$335.13M), PAT is NT$-12.95B (US$-436.42M), and EPS is NT$-4.76. After the impairment of intangible assets, Acer’s net value per share is NT$23.1.
              Looking at Q4, due to the adjustment on brand strategy, shipments for Acer’s notebooks, tablet PCs and Chromebooks are expected to decrease by 10% compared to Q3, however, the gross margin is expected to improve.
              Notes:

              • The spot rate as of November 5, 2013 was used — US$1: NT$29.67.
              • Acer Inc. consolidated revenue includes revenues from other companies in which Acer Inc. has 50% or more ownership, and already deducts any revenues between Acer Inc. and these companies to avoid double-counting.
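As a quick sanity check, the NT$ figures in the release convert to the US$ figures via the stated spot rate. A minimal sketch (the function name is ours, for illustration only):

```python
# Illustrative check of the NT$-to-US$ conversion used in the release,
# applying the stated spot rate of US$1 = NT$29.67 (Nov 5, 2013).
SPOT_RATE = 29.67  # NT$ per US$

def ntd_to_usd(ntd_amount):
    """Convert an NT$ amount to US$ at the stated spot rate."""
    return ntd_amount / SPOT_RATE

# Consolidated revenue: NT$92.15B -> roughly US$3.11B
print(round(ntd_to_usd(92.15), 2))  # -> 3.11
```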

              Amazon Web Services not only achieved the clear and far dominant leader status in the Cloud Infrastructure as a Service (Cloud IaaS) market, but “the balance of new projects are going to AWS, not the other providers” – according to Gartner

              According to the latest analysis by Gartner, Amazon Web Services (AWS) is:

1. “overwhelmingly the dominant vendor” of the Cloud Infrastructure as a Service (Cloud IaaS) market
2. a clear leader, with more than five times the compute capacity in use of the aggregate total of the other fourteen providers included in the so-called Magic Quadrant (MQ)
3. appreciated as “innovative, exceptionally agile and very responsive to the market” with “the richest IaaS product portfolio”, which puts AWS far ahead even of CSC, the only other provider currently in the Leaders quadrant

In addition, in July Amazon Web Services announced price cuts of up to 80% on its EC2 cloud computing platform.

Note that Gartner’s ranking is a complex evaluation based on the various points of view deemed most important for vendor evaluation (see the 3rd-party explanation of Gartner’s Magic Quadrant included in the Details part). It is not based on any kind of benchmarking, not even benchmarks run by customers according to their specific application requirements. It is therefore a well-known fact that from a pure cloud-engineering point of view, especially in terms of focused benchmarks, Amazon EC2 is far from being a leader. The latest example of that:
[UnixBench benchmark results chart]

              About the Test
UnixBench runs a set of individual benchmark tests, aggregates the scores, and creates a final, indexed score to gauge the performance of UNIX-like systems, which include Linux and its distributions (Ubuntu, CentOS, and Red Hat). From the UnixBench homepage:
              The purpose of UnixBench is to provide a basic indicator of the performance of a Unix-like system; hence, multiple tests are used to test various aspects of the system’s performance. These test results are then compared to the scores from a baseline system to produce an index value, which is generally easier to handle than the raw scores. The entire set of index values is then combined to make an overall index for the system.
The UnixBench suite used for these tests ran tests that include: Dhrystone 2, Double-precision Whetstone, numerous File Copy tests, Pipe Throughput, Process Creation, Shell Scripts, System Call Overhead, and Pipe-based Context Switching.
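The indexing scheme described in the quote above can be sketched as follows. This is an illustrative simplification, not the actual suite: per UnixBench’s documented convention, each raw result is indexed against a baseline system and the per-test indices are combined with a geometric mean.

```python
import math

def unixbench_style_index(raw, baseline, scale=10.0):
    """Index each raw score against the baseline score, then combine the
    per-test indices into an overall index via a geometric mean."""
    indices = {test: scale * raw[test] / baseline[test] for test in raw}
    overall = math.exp(sum(math.log(i) for i in indices.values()) / len(indices))
    return indices, overall

# A system scoring exactly the baseline on every test gets an overall
# index equal to the scale factor:
_, overall = unixbench_style_index({"dhrystone": 100.0, "whetstone": 55.0},
                                   {"dhrystone": 100.0, "whetstone": 55.0})
print(overall)  # -> 10.0
```

The geometric mean keeps one outlier test from dominating the overall index, which is why index values are “generally easier to handle than the raw scores.”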

[CloudSpecs price-performance score chart]

              Price-Performance Value: The CloudSpecs Score
              The CloudSpecs score calculates the relationship between the cost of a virtual server per hour and the performance average seen from each provider. The scores are relational to each other; e.g., if Provider A scores 50 and Provider B scores 100, then Provider B delivers 2x the performance value in terms of cost. The highest value provider will always receive a score of 100, and every additional provider is pegged in relation to that score. The calculation is:
              • (Provider Average Performance Score) / (Provider Cost per Hour) = VALUE
              • The largest VALUE is then taken as the denominator to peg other VALUES.
              • [(Provider’s VALUE) / (Largest VALUE)] * 100 = CloudSpecs Score (CS Score)
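The bulleted calculation translates directly into code. A minimal sketch (provider names and figures are made up for illustration):

```python
def cloudspecs_scores(providers):
    """CloudSpecs-style score: performance per dollar, normalized so the
    best-value provider scores 100 and the rest are pegged to it.

    providers: {name: (avg_performance_score, cost_per_hour)}
    """
    values = {name: perf / cost for name, (perf, cost) in providers.items()}
    best = max(values.values())
    return {name: value / best * 100 for name, value in values.items()}

# Provider B delivers 2x the performance value of Provider A at equal cost:
scores = cloudspecs_scores({"Provider A": (50.0, 1.0),
                            "Provider B": (100.0, 1.0)})
print(scores)  # -> {'Provider A': 50.0, 'Provider B': 100.0}
```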
Source: IaaS Price Performance Analysis: Top 14 Cloud Providers – A study of performance among the Top 14 public cloud infrastructure providers [Cloud Spectator and the Cloud Advisory Council, Oct 15, 2013], where, in addition to UnixBench, results from even more focused benchmarks of the Phoronix Test Suite (PTS) are reported as well:
For “CPU Performance”, the 7-Zip File Compression benchmark, which runs p7zip’s integrated benchmark feature to calculate the number of instructions a CPU can handle per second (measured in millions of instructions per second, or MIPS) when compressing a file.
For “Disk Performance”, the Dbench benchmark, which can be used to stress a filesystem or a server to see at which workload it becomes saturated, and can also be used for prediction analysis to determine “How many concurrent clients/applications performing this workload can my server handle before response starts to lag?” It is an open-source benchmark that contains only file-system calls for testing disk performance. For the purpose of comparing disk performance, write results are recorded.
For “RAM Performance”, RAMspeed/SMP, which is a memory performance benchmark for multi-processor machines running UNIX-like operating systems, including Linux and its distributions (Ubuntu, CentOS, and Red Hat). Within the RAMspeed/SMP suite, the Phoronix Test Suite conducts benchmarks using a set of Copy, Scale, Add, and Triad tests from the *mem benchmarks (INTmem, FLOATmem, MMXmem, and SSEmem) in BatchRun mode to enable high-precision memory performance measurement through multiple passes, with averages calculated per pass and per run.
For “Internal Network”, the Iperf benchmark, which is a tool used to measure bandwidth performance. For the purpose of this benchmark, Cloud Spectator set up two virtual machines within the same availability zone/data center to measure internal network throughput.
Amazon EC2 performed “equally bad” in these particular benchmarks. Check the published report.

              THE DETAILS BEHIND 

              The 2013 Cloud IaaS Magic Quadrant [by Lydia Leong on Gartner blog, Aug 21, 2013]

              Gartner’s Magic Quadrant for Cloud Infrastructure as a Service, 2013, has just been released (see the client-only interactive version, or the free reprint). Gartner clients can also consult the related charts, which summarize the offerings, features, and data center locations.

              the best image obtained from the web:

[Gartner Magic Quadrant for Cloud Infrastructure as a Service, 2013 graphic]

              We’re now updating this Magic Quadrant on a nine-month basis, and quite a bit has changed since the 2012 update (see the client-only 2012, or the free 2012 reprint).

              In particular, market momentum has strongly favored Amazon Web Services. Many organizations have now had projects on AWS for several years, even if they hadn’t considered themselves to have “done anything serious” on AWS. Thus, as those organizations get serious about cloud computing, AWS is their incumbent provider — there are relatively few truly greenfield opportunities in cloud IaaS now. Many Gartner clients now actually have multiple incumbent providers (the most common combination is AWS and Terremark), but nearly all such customers tell us that the balance of new projects are going to AWS, not the other providers.

              Little by little, AWS has systematically addressed the barriers to “mainstream”, enterprise adoption. While it’s still far from everything that it could be, and it has some specific and significant weaknesses, that steady improvement over the last couple of years has brought it to the “good enough” point. While we saw much stronger momentum for AWS than other providers in 2012, 2013 has really been a tipping point. We still hear plenty of interest in competitors, but AWS is overwhelmingly the dominant vendor.

At the same time, many vendors have developed relatively solid core offerings. That means that the number of differentiators in the market has decreased, as many features become common “table stakes” features that everyone has. It means that most offerings from major vendors are now fairly decent, but only a few really stand out for their capabilities.

              That leads to an unusual Magic Quadrant, in which the relative strength of AWS in both Vision and Execution essentially forces the whole quadrant graphic to rescale. (To build an MQ, analysts score providers relative to each other, on all of the formal evaluation criteria, and the MQ tool automatically plots the graphic; there is no manual adjustment of placements.) That leaves you with centralized compression of all of the other vendors, with AWS hanging out in the upper right-hand corner.

Note that a Magic Quadrant is an evaluation of a vendor in the market; the actual offering itself is only a portion of the overall score. I’ll be publishing a Critical Capabilities research note in the near future that evaluates one specific public cloud IaaS offering from each of these vendors, against its suitability for a set of specific use cases. My colleagues Kyle Hilgendorf and Chris Gaun have also been publishing extremely detailed technical evaluations of individual offerings — AWS, Rackspace, and Azure, so far.

              A Magic Quadrant is a tremendous amount of work — for the vendors as well as for the analyst team (and our extended community of peers within Gartner, who review and comment on our findings). Thanks to everyone involved. I know this year’s placements came as disappointments to many vendors, despite the tremendous hard work that they put into their offerings and business in this past year, but I think the new MQ iteration reflects the cold reality of a market that is highly competitive and is becoming even more so.

A 3rd-party explanation of the GARTNER IaaS MAGIC QUADRANT 2013 [cloud☁mania, Aug 29, 2013]

Gartner has just released the 2013 update of its traditional Magic Quadrant for Cloud Infrastructure-as-a-Service. Here are some considerations about the evaluation methodology and the MQ players.

              In the context of this Magic Quadrant, IaaS is defined by Gartner as “a standardized, highly automated offering, where compute resources, complemented by storage and networking capabilities, are owned by a service provider and offered to the customer on demand. The resources are scalable and elastic in near-real-time, and metered by use. Self-service interfaces are exposed directly to the customer, including a Web-based UI and API optionally. The resources may be single-tenant or multitenant, and hosted by the service provider or on-premises in the customer’s datacentre.”

To be included in the Magic Quadrant, IaaS providers should target enterprise and midmarket customers, offering high-quality services with excellent availability, good performance, high security and good customer support. For each IaaS provider included in the MQ, Gartner offers a deep description of the service offering: datacentre locations, computing issues, storage & network features, special notes, and recommended users. Deep comments about strengths and cautions in cloud adoption are also offered for each IaaS provider, regardless of MQ positioning.

The Gartner Magic Quadrant for IaaS is a more than eloquent picture of the actual status of the major IaaS players. IaaS market momentum is strongly dominated by Amazon Web Services in both the Vision and Execution directions. According to Gartner analysts, AWS is a clear leader, with more than five times the compute capacity in use of the aggregate total of the other fourteen providers included in the MQ. AWS is appreciated as “innovative, exceptionally agile and very responsive to the market” with “the richest IaaS product portfolio”.

The Leaders Quadrant positions CSC as the second player, a traditional IT outsourcer with a broad range of datacentre outsourcing capabilities. CSC is appreciated for its commitment to embrace the highly standardized cloud model, and for its solid platform, attractive to traditional IT operations organizations that still want to retain control but need to offer greater agility to the business.

The Challengers Quadrant includes Verizon Terremark – the market share leader in VMware-virtualized public cloud IaaS; Dimension Data – a large SI and VAR that entered the cloud IaaS market through the 2011 acquisition of OpSource; and Savvis – a CenturyLink company with a long track record of leadership in the hosting market.

The big surprise of the Visionaries Quadrant is the comfortable positioning of Microsoft with its Windows Azure platform. Previously strictly PaaS, Azure became IaaS as well in April 2013, when Microsoft launched Windows Azure Infrastructure Services, which include Virtual Machines and Virtual Networks. Microsoft’s place in the Visionaries Quadrant is motivated by Gartner by its global vision of infrastructure and platform services “that are not only leading stand-alone offerings, but also seamlessly extend and interoperate with on-premises Microsoft infrastructure (rooted in Hyper-V, Windows Server, Active Directory and System Center) and applications, as well as Microsoft’s SaaS offerings.”

Among the IaaS providers in the Niche Players Quadrant, we have to note the presence of the heavyweight triad of IBM, HP, and Fujitsu. Gartner appreciates IBM for its wide range of cloud-related products and services, although the IaaS MQ analysis includes only the SmartCloud Enterprise (SCE) cloud offering and the cloud-enabled infrastructure service IBM SmartCloud Enterprise+. In the same way, from HP’s range of cloud-related products and services Gartner considered only HP Public Cloud and some cloud-enabled infrastructure services, such as HP Enterprise Services Virtual Private Cloud. Fujitsu is one of the few non-American cloud providers, appreciated by Gartner for its large cloud IaaS offering, including the Fujitsu Cloud IaaS Trusted Public S5 (formerly the Fujitsu Global Cloud Platform), multiple regional offerings based on a global reference architecture (Fujitsu Cloud IaaS Private Hosted, formerly known as the Fujitsu Local Cloud Platform), and multiple private cloud offerings, especially in the Asia-Pacific area and Europe.

Speaking about non-American regions, we should observe that significant European-based providers like CloudSigma, Colt, Gigas, Orange Business Services, OVH and Skyscape Cloud Services were not included in this Magic Quadrant. The same goes for the Asia/Pacific region, with major players like Datapipe, NTT and Tata Communications.

Gartner also considered two offerings that are currently in beta stage, and therefore could not be included in this evaluation, but could be considered prospective players in the next MQ edition: Google Compute Engine (GCE) – a model similar to Amazon EC2’s, and VMware vCloud Hybrid Service (vCHS) – a full-featured offering with more functionality than vCloud Datacenter Service.

              Additional Gartner blog posts related to that:

              Cloud IaaS market share and the developer-centric world [by Lydia Leong on Gartner blog, Sept 4, 2013]

              Bernard Golden recently wrote a CIO.com blog post in response to my announcement of Gartner’s 2013 Magic Quadrant for Cloud IaaS. He raised a number of good questions that I thought it would be useful to address. This is part 1 of my response. (See part 2 for more.)
              (Broadly, as a matter of Gartner policy, analysts do not debate Magic Quadrant results in public, and so I will note here that I’m talking about the market, and not the MQ itself.)
              Bernard: “Why is there such a distance between AWS’s offering and everyone else’s?”
              In the Magic Quadrant, we rate not only the offering itself in its current state, but also a whole host of other criteria — the roadmap, the vendor’s track record, marketing, sales, etc. (You can go check out the MQ document itself for those details.) You should read the AWS dot positioning as not just indicating a good offering, but also that AWS has generally built itself into a market juggernaut. (Of course, AWS is still far from perfect, and depending on your needs, other providers might be a better fit.)
              But Bernard’s question can be rephrased as, “Why does AWS have so much greater market share than everyone else?”
              Two years ago, I wrote two blog posts that are particularly relevant here:
These posts were followed up with two research notes (links are Gartner clients only):
              I have been beating the “please don’t have contempt for developers” drum for a while now. (I phrase it as “contempt” because it was often very clear that developers were seen as lesser, not real buyers doing real things — merely ignoring developers would have been one thing, but contempt is another.) But it’s taken until this past year before most of the “enterprise class” vendors acknowledged the legitimacy of the power that developers now hold.
              Many service providers held tight to the view espoused by their traditional IT operations clientele: AWS was too dangerous, it didn’t have sufficient infrastructure availability, it didn’t perform sufficiently well or with sufficient consistency, it didn’t have enough security, it didn’t have enough manageability, it didn’t have enough governance, it wasn’t based on VMware — and it didn’t look very much like an enterprise’s data center architecture. The viewpoint was that IT operations would continue to control purchases, implementations would be relatively small-scale and would be built on traditional enterprise technologies, and that AWS would never get to the point that they’d satisfy traditional IT operations folks.
              What they didn’t count on was the fact that developers, and the business management that they ultimately serve, were going to forge on ahead without them. Or that AWS would steadily improve its service and the way it did business, in order to meet the needs of the traditional enterprise. (My colleagues in GTP — the Gartner division that was Burton Group — do a yearly evaluation of AWS’s suitability for the enterprise, and each year, AWS gets steadily, materially better. Clients: see the latest.)
              Today, AWS’s sheer market share speaks for itself. And it is definitely not just single developers with a VM or two, start-ups, or non-mission-critical stuff. Through the incredible amount of inquiry we take at Gartner, we know how cloud IaaS buyers think, source, succeed, and sometimes suffer. And every day at Gartner, we talk to multiple AWS customers (or prospects considering their options, though many have already bought something on the click-through agreement). Most are traditional enterprises of the G2000 variety (including some of the largest companies in the world), but over the last year, AWS has finally cracked the mid-market by working with systems integrator partners. The projected spend levels are clearly increasing dramatically, the use cases are extremely broad, the workloads increasingly have sensitive data and regulatory compliance concerns, and customers are increasingly thinking of AWS as a strategic vendor.
              (Now, as my colleagues who cover the traditional data center like to point out, the spend levels are still trivial compared to what these customers are spending on the rest of their data center IT, but I think what’s critical here is the shift in thinking about where they’ll put their money in the future, and their desire to pick a strategic vendor despite how relatively early-stage the market is.)
              But put another way — it is not just that AWS advanced its offering, but it convinced the market that this is what they wanted to buy (or at least that it was a better option than the other offerings), despite the sometimes strange offering constructs. They essentially created demand in a new type of buyer — and they effectively defined the category. And because they’re almost always first to market with a feature — or the first to make the market broadly aware of that capability — they force nearly all of their competitors into playing catch-up and me-too.
              That doesn’t mean that the IT operations buyer isn’t important, or that there aren’t an array of needs that AWS does not address well. But the vast majority of the dollars spent on cloud IaaS are much more heavily influenced by developer desires than by IT operations concerns — and that means that market share currently favors the providers who appeal to development organizations. That’s an ongoing secular trend — business leaders are currently heavily growth-focused, and therefore demanding lots of applications delivered as quickly as possible, and are willing to spend money and take greater risks in order to obtain greater agility.
              This also doesn’t mean that the non-developer-centric service providers aren’t important. Most of them have woken up to the new sourcing pattern, and are trying to respond. But many of them are also older, established organizations, and they can only move so quickly. They also have the comfort of their existing revenue streams, which allow them the luxury of not needing to move so quickly. Many have been able to treat cloud IaaS as an extension of their managed services business. But they’re now facing the threat of systems integrators like Cognizant and Capgemini entering this space, combining application development and application management with managed services on a strategic cloud IaaS provider’s platform — at the moment, normally AWS. Nothing is safe from the broader market shift towards cloud computing.
As always, every individual customer’s situation is different from another’s, and the right thing to do (or the safe, mainstream thing to do) evolves through the years. Gartner is appropriately cautionary when it discusses such things with clients. This is a good time to mention that Magic Quadrant placement is NEVER a good reason to include or exclude a vendor from a short list. You need to choose the vendor that’s right for your use case, and that might be a Niche Player, or even a vendor that’s not on the MQ at all — and even though AWS has the highest overall placement, they might be completely unsuited to your use case.

              Where are the challengers to AWS? [by Lydia Leong on Gartner blog, Sept 4, 2013]

              This is part of 2 of my response to Bernard Golden’s recent CIO.com blog post in response to my announcement of Gartner’s 2013 Magic Quadrant for Cloud IaaS. (Part 1 was posted yesterday.)

              Bernard: “What skill or insight has allowed AWS to create an offering so superior to others in the market?”

              AWS takes a comprehensive view of “what does the customer need”, looks at what customers (whether current customers or future target customers) are struggling with, and tries to address those things. AWS not only takes customer feedback seriously, but it also iterates at shocking speed. And it has been willing to invest massively in engineering. AWS’s engineering organization and the structure of the services themselves allows multiple, parallel teams to work on different aspects of AWS with minimal dependencies on the other teams. AWS had a head start, and with every passing year their engineering lead has grown larger. (Even though they have a significant burden of technical debt from having been first, they’ve also solved problems that competitors haven’t had to yet, due to their sheer scale.)

              Many competitors haven’t had the willingness to invest the resources to compete, especially if they think of this business as one that’s primarily about getting a VM fast and that’s all. They’ve failed to understand that this is a software business, where feature velocity matters. You can sometimes manage to put together brilliant, hyper-productive small teams, but this is usually going to get you something that’s wonderful in the scope of what they’ve been able to build, but simply missing the additional capabilities that better-resourced competitors can manage (especially if a competitor can muster both resources and hyper-productivity). There are some awesome smaller companies in this space, though.

              Bernard: “Plainly stated, why hasn’t a credible competitor emerged to challenge AWS?”

I think there’s a critical shift happening in the market right now. Three very dangerous competitors are just now entering the market: Microsoft, Google, and VMware. I think the real war for market share is just beginning.

              For instance, consider the following, off the cuff, thoughts on those vendors. These are by no means anything more than quick thoughts and not a complete or balanced analysis. I have a forthcoming research note called “Rise of the Cloud IaaS Mega-Vendors” that focuses on this shift in the competitive landscape, and which will profile these four vendors in particular, so stay tuned for more. So, that said:

              Microsoft has brand, deep customer relationships, deep technology entrenchment, and a useful story about how all of those pieces are going to fit together, along with a huge army of engineers, and a ton of money and the willingness to spend wherever it gains them a competitive advantage; its weakness is Microsoft’s broader issues as well as the Microsoft-centricity of its story (which is also its strength, of course). Microsoft is likely to expand the market, attracting new customers and use cases to IaaS — including blended PaaS models.

              Google has brand, an outstanding engineering team, and unrivaled expertise at operating at scale; its weakness is Google’s usual challenges with traditional businesses (whatever you can say about AWS’s historical struggle with the enterprise, you can say about Google many times over, and it will probably take them at least as long as AWS did to work through that). Google’s share gain will mostly come at the expense of AWS’s base of HPC customers and young start-ups, but it will worm its way into the enterprise via interactive agencies that use its cloud platform; it should have a strong blended PaaS model.

              VMware has brand, a strong relationship with IT operations folks, technology it can build on, and a hybrid cloud story to tell; whether or not its enterprise-class technology can scale to global-class clouds remains to be seen, though, along with whether or not it can get its traditional customer base to drive sufficient volume of cloud IaaS. It might expand the market, but it’s likely that much of its share gain will come at the expense of VMware-based “enterprise-class” service providers.

              Obviously, it will take these providers some time to build share, and there are other market players who will be involved, including the other providers that are in the market today (and for all of you wondering “what about OpenStack”, I would classify that under the fates of the individual providers who use it). However, if I were to place my bets, it would be on those four at the top of market share, five years from now. They know that this is a software business. They know that innovative capabilities are vitally necessary. And they know that this has turned into a market fixated on developer productivity and business benefits. At least for now, that view is dominating the actual spending in this market.

You can certainly argue that another market outcome should have happened, that users should have chosen differently, or even that users are making poor decisions now that they’ll regret later. That’s an interesting intellectual debate, but at this point, Sisyphus’s rock is rolling rapidly downhill, so anyone who wants to push it back up is going to have an awfully difficult time not getting crushed.

              Verizon Cloud is technically innovative, but is it enough? [by Lydia Leong on Gartner blog, Oct 4, 2013]

              Verizon Terremark has announced the launch of its new Verizon Cloud service built using its own technology stack.

              Verizon already owns a cloud IaaS offering — in fact, it owns several. Terremark was an early AWS competitor with the Terremark Enterprise Cloud, a VMware-based offering that got strong enterprise traction during the early years of this market (and remains the second-most-common cloud provider amongst Gartner’s clients, with many companies using both AWS and Terremark), as well as a vCloud Express offering. Verizon entered the game later with Verizon Compute as a Service (now called Enterprise Cloud Managed Edition), also VMware-based. Since Verizon’s acquisition of Terremark, the company has continued to operate all the existing platforms, and intends to continue to do so for some time to come.

              However, Verizon has had the ambition to be a bigger player in cloud; like many other carriers, it believes that network services are a commodity and a carrier needs to have stickier, value-added, higher-up-the-stack services in order to succeed in the future. However, Verizon also understood that it would have to build technology, not depend on other people’s technology, if it wanted to be a truly competitive global-class cloud player versus Amazon (and Microsoft, Google, etc.).

With that in mind, in 2011, Verizon went and made a “manquisition”, acquiring CloudSwitch not so much for its product (essentially a hypervisor-within-a-hypervisor that allows workloads to be ported across cloud infrastructures using different technologies) as for its team. It gave that team a directive to go build a cloud infrastructure platform with a global-class architecture that could run enterprise-class workloads, at global-class scale and at fully competitive price points.

              Back in 2011, I conceived what I called the on-demand infrastructure fabric (see my blog post No World of Two Clouds, or, for Gartner clients, the research note, Market Trends: Public and Private Cloud Infrastructure Converge into On-Demand Infrastructure Fabrics) — essentially, a global-class infrastructure fabric with self-service selectable levels of availability, performance, and isolation. Verizon is the first company to have really built what I envisioned (though their project predates my note, and my vision was developed independently of any knowledge of what they were doing).

              The Verizon Cloud architecture is actually very interesting, and, as far as I know, unique amongst cloud IaaS providers. It is almost purely a software-defined data center. Components are designed at a very low level — a custom hypervisor, SDN augmented with the use of NPUs, virtualized distributed storage. Verizon has generally tried to avoid using components for which they do not have source code. There are very few hardware components — there’s x86 servers, Arista switches, and commodity Flash storage (the platform is all-SSD). The network is flat, and high bandwidth is an expectation (Verizon is a carrier, after all). Oh, and there’s object-based storage, too (which I won’t discuss here).

              The Verizon Cloud has a geographically distributed control plane designed for continuous availability, and it, along with the components, are supposed to be updatable without downtime (i.e., maintenance should not impact anything). It’s intended to provide fine-grained performance controls for the compute, network, and storage resource elements. It is also built to allow the user to select fault domains, allowing strong control of resource placement (such as “these two VMs cannot sit on the same compute hardware”); within a fault domain, workloads can be rebalanced in case of hardware failure, thus offering the kind of high availability that’s often touted in VMware-based clouds (including Terremark’s previous offerings). It is also intended to allow dynamic isolation of compute, storage, and networking components, allowing the creation of private clouds within a shared pool of hardware capacity.

              The Verizon Cloud is intended to be as neutral as possible — the theory is that all VM hypervisors can run natively on Verizon’s hypervisor, many APIs can be supported (including its own API, the existing Terremark API, and the AWS, CloudStack, and OpenStack APIs), and there’ll be support for the various VM image formats. Initially, the supported hypervisor is a modified Xen. In other words, Verizon wants to take your workloads, wherever you’re running them now, and in whatever form you can export them.

              It’s an enormously ambitious undertaking. It is, assuming it all works as promised, a technical triumph — it’s the kind of engineering you expect out of an organization like AWS or Google, or a software company like Microsoft or VMware, not a staid, slow-moving carrier (the mere fact that Verizon managed to launch this is a minor miracle unto itself). It is actually, in a way, what OpenStack might have aspired to be; the delta between this and the OpenStack architecture is, to me, full of sad might-have-beens of what OpenStack had the potential to be, but is not and is unlikely to become. (Then again, service providers have the advantage of engineering to a precisely-controlled environment. OpenStack, and for that matter, VMware, need to run on whatever junk the customer decides to use, instantly making the problem more complex.)

              Unfortunately, the question at this stage is: Will anybody care?

              Yes, I think this is an important development in the market, and the fact that Verizon is already a credible cloud player in the enterprise, with an entrenched base in the Terremark Enterprise Cloud, will help it. But in a world where developers control most IaaS purchasing, the bare-bones nature of the new Verizon offering means that it falls short of fulfilling the developer desire for greater productivity. In order to find a broader audience, Verizon will need to commit to developing all the richness of value-added capabilities that the market leaders will need — which likely means going after the PaaS market with the same degree of ambition, innovation, and investment, but certainly means committing to rapidly introducing complementing capabilities and bringing a rich ecosystem in the form of a software marketplace and other partnerships. Verizon needs to take advantage of its shiny new IaaS building blocks to rapidly introduce additional capabilities — much like Microsoft is now rapidly introducing new capabilities into Azure.

              With that, assuming that this platform performs as designed, and Verizon can continue to treat Terremark’s cloud folks like they belong to a fast-moving start-up and not an ossified pipe provider, Verizon may have a shot at being one of the leaders in this market. Without that, the Verizon Cloud is likely to be relegated to a niche, just like every other provider whose capabilities stop at the level of offering infrastructure resources.


              From: Amazon.com Announces Third Quarter Sales up 24% to $17.09 Billion [press release, Oct 24, 2013]

              • Amazon Web Services (AWS) introduced more than 15 new features and enhancements to its fully managed relational and NoSQL database services. Amazon Relational Database Service (RDS) now supports Oracle Statspack performance diagnostics and has expanded MySQL support, including capabilities for zero downtime data migration. Enhancements to Amazon DynamoDB include new cross-region support, a local test tool, and location-based query capabilities.
              • AWS continued to bolster its management services, making it easier to provision and manage more AWS resources with AWS CloudFormation and AWS OpsWorks, which both added support for Amazon Virtual Private Cloud (VPC). AWS also enhanced the AWS Console mobile app and introduced a new Command Line Interface.
              • AWS continued to gain momentum in the public sector and now has more than 2,400 education institutions and 600 government agencies as customers, including recent new projects with customers such as the U.S. Food and Drug Administration.

              THE JULY PRICE CUT

              From Amazon.com Announces Second Quarter Sales up 22% to $15.70 Billion [press release, July 25, 2013]

              • AWS announced it had lowered prices by up to 80% on Amazon EC2 Dedicated Instances, instances that run on single-tenant hardware dedicated to a single customer account. In addition, AWS lowered prices on Amazon RDS instances with On-Demand price reductions of up to 28% and Reserved Instance (RI) price reductions of up to 27%.
              • Amazon Web Services (AWS) became the first major cloud provider to achieve FedRAMP Compliance which recognizes the ability of AWS to meet extensive security requirements and compliance mandates for running sensitive US government applications and protecting data. FedRAMP certification simplifies and speeds the ability for government agencies to evaluate and adopt AWS for a wide range of applications and workloads.
              • AWS announced the launch of the AWS Certification Program, which recognizes IT professionals that possess the skills and technical knowledge necessary for building and maintaining applications and services on the AWS Cloud. AWS Certifications help organizations identify candidates and consultants who are proficient at architecting and developing for the cloud.
              • AWS further enhanced its security and identity management capabilities across several services – introducing resource-level permissions for Amazon Elastic Compute Cloud (EC2) and Amazon Relational Database Service (RDS), adding identity federation to AWS Identity and Access Management (IAM), extending Amazon Simple Storage Service (S3) Server Side Encryption support to Amazon Elastic Map Reduce (EMR), and adding custom SSL certificate support for CloudFront. These enhancements give customers more granular security controls over their AWS deployments, applications and sensitive data.
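
              Resource-level permissions of this kind are expressed in IAM’s JSON policy language. A minimal sketch (built as a Python dict for illustration; the account and instance IDs are placeholders, not real resources) scoping EC2 stop/start rights to a single instance:

```python
import json

# Minimal sketch of an IAM policy using EC2 resource-level permissions.
# The account ID and instance ID below are placeholders for illustration.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["ec2:StartInstances", "ec2:StopInstances"],
        # Resource-level scoping: only this one instance, not "ec2:*"
        "Resource": "arn:aws:ec2:us-east-1:123456789012:instance/i-0abc1234"
    }]
}
print(json.dumps(policy, indent=2))
```

Before this enhancement, EC2 actions could effectively only be granted against all resources; scoping the `Resource` element to individual ARNs is what the “more granular security controls” claim refers to.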

              Some directly related and general/major previous press releases from that overall list:

              Leading edge Nokia phablets for both entertainment and productivity: Lumia 1320 targeting the masses at $339, and Lumia 1520 the imaging conscious business users and individuals at $749

              This is my conclusion after carefully analyzing the announced products in all of their details. Only “imaging consciousness” needs a little explanation at the very beginning, because otherwise the substantial $410 pricing difference would be hard to understand. To illustrate the rationale, I copied here an image currently available at http://refocus.nokia.com/ which shows that you can make everything in focus. By going to that address you will also be able to experiment with the image there (likely a different one) and change the focus as you like. And this is only one aspect of all the benefits for “imaging conscious” business users and individuals willing to pay that $410 extra.

              image

              Another aspect is the Screen Test: Nokia Lumia 1520 versus Galaxy Note 3 [TheHandheldBlog YouTube channel, Oct 22, 2013], which shows particularly well the Lumia 1520’s advantage over the phablet market leader, the Samsung Galaxy Note 3:

              Nokia’s so proud of the display that they put in the Lumia 1520 that they had set up a demo zone that simulated the lighting available during various times of the day, e.g. in sunlight, indoors, etc., and the idea was to see how the screen kept up with the changing lighting. Both devices were set to automatic maximum brightness, and this is how they stood up. Hint: The Nokia killed it.

              The Nokia World Nokia Lumia 1520 daylight visibility demo [WMPowerUser YouTube channel, Oct 22, 2013] shows the advantage against the smartphone market leaders:

              The Nokia Lumia 1520 features a very bright screen with much reduced glare, which allows much improved visibility in daylight conditions, as demoed in a special light box at Nokia World against an iPhone 5 and a Samsung Galaxy S4. All devices are at maximum brightness.

              To understand how “a very bright screen with much reduced glare” is uniquely achieved by Nokia, please read The leading ClearBlack display technology from Nokia [‘Experiencing the Cloud’, Dec 18, 2011 – May 8, 2012] post of mine.

              image
              As far as the opportunities for business customers are concerned, Chris Weber, EVP Sales & Marketing at Nokia, said that the opportunity is huge even in 2013:
              image
              In addition, Nokia and Microsoft are uniquely positioned in that space because Nokia smartphones (Windows Phones) are rated highest by business users, etc. Watch his presentation in Replay Nokia World Abu Dhabi (the second webcast:) Breakout Sessions [Nokia Conversations, Oct 22, 2013], from [1:49:00].

              Now consider what one gets for much less ($339) with Nokia Lumia 1320 unveiled at Nokia World Abu Dhabi [Maurizio Pesce YouTube channel, Oct 22, 2013]:

              [Note that the Lumia 1320 also has IPS LCD with ClearBlack display technology, just the resolution is lower: 1280 x 720 (HD) vs 1920 x 1080 (full HD) of the Lumia 1520.]

              While the latest innovations in functionality are shown quite well in the above video, the most important quote from Nokia (because of its implication) is:

              Similar to the Lumia 1520 – and building on the affordability of the Lumia 520 – the new Nokia Lumia 1320 also features a 6-inch screen and a wealth of features at a lower price.

              And here is the Nokia Lumia 1520 first hands-on [Nokia YouTube channel, Oct 22, 2013] video for a kind of comparison (from Nokia itself):

              Be the first to see the stunning new Nokia Lumia 1520, which packs a full HD 6-inch display and a 20 megapixel PureView camera into a sleek polycarbonate shell. It also brings an exciting third row of tiles to boost your creativity and productivity, as well as the brand new Nokia Camera app. Find out more: http://conversations.nokia.com/2013/10/22/standing-tall-the-nokia-lumia-1520-and-lumia-1320/

              Technical similarities/differences between the two Nokia phablets can be seen in the table below, which was compiled from Lumia 1320 & 1520 spec data + dev specs.
              image
              Regarding the similarity between the two devices the most important thing is the functionality which is 90% the same. The Lumia 1520 is just different with:
              – Multimicrophone uplink noise cancellation
              – HERE Drive+
              – Secure NFC (although in developer specs it is indicated for Lumia 1320 as well)
              – Public transportation routing guidance
              – HERE Transit
              – Panorama (That is get the bigger picture with Nokia’s easy-to-use Panorama app. Simply take your pictures and the app automatically stitches them into a picture-perfect view. Once you’re done, share your panorama directly to Twitter and Facebook.)
              – Additional light sensitivities: ISO 1600, ISO 3200, ISO 4000
              + Nokia Refocus (as an application on top of that) illustrated in the very beginning of this post already

              The Nokia World Abu Dhabi 2013 [Red Robot – Intelligent Distribution YouTube channel, Oct 23, 2013] certainly had a much wider set of new announcements:
              At Nokia World: Abu Dhabi, Nokia unveiled six new devices alongside new accessories, Nokia experiences and third-party developer applications.

              In order to understand the substantially higher price positioning for the Lumia 1520 please note that the Samsung Galaxy Note 3 LTE 32GB unlocked offering (which has nothing like the imaging functionality of Lumia 1520) has a U.S. retail price of $735.00-789.95+ on Amazon (not to speak of the 32GB Apple iPhone 5s with its $850-950+ price as a minimum). Here is a very recent, brief comparison:
              Nokia Lumia 1520 vs Samsung Galaxy Note 3 [Recombu YouTube channel, Oct 22, 2013]

              The underdog vs the prized champion – the Nokia Lumia 1520 vs the Samsung Galaxy Note 3. Two big phones, but which has the biggest impact when placed side by side?
              More Note 3 related information can be found in The new Air Command S Pen User Experience making the Samsung Galaxy Note 3 phablet, and Galaxy Note 10.1, 2014 Edition tablet next-generation devices [‘Experiencing the Cloud’, Sept 12, 2013] post of mine.

              Now The Nokia Lumia 1520 and the Arabian Peninsula [Nokia Conversations, Oct 23, 2013] post describes quite clearly the ultimate advantage of the Lumia 1520:

              The Nokia Lumia 1520 is very much the bigger brother of the Lumia 1020, sporting a 6-inch display.
              Although it only packs 20 megapixels against the 1020′s 41 megapixels, the results are no less stunning. “The image quality is fantastic” is [National Geographic photographer Stephen] Alvarez’s instant feedback, “every bit of the Lumia 1020′s”. The larger screen makes a difference, too.
              “It’s easier to frame because you can see the image much larger,” Alvarez says of the device’s size, “but it isn’t so large that it’s cumbersome”. When doing really careful compositions, Stephen believes the Lumia 1520 makes short work of such tasks, citing the screen as being “remarkable”.
              For a professional photographer, the ability to extract DNG (Digital Negative) image files from a device is crucial. Whether it’s to work further on the image in post-production, or whether to prove you have an un-tainted image (if you’re a journalist or gathering evidence, for example). With the Lumia 1520 comes support for DNG files, much to Stephen’s delight when he found out.
              Amateur and professional photographers alike benefit from the fact there is zero compression in the file. Pulling more detail out of a shadow is just one example Alvarez offers before pointing out that the file size is comparable to what he sees from his DSLR.

              Some additional advantages from Standing tall: The Nokia Lumia 1520 and Lumia 1320 [Nokia Conversations, Oct 22, 2013] are described as:

              Lumia 1520 features Nokia Rich Recording with four microphones, providing directional stereo recording capability to capture distortion free audio from the preferred direction for clear, accurate sound. (We will have more information to share with you about the Lumia 1520’s audio capabilities soon.)
              The Lumia 1520 is a genuine workhorse, too. It comes with Microsoft Office built in, so you can have the flexibility to work when you need to; and it includes a massive battery with integrated wireless charging (Qi compatible). This powerful combination means you can manage to easily log a whole day of work and enjoy entertainment like movies without worrying about finding a place to charge up.
              The ability to show a third column on the Start screen isn’t just about scaling the experience of Windows Phone 8. It really is about getting to your favorite apps faster with less scrolling. For instance, the email app now shows one additional row of content in the Live tile, right on the Start screen.
              Do you do a lot of video conferencing? The front-facing camera on the Lumia 1520 offers 1.2 MP HD 720p wide-angle video so you’ll always look your best.
              And, with the new Papyrus app coming to Windows Phone for the first time, you can easily take handwritten notes on your Nokia Lumia 1520.

              The Lumia 1520 features all the latest in display technology to make the beautiful six inch screen stand out, even in the brightest sunlight. With ClearBlack display technology, High Brightness Mode and assertive display technology by Apical, the screen offers a viewing experience to enjoy no matter what the light conditions are. And when winter rolls in, wearing gloves won’t prevent you from using the screen just as easily as before thanks to the super sensitive touch screen with Gorilla Glass 2.

              No wonder that in this Nokia Lumia 1520 Hands on and tour of Nokia’s latest phablet [funviz YouTube channel, Oct 24, 2013] third-party video the display is touted as the big selling point of the device: “[0:19] That display is probably the best display I’ve ever seen on any digital device. [0:23]”

              The Lumia 1520 is Nokia’s high-end ‘phablet’ Windows Phone released in early November, 2013. Featuring a 6-inch 1080p display and a Qualcomm Snapdragon 800 2.2GHz quad-core CPU, the Lumia 1520 is Nokia’s first Update 3 device featuring the Lumia ‘Black’ firmware, a rolling cover and a 20 MP camera with an f/2.0 lens. The device also has 32 GB of internal storage and it can take up to 64 GB of external memory (microSD). The Lumia 1520 has a massive 3400 mAh battery for an estimated 25 hours of talk time, and 2GB of RAM, making it one of the more powerful Lumias to date. Compared to other Lumias, the 1520 also has 4 High-amplitude audio capture (HAAC) microphones on board to give it excellent audio fidelity on calls and while recording videos. The Lumia 1520 will also feature the new Refocus Lens and StoryTeller apps from Nokia and it comes in four colors, including yellow, white, black and glossy red.

              And don’t forget that there is the quite rich common Lumia 1320/1520 functionality (except Nokia Refocus, for the time being), which is touted by Nokia in the following way:

              Continuing to redefine smartphone innovation, Nokia introduces its first ever large screen Lumia smartphones, the Lumia 1520 and Lumia 1320. With a six inch screen and the latest software advancements for Windows Phone, the Lumia 1320 and Lumia 1520 are perfectly suited for entertainment and productivity. A new third column of tiles on the home screen means people can see and do more on a larger screen. Bringing larger displays to the award-winning Lumia design, the new format is coupled with some of the most advanced camera innovations so people can capture and share the world around them. … With Microsoft Office built in, documents can be edited and shared easily for maximum productivity.
              It includes Nokia Music and HERE Maps for great music and location experiences, and welcomes both Nokia Camera and Nokia Storyteller apps, available in the Windows Phone Store. With the latest apps, the best from Nokia, and the best from Microsoft supported by LTE connections you can make the most out of the bright large screen on the Lumia 1320. Sit back and relax while you play the latest Xbox games, read web content, or watch movies on the six inch screen. A wealth of content can be stored using the free 7GB of SkyDrive storage or with the addition of a micro SD card adding up to 64GB more space.
              HERE experiences get an update too! Now with LiveSight in HERE Drive and HERE Drive+ you can easily locate your parked car, while in HERE Transit it will be easier to find nearby transit stations. … The third-party apps line-up for Lumia is significantly boosted with the arrival of … the Instagram photo editing and sharing network, Papyrus and InNote, which both offer handwritten note-taking solutions, and the Vine short-form video service from Twitter – available soon!
              image
              From: Papyrus – Use your finger or a stylus to take notes on your Lumia. The vector graphics engine keeps your notes beautiful at any zoom level.
              Windows Phone Store is stocked with a full range of certified applications and games, with hundreds more titles added every day. Starting today, Windows Phone owners can download a number of new applications including: CamScanner, Goal Live Scores, InNote, phriz.be, Rail Rush, SophieLensHD and an updated TuneIn Radio application with radio stations live on the Start screen.
              Over the coming weeks, Windows Phone owners can experience many more new applications including: ESPN F1, EA SPORTS(TM) FIFA 14, Instagram, My Talking Tom, Papyrus, Temple Run 2, Vine, Vyclone Pro and Xbox Video among others**.
              The screen real estate afforded by the larger full HD display screens on the Nokia Lumia 1520 (6″) … has already inspired developers to create unique experiences for consumers. … Flipboard, a personal magazine, allows people to read and collect all the news they care about, curating their favorite stories into their own magazines. With Live Tiles … people can digest news faster and easier than ever before. Flipboard will be available … for Lumia smartphones in the coming months.
              During Nokia World 2013, Flipboard CTO Eric Feng gave us a rundown of some of the new features that are coming to the Windows version of the popular social-networking and magazine app.
              More information:
              Replay Nokia World Abu Dhabi – (the 2nd embedded webcast:) Breakout Sessions [Nokia Conversations, Oct 22, 2013]
              Eric Feng (CTO, Flipboard): [42:15] Quick overview of Flipboard:
              — World’s first personal magazine (curated, personalized and displayed)
              — ~8000 content partnerships (The New York Times, BBC, AlJazeera, ESPN etc.)
              — Partnering with world’s 14 leading social networks
              — A community of a couple of million users who are curating magazines every day
              — About 4 million magazines have been curated already
              — ~100 million items can be consumed on Flipboard every single day
              — presented in a beautiful magazine layout
              image
              ~90 million people and growing
              ~200 countries worldwide
              image
              [47:10]
              Flipboard’s Mike McCue | Disrupt NYC 2013 [TechCrunch YouTube channel, April 29, 2013]
              Flipboard is a digital social magazine that aggregates web links from your social circle, i.e. Twitter and Facebook, and displays the content in magazine form on an iPad. Here, watch Co-Founder and CEO Mike McCue onstage at Disrupt NY 2013 talk about Flipboard.
              Flipboard and Pandora, gamechangers in mobile [This Week In Startups YouTube channel, Oct 15, 2013]
              Flipboard was one of the first iPad apps that made clear exactly how powerful the experience could be. Blending the best of RSS content with beautiful print-style layout, the app has only gotten better. Now Flipboard’s 85m users can create their own magazines tailored to specific interests. When cofounder and CEO Mike McCue sat down with Jason at LAUNCH Mobile, they dug into how the app has become publisher and platform, changing how we think about and interact with content. Then, a conversation with Tom Conrad, longtime CTO of Pandora. The publicly-traded company has long held sway in the mobile space, transforming how we think about radio by identifying exactly what makes a genre or mood, through its music genome project. Then, putting that tailored-just-for-you radio station into the pockets of its 175m users. Don’t miss these great conversations that point to the future of media in the mobile space.
              Today, let’s take a look at our homegrown apps right from the Nokia app factory.
              Nokia Camera App demo on Lumia 1520. Nokia Camera is available now to download for all Nokia Lumia PureView smartphones, and coming to the rest of the WP8 Lumia range along with the Lumia Black update early next year. Nokia Camera also has raw file support (DNG format) on Nokia Lumia 1520, and this will be coming to the Lumia 1020 when the Lumia Black update rolls out.
              Nokia Camera
              Two glorious apps combined into one. Nokia Pro Camera and Nokia Smart Camera have been merged to simplify your photo-taking experience, and ensure you get the perfect shot every time.
              The Nokia Camera integrates easy automatic mode, but you can also take control of your camera and become a pro by manually altering focus settings, shutter speed, and ISO as well as using Smart Burst, Remove Moving Objects or create an Action Shot features.
              Nokia has a new Refocus app which put Lytro cameras to shame, allowing users to focus on any element of the picture after the fact
              Nokia Refocus
              Rather than worry about getting the focus perfect first time, Nokia Refocus takes a photo and lets you choose the focus point after you’ve taken the shot.
              [An alternative presentation to the one above:] Nokia Refocus lets you readjust the focus after taking the photos. It will be available on the Lumia 1520 and all other PureView Lumia devices in mid-November.
              This means you can create different expressions of a single photo, changing the same scene in a number of ways by refocusing on the area you want to highlight most. You can also choose to make everything in focus, or magically add colour pop to the focused area for more impact.
              [You can experience the effect of Nokia Refocus on an image captured with Nokia Refocus placed at http://refocus.nokia.com/]
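
              Nokia has not published how Refocus produces its all-in-focus composites, but the classic technique behind such features is focus stacking: shoot several frames at different focus distances, then take each pixel from whichever frame is locally sharpest. A toy sketch (tiny synthetic “images” as 2D lists, absolute 4-neighbour Laplacian as the sharpness measure; an illustration of the general idea, not Nokia’s implementation):

```python
# Toy focus-stacking sketch: composite an "everything in focus" image by
# picking, per pixel, the frame with the highest local contrast.
# This illustrates the textbook technique, not Nokia Refocus's actual code.

def laplacian_abs(img, y, x):
    """Absolute 4-neighbour Laplacian: a simple local-sharpness measure."""
    h, w = len(img), len(img[0])
    c = img[y][x]
    total = 0
    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        yy, xx = y + dy, x + dx
        if 0 <= yy < h and 0 <= xx < w:
            total += img[yy][xx] - c
    return abs(total)

def focus_stack(frames):
    """Per pixel, copy the value from the locally sharpest frame."""
    h, w = len(frames[0]), len(frames[0][0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            best = max(frames, key=lambda f: laplacian_abs(f, y, x))
            out[y][x] = best[y][x]
    return out

# frame_near has a sharp edge on the left and a blurred right side;
# frame_far is the reverse, so the composite keeps both sharp edges.
frame_near = [[0, 100, 50, 50], [0, 100, 50, 50]]
frame_far  = [[25, 75, 0, 100], [25, 75, 0, 100]]
result = focus_stack([frame_near, frame_far])
```

A real implementation would align the frames first and blend smoothly at region boundaries, but per-pixel sharpness selection is the core of the effect.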
              Nokia Storyteller
              One of the (many) announcements from Nokia World 2013 in Abu Dhabi is Nokia’s new Storyteller app, coming first with the Lumia 1520. We go hands on for a video tour of the new app.
              A single app that combines your photographs and videos with HERE location information to create a picture journey. When you take a photo or record a video, you can see precisely where it was taken on a familiar HERE map.
              See clusters of photos scattered across the globe and zoom in to an exact location. Plus, zoom out of any photo within the gallery to see where it was shot. You will also notice that your photos truly come live, the videos are played in-line, the action shots are beautifully animated, and the refocus effect is subtly played back to give the immersive feeling when browsing between the images.
              Nokia Beamer wins our award for the zaniest thing we’ve seen at Nokia World. The app allows you to share your display, live, to another Windows Phone (or really any device). Just watch the video and expect the app in the coming weeks.
              Nokia Beamer
              Nokia Beamer is the ideal way to share what’s on your Lumia display to any internet-enabled screen.
              PowerPoint presentations, photos, videos, absolutely anything that you see on your Lumia can be shared, remotely, to anywhere in the world.
              It’s like magic, there’s no hassle with connections, and it just works. You beam the screen to a nearby HTML enabled screen, or SMS it to a friend on the other side of the globe. You can even tweet it and let your followers see your screen in real time!

              It is quite remarkable how much the “building on the affordability of the Lumia 520” claim (quoted in the beginning) is true for the Lumia 1320. When comparing the two in dev specs (I will omit a functionality comparison here), there are, on one hand, quite significant improvements over the Lumia 520, such as:
              – 720 x 1280 pixels, 6 inches
              – Corning® Gorilla® Glass 3
              – ClearBlack, IPS LCD
              – High brightness mode
              – Lumia Color Profile
              – LTE with up to 100 Mbps downlink / 50 Mbps uplink speeds
              – 1.7 GHz Qualcomm Snapdragon S4 processor
              – 1 GB RAM, possibility to have 64 GB Micro SD cards
              – HERE Maps and HERE Drive
              – Magnetometer sensor (compass)
              – Auto and Manual Exposure, Face recognition, LED Flash and Video Light for the main camera
              – Full HD (1920 x 1080) video recording resolution
              – Video call, Video sharing and Video stabilization
              – Secondary camera
              – Dolby headphone
              – Bluetooth 4.0
              – Maximum standby time: 28 days
              – Maximum talk time (2G): 25 h
              – Maximum talk time (3G): 21 h
              – Maximum music playback time: 98 h
              – Maximum video playback time: 9 h
              – Maximum Wi-Fi browsing time: 11.5 h

              On the other hand, the list price of the Lumia 1320 is just $339 vs. the Lumia 520’s initial list price of around $200 when it was released to the market at the end of March this year. Considering that the street price of the Lumia 520 has since gone as low as $125 (corresponding to Rs. 7,714 in India), we may expect that half a year after the Lumia 1320’s planned Q1 2014 release (i.e., in Q3 2014) its street price could fall as low as $212 as well (if sales go as well as for the Lumia 520).
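
              The $212 figure above is simple proportional arithmetic: apply the ratio by which the Lumia 520’s street price fell below its list price to the Lumia 1320’s list price (variable names are mine, for illustration):

```python
# Proportional price-erosion estimate: apply the Lumia 520's
# list-to-street ratio to the Lumia 1320's list price.
lumia_520_list, lumia_520_street = 200, 125
lumia_1320_list = 339

erosion_ratio = lumia_520_street / lumia_520_list  # 125/200 = 0.625
estimate = round(lumia_1320_list * erosion_ratio)  # 339 * 0.625 -> $212
```

This is only a back-of-the-envelope extrapolation, of course; it assumes the 1320 erodes at the same rate over the same half-year window.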

              In that case the $410 higher price of the Lumia 1520 ($749) will be even less attractive to an average phablet buyer than it is now (IMHO it is already not attractive at all). Look at the functionality list copied here from the Lumia 1320 and 1520 spec data and you will easily understand why (functionality denoted in bold is for the Lumia 1520 only):

              Software and applications
              Productivity features

              • Personal information management features: Calculator, Clock, Calendar, Phonebook, Alarm clock, Reminders, To-do list, Social networks in Phonebook, OneNote, Wallet, Family Room, Kid’s Corner
              • Business apps: Adobe acrobat reader free download, Lync (Corporate IM) free download, SkyDrive storage for documents and notes, Company Hub for enterprise applications, Office apps: Excel, Word, Powerpoint, OneNote
              • Document formats supported: Excel, PDF, Word, OneNote, Powerpoint
              • Sync type: Exchange ActiveSync, Windows companion app, 1320: Mac companion app, Nokia Photo Transfer for Mac
              • Sync content: Calendar, Video, Pictures, Music, Contacts

              Other Applications

              • Game features: DirectX 11, Touch UI, XBox-Live Hub

              Software platform & User Interface

              • SW Platform: Windows Phone
              • Software release: Windows Phone 8 with Lumia Black

              Communications
              Email and Messaging

              • Email clients: Yahoo! Mail, Outlook Mobile, Gmail, Office 365, Nokia Mail, Windows Live / Hotmail / Outlook.com, MS Exchange Active Sync
              • Email protocols: SMTP, IMAP4, POP3
              • Email features: Viewing and editing of email attachments, Always up to date, Multiple simultaneous email accounts, Text-to-speech message reader, Email attachments, Conversational view on email, Linking multiple inboxes to one, Inbox filtering, HTML viewer
              • Supported instant messaging services: Google Talk, Twitter, Windows Live Messenger, WhatsApp, Yahoo! Messenger, Skype IM, Facebook
              • Messaging features: Integrated text messaging and chat, Instant messaging, Concatenated SMS for long messages, Multiple SMS deletion, List of recently used numbers, Audio messaging, Text-to-speech message reader, Text messaging, Number screening for messaging, Unified MMS/SMS editor, Automatic resizing of images for MMS, Distribution lists for messaging, Multimedia messaging, Conversational chat style SMS, Unified inbox for SMS and MMS

              Call management

              • Call management features: Voice commands, Call waiting, Voice mail, Integrated hands-free speakers, Call forwarding, Call logs: dialled, received and missed, Call history, Voice dial, Conference call, Skype voice call
              • Video call features: Skype video call
              • Supported amount of phonebooks: One integrated phonebook
              • Supports amount of contacts: Unlimited
              • Ringtones: Downloadable ringtones, MP3 ringtones
              • Noise cancellation: Yes –> Multimicrophone uplink noise cancellation
              • Speech codecs: AMR-WB, GSM FR, AMR-NB, GSM HR, EFR, GSM EFR

              Device security
              Security

              • Enterprise security features: Remote security policy enforcement
              • General Security features: Remote device locking via Internet, Secure NFC, Track and Protect via internet, Firmware update, Remote wipe of user data via Internet, Device lock, Device passcode, PIN code, Firmware and OS integrity check, Secure device start-up, Online back-up and restore, Application sandboxing and integrity check
              • Advance security features: Lost device tracking, Browser integrated anti-phishing
              • Data encryption: User data encryption for device

              Sharing and Internet
              Browsing and Internet

              • Internet browser capabilities: Internet Explorer 10
              • Social apps: Facebook, LinkedIn, Twitter, LINE, WeChat
              • Photo sharing: Share over Bluetooth, Facebook, Picasa, Flickr, Send as email attachment, SkyDrive, Nokia Beamer
              • Video sharing: YouTube, Flickr, Picasa, Video sharing to social network and internet, Facebook, SkyDrive, Joyn video call sharing
              • Location sharing: WP location sharing, Foursquare
              • Wi-Fi hotspot: Up to 8 Wi-Fi-enabled devices
              • Nokia Beamer: Nokia Beamer lets you share your screen with family, colleagues and friends by simply pointing your Nokia Lumia at any screen displaying the web address http://beam.nokia.com. Or share your live Lumia screen by sending a link by email, sms or social networks.

              Navigation
              GPS and navigation6

              • Location and navigation apps: HERE Maps, HERE Drive –> HERE Drive+
              • Navigation features: (Pin places to Start screen, Reveal the surrounding places, Reviews, info and photos for places, Routing options, Save favourite destinations, Call, share and get directions to places,) –> Public transportation routing guidance, Free maps, (Live traffic information,) Automatic day/night view switching, (Venue maps – shopping and transport centers,) Offline maps, (Online and offline favourites), Speed limit warnings, LiveSight, Download the latest maps with Wi-Fi, Free turn-by-turn walk navigation, (Save and sync favs with HERE.com,) Free global voice guided turn-by-turn drive navigation
              • Location technologies: Magnetometer, A-GPS, Cellular and Wi-Fi network positioning, A-GLONASS

              Location and navigation apps

              • HERE Maps: Discover the best places in any neighbourhood with HERE Maps. New LiveSight technology reveals interesting places in your display. Explore malls, stations and more with venue maps. Access your favourite places on any HERE app and on here.com.
              • HERE Drive –> HERE Drive+: Navigate safely with (free regional voice-guided directions) –> global free turn-by-turn navigation and true offline support. (HERE Drive) –> HERE Drive+ features audible speed limit warnings, dedicated dashboard and commute assistance. Save and access favourite places on any HERE app and on here.com.
              • HERE Transit: Get around by bus, train and subway all in one easy-to-use app. Compare route options, arrival and departure times and walking distances in over 700 cities and 50 countries. Over-the-air updates ensure you always have the latest routes and schedules.

              Photography
              Photography apps

              • Nokia Camera7: Nokia Camera brings together the features from Nokia Pro Cam and Nokia Smart Cam modes into one application. It gives you the whole exclusive Nokia camera experience with fast access to editing and sharing.
              • Creative Studio: Get more out of your pictures with this quick and easy photo editor. Creative Studio’s editing tools let you quickly adjust color balance, remove red eye and apply filters. Then, share your pictures directly on Facebook and Twitter.
              • Cinemagraph: A magical blend of photo and movie-like animation, creating pictures that seem almost alive. Helpful on-screen assistance lets you select the animated area of your picture and easily create and edit a cinemagraph. You can share your cinemagraph with friends via social media, email and messaging.
              • Panorama: Get the bigger picture with Nokia’s easy-to-use Panorama app. Simply take your pictures and the app automatically stitches them into a picture-perfect view. Once you’re done, share your panorama directly to Twitter and Facebook.
              • Nokia Storyteller: Nokia Storyteller organizes your photos into stories on an interactive HERE map by time and place. Cinemagraphs and smart photos play automatically. See points of interest around the photos taken and pull up contact info, call or email directly from Storyteller, enabling you to authentically re-experience your entire journey when the time comes.

              Image capturing

              • Capture modes: Video, Still
              • Scene modes: Automatic, Sports, Night
              • White balance modes: Cloudy, Incandescent, Fluorescent, Daylight, Automatic
              • Light sensitivity: Automatic, ISO 100, ISO 200, ISO 400, ISO 800, ISO 1600, ISO 3200, ISO 4000
              • Photos viewed by: Camera Roll, Month, Timeline, Photo editor, Favorites, Album, Photos from social networks, Nokia Storyteller

              Music and Audio
              Music apps

              • Nokia Music: Unlimited streaming of music for free with Nokia Music. Discover great new music wherever you go. Create your own artist inspired mix or enjoy the mixes from our music experts and celebrity friends. You can listen to your favourite mixes offline and discover live music around you with Nokia Music Gig Finder.

              Accessibility
              Accessibility features

              • Hearing: Vibrating alerts, TTY support, Video call, BT neckloop compatible
              • Vision: Screen reader compatible, Voice recognition, Vibrating alerts, Zoom, Voice commands, Font magnification, High contrast mode, Customizable home screen
              • Physical skills: Speakerphone, Voice recognition, Voice commands, Customizable home screen

              6Downloading of maps over-the-air may involve transferring large amounts of data. Your service provider may charge for the data transmission. The availability of products, services and features may vary by region. Please check with your local Nokia dealer for further details and availability of language options. These specifications are subject to change without notice.

              7Nokia Camera comes as an update to Nokia Pro Camera. Available now from Store.

              Why is Intel pressed to go as low as $99 with its Android tablet prices (but not with Windows 8.1)?

              Reports like Intel says get ready for $99 tablets, $299 Haswell notebooks, $349 2-in-1 hybrids [ZDNet, Oct 16, 2013] have created a typical misunderstanding: that those rock-bottom prices ($99+) will apply to Windows 8.1 tablets as well. This is very far from the truth, both in terms of what is possible and of the company’s business rationale.

              From: Intel’s CEO Discusses Q3 2013 Results – Earnings Call Transcript [Seeking Alpha, Oct 15, 2013]

              During the holiday selling season, you will see Atom SoCs and tablets as low as $99, and in 2-in-1 systems as low as $349.

              David Wong – Wells Fargo

              Thanks very much. Bay Trail. If I’m not mistaken there are Android tablets using Clover Trail+ currently available; when might we expect Android tablets using Bay Trail in the market?

              Brian Krzanich – Chief Executive Officer, Director

              You are absolutely right, there are several tablets out there today with Clover Trail+ using Android. What I told you was, there are about 50 designs on Bay Trail, about 20 of those are 2-in-1s, and probably 20 to 25 of them are Bay Trail tablets on Android. There are going to be about eight to 10 systems on shelf, we believe, by, say, the Black Friday timeframe. Most of those will be Android tablets.

              Intel plans cheap Bay Trail CPUs for 2Q14 [DIGITIMES, Oct 14, 2013]

              Intel is planning to release entry-level Bay Trail-based processors for the Android platform in the second quarter of 2014, according to sources from tablet players.

              The sources expect the CPUs to be priced between US$15-20, about US$12 lower than the current models.

              Although Intel has already offered subsidies for its Bay Trail-T processors including Atom Z3740 and Z3770 at US$32 and US$37 and another 10% off for bulk purchase, they are still less competitive in pricing compared to ARM-based quad-core processors.

              With the new entry-level processors, the sources expect Intel to gain an equal footing against players such as Mediatek, Qualcomm and Nvidia.
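              [The subsidy arithmetic quoted above is easy to check. Here is a minimal sketch in Python; the function name and the per-unit framing are only illustrative, while the $32/$37 list prices and the 10% bulk discount come from the DIGITIMES report:]

```python
# Effective per-unit prices for the subsidized Bay Trail-T parts,
# using the figures quoted in the DIGITIMES report above.
def bulk_price(list_price_usd, discount=0.10):
    """Apply the quoted 10% bulk-purchase discount to a subsidized list price."""
    return round(list_price_usd * (1 - discount), 2)

subsidized = {"Atom Z3740": 32, "Atom Z3770": 37}
for part, usd in subsidized.items():
    # Z3740 comes out at $28.80, Z3770 at $33.30 -- still well above the
    # US$15-20 entry-level price band planned for 2Q14.
    print(part, bulk_price(usd))
```

              [Even after the bulk discount, both parts sit well above the US$15-20 band of the planned entry-level models, which is exactly the gap the 2Q14 parts are meant to close.]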

              AND WHY is a $99+ Windows 8.1 tablet very far from the truth, both in terms of what is possible and of the company’s business rationale?

              Here are the clues from Intel’s CEO Discusses Q3 2013 Results – Earnings Call Transcript [Seeking Alpha, Oct 15, 2013]

              During the third quarter, our revenue grew 5% sequentially and was flat versus the third quarter of 2012. Year-over-year PC CPU volumes declined slightly and were offset by solid growth in the data center and enterprise. While consumer demand in emerging markets was sluggish, we started to see early signs of improvements in North America and Western Europe. I see our performance in this environment as evidence of an increasingly broad and diverse product portfolio. I would like to highlight a few of the most important results from the quarter.

              Following the launch of Ivy Bridge EP and the Atom-based Avoton SoCs, the data center group delivered all-time record revenue. DCG saw strength across its lines of business and geographies. Cloud revenue was up 40% year-over-year. Storage was up 20% and high performance computing was up 27%. Even traditional enterprise servers were up a bit over the last year on the strength of our MP product line.
              While the data center group’s results demonstrate some of Intel’s core capabilities, we saw strong performance beyond DCG. Our embedded business grew 21% year-over-year, reaching an all-time record for revenue driven by communications infrastructure, transportation, the Internet of Things and retail. Embedded revenue is well on its way to a double-digit growth year.
              Just a few weeks ago, we announced our newest product family, Quark, an ultra low power and low cost architecture. And while any significant revenue impact is some time away, the architect and the speed with which we are bringing it to market are evidence of the changes we are making to ensure we are in a better position to lead and define technology trends moving forward.
              Finally, our NAND business grew 20% over last year, as enterprise and data center customers’ increasing use of high-performance SSDs has put this segment on a path to double-digit growth for the year.

              (See also The long awaited Windows 8.1 breakthrough opportunity with the new Intel “Bay Trail-T”, “Bay Trail-M” and “Bay Trail-D” SoCs? [‘Experiencing the Cloud’, Sept 14-26, 2013] for how Bay Trail is currently priced for the overall Windows market (not only tablets), where prices are much higher than in the Android market.)

              The current Android tablet offers from Intel based on Clover Trail +:

              You will see that current Clover Trail+ Android tablets have a clear performance disadvantage against existing quad-core ARM Cortex-A9 Android tablets, which are also priced much lower than the upcoming $149.99 and $179.99 Android tablets from Dell. From a pricing point of view, compare that even with Amazon’s move into overall leadership: Kindle Fire HDX with Snapdragon 800, “revolutionary on-device tech support” (Mayday), enterprise and productivity capable Fire OS 3.0 forked from Android 4.2.2 etc. PLUS a significantly enhanced, new Kindle Fire HD for a much lower, $139 price [‘Experiencing the Cloud’, Sept 27, 2013], not to speak of the Chinese whitebox tablets costing even less than the new Kindle Fire HD, at around $100.

              Android tablet user experience [ARMflix YouTube channel, Oct 10, 2013]
              ARM Quad core Cortex-A9 @1.4GHz vs. Intel Dual core Clovertrail+ @1.6GHz

              Performance comparison of two Android tablets (ARM-based vs. Intel-Based) with same size screens using a real application rather than a synthetic benchmark. A Top 10 Android 3D Game (same version) is launched on both tablets and shown in real-time. Performance differences are highlighted throughout.

              So there are the Dell Venue 7 and 8 Tablets [Dell YouTube channel, Oct 15, 2013] to capitalise on the well-known and accepted Dell brand name with higher prices:

              Stay connected with Venue 7 and 8 tablets featuring fast Intel processors and easy to use Android OS.

              for which Dell says on its Coming Soon: New Dell Venue Tablets [Oct 2, 2013] campaign page:

              Dell Venue 7 & Venue 8: The tablets that draw a crowd.
              Dell Android tablets combine the power of Intel® with compact connectivity, featuring a 7″ or 8″ HD screen with wide-angle viewing and both front and back cameras. Available in October.

              High-def details:
              Enjoy every detail in high resolution on a 7″ or 8″ HD display screen for sharing your favorite photos, playing games and more.

              All-access apps:
              Whether you’re looking to relax or be productive, the Android-based platform means you have access to thousands of Android apps.

              High-performance processor:
              Expect speed with 4th Gen Intel® Atom™ processors for maximum performance.

              From Dell Introduces New Line of Tablets and Updated XPS Laptops: Create, Share and Access Content from Virtually Anywhere [press release, Oct 2, 2013]


              The Dell Venue 7 and Dell Venue 8 feature Intel Atom Z2760 (“Clover Trail”) [in fact Z2560/Z2580 Clover Trail+ – see below] processors

              Availability and Pricing

              The Dell Venue 7, Venue 8, … will be available from October 18 on www.dell.com in the United States and select countries around the world.

              • Venue 7: $149.99
              • Venue 8: $179.99

              image
              Links to click: Venue™ 7, Venue™ 8, Z2560, Z2580 – Clover Trail+

              Intel’s new era of integrated computing: Look inside, looking ahead by Renee James, President

              Intel App Show for Developers – IDF 2013 Day 1 Keynote Review [intelswnetwork YouTube channel, published on Oct 2, 2013]

              Bob and Eric Mantion [Capt Geek] break down IDF13’s day one keynote and discuss why they believe this could be the best keynote in recent memory.
              image
              From: 2013 Intel Developer Forum Opening Keynote [transcript, Sept. 10, 2013] Brian Krzanich, CEO, and Renee James, President
              Brian Krzanich: … to show just how far we’re looking ahead, it gives me great pleasure to introduce Intel’s newest president, Renee James. [The inserted images are from the presentation PDF]
              Renee James: Good morning. For 45 years, Intel’s been inventing the future. For 45 years, we’ve been building the foundation of this industry, which is the silicon transistor, which you just saw. And for 45 years, we have been doing the things that everybody said can’t be done.

              image

              Now, we’re going to lead the industry into a new era of computing, an era of computing where everything computes. And we’ll transition from worrying about the form factor, or the look and feel of the device, to the real problems that computing has solved for us — compute that’s integrated into the fabric of our daily lives, and assists us in solving problems, like managing huge global cities, or finding cures through personalized healthcare.
              We’ll be able to solve ordinary problems in extraordinary ways, and extraordinary problems will be solved in seemingly ordinary ways. It will be from the mundane to the miraculous, when integrated computing is in our future.
              For the rest of this talk, what I’m going to do is give you a glimpse of some of the projects that are started today using integrated computing to solve really tough problems that are out there, and give you a glimpse of what the world’s going to look like, from our point of view. But first, I’d like to take you back to the beginning, where all good stories start.
              image
              Forty-five years ago, Intel was founded by Robert Noyce and Gordon Moore. Bob was a co-inventor of the silicon integrated circuit, and he gave us a mandate, to go out and do something wonderful. Gordon gave us the compass for that mission with Moore’s Law, and since then Intel has been on the relentless pursuit of the essential underpinnings of this industry, improving the silicon transistor.
              All of you know this, because some of you have written it. Moore’s Law has been declared dead at least once a decade since I’ve been at Intel, and as you know, you heard from Brian, we have 14 nanometers working, and we can see beyond that. I assure you, it’s alive and well, and we’re going to enable many, many things with it.
              One of the things that Moore’s Law enables is the mobility that all of you are using to tweet and surf and text while I talk. We’re going to talk about that.
              image
              All right, today we work in the nano-world, and for those of you that aren’t big aficionados of semiconductor technology, I thought I’d take a second and just explain to you what it really is like. We build transistors atom by atom. Not long ago, we actually didn’t imagine how we would build a transistor that was smaller than 22 nanometers, and now you’ve seen 14 working in Brian’s talk this morning. So, if you don’t know how small that is, consider this. A nanometer is to a yardstick — let me get my marble — as this marble is to the planet earth, that’s how small.
              image
              And we build billions of those transistors on every chip, and hundreds of millions of those chips a year. At our scale, what we do is as complex as putting a man on the moon was in 1969, or putting a rover on Mars in the 21st century. What we do takes fundamental scientific breakthrough. Just to make a single new feature or a new product, something for example like HKMG [High-K Metal Gate] or a 3D transistor, both of which were research projects until Intel had fundamental breakthroughs that moved them into high production and scale.
              image
              These are a few of the additional technological breakthroughs that people said were barriers: you can’t overcome them, it can’t be done. And the fact is, we have, and we’ve done it so consistently that we make it look easy. Every time you turn on your phone, your tablet, your PC, it just works. It seems easy. And behind that are tens of thousands of people fundamentally making scientific breakthroughs so that it works.
              These are the breakthroughs that fuel the entire industry, and they make the foundation of the compute platform that you as developers do your work on. And compute platforms and devices follow Moore’s Law as well, not just silicon transistors. They continue to evolve in power and features and performance, and it’s all based on that underlying progress that we make.
              image
              So, I want to give you some examples — they’ll be super fun. So, here’s one. I know all of you are going to recognize this. This — right, the DynaTAC 8000, Motorola phone. In 1980, this phone was built using 1500-nanometer technology, which was state of the art then, versus 22 nanometers today, right? Some of you remember this was your first cellphone, and it was super cool — not so much today. Today it looks like a prop from a movie. Wasn’t very pocket friendly. Battery life measured in — anybody? — minutes, exactly.
              Okay, here’s state of art today. This is an Intel-based phone, it’s a Lenovo K900. And this phone is state of the art. Twelve days in standby, 12 hours in talk time. So remember, until 1990, most phones were installed in cars permanently, because they needed a power source, right? And all you could do was make a voice call. Could you imagine buying a phone that could only do a voice call today? No one would buy that, right? Making a call is not the most extraordinary thing that this phone does.
              image
              So, let’s talk about what’s extraordinary about it. It has more performance than a Pentium® 4. It runs at two gigahertz, that phone, which 12 years ago would have made it the fastest desktop computer you could buy. This is the fundamental advancement of what Intel does. It’s what Moore’s Law brings you, and it’s what we’ve done to make that phone’s performance seem totally mundane.
              image
              We’ve driven three breakthroughs in computing. The first one was very much about task-based computing. And the next phase — the one that I think we’re living in today — I call lifestyle computing. I’ll talk a little bit more about why. The next phase is very much about integrated computing.
              I’ll start with task-based. Task-based computing really has its origins in the mainframe. It was very much about the scarce resource, and your important task, and what you had to get done. In fact, Intel’s first significant products were memory products for mainframes.
              image
              The PC changed that. The PC democratized computing and allowed everybody to be able to do their own tasks. It was still very task-based. But, of course, the PC evolved. It evolved into the era that we’re in now, lifestyle computing. Lifestyle is very much about you, your data, wherever you want it, whenever you want it, to do what you need to get done.
              I want to just pause there and think about evolutions in computing. They don’t come that often. When they do, at the beginning, we think it’s the next big thing. Everything that came before it, dead. But that’s not true. Right? It’s an evolution. Evolutions in computing don’t end. What happens is they continue forward, like the mainframe does today, and they evolve, and they adapt. You should think about each new phase in computing as not an ending but the beginning of the next frontier of where we’re going to go.
              image
              So the next chapter. What happens in the next chapter of computing? We think that familiar objects that occur in your everyday life get new capabilities. So I’m going to give you a pretty mundane example — a car headlight. What has been the greatest breakthrough in the car headlight in the last decade? Not that much. But now we can add silicon-based sensors to them and make them smart so they can detect the rain. Okay. But I don’t need to detect the rain. I need to actually see individual raindrops so that they can shoot the headlight beams around them.
              What it allows you to do is, of course, safer driving, better clarity at night. Ordinary or extraordinary? Mundane or miraculous? Safer driving. When silicon can be made small enough, smart enough to transform a headlight, it can transform every other area of our life.
              Quark — which Brian just talked about — is our new family of products that are targeted at integrated computing. And I use that term to be inclusive of Internet of things, of wearables, of traditional embedded. All of these new areas, and some of the older areas in embedded technology, that are getting smarter, and they’re getting connected. All of them will be connected, all of it will compute.
              image
              So let me show you a few examples of what’s happening today. The city of Dublin, Ireland — not the one in the East Bay — has a program that’s called City Watch and City Sensing. And what they’re doing is they put sensors into the street drainage system, which sounds pretty boring. But it allows them to monitor the flood warnings in the city of Dublin. And it alerts the crews to what’s happening.
              But more importantly, it sends out some other information through their cloud servers. It sends out signals to the traffic system to divert [unintelligible] away from the high water area, and it also sends out a city map so that if you live in Dublin, Ireland, you can figure out what’s going on. And the citizens get to participate because, of course, there’s an app for that. There’s a City Watch app. And so they submit real time update reports. And they basically use all of that data together in a crowd sourcing way to put real time status as to what’s going on in the city of Dublin.
              Most people don’t even know what’s happened. They don’t know that there’s sensors in their street. They don’t know that the traffic lights are timing or diverting them in different places, getting multiple sources of data real time, being put into a cloud service and sent out back to their smartphones.
              image
              Why is this important? Because by 2050, 70 percent of the world’s population are going to live in these megacities — Dublin not being one of the biggest ones, of course. And something as mundane as a clogged drain becomes more than an annoyance. It becomes a systemic problem that needs the ability to fix it quickly, to manage massive amounts of data, to alert a huge number of populations.
              Imagine, as developers, for you, what this means. Whole new platforms that we haven’t even thought about as compute platforms. Brand new kinds of applications that can be built. And managing [mega]cities is just one of those examples.
              The other really interesting example — and there are so many that we actually had to pare it down so we could get it into this time slot — is in healthcare. 70 percent of these people that I was talking about that are going to be living in big cities, they’re going to be aging — as am I. We have these questions that we keep asking. Are we going to have enough hospitals? Will we have enough clinics? Will we be able to train enough doctors with this aging population?
              They need more than just hospitals and clinics and doctors. They need care that’s affordable and is easy to administer. And the era of integrated computing allows us to offer some new answers to those old questions.
              image
              What if we’ve moved healthcare out of a hospital? [14:31]
              [This – for some unknown reason – was left out of the published keynote at http://intelstudios.edgesuite.net/idf/2013/sf/keynote/130910_rj/index.html 
              So here is that video part starting at [0:32] of this report:
              Amazing New Wearable Devices demonstrated by Intel President Renée James at IDF 2013 [Santa Barbara Arts TV YouTube Partner Global News YouTube channel, Sept 10, 2013] covering everything except the two dimmed paragraphs at the end. 

              ] Brian talked about wearables, and you’ve seen kind of a glimpse of what’s coming. It’s going to be beyond jewelry and eyeglasses into devices like this one.

              image
              Let me show you this. This is a wearable from Sotera Wireless, in trials right now. I will put it on. I’m going to see if my heart rate’s really high here. What it’s doing is taking a constant reading and transmitting reports wirelessly to a service. This is actually a real-time EKG, blood pressure, and other vitals, just from a wristband. It is pretty big and unattractive, but what this replaces is an entire bunch of equipment — on this table, on the end — that you would have to have in a medical clinic, and it sends real-time results to the doctor.
              Here is another example of innovation in medicine, by MC10. Through the magic of silicon and transistor technology, in the future this patch — this prototype silicon-based patch – could take the wonderful innovation shown by Sotera and perhaps do much of the same in an even smaller package. This will be directly on your skin. This patch will perform all of the same functions that that wearable does today. This is from a company called MC10, and it’s a prototype right now.
              So why is this important? That little patch thing is like a Band-Aid. You just peel it off and stick it on. So why is it important?
              Because it’s a constant data stream that your doctor can see, that if something’s wrong it’s immediate, it’s up-to-date and accurate. And it allows us to move into the most exciting phase of healthcare that I think is in this frontier for us, and that is moving into customized care.
              image
              [14:31] Care that’s actually tailored to the things that are going on in your body. There are a tremendous number of other devices and other applications — injectables, ingestibles — that we’ve looked at. I didn’t have time for all of them today. But all based on a fundamental, foundational building block of this industry, which is the silicon transistor.
              Customized care, with your own genomic data, is the pinnacle of healthcare. And we first mapped the human genome using an Intel high performance computer, a Xeon-based computer. That’s pretty exciting for us. And as you can imagine – as we like to talk about Big Data – there is one Big Data challenge.

              image

              I’d like to share just how big a Big Data problem it is. One person’s genomic map is a petabyte of data. That’s 1000 terabytes for one person, enough to fill 20 filing cabinets of information. And through the work that we do, the advancements in price performance, Moore’s Law, what we do every single day, we’ve transformed the ability to sequence. And what used to take years in 2000 is now down to two weeks, and we’re working to get that down to days and hours.

              image

              But more importantly, a single sequence used to be $70 million. It’s now less than $5,000 to do one sequence, and we are on route to make that $1,000, which means personalized genomic sequencing is within our reach. And it’s moving faster than the rate of Moore’s Law.
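              [A quick back-of-the-envelope check of the “moving faster than the rate of Moore’s Law” claim, using only the figures from the keynote — roughly $70 million per sequence around 2001 versus $5,000 in 2013 — plus the common 18-month rule-of-thumb doubling period:]

```python
import math

# Sequencing cost fell from ~$70M (circa 2001) to ~$5,000 (2013),
# per the figures quoted in the keynote above.
cost_then, cost_now = 70_000_000, 5_000
years = 2013 - 2001

halvings = math.log2(cost_then / cost_now)   # ~13.8 price halvings
years_per_halving = years / halvings         # ~0.87 years per halving

moore_period = 1.5                           # years per doubling (rule of thumb)
print(f"{halvings:.1f} halvings, one every {years_per_halving:.2f} years")
print("faster than Moore's Law:", years_per_halving < moore_period)
```

              [One price halving roughly every ten to eleven months, versus a transistor-count doubling every 18-24 months, is what makes the claim credible.]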
              But let’s think about the benefits of that. Why are we excited? Why am I excited about that? Why do we get up every day and say, you know what, working with Intel, working at Intel, it’s pretty exciting because we get to change the world? Why?
              image
              One-third of all women and half of all men are going to be diagnosed with cancer, right? Early detection and treatment is the way to solve cancer in most cases, and when it’s customized to that individual, it makes the most profound difference in its effectiveness. And that’s where we can make a difference.
              image
              Using high-performance computers, the Knight Center for Cancer Research at the Oregon Health Sciences University is working on analyzing human genomic profiles and creating searchable DNA, customized DNA maps. And what I’d like to do is share directly from them with you what they’re doing. [17:15]
              [Video plays.]
              image
              [19:21] Renee James: As Dr. Druker said, in this next era, we’re moving the biology problem to a computational problem in the treatment of cancer.
              Computing doesn’t get any more personal than when it saves your life, so I’d like to share another story with you. And it’s the story of an Intel employee, in fact, one of our fellows, who’s here with us at IDF. He fought a 24-year battle with cancer. When he was a young man in college, he was diagnosed with kidney cancer, and he was given a few years to live. And he went through dozens and dozens of debilitating cancer therapies, and he was very brave, and he defied all his doctors’ odds with his longevity, but in the end, the cancer never went away, and his kidneys did eventually fail.
              Recently, in the work that he’s been doing, he was visiting a genomic company, and they asked if they could sequence his tumor. And he said yes. He allowed them to do it. And what they did is they shared that data with all of his doctors. I’m not going to tell you the end of this story. I would like you to help me welcome Intel fellow Eric Dishman to tell his story.

              image

              Eric Dishman: Thank you. Alive and well. I think I’ve had more predictions of my death than maybe even Moore’s Law.
              Renee James: [Moore’s Law, alive and well, ladies and gentlemen.]
              Eric Dishman: [Unintelligible.]
              Renee James: Why don’t you tell everybody what happened the day that you showed up to your doctors and they had your tumor sequence?
              Eric Dishman: It was just miraculous. At that point, I was so sick, I was going to the doctor twice a week. So it was my Thursday appointment, and I walk in, and they’ve got some of my East Coast physicians on Skype and some doctors on the phone, and all my doctors are working together, and I’m like, uh oh. And then they basically tell me that 90 percent of the drugs that they’ve put me on were never going to work because this genomic map had revealed this to them. And they basically admitted that they had mischaracterized and sort of misunderstood my cancer for over two decades.
              Renee James: And then what happened?
              Eric Dishman: Well, at that point, then they had the good news, which was we think we understand enough about your cancer, and it’s really Eric’s cancer, it’s unique, like the [physician] said, we’re going to put you on this drug for completely different organs and see how it goes. Four months later, I walk into my diagnostics, the technicians, you know, looking in shock at the scans, they do them again, and they’re like you’re cancer free, you can start the whole kidney transplant process at this point in time.
              Renee James: That is miraculous.  And I want you to share with us how now your work at Intel is about scaling that out, so that other people can have this experience.
              Eric Dishman: That is exactly true, and scale is the thing. That’s one of the reasons I work at Intel. [I mean], probably less than 50,000 people on the planet have had access to the kind of whole genome sequencing that I’ve had, and that’s generated about 2.5 petabytes of data. If we had every cancer patient today having a whole genome sequence like once every two weeks, which is what they would ultimately want to do, we’d generate 500 exabytes of data, and that’s just in the U.S.
              So as we think about this globally, how do we scale? So we’ve got our product teams in there working on the fabric, the storage, the compute, I mean, the whole system — how’s it possibly going to be done? On the policy side, we’re working on how do we deal with the privacy and the security and the ethical issues of sort of scaling this?
              On the R&D side, it’s everything like you showed, from biochips to Big Data and solving breakthroughs there. And then, finally, on the sort of human and sort of education side, we’ve got to figure out how we’re going to create a genome-ready workforce, train a million doctors on how to incorporate this data and move forward on getting biologists to understand programming and programmers to understand biology.
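Dishman’s storage figures can be put side by side with simple arithmetic; a minimal sketch, assuming decimal (SI) units since the transcript does not specify:

```python
PB = 10**15  # petabyte
EB = 10**18  # exabyte

sequenced_to_date = 2.5 * PB     # ~50,000 people whole-genome sequenced so far
us_cancer_projection = 500 * EB  # biweekly sequencing of every US cancer patient

# How much bigger the projected US data set is than everything generated to date:
print(f"{us_cancer_projection / sequenced_to_date:,.0f}x")  # 200,000x
```

In other words, the scenario he describes is five orders of magnitude beyond all the whole-genome data generated so far, which is why he frames it as a fabric, storage, and compute problem all at once.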
              Renee James: Wow. Thank you for sharing your very personal story with the audience, and congratulations on being cancer free.
              Eric Dishman: Thank you.
              Renee James: Thank you. So 20 years of ineffective therapies, at great expense and certainly with all the worry Eric went through, and all of that changed by the benefits of personalized medicine and cost-effective integrated computing. Affordable genomics, cities that reroute traffic and alert you to problems — a few years ago, a lot of what I talked about seemed like science fiction, and today, you can see it’s in our near future.
              It’s the future before us when computing becomes truly integrated into our lives. For 45 years, Intel has done the things that everybody said couldn’t be done, and we’ve invented the future time and time again. I’d like to close by saying, in the words of Intel founder Bob Noyce, I’d like to invite all of you to not be encumbered by history and to go off and do something wonderful. Thank you.

              image

              [End of presentation.]

              IDF13 Day 1 Keynote Highlights & Takeaways [by CaptGeek [Eric Mantion] (Intel) on Intel® Developer Zone, Sept 10, 2013]

              So, this is not my first rodeo (as the saying goes) – in fact, I’ve been going to IDF, on and off, for over 10 years, starting with my time when I was a semiconductor analyst. And, yes, I now work for Intel, so some may feel my opinion is biased, but, regardless, here it is anyway:
                   This morning was the best IDF Keynote I’ve ever seen
              What made this morning better? If I had to summarize it, I’d say it breaks down into 3 things: Intimacy, Lifestyle, and Leadership. Let me explain…
              Intimacy
              The very first thing I noticed this morning, before Brian Krzanich said his first word, was how he was dressed. Not only did he not wear a tie, but he didn’t even wear a jacket. The tone was very casual, but not in a lazy way. When he spoke, on stage, he went right out to the front of it, basically as far out to the audience as he could, as if he wanted to say “I am one of you – I’m a Geek & I’m proud of it.” Now, someone will say that a slight shift in dress code & positioning on stage doesn’t much matter, but I would completely disagree because, before joining Intel in 2005, I knew well the biggest criticism of Intel. In one word, it would have been Arrogance. In three words, it would have been “Intel Doesn’t Listen.” Now, I think that is changing, which I think is a great thing. But it wasn’t just the lack of a jacket and where he stood – the subtleties continued when our new President, Renée James, did her keynote. Not once did she hold up a wafer. Not once did she say the word Gigahertz. But, what she did talk about was how Intel was making life better. During Brian’s portion, he talked about the Intel Quark SoC, which is planned to be 1/5th the size of Intel Atom processors and 1/10th the power consumption. But when Renée spoke, she addressed why wearables mattered. A great example was what I called a “Hospital-in-a-Patch” that didn’t look much different than an anti-smoking patch, but would be able to monitor several of your medical vitals no matter where you were. While still in development, it shows the amazing promise of the not-too-distant future. But she didn’t just pontificate, she brought out an Intel Fellow, Eric Dishman, who told a very personal story. 
Arguably, it was the most personal story a person could tell because it was not only about his own 24-year battle with Cancer, but also how mapping his genome led his doctors to a path that, thankfully, gave them the opportunity to tell him the magical words: “Eric, you’re cancer free.” I don’t know how you can get more personal, more intimate than that in a story. But it didn’t stop there. When Renée was finished, Brian re-joined her on stage for the first-ever “open Q&A with the CEO and the President of Intel.” This has never been done in the history of IDF, but I loved that they did it. To me, it signaled change. To me, it was a message: “Yes, we know we make amazing silicon, but none of it means anything if we don’t have great hardware partners to put it into products and great software partners that make the magic happen. In short, Intel is nothing without our partners, so we want you to know that we care, deeply, about you. We want to have a closer, more intimate relationship with you and do amazing, wonderful things together…
              Lifestyle
              What is the difference between Ordinary and Extraordinary? Renée said it best: Intelligence. What happens when everything gets smarter? The simple answer is life gets better. Whether it is critical technology like the Hospital-in-a-Patch mentioned above or just convenient technology, as things get smarter, life gets better. For example, what if every parking meter was smarter? What if, before you leave your car, you put your smart phone next to the NFC sensor on the parking meter to register your phone? Then, if your meal is running long, it sends you a quick message of “your meter is running low, would you like to refill it?” and, with a simple press of a button, you can. How great would that be? When I was trying to explain the implications today at lunch, I used the table we were eating at as an example. What if, when you sat down, your table was your menu? Instead of the wait staff having to go back and forth, asking if you were ready to order, as soon as you were, you ordered. Also, the moment the kitchen runs out of “Catfish,” all the menus are automatically updated so that option would be grayed out. Also, as soon as you were ready to pay your bill, you could, right on the table, with the NFC on your phone. Or, if you wanted some help, you could just push a button like you do on an airplane & your server could come right out. But this doesn’t just help customers, it would help the restaurateurs as well. If you could save 10 minutes for every customer, an eating establishment might be able to fit an entirely extra sitting into the course of a dining cycle. For the fixed costs of the chef & kitchen staff, that could be the difference between being profitable and closing your doors. But these types of “Lifestyle Computing” – or integrated computing, depending on how you look at it – weren’t just about tiny, minuscule computers, but also about the other end, the Big Data server rooms. 
For example, if you want better healthcare, then your doctors need to get to know you better, and far better than they can from just a form. They need to map your Genome, which, if you’re curious, is about a Petabyte of Data. For those not so familiar with these prefixes, that is around a thousand Terabytes or around a million Gigabytes. So, take that smart phone with 1GB of memory & put it in a pile with a million other phones – that’s the data required to map ONE person’s genome. Multiply that by the 1/3 of all women and 1/2 of all men that will be diagnosed with cancer in their lifetime and you get to the legal definition of a “butt-load of data.” But, never fear, the new Xeon E5 processors being launched this week are up to that task. So, for your lifestyle computing – whether it is wearable devices or warehouses of servers – Intel has got you covered. And that brings us to our last category…
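The unit conversions above are easy to check; a quick sketch, assuming decimal (SI) prefixes as the post’s round numbers imply:

```python
GB = 10**9   # gigabyte
TB = 10**12  # terabyte
PB = 10**15  # petabyte

genome_map = 1 * PB  # ~1 PB per whole-genome map, per the keynote

print(genome_map // TB)  # 1000 terabytes
print(genome_map // GB)  # 1000000 gigabytes -- the pile of a million 1 GB phones
```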
              Leadership
              It was subtle, but our new CEO – affectionately called “BK” in the halls of Intel – put all Intel employees on notice:
              • If it computes, we will lead
              To me, that is vision. That is leadership. There were no squishy areas there, no caveats, no outs. It was simple, straight-forward, and to the point. If it computes, then Intel will do its best to serve that market segment as well as we can. Oh, and, if you missed it, in the future, everything will compute. Your grandpa’s favorite recliner won’t just recline, but rather it will watch over him. It will monitor his vitals, it will check to see if he’s been sitting there past when he was supposed to take his medication and alert him if it needs to. And, heaven forbid, should he have a heart attack while sitting there in an empty house, he will be helped, immediately, even faster than if you were in the next room. In essence, in the future, no seasoned citizen will ever be sitting in an empty house again; houses, furniture, kitchens, everything will be smarter and connected. Making your life, my life, and most importantly, the lives of the people we love, not only better, but, ideally, longer – as long as possible. Roughly a century ago, we went through an important transformation – an electrical one. Instead of candles, we gained electric lights. Instead of washboards, we gained washing machines. Instead of a hand pump in your kitchen, we gained running water. Now we are on the cusp of the next transformation: Intelligence. Instead of an electric light, we’ll get a smart one – that turns itself off when not needed (like when no one is in the room) and turns itself on when needed. Instead of washing machines, we’ll get smart ones that analyze the soiling of your clothes and put in the right combination of detergent chemicals to optimize the cleaning. Instead of running water, we’ll gain smart faucets that automatically detect if the water coming out has a higher than allowable amount of harmful chemicals. 
It doesn’t matter what you pick – a bed, a pool, or a gym – with greater intelligence comes a better life, just as electricity has been improving life for the last century or more. General Electric’s age-old tag line has been “We bring good things to life.” Perhaps Intel should adopt: “We bring better things to life,” because, as we lead in everything that computes, from wearables to phones to tablets to 2in1s and Ultrabooks to desktop PCs, and, of course, servers, life will get better, for everyone. And I, as one particularly proud Intel employee, don’t mind saying, that is a future that feels wonderful. Which, as it happens, was one of the pieces of closing advice from this morning’s keynote – a quote from one of our founders, Robert Noyce:

              Q&A: Intel president Renee James on wearables [CITEworld, Sept 11, 2013]

              After calculators, PCs and mobile phones, Intel is now jumping into wearable devices with an extremely low-power chip called Quark, which was big news at the company’s annual Intel Developer Forum in San Francisco. Leading the charge into the new market is Intel’s new leadership team consisting of CEO Brian Krzanich and President Renee James, who also articulated plans to achieve fast growth in the mobile market while trying to reinvigorate PC sales.
              It’s been an especially busy few months for James, who became Intel’s president on May 2 after running the company’s software unit as executive vice president and general manager of the software and services group. She is laying the groundwork for Quark chips to succeed in areas such as eye wear, personalized medicine and cloud services. In an interview with the IDG News Service, she talked about the wearable market, Quark and partner relationships.
              IDGNS: Where do you see the wearable market going?
              James: I think it’s way beyond wearables, I think it’s about integrated computing. I don’t think we know the boundaries of that. The silicon patch — the thought of just ripping something off like a band-aid, putting it on your arm, your doctor being able to know what your vitals are at that moment, that sounds like science fiction, but it’s real. That’s where we are at. That’s today’s outer boundary of where we are going with computing.
              IDGNS: When do you see integrated computing becoming a practical market for Intel?
              James: For Intel it is a practical market right now, we have different products and platforms that are being developed. That is why we introduced Quark. We believe in the things that you saw — they are not three, five or 10 years out, they are in the next 12 to 18 months.
              IDGNS: Will you sell wearables directly to consumers? Intel is already planning to launch a TV service.
              James: We tend to believe that our business model is best helping other people build things. It’s in these really highly integrated designs, you need to build one to know that everything is working systemically. We tend to build reference platforms, and we’re going to stick with that.

              Insert of mine: nScreen Noise: Intel Media, UK kids love tablets 10/4/13 [Colin Dixon YouTube, Oct 3, 2013]

              Lots of bad news for Intel Media’s OnCue virtual pay-TV operator service. Will it ever launch? Ofcom in the UK says kids love tablets. Same in the US?
              IDGNS: Quark is really low-power, but will it replace the Atom platform?
              James: No. It’s the low Atom. You should think of Core, Atom, Quark. I love the Quark name, it’s so nerdy and funny. Quark is intended to look below Atom. It’s 10 times more power efficient, and it’s five times smaller. Atom is teeny, Quark is the smallest thing we’ve ever built.
              IDGNS: Intel and low-power still raise a question mark today. How will Intel achieve low-power on Quark?
              James: No, no, Intel and low power are not a question mark. We have lots of low-power products. It’s not a question at all. Maybe that was five years ago. If you look… at Haswell 22-nanometer, that product is a four-watt product with Core i5 performance and Core i5-level graphics in fanless [devices]. That’s the most [power-efficient] product ever built, anywhere.
              IDGNS: Are you offering licensing or customizing Quark chips for third parties?
              James: What we are offering is the ability to connect their intellectual property around ours. We also are offering fully designed products as well. It’s a broad range that we’re going to offer to customers in this category.
              IDGNS: Intel is looking beyond Windows and moving to Android and Chrome for tablets and PCs. How is your relationship with Microsoft?
              James: Our relationship with Microsoft is as good as ever. They are going to participate in IDF and you will hear from them about what’s going on with Windows 8.1. I think it’s just a matter of balance. Microsoft is not the only client operating system anymore. The same way for years and years Microsoft balanced between Intel and AMD, we’re in the same situation now. Our customers want choice, and we offer choice.
              IDGNS: What’s the next big thing for Intel?
              James: Integrated computing is the next big thing, I think it is the future of what we are going to do. It’s not going to be necessarily about this device or that device, it’s going to be about what problems we solve through computation. The final barriers, the things we don’t understand, and what does it mean to have a mesh network of connected devices with cloud services and how does it change what we think about. That’s the final frontier.
              IDGNS: How important is your software background in leading a company that is traditionally focused on chips?
              James: It’s actually more useful than people would imagine. It’s very relevant to the level of integrated platforms that we see people starting to build, even the way PCs are built now, servers, different workloads, what happens in the cloud. More so than ever on a forward-looking basis, the way computing is developing is going to be about the application, the workload, the right kind of compute for the right kind of task. The other thing is building system-on-chips and products today is very software oriented.
              IDGNS: What is Intel’s direction in chip development?
              James: The direction for us is to continue with “tick-tock” for the microarchitecture, but to consider how to do derivatives using the system-on-chip methodology.

              Intel President Renee James: Interview with the Wall Street Journal [Intel® Developer Zone, Aug 28, 2013] i.e. Intel’s own report 2 weeks later

              Intel President Renee James recently sat down for a video interview with the Wall Street Journal’s Rolfe Winkler. In this interview, Ms. James discussed a wide range of issues around Intel’s computing strategy, anything from mobile to what’s coming up at IDF in September. You can watch the entire video below:
              Intel’s New President Outlines Company’s Plans
              [WSJDigitalNetwork YouTube channel, Aug 14, 2013]
              Renee James sits down for a Big Interview with Rolfe Winkler. Photo: Getty Images.
              On mobile:
              Ms. James has been with Intel for 26 years, and worked closely with former Intel CEO Andrew Grove. She recently was named Intel President, and directs company-wide strategy with CEO Brian Krzanich. She noted that Intel wants people to know that “we love computing”, and aim to serve every segment, not just PCs.
              Intel’s new focus is on mobile, especially on the Atom power line for ultramobility. There will be increased efforts on Android, with an equalization of efforts between Windows and Android. Everyone currently in this market space has advantages, and Intel’s is design and integrated manufacturing, the combination of process technology, and communications. It’s the integration that counts; the combination of all these elements that makes Intel the winner in the market.
              In many ways Intel has led the exploration into mobility. James noted that “sometimes you don’t always know about the next thing being a disruption….it wasn’t the form factor, it was how people using computing changed – touch, voice, app models, all of that shifted. That combination with the new form factor really changed the way we look at computing.”
              On IDF:
              Intel’s premier developer conference is coming up September 10-12. There will be a lot of new things to see and talk about there as far as mobility, where Intel believes computing is heading, and future predictions on computer/human interactions.
              On Atom:
              Atom is a smaller, less expensive chip. James noted that the Intel point of view with this chip was that you didn’t need all the features and performance of more expensive chips since Atom was primarily for phones, but now, as mobile devices are becoming more important and prevalent, it’s also taken on more importance. Intel is building Atom parts that come all the way up to the Core family with greater compatibility. All new Atom products run Windows.
              On transparent computing:
              People want their apps to perform no matter what platform they might be using. This aligns with the “Internet of Things” mentality; consumers want lower cost devices, but are also looking for compatibility with the rest of the software ecosystem.
              On the shift to a more mobile computing ecosystem:
              Mr. Winkler posed an interesting question: “As PCs are increasingly replaced by mobile devices, how do you navigate that transition?” Ms. James answered that Intel does not believe that PCs will ever be replaced, rather, different form factors will continue to emerge with the performance of the core product line in mobile devices. There are also different modes of usability in form factors such as the tablet, PC, 2 in 1’s, etc. It’s not a “one for one” replacement; James noted that these form factors are refreshing the market.
              On form factors:
              James noted there is a segmentation of tablets – the ones on the higher price point side generally offer more performance, and the ones on the lower price point side offer less. Intel has created Atom products that scale all the way up and down this ladder, with Haswell core-based products as well. These form factors overlap with price points, and some cannibalization is expected, but Intel is looking to create devices at every price point for more customer availability, opportunities, and innovation.
              On Moore’s Law
              When asked if Intel sees a finite ceiling as to how small chips can be produced, Ms. James replied that “we don’t see that”. There is more performance in a lower power envelope, and Intel has moved ahead multiple generations, becoming much more competitive in the mobile landscape.
              How small can the chips actually get? James replied that Intel has “line of sight” for a couple more generations, but after that the future is unclear.
              Data center
              The data center arm of Intel is an important business, currently holding a 90% market share and bringing in substantial profits for Intel. Mr. Winkler asked about avoiding server upsets, and Ms. James noted that there is a market shift with new competitors, and the way you react initially is how the dynamic is going to go. She mentioned that “it’s good for Intel to have competitors” because it makes the company as a whole better. Intel is not waiting for the industry to change, and has already announced SOC server products based on the Atom family.
              On Intel television
              What does Intel plan to bring in the television space? James replied that just like everything else, television has gone digital. It’s delivered over an IP network, which is an opportunity for data to be broadcast to devices. Intel can bring tech integration and leadership to this area, making it more cost effective. It’s also a new market opportunity and area of growth.
              Exciting times for Intel
              This interview with Ms. James was extremely informative, and gave a great overview of where Intel is headed. Be sure to register for IDF 2013 and hear more from Intel leadership on the future of the company.

              Which was reported by The Wall Street Journal as Intel Chips Away at Mobile, Wearable Computing [The CIO Report – WSJ, Aug 14, 2013] in the following way

              As consumers shift spending to smartphones and tablets from PCs, mobile processors made by rivals have chipped away at Intel Corp.’s sales and profits. Intel in July reported $2 billion in profit for the second quarter, a drop of 29% from a year earlier, on sales of $12.8 billion, down 5% for the same period. The chipmaker, which once milked its Intel Inside brand, can no longer rely as much on PC chips as its cash cow. While PC sales decline, rivals building low-cost, low-power chips based on ARM Holdings plc designs dominate the mobile chip market.
              Intel President Renee J. James admitted in an interview, Wednesday, that chips, as well as software for smartphones, tablets and embedded systems, are “markets that we need to go win.” Ms. James, who assumed her role in May after 25 years in various management roles at Intel, is particularly keen on Bay Trail, energy-efficient chips she said will appear in tablets and convertible PCs this holiday season. Intel will unveil some of these products – and possibly show off a wearable computer – at its developer forum next month. This is an edited transcript of a Q&A conducted with Wall Street Journal reporters and editors.
              As you push harder into mobile, you also have to keep a strong hold on the PC. What is your strategy there?
              We don’t see the PC going away overnight, but we do see a blending across the bottom end of [PC chips] and the high-end of the Bay Trail chips. You have to recreate the segmentation because [PCs and tablets] are overlapping now [with the proliferation of two-in-one, or convertible computers]. And 7-inch tablets and below are very much like phones and we have an objective in that market as well. By blending and having a shared goal for total compute, you start to think creatively about managing the transition. The suppliers and customers are the same.
              How do you steal market share from ARM?
              We believe we have better products, but we know we have better process technology. It will take us some time to get to the lowest end, but we have every intention of having products at every price point.
              What was gist of the presentation you and new CEO Brian Krzanich gave to the board of directors on how to point Intel in the right direction?
              We talked about getting back into the role of technology leader and really making sure that we’re leading into the next generation of where computing gets used. There’s a tremendous explosion in embedded computing, and the way people are thinking about computing, and we hear a lot about wearables, and there’s experimentation and new products like Google’s Glass. Our strategy is to win in every segment of computing and grow our share in overall compute. If it computes, we want to be in that market.
              Do you have any wearable computers now?
              None that are announced, but you should come to our developer conference in September. We’re going to be talking about where we see computing is going, where Intel is going, and a lot more about how we think computing will be used in the future, beyond the form factors you see today.
              What are you doing to advance the Internet of Things?
              We bought embedded software leader Wind River Systems, so we’ve done a lot of work creating combined product lines between Wind River and our embedded systems group. We’ve focused our work on specific vertical segments, such as in-vehicle entertainment, retail, point of sale and digital signage and infrastructure projects.
              What about Internet of Things in the home?
              We have not done as much in the home. I’m sure the team is working on things I don’t know about but… it’s a big opportunity.