

Proper Oracle Java, Database and WebLogic support in Windows Azure including pay-per-use licensing via Microsoft + the same Oracle software supported on Microsoft Hyper-V as well

While the latter gives Hyper-V a significant market advantage over VMware vSphere, it is even more important that Windows Azure is becoming a truly open cloud computing platform, especially by fully supporting Java and Oracle developers (in addition to the existing .NET and various web developers). Oracle’s cloud offerings are also vastly extended, especially in the crucially important “pay-per-use” space, as until now the cloud offerings of Oracle software have been limited to:
Oracle [Public] Cloud (Larry Ellison’s Oracle Cloud Announcement Highlights [Oracle YouTube channel, July 6, 2012] for when it was finally delivered and TechCast Live Introducing Oracle Public Cloud [Oracle YouTube channel, Dec 9, 2011] when it was pre-announced) which has application solutions in the cloud as well
Amazon Relational Database Service (Amazon RDS) for Oracle, available with “pay-per-use” (officially named “license included” by AWS, earlier named “on-demand hourly”) licensing since Q2 2011 (Amazon RDS for Microsoft SQL Server came a year later), as well as Oracle Fusion Middleware (which includes the GlassFish and WebLogic Java application servers) and Oracle Enterprise Manager licensed in the AWS Cloud


The essence, according to Java and other Oracle software heads to the Microsoft cloud [Ars Technica, June 24, 2013]:

Microsoft and Oracle may compete head to head in many ways within the database realm, but today the two companies performed the most sweeping cross-join ever as executives from the two companies announced a broad partnership around cloud computing. In a conference call this afternoon, Microsoft CEO Steve Ballmer and Oracle President Mark Hurd discussed a partnership between the companies that will bring Oracle platforms—including Java middleware—into Microsoft’s Azure cloud. 

Oracle has moved to certify and support its products, including Oracle WebLogic, the Oracle database, and Oracle Linux, for Azure and Microsoft’s Hyper-V hypervisor. “At the highest level, this partnership extends Oracle’s support of Windows Server to also include Windows Hyper-V and Windows Azure as supported platforms,” Ballmer said.

Oracle will provide full license mobility, Ballmer added, so that customers can move existing Oracle software licenses from on-premises physical or virtual servers to virtual servers on Hyper-V and in the Azure cloud. “There’s an immediate benefit for our customers,” he said. Support of Oracle’s database and application server products, and of Oracle Linux, is available starting today.

Microsoft also agreed to license Oracle’s enterprise Java run-time and APIs and make Java “a first class runtime in Windows Azure, fully licensed and fully supported by Oracle,” according to Satya Nadella, president of Microsoft’s Server and Tools Business. Previously, Microsoft offered open Java SDKs, he said. “Now we have the licensed [Oracle] Java stack, plus the [Oracle] middleware stack, available. We think it makes Java more first class within Azure.”

Hurd said that in addition to allowing existing licenses to be moved into the Azure cloud, Microsoft would provide a mechanism to obtain licenses on demand “for those who don’t have licenses for Oracle or Java.” Nadella emphasized that Microsoft would “make it easier to spin up Oracle software in Azure with pay-as-you-go licenses,” including pre-built Oracle Linux images that can be deployed in Azure as server instances.

Oracle has been pursuing its own cloud strategy, but Hurd said he saw “nothing but good” coming from a partnership with Microsoft. “I think it just makes sense for us to continue to improve our capabilities but also form partnerships like this,” he said. “Java is the most popular development platform in the world. The fact that more people will get access to our IP is favorable.”

A general business media opinion:
Rivals Microsoft, Oracle bonding in the cloud [The Seattle Times, June 24, 2013]

The partnership looks to be a good move for both companies, while being bad for mutual competitor VMware, said veteran Microsoft and Oracle analyst Rick Sherlund, of investment bank Nomura.

Back in the day, Microsoft and Oracle were bitter rivals, competing over providing database and server products and trading barbs during the U.S. government’s antitrust suit against Microsoft in the 1990s.

Now they’re holding hands and looking at a future together.

Microsoft and Oracle announced Monday a cloud partnership in which customers will be able to run Oracle software (including Java, Oracle Database and Oracle WebLogic Server) on Microsoft’s Windows Server Hyper-V or in Windows Azure. Oracle will provide certification and full support.

Oracle Linux will also be made available to Windows Azure customers.

“I think they need each other,” Sherlund said. “They’re cooperating in areas that are mutually beneficial.”

Microsoft is getting Oracle’s support for Hyper-V, Microsoft’s hypervisor technology, which allows companies to run virtual servers. That’s important because Hyper-V competes against VMware, which is dominant in the server virtualization market. And many of the businesses that would be interested in such technology already use some Oracle software.

“It’s an advantage for Microsoft to be able to say: ‘All this Oracle stuff runs on Hyper-V,’ ” said Sherlund, who added that Oracle does not support VMware’s vSphere.

The move likely also allows Microsoft to say it’s being open with its Azure platform.

“That’s the rap you have against Microsoft: That it’s all the Microsoft platform,” Sherlund said. “If you’re in the cloud, it’s good that you’re supporting other platforms.”

Oracle, meanwhile, has traditionally delivered its software to its customers’ own premises. Now that it’s focusing more on delivering its software as services, it’s “motivated to make sure that [the services are] available on a lot of different cloud platforms,” Sherlund said. “So that’s good for Oracle.”

… these days, both companies are battling newer competition from the likes of VMware and Seattle-based Amazon.com.

Ballmer and Oracle President Mark Hurd said during the conference call after Monday’s announcement that their two companies would continue to compete.

But, Ballmer said, “the relationship between the two companies has evolved … in a very positive and constructive manner on a number of fronts.”

Hurd said, “The cloud is the tipping point that made this all happen.”

Hurd said Oracle would continue to offer its own public, private and hybrid platforms. But the fact that Java will be accessible to programmers who work in Windows Azure “is a good thing for us. … The fact that more people get access to our IP is favorable,” he said. “It’s good for our customers and therefore good for Oracle.”

Oracle CEO Larry Ellison had also said last week that the company would be announcing partnerships with Salesforce.com and NetSuite.

And an ICT analyst opinion: Oracle Embracing the Broader Cloud Landscape [James Staten on Forrester blogs, June 24, 2013]

It’s easy to accuse Oracle of trying to lock up its customers, as nearly all its marketing focuses on how Oracle on Oracle (on Oracle) delivers the best everything. But today Ellison’s company and Microsoft signed a joint partnership that empowers customer choice and ultimately will improve Oracle’s relevance in the cloud world.

The Redwood Shores, California software giant signed a key partnership with Microsoft that endorses Oracle on Hyper-V and Windows Azure, which includes not just bring-your-own licenses but pay-per-use pricing options. The deal came as part of a Java licensing agreement by Microsoft for Windows Azure, which should help Redmond increase the appeal of its public cloud to a broader developer audience. Forrester’s Forrsights Developer Survey Q1 2013 shows that Java and .Net are the #2 and #3 languages used by cloud developers (HTML/JavaScript is #1). The Java license does not extend to Microsoft’s other products, BTW.

This deal gives Microsoft clear competitive advantages against two of its top rivals as well. It strengthens Hyper-V against VMware vSphere, as Oracle software is only supported on OracleVM and Hyper-V today. It gives Windows Azure near equal position against Amazon Web Services (AWS) in the cloud platform wars, as the fully licensed support covers all Oracle software (customers bring their own licenses), and pay-per-use licenses will be resold by Microsoft for WebLogic Server, Oracle Linux, and the Oracle database. AWS has a similar support relationship with Oracle and resells the middleware, database, and Oracle Enterprise Manager, plus offers RDS for Oracle, a managed database service.  

Bring-your-own-license terms aren’t ideal in the per-hour world of cloud platforms, so the pay-per-use licensing arrangements are key to Oracle’s cloud relevance. While this licensing model is limited today, it opens the door to a more holistic move by Oracle down the line. Certainly Oracle would prefer that customers build and deploy their own Fusion applications on the Oracle Public Cloud, but the company is wisely acknowledging the market momentum behind AWS and Windows Azure and ensuring Oracle presence where its customers are going. These moves are also necessary to combat the widespread use of open source alternatives to Oracle’s middleware and database products on these new deployment platforms.

While we can all argue about Oracle’s statements made in last week’s quarterly earnings call about being the biggest cloud company or having $1B in cloud revenue, it is clearly no longer up for debate as to whether Oracle is embracing the move to cloud. The company is clearly making key moves to cloud-enable its portfolio. Combine today’s moves with its SaaS acquisitions, investments in cloud companies and its own platform as a service, and the picture clearly emerges of a company moving aggressively into cloud.  

I guess CEO Ellison no longer feels cloud is yesterday’s business as usual.

Microsoft and Oracle announce enterprise partnership [joint press release, June 24, 2013]

Microsoft Corp. and Oracle Corp. today announced a partnership that will enable customers to run Oracle software on Windows Server Hyper-V and in Windows Azure. Customers will be able to deploy Oracle software — including Java, Oracle Database and Oracle WebLogic Server — on Windows Server Hyper-V or in Windows Azure and receive full support from Oracle. Terms of the deal were not disclosed.
As part of this partnership, Oracle will certify and support Oracle software — including Java, Oracle Database and Oracle WebLogic Server — on Windows Server Hyper-V and in Windows Azure. Microsoft will also offer Java, Oracle Database and Oracle WebLogic Server to Windows Azure customers, and Oracle will make Oracle Linux available to Windows Azure customers.
Java developers, IT professionals and businesses will benefit from the flexibility to deploy fully supported Oracle software to Windows Server Hyper-V and Windows Azure.
“Microsoft is deeply committed to giving businesses what they need, and clearly that is the ability to run enterprise workloads in private clouds, public clouds and, increasingly, across both,” said Steve Ballmer, chief executive officer of Microsoft. “Now our customers will be able to take advantage of the flexibility our unique hybrid cloud solutions offer for their Oracle applications, middleware and databases, just like they have been able to do on Windows Server for years.”
“Our customers’ IT environments are changing rapidly to meet the dynamic nature of the world today,” said Oracle President Mark Hurd. “At Oracle, we are committed to providing greater choice and flexibility to customers by providing multiple deployment options for our software, including on-premises, as well as public, private, and hybrid clouds. This collaboration with Microsoft extends our partnership and is important for the benefit of our customers.”
Additional information about support and the licensing mobility changes that went into effect today is available on Oracle’s blog at https://blogs.oracle.com/cloud/entry/oracle_and_microsoft_join_forces.

Oracle and Microsoft Expand Choice and Flexibility in Deploying Oracle Software in the Cloud [Oracle Cloud Solutions blog, June 24, 2013]

Oracle and Microsoft have entered into a new partnership that will help customers embrace cloud computing by providing greater choice and flexibility in how to deploy Oracle software.

Here are the key elements of the partnership:

  • Effective today, our customers can run supported Oracle software on Windows Server Hyper-V and in Windows Azure
  • Effective today, Oracle provides license mobility for customers who want to run Oracle software on Windows Azure
  • Microsoft will add Infrastructure Services instances with popular configurations of Oracle software including Java, Oracle Database and Oracle WebLogic Server to the Windows Azure image gallery
  • Microsoft will offer fully licensed and supported Java in Windows Azure
  • Oracle will offer Oracle Linux, with a variety of Oracle software, as preconfigured instances on Windows Azure

Oracle’s strategy and commitment is to support multiple platforms, and Microsoft Windows has long been an important supported platform. Oracle is now extending that support to Windows Server Hyper-V and Windows Azure by providing certification and support for Oracle applications, middleware, database, Java and Oracle Linux on Windows Server Hyper-V and Windows Azure. As of today, customers can deploy Oracle software on Microsoft private clouds and Windows Azure, as well as Oracle private and public clouds and other supported cloud environments.

For information related to software licensing in Windows Azure, see Licensing Oracle Software in the Cloud Computing Environment.

Also, Oracle Support policies as they apply to Oracle software running in Windows Azure or on Windows Server Hyper-V are covered in two My Oracle Support (MOS) notes which are shown below:

MOS Note 1563794.1 Certified Software on Microsoft Windows Server 2012 Hyper-V – NEW

MOS Note 417770.1 Oracle Linux Support Policies for Virtualization and Emulation – UPDATED

Explanation for that is in Partners in the enterprise cloud [Satya Nadella on The Official Microsoft Blog, June 24, 2013]

As longtime competitors, partners and industry leaders, Microsoft and Oracle have worked with enterprise customers to address business and technology needs for over 20 years. Many customers rely on Microsoft infrastructure to run mission-critical Oracle software and have for over a decade. Today, we are together extending our work to cover private cloud and public cloud through a new strategic partnership between Microsoft and Oracle. This partnership will help customers embrace cloud computing by improving flexibility and choice while also preserving the first-class support that these workloads demand.
As part of this partnership Oracle will certify and support Oracle software on Windows Server Hyper-V and Windows Azure. That means customers who have long enjoyed the ability to run Oracle software on Windows Server can run that same software on Windows Server Hyper-V or in Windows Azure and take advantage of our enterprise grade virtualization platform and public cloud. Oracle customers also benefit from the ability to run their Oracle software licenses in Windows Azure with new license mobility. Customers can enjoy the support and license mobility benefits, starting today.
In the near future, we will add Infrastructure Services instances with preconfigured versions of Oracle Database and Oracle WebLogic Server for customers who do not have Oracle licenses. Also, Oracle will enable customers to obtain and launch Oracle Linux images on Windows Azure.
We’ll also work together to add properly licensed, and fully supported Java into Windows Azure – improving flexibility and choice for millions of Java developers and their applications. Windows Azure is, and will continue to be, committed to supporting open source development languages and frameworks, and after today’s news, I hope the strength of our commitment in this area is clear.
The cloud computing era – or, as I like to call it, the enterprise cloud era – calls for bold, new thinking. It requires companies to rethink what they build, to rethink how they operate and to rethink whom they partner with. We are doing that by being “cloud first” in everything we do. From our vision of a Cloud OS – a consistent platform spanning our customers’ private clouds, service provider clouds and Windows Azure – to the way we partner to ensure that the applications our customers use run, fully supported, in those clouds.
We look forward to working with Oracle to help our customers realize this partnership’s immediate, and future, benefits. And we look forward to providing our customers with the increased flexibility and choice that comes from providing thousands of Oracle customers, and millions of Oracle developers, access to Microsoft’s enterprise grade public and private clouds. It’s a bold partnership for a bold new enterprise era.

IMPORTANT: for Java developers this strategic partnership will become really important once the latest versions are covered on Windows Azure; see:
Java EE 7 / GlassFish 4.0 Launch Coverage [Oracle’s The Aquarium blog, June 12, 2013]

Java EE 7, the standard in community-driven enterprise software, is now available. Back in April, Java EE 7 completed the JCP final approval ballot.  Today, developers can learn all about Java EE 7 during the Java EE 7 Live Web Event, and get some hands-on experience with the arrival of the Java EE 7 SDK and GlassFish Server Open Source Edition 4.0.  Of course, others have quite a bit to say about Java EE 7 as well, and this is just for starters:

Java EE 7 SDK and GlassFish Server Open Source Edition 4.0 Now Available [Arun Gupta, Miles to go … weblog among Oracle technical blogs, June 12, 2013]

Java EE 7 (JSR 342) is now final!

I’ve delivered numerous talks on Java EE 7 and related technologies all around the world for past several months. I’m loaded with excitement to share that the Java EE 7 platform specification and implementation is now in the records.

The platform has three major themes:


  • Deliver HTML5 Dynamic Scalable Applications
    • Reduce response time with low latency data exchange using WebSocket
    • Simplify data parsing for portable applications with standard JSON support
    • Deliver asynchronous, scalable, high-performance RESTful services
  • Increase Developer Productivity
    • Simplify application architecture with a cohesive integrated platform
    • Increase efficiency with reduced boilerplate code and broader use of annotations
    • Enhance application portability with standard RESTful web service client support
  • Meet the most demanding enterprise requirements
    • Break down batch jobs into manageable chunks for uninterrupted OLTP performance
    • Easily define multithreaded concurrent tasks for improved scalability
    • Deliver transactional applications with choice and flexibility
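The batch theme above (breaking jobs into manageable chunks) comes from JSR 352. The following is a plain-Java-SE sketch of the chunk idea only, not the actual javax.batch API; all names here are illustrative.

```java
import java.util.ArrayList;
import java.util.List;

// Plain-SE sketch of JSR 352's chunk model: read items, process them, and
// commit writes in fixed-size chunks, so a failure only affects the current
// chunk rather than the whole job. Class and method names are invented.
public class ChunkSketch {

    // Process 'items' in chunks of 'chunkSize'; each inner list is one commit unit.
    static List<List<Integer>> runInChunks(List<Integer> items, int chunkSize) {
        List<List<Integer>> committed = new ArrayList<>();
        List<Integer> chunk = new ArrayList<>();
        for (int item : items) {
            chunk.add(item * 2);              // "process" step: double each item
            if (chunk.size() == chunkSize) {  // chunk boundary: "write"/commit
                committed.add(chunk);
                chunk = new ArrayList<>();
            }
        }
        if (!chunk.isEmpty()) committed.add(chunk); // commit the final partial chunk
        return committed;
    }

    public static void main(String[] args) {
        List<Integer> items = List.of(1, 2, 3, 4, 5);
        System.out.println(runInChunks(items, 2)); // [[2, 4], [6, 8], [10]]
    }
}
```

In the real API, a reader, processor, and writer are declared in Job XML and the container drives the chunk loop with checkpointing; this sketch only shows the commit-interval idea.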

This “pancake” diagram of the major components helps in understanding how they work with each other to provide a complete, comprehensive, and integrated stack for building your enterprise and web applications. The newly added components are highlighted in orange:


In this highly transparent and participatory effort, there were 14 active JSRs:

  • 342: Java EE 7 Platform
  • 338: Java API for RESTful Web Services 2.0
  • 339: Java Persistence API 2.1
  • 340: Servlet 3.1
  • 341: Expression Language 3.0
  • 343: Java Message Service 2.0
  • 344: JavaServer Faces 2.2
  • 345: Enterprise JavaBeans 3.2
  • 346: Contexts and Dependency Injection 1.1
  • 349: Bean Validation 1.1
  • 352: Batch Applications for the Java Platform 1.0
  • 353: Java API for JSON Processing 1.0
  • 356: Java API for WebSocket 1.0
  • 236: Concurrency Utilities for Java EE 1.0

The newly added components are highlighted in bold.

And 9 Maintenance Release JSRs:

  • 250: Common Annotations 1.2
  • 322: Connector Architecture 1.7
  • 907: Java Transaction API 1.2
  • 196: Java Authentication Service Provider Interface for Containers
  • 115: Java Authorization Contract for Containers
  • 919: JavaMail 1.5
  • 318: Interceptors 1.2
  • 109: Web Services 1.4
  • 245: JavaServer Pages 2.3

Ready to get rolling?

Binaries

Tools

Docs

A few articles have already been published on OTN:

And more are coming!

This blog has also published several TOTD on Java EE 7:

All the JSRs have been covered in the Java Spotlight podcast:

The latest issue of Java Magazine is also loaded with tons of Java EE 7 content:


Media coverage has started showing as well …

And you can track a lot more here.
You can hear the latest and greatest on Java EE 7 by watching replays from the launch webinar:


This webinar consists of:

  • Strategy Keynote
  • Technical Keynote
  • 16 Technical Breakouts with JSR Specification Leads
  • Customer, partner, and community testimonials
  • And much more

Do you feel enabled and empowered to start building Java EE 7 applications?
Just download Java EE 7 SDK that contains GlassFish Server Open Source Edition 4.0, tutorial, samples, documentation and much more.
Enjoy!

Previous situation:

From Oracle Database Cloud Service [Oracle presentation, Feb 15, 2013]

as well as: New Java Resources for Windows Azure! [Windows Azure blog, July 31, 2012]

… Make the Windows Azure Java Developer Center your first stop for details about developing and deploying Java applications on Windows Azure. We continue to add content to that site, and we’ll describe some of the recent additions in this blog.

Using Virtual Machines for your Java Solutions

We rolled out Windows Azure Virtual Machines as a preview service last month; if you’d like to see how to use Virtual Machines for your Java solutions, check out these new Java tutorials. …

New in Access Control

Included in the June 2012 Windows Azure release is an update to the Windows Azure Plugin for Eclipse with Java (by Microsoft Open Technologies). …

The Java part of this partnership dates back to GlassFish and Java EE 6 everywhere, even in the Azure cloud! [Oracle’s The Aquarium blog, Jan 18, 2011]

Microsoft’s technical architect David Chou has a detailed blog entry on how to run a recent GlassFish 3.1 build on the Microsoft Azure Platform (wikipedia). The article builds on this other recent blog entry on running Java applications in Azure and adds GlassFish-specific instructions.

In Azure terminology, the article discusses setting up a Worker Role using Visual Studio, reserving Ports, setting up a Startup Task (for the JVM), and configuring the Service, GlassFish in this case. This uses Windows Server 2008 (a GlassFish supported platform) and a zip install of GlassFish.

It’s early days (best practices are still needed for working around some of the cloud-inherent limitations), but with this support of GlassFish, the Azure platform now has full support for Java EE 6!
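The setup the article describes (a Worker Role, reserved ports, a startup task for the JVM and GlassFish) can be sketched as a classic Azure service definition. The role name, script name, and port numbers below are illustrative assumptions, not taken from Chou’s post.

```xml
<!-- Hypothetical ServiceDefinition.csdef sketch for hosting GlassFish in a
     Worker Role: a startup task installs the JDK and a zip install of
     GlassFish, and an input endpoint reserves the public HTTP port. -->
<ServiceDefinition name="GlassFishOnAzure"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WorkerRole name="GlassFishWorkerRole" vmsize="Small">
    <Startup>
      <!-- Assumed script: fetches the JDK and unzips GlassFish on the instance -->
      <Task commandLine="install-jdk-and-glassfish.cmd"
            executionContext="elevated" taskType="simple" />
    </Startup>
    <Endpoints>
      <!-- Reserve port 80 externally, forwarded to GlassFish's default 8080 -->
      <InputEndpoint name="HttpIn" protocol="tcp" port="80" localPort="8080" />
    </Endpoints>
  </WorkerRole>
</ServiceDefinition>
```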

which was then followed by a Java wishlist for Windows Azure [Arun Gupta, Miles to go … weblog among Oracle technical blogs, Feb 11, 2011]

TOTD #155 explains how to run GlassFish in Windows Azure. It works, but as is evident from the blog, it’s not easy or intuitive. It uses a Worker Role to install the JDK and GlassFish, but the concepts used are nothing specific to Java. Microsoft has released the Azure SDK for Java and the AppFabric SDK for Java, which is a good start, but there are a few key elements missing IMHO. These may be known issues but I thought of listing them here while my memory is fresh 🙂

Here is my wish list to make Java better on Windows Azure:

  1. Windows Azure Tools for Eclipse has “PHP Development Toolkit” and “Azure SDK for Java” but no tooling from the Java perspective. I cannot build a Java/Java EE project and say “Go Deploy it to Azure” and then Eclipse + Azure do the magic and provide me with a URL of the deployed project.
  2. Why do I need to configure IIS on my local Visual Studio development setup for deploying a Java project?
  3. Why do I have to explicitly upload my JDK to Azure Storage? I’d love to specify an element in the “ServiceConfiguration” or wherever appropriate which should take care of installing the JDK for me in the provisioned instance. And also set JAVA_HOME for me.
  4. Allow to leverage clustering capabilities of application servers such as GlassFish. This will also provide session-failover capabilities on Azure 🙂
  5. Sticky session load balancing.
  6. If Windows VM crashes for some reason then App Fabric restarts it which is good. But I’d like my Java processes to be monitored and restarted if they go kaput. And accordingly Load Balancer switches to the next available process in the cluster.
  7. Visual Studio tooling is nice but allow me to automate/script the deployment of project to Azure.
  8. Just like the Web, Worker, and VM roles – how about a Java role?
  9. And since this is a wishlist, NetBeans is the best IDE for Java EE 6 development. Why not have a NetBeans plugin for Azure?
  10. A better integration with Java EE APIs and there are several of them – JPA, Servlets, EJB, JAX-RS, JMS, etc.
  11. The “happy scenario” where everything works as expected is fine, but that rarely happens in software development. The availability of debugging information is pretty minimal in the “not so happy scenario”. Visual Studio should show more information if the processes started during “Launch.ps1” cannot start correctly for some reason.

And I’m not even talking about management, monitoring, administration, logging etc.

Thank you Microsoft for a good start with Java on Azure, but it’s pretty basic right now and needs work. I’ll continue my exploration!

Christmas is coming later this year … and I’ll be waiting 🙂

See also:

Windows Embedded is an enterprise business now, like the whole Windows business, with Handheld and Compact versions to lead in the overall Internet of Things market as well

OR Windows Embedded: Recommitting to x86 across all of the edge devices of the future intelligent systems of enterprise customers and consumers while pushing ARM along its current positions in mobile and real-time, which essentially corresponds to the Windows 7 licensing and pricing described by this source
OR Windows Embedded enterprise solutions strategy based on creating actionable operation intelligence extended to edge devices in retail and hospitality, healthcare, manufacturing, and automotive industries
OR Capitalizing on the Internet of Things [WindowsEmbedded YouTube channel, March 20, 2013] and Transforming Business 
OR Building Edge Devices & Intelligent Solutions [WindowsEmbedded YouTube channel, March 20, 2013]
OR (as stemming from The future of Windows Embedded: from standalone devices to intelligent systems [this same ‘Experiencing the Cloud’ blog, March 9-29, 2012]; note, however, that ARM architecture support was delivered only in the Handheld and Compact versions despite the original hint included in that post)
An intelligent system built on Windows Embedded—with the expertise of the extensive community of established Windows Embedded partners—extends the power of Windows and Microsoft technologies to edge devices. Our portfolio of products powers solutions that meet unique industry needs and span enterprises of any size and complexity.
coinciding with:
1.  Microsoft betting on boosting Windows RT demand with top level ARM SoCs from its SoC partners, Windows 8.1 enhancements, Outlook addition to the Office 2013 RT and very deep tactical discounts to its OEM partners for tablet offerings of more value and capability [‘Experiencing the Cloud’, June 6, 2013] and
2. “Cloud first” from Microsoft is ready to change enterprise computing in all of its facets [‘Experiencing the Cloud’, June 4, 2013], as well as
3. Visual Studio 2012 Update 2 is here [The Visual Studio Blog, April 4, 2013], which, according to Announcing Visual Studio 2012 Update 2 CTP 2 [BlendInsider, Jan 30, 2013], provides the utmost developer productivity, finally achieving uniformity in XAML-based embedded user experience design as well, with one version of Blend for everything (highlighted inserts are mine):

as Windows Azure provides a leading cloud application platform for all that (download):
and an excellent testimony to that is given in Discovering Intelligent Systems at work in Manufacturing [Windows Embedded Blog, Nov 27, 2012], from which it is important to include the basic story (just substitute “‘intelligent’ screwdriver” with “any enterprise or consumer device enhanced with ‘intelligence’”, “larger network in the factory” with “classic and mobile Internet”, and the “backend” with “Windows Azure” to understand the enormous potential becoming available for Microsoft in the Internet of Things market):

Hey everyone, recently our Windows Embedded team was on a customer site visit in Europe, and we came across a fantastic example of Intelligent Systems in action. While we were touring an automobile manufacturing plant, we observed the line using electric screwdrivers like the one pictured below. They had two cables running into them. Power and Ethernet. We asked the tour director about the network cable, and they explained that the screwdriver was actually an ‘intelligent’ screwdriver.
We smiled at the thought of this basic piece of hardware actually being able to think about what it was doing. Then he explained it and we were amazed. The screwdriver was hung off a manufacturing line Windows Embedded Compact PC that was connected to a larger network in the factory. The backend provided the screwdriver engineering specs about the screw going into that location on the car, including the required torque and even the number of revolutions that Class 1 screw should take to achieve the desired torque. So, when the technician popped the screw into the chassis, all they had to do was fire the trigger, and everything was automatic. They even had some scenarios where this was done using robotic arms instead of people.
When the screw was installed in the car, a data point was generated that came back down the network cable and registered in the factory database. Basically, an ‘OK’, or ‘NOT OK’ was registered, and in the case of either the torque being missed, or that torque being achieved in an unexpected number of revolutions, a flag was popped to investigate further. In summary, the car would not get off the production line if the quality bar wasn’t met.
We have learned since this visit that a number of our partners, and several large automotive manufacturers have deployed this technology in their factories both here in North America and in Europe.
The volume of parts going into just one car is massive, a true big data story, and the business doesn’t necessarily want to know about the hundreds of thousands of screws installed in their factory. What they do want to know is when a threshold like an engineering spec is missed. This type of approach enables business critical data to be presented, relevant, and not washed out in the volumes of activities/events happening minute by minute on the factory floor. …
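The OK/NOT-OK decision described in the story is straightforward to sketch. The class name, spec torque, tolerance, and revolution range below are hypothetical illustrations, not values from any actual factory system.

```java
// Hypothetical sketch of the backend's pass/fail check for one fastening:
// OK only if the measured torque is within tolerance of the engineering
// spec AND was reached in the expected number of revolutions. All spec
// values below are invented for illustration.
public class FasteningCheck {

    static final double SPEC_TORQUE_NM = 10.0;  // assumed engineering spec
    static final double TOLERANCE_NM = 0.5;     // assumed allowed deviation
    static final int MIN_REVS = 20;             // assumed expected range
    static final int MAX_REVS = 28;

    static String evaluate(double torqueNm, int revolutions) {
        boolean torqueOk = Math.abs(torqueNm - SPEC_TORQUE_NM) <= TOLERANCE_NM;
        boolean revsOk = revolutions >= MIN_REVS && revolutions <= MAX_REVS;
        // "NOT OK" pops a flag to investigate further; the car does not
        // leave the line until the quality bar is met.
        return (torqueOk && revsOk) ? "OK" : "NOT OK";
    }

    public static void main(String[] args) {
        System.out.println(evaluate(9.8, 24));  // torque and revolutions in spec
        System.out.println(evaluate(10.1, 40)); // torque fine, revolutions suspicious
    }
}
```

Only the threshold violations travel up to the business layer, which is exactly the “present relevant data, not every event” approach the post describes.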

In the IDC iView, sponsored by Microsoft, The Rise of Intelligent Systems: Connecting Enterprises and Smart Devices in Seamless Networks [April 18, 2012], you can find the following market forecast:

embedded and intelligent systems represent a much larger opportunity than the PC, tablet, or even the smartphone market. IDC estimates that the intelligent systems market will grow from 19% of all major electronic system unit shipments in 2010, to more than 27% of all systems by 2016. Revenue for the intelligent systems market will grow from more than $649 billion in 2012, to more than $1.4 trillion in 2016 (PCs and smartphones excluded from market-size numbers).
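As a back-of-the-envelope sanity check on those figures (dollar amounts taken from the forecast above, everything else illustrative): growing from roughly $649 billion in 2012 to $1.4 trillion in 2016 implies a compound annual growth rate of about 21%.

```java
// Quick check on the IDC forecast quoted above: $649B (2012) growing to
// $1,400B (2016) over four years implies roughly a 21% CAGR.
public class ForecastCagr {

    // Compound annual growth rate: (end / start)^(1 / years) - 1
    static double cagr(double start, double end, int years) {
        return Math.pow(end / start, 1.0 / years) - 1.0;
    }

    public static void main(String[] args) {
        double rate = cagr(649.0, 1400.0, 4);
        System.out.printf("Implied CAGR 2012-2016: %.1f%%%n", rate * 100); // about 21.2%
    }
}
```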

On the market for more than five years, with more than 5 million cars sold already, and in joint development since 2005, Ford SYNC, based on Microsoft embedded technology, is the best showcase of both the market potential and the level of achievement possible in this post-PC market for Microsoft:

The Ford Focus now comes with the optional Ford Sync system, which provides voice control and smartphone app integration. Jason Johnson, of Ford, takes us through a detailed demonstration of the system, showing off its ability to recognise navigation and phone book commands, as well as its wireless hot spot feature.

Soon we will have further advancements: Ford to Show the Smarter Way to Get There at Computex 2013 [press release, May 23, 2013], which you can follow on a special Ford Motor Company – Computex FB site that currently contains teaser videos about Future Technology Trends, Open Innovation and Device Interaction featuring Microsoft as well (note that those things are quite necessary as competition is getting stronger).

  • Ford will be the only automaker at Computex 2013, the largest computer exhibition in Asia
  • Ford will make several major announcements on its smart technologies for both Taiwan and Asia Pacific and Africa markets
  • Ford to showcase its most advanced class-leading technologies, designed to take the driving experience for customers to a new level
Inserted later: Ford Press Conference Highlights at Computex 2013  [FordAPA YouTube channel, June 6, 2013]
Ford Motor Company today announced it will bring its Ford Developer Program to markets in Asia Pacific and Africa to allow developers to create voice-activated apps for the car, further reinforcing its position as a global leader in technology.
    TAIPEI, Taiwan, May 23, 2013 – Ford Motor Company will show the Smarter Way to Get There at Computex 2013 with the most advanced class-leading technologies to further enhance the driving experience for customers.
    Ford will be the only automaker at Asia’s biggest computer exhibition from June 4-8, 2013, where it will make several major announcements for both Taiwan and Asia Pacific.
    “As one of the world’s largest and most influential technology shows, Computex is the ideal platform for Ford to showcase how our smart technologies are improving the driving experience for our customers in the digital age,” said John Lawler, chairman and CEO, Ford Motor China.
    “In a world where consumers want to be connected all the time, be it at home, in the office or in their cars, we have a great opportunity to provide driver-connect technologies in our vehicles which enable drivers to stay connected through voice commands while keeping their hands on the wheel and eyes on the road. The technologies we are bringing to our vehicles not only give our customers a connected driving experience, they also make that experience simple, safe and personalized.”
    The Ford stand at Computex will not only feature the company’s most advanced technology developments, but also the all-new Ford Kuga, dubbed by Ford as the “Smarter SUV” because of its fuel economy, versatility and new technology that makes driving easier and more fun.
    Inserted later: Ford at Computex 2013 – Panel Discussion
    [FordAPA YouTube channel, June 5, 2013]
    On June 4, Ford will be part of the Computex Smart Living Industry Forum at which Edward Pleet, Connected Services Director, Ford Motor Company, Asia Pacific and Africa, and Europe will discuss The Smart Living, Networked Society.
    The Ford booth will be located at Taipei World Trade Center Nangang Exhibition Hall, 4th Floor, booth number M2005.


    OR Windows Embedded: Recommitting to x86 across all of the edge devices of the future intelligent systems of enterprise customers and consumers, while pushing ARM along its current positions in mobile and real-time. This essentially corresponds to the Windows 7 licensing and pricing described by this source as follows (here only WIN7 COMPACT (CE) has ARM support as well):
    (Click here or on the above image to see the full table. Note also that the true enterprise licensing via the even cheaper SELECT and EA (Enterprise Agreement) programs is not shown in the table; for explanations also see WES 7 “E” & “P”, WES SKU Differences, FES, FES 7 Pro, FES 7 Ultimate, WES vs FES, FES Pro & Ultimate SKU Differences, Win7 Compact (CE), Win7 Compact (CE) SKU Differences, Win7 Compact (CE) OS Components and “SKU rationale” from Microsoft.) On that table I overlaid the corresponding Windows Embedded 8 products and their already known SKUs (like General Embedded / NR / Entry for Windows Embedded Compact 2013, to be generally available on June 13) or supposed ones (like Standard ?…? / Standard ?Enterprise? for Windows Embedded 8 Standard).

    Note that the above table could be misleading since it represents only low-volume purchases, while Microsoft also uses License Packs in which the per-unit price decreases non-linearly with the number of licenses in the Pack. Fortunately I have found current trade data records for WINDOWS EMBEDDED STANDARD 8 EMB ESD OEI RUNTIME -7WT-00094(N-77P-3153) [April 9, 2013] and WIN PRO EMBEDDED 8 EMB ESD OEI -42C-00051(N-77P-3154) [April 9, 2013] from Taiwan to India, which I could use as Model 1 and Model 2 for the supposed pricing of Windows Embedded 8 Standard; see the results on the above right. The real curve could certainly be less steep (e.g. the model numbers were “more decreased” in trade declarations for the larger License Packs, representing higher absolute value, in order to decrease the absolute tax even more), so this only gives a rough idea for License Packs.

    It is also important to include here the reasoning behind Isn’t a Linux or Android solution cheaper? [one of the FAQs answered by Avnet Embedded, May 1, 2013]:

    Linux or Android solutions may seem cheaper initially. However, the Total Cost of Ownership (TCO) should be taken into account as a useful metric for assessing the overall cost impact of your investment. For example:

    • Acquisition costs—Inexpensive comparable products can cost as much or more than Windows to acquire and support.
    • Total costs—Acquisition costs are a very small component of TCO. Even when the costs of different operating systems are comparable, research shows that Windows often offers a lower TCO because of cost advantages in the other, larger components, such as staffing and downtime.
    • Cost vs. value—In addition to what you must pay for, if you are making an investment in IT you should also consider what you will get in return, including features or capabilities that improve productivity and deliver additional value.

    To find out more about the TCO of Windows Embedded, read ‘The Total Cost of Ownership (TCO) benefits of Windows Embedded software’ ebook

    If the runtime license still looks too expensive, then it is important to consider that with the x86-based Windows Embedded 8 we are talking about very special types of devices. Here is how Microsoft is representing that x86-only focus on top of the “edge devices of the future intelligent systems of enterprise customers”:
    It also has a very strong industry focus: Retail (from kiosks to ATMs), Manufacturing and Health Care. So we can proceed to the other post titles, which are equally important to properly represent the redefined Windows Embedded positioning:

    OR Windows Embedded enterprise solutions strategy based on creating actionable operation intelligence extended to edge devices in retail and hospitality, healthcare, manufacturing, and automotive industries

    OR Capitalizing on the Internet of Things [WindowsEmbedded YouTube channel, March 20, 2013] and watch also: Transforming Business

    The Internet of Things is prompting businesses to re-think how they use their digital assets. Kevin Dallas, GM of Windows Embedded at Microsoft, tells GigaOM Research’s Adam Lesser how companies can build intelligent systems to take advantage of the data their devices are already generating, for better business intelligence.

    OR Building Edge Devices & Intelligent Solutions [WindowsEmbedded YouTube channel, March 20, 2013]

    To be a part of the Internet of Things, businesses need the right kinds of devices. Kevin Dallas, GM of Windows Embedded at Microsoft, tells GigaOM Research’s Adam Lesser what OEM/ODMs should think about as they help their customers build intelligent systems to take advantage of the data their devices are already generating.

    Other videos in the “Building Edge Devices & Intelligent Solutions” series:
    Dell Wyse, HP, Omnicell and ParTech, Inc. I will even embed here Bravo Outdoor Advertising Reaches Greater Heights With Intelligent System [WindowsEmbedded YouTube channel, Feb 11, 2013], as it shows very well how the range of edge devices could be hugely extended over the years (here with digital signage on public transport in Ireland):

    Adrian O’Farrell, former marketing director for Bravo Outdoor Advertising, describes the many benefits — flexibility, customization and cost — of digital signage as opposed to traditional advertising on the Dublin bus system.

    OR
    (as stemming from The future of Windows Embedded: from standalone devices to intelligent systems [this same ‘Experiencing the Cloud’ blog, March 9-29, 2012]; note however that ARM architecture support was delivered only in the Handheld and Compact versions, despite the original hint included in that post)

    An intelligent system built on Windows Embedded—with the expertise of the extensive community of established Windows Embedded partners—extends the power of Windows and Microsoft technologies to edge devices. Our portfolio of products powers solutions that meet unique industry needs and span enterprises of any size and complexity.

    From Learn more about intelligent systems subpage linked on Microsoft > Windows Embedded > Products and Solutions page [May 6, 2013] which page also contains:

    Unlock intelligence with the full breadth of Microsoft technologies
    What happens when devices at the edge of enterprise networks are connected to software and services in the back end or the cloud? Suddenly, a rich new source of information is available. The data has always been there—but today, an integrated stack of Microsoft technologies, extending from the server room to the customer’s fingertips, can help evolve business intelligence to operational intelligence by enabling enterprises to identify and act on opportunities that would otherwise be out of reach. For OEMs, the ability to harness the power of Microsoft technologies to capitalize on data gathered from edge devices translates to new and expanded potential for creating solutions for customers.

    [The Big Shift From Software to Cloud Services video of Nov 13, 2012 from WindowsEmbedded YouTube channel is quite important to embed here, since it clearly shows that Microsoft is shifting from being a software company to a hardware & services company:]

    Windows Azure Marketing General Manager Eron Kelly discusses Microsoft Corp.’s focus on delivering software through the cloud and the opportunity it creates for devices and intelligent systems.
    One Microsoft, everything you need
    When connecting industry devices powered by Windows Embedded to back-end systems running SQL Server on-premise—or secured by Azure in the cloud–business data is without boundaries. Those building intelligent system solutions will shorten development time, and simplify implementation and management by harnessing the full breadth of Microsoft technologies, from the rich, familiar experience of Windows, to simplified management with System Center and security with Forefront. Device manufacturers, evaluate your intelligent systems business capabilities with Microsoft.
    Devices at the network edge: critical infrastructure for intelligent systems
    Intelligent systems are revolutionizing business, and Microsoft is focused on driving innovation in a number of industries, including retail and hospitality, healthcare, manufacturing and automotive. Whether streamlining inventory management with industry handheld devices, securely handling medical records using a thin-client solution, reinventing the customer experience with point-of-sale devices, transforming factory efficiencies with embedded robots, or reimagining the driving experience with an in-car infotainment system, edge devices are all around us. Powering these devices with Windows Embedded harnesses Microsoft technologies to create customized solutions that address specific industry needs and drive innovation—and profits—forward.
    According to IDC, unit shipments of IP-connected embedded systems, excluding mobile phones and PCs, will more than double by 2015, growing from approximately 1.4 billion in 2010 to over 3.3 billion.

    Source: IDC, “Smart Tech Market Forecast and 2020 Vision.”

    Specialized devices in the marketplace

    Select an industry [with a latest video of May 6, 2013 embedded here for each from WindowsEmbedded YouTube channel, in order to let you see how Microsoft and Windows Embedded are providing the technology, strategic leadership and partner ecosystem that are driving innovation]

    As far as the Windows Embedded 8 is concerned we have a pretty clear picture now:
    Windows Embedded 8 [Microsoft > Windows Embedded > Products and Solutions > Windows Embedded Products page, May 6, 2013]

    From this page the basic offerings (based on Windows 8) are the following ones:

    The Windows Embedded 8 family of platforms and tools helps companies extend their operational intelligence [by harnessing the flow of data across industry devices on the edge and back-end systems], using their existing IT infrastructure and industry devices that securely exchange data with back-end systems. Offering the same rich, multi-touch experience as Windows 8 and Windows Phone 8, Windows Embedded 8 delivers compelling user experiences on a range of industry devices.

    Windows Embedded 8 Pro

    The power and flexibility of Windows 8 in a platform designed specifically for building edge devices [digital signs or point-of-service terminals in a store environment, handheld devices, robots on the manufacturing floor, or thin client devices in hospitals to transform business intelligence to actionable operation intelligence] and intelligent systems solutions [such as kiosks, medical devices, digital signage and HMI (human machine interface)].

    • Deliver a user experience that’s identical to Windows 8.
    • Design custom apps that feature the fast, fluid behavior of Windows 8.
    • Security features such as BitLocker and Trusted Boot.
    • Compatible with line-of-business and productivity apps.

    Learn more

    [The Next Generation Digital Signage on Display at Computex 2012 video of June 25, 2012 from WindowsEmbedded YouTube channel is quite important to embed here, since it clearly discusses the direction for digital signage systems where full Windows compatibility is essential:]

    Windows Embedded’s John Boladian and Intel’s Gark Tan show off today’s interactive digital signage that create an engaging and connected experience for customers through combined technologies from Kinect for Windows, Windows Embedded and Intel’s Core processors.

    Windows Embedded 8 Standard

    Offers flexibility for purpose-built devices, such as thin clients, kiosks [digital signage] and automated manufacturing solutions.

    • Compelling UI, powerful app support, security and manageability of Windows 8.
    • Modular format allows you to use only the components needed.
    • Ensure consistent configuration with embedded-specific lockdown features.
    • Custom branding feature.

    Learn more 

    [The Demo: Windows 8 on Embedded Devices video of Nov 13, 2012 from WindowsEmbedded YouTube channel is quite important to embed here, since it clearly shows actually the best example of a purpose-built ruggedized device (from long-time partner Motion Computing) based on Windows 8, which is a kind of prototype of similar “custom branded” devices based on Windows Embedded 8 Standard:]

    Embedded Group Manager John Coyne shows off an industry application on a PC running Windows Embedded 8.

    Windows Embedded 8 Industry

    A consistent, streamlined application platform that shortens development cycles for specific industry device scenarios in retail, manufacturing and other industries [such as POS terminals, ATMs, automated manufacturing solutions and medical devices].

    • Compelling UI, powerful app support, security and manageability of Windows 8.
    • Ensure consistent configuration with lockdown features.
    • Fixed platform provides a consistent development experience.
    • Plug and play peripheral capabilities with POS for .NET.

    Learn more

    [The Intelligent Systems Making Vending Machines Fun at Computex 2012 video of June 25, 2012 from WindowsEmbedded YouTube channel is quite important to embed here, since it demonstrates an interactive smart vending machine where retail peripheral support is essential:]

    Windows Embedded’s John Boladian and Intel’s Gark Tan discuss the value and growth of intelligent systems across devices and the cloud. By highlighting an interactive smart vending machine, they show that intelligent systems not only make the purchase experience fun, but give the vendor a competitive advantage through increased connectivity, data collection, manageability and business analytics.

    [Read also: Windows Embedded 8 Industry: A Modern OS for Industry Devices [Windows Embedded blog, April 2, 2013] “On the heels of our recent release of the Windows Embedded 8 platform, we’re making another member of the Windows Embedded family available today — Windows Embedded 8 Industry. David Wurster, Microsoft Windows Embedded’s senior product manager, has details on how Windows Embedded has evolved beyond point-of-service (POS) systems in retail to do much more in the Windows 8 era.”]

    Compare Windows Embedded 8 products

    Features                                                                                | Pro | Standard | Industry
    Rich multitouch, multi-user interface                                                   |  ✓  |    ✓     |    ✓
    Connectivity features, including connected standby, mobile broadband and WiFi           |  ✓  |    ✓     |    ✓
    Powerful security features, including anti-malware support, BitLocker and Trusted Boot  |  ✓  |    ✓     |    ✓
    Lockdown support, including unified write filter, gesture and keyboard filters          |  –  |    ✓     |    ✓
    Retail peripheral support                                                               |  –  |    –     |    ✓
    Custom branding                                                                         |  –  |    ✓     |    –
    Full Windows compatibility                                                              |  ✓  |    –     |    –
    Fixed image                                                                             |  ✓  |    –     |    ✓
    Easy end-to-end device management with Microsoft System Center                          |  ✓  |    ✓     |    ✓
    Modular OS                                                                              |  –  |    ✓     |    –

    In addition, the following complementary offerings, which are not based on Windows 8, are shown on the same page as well:

    Windows Embedded 8 Handheld

    Built on Windows Phone 8 to offer intuitive line-of-business applications [such as package delivery, mobile point-of-service, communication and collaboration, and scanning and data capture], with proven integration and security for industry handheld devices.

    • Common application programming interfaces so that devices easily integrate.
    • Manage devices across the network through the use of Windows Intune and SCCM 2012.
    • Benefit from a large selection of Windows Phone 8 apps.
    • Use the Windows Phone 8 SDK and Visual Studio 2012 to create custom apps.

    Learn more

    [Read also: Windows Embedded 8 Handheld joins the Windows Embedded 8 family [Windows Embedded blog, Jan 14, 2013] “Windows Embedded 8 Handheld is more than just the successor to Windows Embedded Handheld 6.5. It’s a complete re-imagination of the enterprise mobile device. With Windows Embedded 8 Handheld, the platform is now based on the Windows Phone 8, which itself is built on Windows 8. In addition to the highly-praised Windows Phone 8 user interface, both Windows Phone 8 and Windows Embedded 8 Handheld now share a common kernel with Windows.”]

    Windows Server 2012 for Embedded Systems

    Binary-identical to Windows Server, a proven, highly reliable operating system for embedded applications in server appliances [such as in telecommunications, medical imaging, industrial automation and corporate headquarters].

    • Enable informed, real-time decisions that keep your enterprise ahead of the competition.
    • New storage features optimize the reliability and efficiency of data stores and scale to meet demand and reduce costs.
    • Equip employees with insightful analysis and reporting services.

    Learn more

    Microsoft SQL Server 2012 for Embedded Systems

    A database management tool, binary identical to Microsoft SQL Server, for use with purpose-built hardware running the Windows Embedded Server operating system [such as in telecommunications, medical imaging, industrial automation and corporate headquarters].

    • Glean new business insights from data, and harness it in real time.
    • Provide access to powerful data analysis and visualization tools.
    • Flexibility and usability for auditing and security manageability across the SQL Server environment.

    Learn more

    Windows Embedded 8 Mission and Vision (from  Microsoft’s Intelligent Systems [Microsoft > Windows Embedded > Intelligent Systems page, May 7, 2013])

    Actionable data fueled by intelligent systems is the new currency for business, and its value is expected to increase exponentially, improving how people live, learn and conduct business. Gartner predicts that big data will “deliver transformational benefits to enterprises” in the coming 2-5 years, and that by 2015, enterprises that employ big data strategies “will begin to outperform their unprepared competitors within their industry sectors by 20 percent in every available financial metric.” (Source: Hype Cycle for Cloud Computing, August 2012.) With intelligent systems, Microsoft is helping organizations access and transform critical data into operational intelligence by providing a wide range of operating systems, tools, systems and services.

    Our mission is to drive business growth and competitive advantage for our enterprise customers and partners through technology innovations that capitalize on the vast potential of data. Your investment in Windows Embedded is backed by Microsoft’s proven commitment to intelligent systems through more than 15 years of experience in the market.

    Industry Focus (from Microsoft’s Intelligent Systems [Microsoft > Windows Embedded > Intelligent Systems page, May 7, 2013])

    Intelligent systems are revolutionizing business and Microsoft is focused on driving innovation in retail and hospitality, healthcare, manufacturing, and automotive industries. Customized solutions built with Windows Embedded harness Microsoft technologies to address specific industry needs by connecting devices on the edge of enterprise networks with existing IT infrastructures on a single platform. The resulting intelligent systems help retailers deliver personal, seamless and differentiated experiences to customers; manufacturers increase efficiencies at every level of the operation to deliver innovative services, implement best-practice operations and enhance planning and decision-making processes; healthcare institutions optimize patient care and outcomes by bringing people, processes and information together; and automakers evolve “intelligent car” experiences, allowing drivers to access innovative in-car communication, infotainment, navigation and fuel-efficiency features.

    Our Solutions Approach (from Microsoft’s Intelligent Systems [Microsoft > Windows Embedded > Intelligent Systems page, May 7, 2013])

    Microsoft’s tools and technologies for intelligent system solutions extend beyond a software package or device; the great power and flexibility of industry devices running the Windows Embedded platform is that it works in concert with Microsoft’s cloud products and services, and with existing IT infrastructure to customize a complete connected system.

    Windows Embedded minimizes risk and complexity by providing one trusted platform with which to build solutions and broaden business opportunity. Windows Embedded fits with your needs, connecting data across a diverse set of technologies, providing compatibility across your existing systems, and enabling customization through a worldwide network of partners, to increase ease of use and drive efficiency. And a Microsoft solution extends the intelligence of your organization, increasing opportunities for your workforce to act on data and insights that would otherwise be out of reach.

    Adjacent to the above “Windows Embedded Products” page there is a “Product Lifecycles” [May 7, 2013] page which contains the following

    Road map for intelligent systems

    With Windows Embedded 8, Microsoft extends Windows 8 to intelligent systems, creating the next wave of enterprise tools and technology. The release schedule includes the Windows Embedded 8 family of device operating systems, each with a distinct feature set that includes the building blocks for an intelligent system across hardware, software and services.

    image

    It means that on May 7 only “Windows Embedded Compact 2013” was missing from the whole portfolio, as it was to be delivered in Q2 2013. When clicking on its “+” sign one now gets the following description (corrections came in the first week of June: the v.3 mark after Blend and “sensory input and Kinect for Windows” were deleted, while XAML for Windows Embedded, multi-core support and Snapshot Boot were added; this also coincided with the availability of the “Microsoft Windows Embedded Compact 2013 ISO-TBE” for download):

    A streamlined, componentized device operating system, Windows Embedded Compact 2013 gives developers all the tools they need to create the next generation of intelligent systems solutions. Compact 2013 provides the flexibility and real-time support to reduce time to market, while creating an easy-to-use, multi-touch experience that helps enterprise customers improve worker productivity.

    • Access to up-to-date tools such as Platform Builder, Visual Studio 2012 and Expression Blend helps developers to streamline development.
    • Support for XAML for Windows Embedded, multi-touch and multi-core support enables the creation of immersive applications.
    • Leverage the power of cloud computing through Windows Azure Application Services, giving customers a greater ability to extend their intelligence.
    • Improved file system performance and Snapshot Boot give companies the confidence that their devices will always be available, whatever their current state.

    The first description of it was given in Windows Embedded Compact v.Next uncovered [Windows Embedded Blog, Nov 14, 2012] as follows:

    Posted By David Campbell
    Program Manager

    Woo hoo, it’s finally time to share more information about the upcoming release! First, the release now officially has a name: Windows Embedded Compact 2013. (I know that folks probably have questions around why we chose this name. We thoroughly considered a long list of potential names, including Windows CE again, and Windows Embedded Compact 2013 really did receive the best response.)

    I’ll be doing a number of posts about the various key features and changes in Windows Embedded Compact 2013 over the next few posts, but I want to start with arguably the most interesting of the new features: the investments made for Visual Studio 2012 support, both ISV/app development via Visual Studio directly; and the OEM/device development experience with Platform Builder, now hosted in Visual Studio 2012!

    With all development now in Visual Studio 2012, there is no longer a need for multiple versions of Visual Studio to support Compact development alongside other Windows platforms. Plus, you’ll get many of the new features and productivity improvements available in Visual Studio 2012 when developing for Compact! We now have the same C++ toolset and standards supported everywhere. (And of course Visual Studio 2012 includes the new features from Visual Studio 2010, which were not previously available to Compact developers.)

    We also have a new CRT, which has key new functionality aligned as well. (The existing CRT on Compact hasn’t been updated in some time.) And the new optimizer supports functionality like auto-parallelization of your code and auto-vectorization–so if your processor has FP registers, the optimizer will automatically generate code to use vector FP. The 2012 C++ compiler also includes many of the language features from the new C++11 standards.

    C++11 has new language features that allow you to write better-performing, safer code, and write it faster than ever before. For example, RValue references let you operate on data without having to copy it. And C++11 brings in functional semantics to make writing code more efficient, like having anonymous functions. We also support range-based loops, letting you iterate over members of a list directly. More information is available on the Visual Studio team blog.

    .NET CF has also been upgraded to 3.9, which inherits the Windows Phone updates while still being app-compatible with 3.5. This upgrade improves performance significantly in a number of ways. .NET CF 3.9 has greatly improved performance overall, as well as memory allocation and garbage collection using the generational garbage collector. This not only improves performance, but also provides more predictability in the execution of applications. The memory footprint of the runtime is also smaller for both the framework and applications, using what is known as “the sharing server,” allowing loaded code to be reused across applications. The runtime itself is also multi-core enabled, which can improve the performance of all your applications. More information on the updated .NET CF is available on the .NET Framework blog.

    The embedded developer experience improvements of bringing the new features of Visual Studio 2012 to Windows Embedded Compact are amazing, and I’m sure you’ll be as excited as I am to get started using the new features of Visual Studio 2012, Platform Builder and the new Compact OS.

    For information on the upcoming Windows Embedded Compact release, visit www.windowsembedded.com.

    Previous versions with some important new features (my own judgement + Windows CE Wikipedia article + other inputs):

    CE7: Windows Embedded Compact 7 (March 2011)
    – Silverlight for Windows Embedded (UX C++ XAML API): application development made easy, synching designers and developers.
    – Windows Phone 7 IE with Flash 10.1 support: panning, zooming, multitouch and viewing bookmarks using thumbnails, etc
    – Multi-core support
    CE6: Windows Embedded CE 6.0 (September 2006)
    – Significant change in architecture over previous versions of CE (process address space is increased from 32 MB to 2 GB, number of processes has been increased from 32 to 32,768 etc.)
    – Incremental updates to features as R1, R2 and R3 releases
    – Silverlight introduced, Microsoft Office and PDF viewers support too.
    CE5: Windows CE 5.0 (August 2004)
    – Remote Desktop Protocol (RDP) introduced
    – Updates to Graphics and Multimedia support
    CE4: Windows CE 4.x (Jan 7, 2002)
    – .Net Compact Framework introduced
    – Since Windows CE .NET 4.2 the system uses a new shell with integrated Internet Explorer
    CE3: Windows CE 3.0 (June 15, 2000)
    – Major recode that made CE hard real time down to the microsecond level
    – Base for the Pocket PC 2000, Handheld PC 2000, Pocket PC 2002 and Smartphone 2002
    CE2: Windows CE 2.x (Sept 29, 1997)
    – Real-time deterministic task scheduling
    – Architectures: ARM, MIPS, PowerPC, StrongARM, SuperH and x86
    CE1: Windows CE 1.0 (November 1996)

    Related post: Introducing NETCF 3.9 in Windows Embedded Compact 2013 – a faster, leaner and multi-core runtime! [.NET Framework blog, Nov 16, 2012] 

    Ever since .NET Compact Framework was introduced at the PDC conference in 2001, programming with .NET has scaled from some of the smallest devices to the largest servers. With C# and Visual Basic, developers can apply the same skills to program both devices and servers to form a complete end-to-end solution. As the devices become more prevalent in our daily lives, .NET is evolving too. Abhishek Mondal, the program manager for .NET Compact Framework [note that Abhishek Mondal was the program manager for GC as well], shares the following highlights of the latest version. –Brandon

    NETCF 3.9 advances the Windows Embedded Compact Platform

    We are happy to announce that we will be including the .NET Compact Framework 3.9 in Windows Embedded Compact 2013, as part of its upcoming release. We have made major updates in this version of the .NET Compact Framework, which deliver benefits in startup time, application responsiveness and memory utilization. You should see significantly better performance characteristics of your applications on both x86 and ARM devices, running Windows Embedded Compact 2013. You can read more about the Windows Embedded Compact 2013 release at the Microsoft News Center.
    The .NET Compact Framework is a version of the .NET Framework for embedded devices. It provides .NET development support for low-end embedded devices that run the Windows Embedded Compact 2013 OS. NETCF provides a familiar and rich development platform for embedded application development, with a small footprint and an extensive set of .NET functionality. For clarity, the other Windows Embedded OSes use the desktop .NET Framework, the same version that is included with desktop Windows.
    NETCF 3.9 is based on the NETCF version that shipped with Windows Phone 7.5. The following features are the key advances in NETCF 3.9, all big steps forward for app performance:
      • New Generational Garbage Collector for more responsive apps
      • NETCF runtime is now multi-core safe to take advantage of multi-core hardware  
      • Sharing Server feature that reduces working set and improves app launch
        Another major benefit of NETCF 3.9 is Visual Studio 2012 support! You will be able to use the same tools for Windows Embedded Compact 2013 development as you use for Windows, Windows Phone and Windows Azure development. Visual C++ development for this new Windows Embedded Compact version will also be supported in Visual Studio 2012, as reported on the Visual C++ team blog.

        Applications run (a lot) faster with NETCF 3.9

        NETCF 3.9 is a faster and leaner runtime for Windows Embedded Compact 2013. We have made many changes that should enable your apps to run much faster. NETCF is also multi-core safe, enabling you to take advantage of multiple cores on embedded devices. Multiple cores are increasingly available on today’s devices, and can be an important part of delivering a compelling experience to your customers. Let’s take a more in-depth look at some of the additional improvements that are part of NETCF 3.9.
        Faster app performance
        NETCF 3.9 has greatly improved performance overall. There are three key features that will speed up your apps. Let’s start with the new garbage collector in NETCF. In lab measurements we have observed 50-60% drops in GC time. In our lab apps, we no longer see GC pauses significantly affecting app responsiveness, a problem that had been reported in the past. The new GC is a lot faster!
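To see why a generational design shortens pauses, here is a toy collector sketch (plain Python, purely illustrative; NETCF’s actual collector lives inside the native runtime). A minor collection scans only the young generation and promotes survivors, so the pause is proportional to recent allocation rather than to total heap size:

```python
# Toy generational collector: minor collections scan only young objects.
# Purely illustrative; NETCF's real GC is part of the native CLR runtime.

class Heap:
    def __init__(self):
        self.young = []   # recently allocated objects
        self.old = []     # survivors promoted out of the young generation

    def alloc(self, obj):
        self.young.append(obj)
        return obj

    def minor_collect(self, roots):
        """Scan ONLY the young generation; promote reachable survivors."""
        scanned = len(self.young)                  # work done this pause
        survivors = [o for o in self.young if o in roots]
        self.old.extend(survivors)                 # promotion
        self.young = []
        return scanned

    def full_collect(self, roots):
        """Scan everything, as a non-generational GC must on every pass."""
        scanned = len(self.young) + len(self.old)
        self.old = [o for o in self.old + self.young if o in roots]
        self.young = []
        return scanned

heap = Heap()
live = {heap.alloc(f"app-state-{i}") for i in range(1000)}
heap.minor_collect(live)                 # promote the long-lived app state
for i in range(50):                      # allocate short-lived garbage
    heap.alloc(f"temp-{i}")
work_minor = heap.minor_collect(live)    # scans only the 50 young objects
for i in range(50):
    heap.alloc(f"temp-{i}")
work_full = heap.full_collect(live)      # scans the whole heap: 1050 objects
print(work_minor, work_full)             # prints: 50 1050
```

The minor pause touches 50 objects while a full collection touches 1,050; that difference in scanned work is the shape of the win behind the reported drop in GC time.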
        For apps that use floating-point arithmetic, you may notice an additional performance boost, since NETCF takes advantage of ARM VFP instructions.
        Last, we’ll look at the new Sharing Server feature. Sharing Server enables a significant improvement in the warm start-up time of your app, particularly in scenarios where multiple applications run on a device. It is able to achieve this benefit by sharing loaded assemblies and JITed machine code across apps (including re-launching the same app).
        Efficient memory utilization of managed application
        The Sharing Server feature also enables lower memory use for NETCF 3.9 apps. As already discussed, the Sharing Server allows code to be reused across applications. In addition to benefiting app launch performance, this feature significantly lowers the aggregate memory use of devices in scenarios where multiple apps are in use.
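The idea behind the Sharing Server can be sketched as a shared cache (hypothetical names, illustration only; the real mechanism shares loaded assemblies and JITed machine code across processes at the OS level). Each assembly is loaded once, and a second app, or a relaunch of the same app, reuses the already-loaded copy:

```python
# Sketch of code sharing across apps (hypothetical API, illustration only).
# The real Sharing Server shares loaded assemblies/JITed code between
# processes; this toy just shows why aggregate load cost and memory drop.

class SharingServer:
    def __init__(self):
        self.cache = {}   # assembly name -> loaded code object
        self.loads = 0    # how many times the full load cost was paid

    def load_assembly(self, name):
        if name not in self.cache:
            self.loads += 1                 # cold load: expensive
            self.cache[name] = object()     # stands in for JITed code
        return self.cache[name]             # warm load: shared copy

server = SharingServer()
app1 = [server.load_assembly(n) for n in ("mscorlib", "System", "App1.exe")]
app2 = [server.load_assembly(n) for n in ("mscorlib", "System", "App2.exe")]
print(server.loads)   # prints: 4 -- framework assemblies loaded once, not twice
```

Two apps that would have paid six loads in isolation pay four here, because the framework assemblies are shared; the same sharing is what speeds up warm relaunch of a single app.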

        Developing apps with NETCF 3.9

        You will find that NETCF is a great addition to a modern development environment. You can use Visual Studio 2012 for development, including features such as Team Foundation Server for source control and feature management.
        Visual Studio 2012 will support Windows Embedded Compact development
        The single most compelling attraction of this release for many of you is the support for embedded development in Visual Studio 2012. This support will simplify development if you are already developing for Windows or Windows Phone as well as Windows Embedded Compact, since you can do all of your work in a single Visual Studio environment.
        If you develop exclusively for the embedded platform, then Visual Studio 2012 support will enable you to use ALM tools and TFS in your development environment. There are also other benefits to Visual Studio 2012 such as performance improvements and other tools, which you can explore and enjoy.
        Here is a snapshot of a sample managed application developed using NETCF 3.9 with VS2012:
        [image] A simple “Hello World” application on NETCF 3.9
        You can see this same app, running in Hyper-V, stopped at a breakpoint in Visual Studio 2012, using remote debugging:
        [image] Debugging a NETCF 3.9 app on Windows, using Hyper-V
        NETCF 3.9 is source compatible with NETCF 3.5
        NETCF 3.9 is a big step forward for performance, however, the APIs that you’ve used in NETCF 3.5 stay the same. As a result, you can move existing .NETCF source code and investments forward to NETCF 3.9.
        You may wonder why NETCF 3.9 is source compatible with NETCF 3.5 and not binary compatible, since .NET assemblies are compiled to the machine-independent MSIL format. There is no change in our assembly format, nor are there any compatibility checks in NETCF 3.9.
        We chose a single compatibility message for Windows Compact Embedded 2013, for both native and .NET development, which is source compatibility. The single biggest driver of this support policy is C++, which needs to be re-compiled for the ARM thumb2 instruction set, with the new Visual Studio 2012 Visual C++ compiler. We have also found that many of you pair your managed code with native implementations. As part of re-compiling that native code, we expect that you may make API changes that would affect your P/Invoke declarations. As a result, we decided that a single compatibility policy for the whole release was the best choice.
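The P/Invoke point is the general foreign-function rule: the managed declaration must mirror the native signature exactly, so recompiling and changing native APIs implies revisiting the declarations that call them. The same rule can be illustrated with Python’s ctypes against the C library (an analogy only; in NETCF these are [DllImport] declarations in C#/VB):

```python
import ctypes

# Foreign-function declarations must mirror the native signature exactly.
# (Analogy for P/Invoke; NETCF uses [DllImport] declarations instead.)
libc = ctypes.CDLL(None)   # POSIX only: symbols of the current process

strlen = libc.strlen
strlen.argtypes = [ctypes.c_char_p]    # must match: size_t strlen(const char *)
strlen.restype = ctypes.c_size_t

print(strlen(b"embedded"))             # prints: 8
# If the native side were recompiled with a changed signature, this
# declaration would pass wrong-sized arguments without any compile-time
# error -- exactly why managed code is expected to be revisited along
# with the native rebuild.
```

The declaration lives on the calling side, so nothing checks it against the rebuilt native code automatically; that is the source-compatibility argument in the paragraph above.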

        Wrapping Up

        If you are an embedded developer, I’m sure that you are excited that we are making NETCF 3.9 available for your embedded apps. We have already talked to a set of developers who are looking forward to this big update and to the significantly improved runtime performance of the apps that run on their devices. We look forward to seeing your new devices and the rich experiences that they deliver after Windows Embedded Compact 2013 is released.
        NETCF 3.9 will be made available with the Windows Embedded Compact 2013 OS, when it is released. It isn’t available at this time. It will also be included in the SDK for the OS, for use with Visual Studio 2012. Watch the Windows Embedded Newsroom for future updates on release plans.
        Follow us or talk to us on Twitter: http://twitter.com/dotnet.

        Then here is the Windows Embedded Compact 2013 presentation @Embedded World 2013 [kojtp2 YouTube channel, March 3, 2013]

        [2:55] The “Silverlight for Windows Embedded” name was changed to “XAML for Windows Embedded” because Silverlight was associated with a browser plug-in technology in developers’ minds while here we have nothing like that.

        Since this video has poor voice-recording quality, it is also worth watching the Windows Embedded Compact 2013 Technical Overview of what’s new [Microsoft Webinar Live Meeting recording, April 30, 2013], from which I will include the following slide screenshots and some transcripts of my own:

        [slide screenshots]

        [21:35] Very cool news: „an entirely rewritten and upgraded .NET Compact runtime”


        [25:25] „XAML for [Windows] Embedded” [the name changed to XAML from Silverlight] allows UI developers to write using Silverlight, in Expression Blend 5.0 with this release [vs Blend 3 in the previous], which will generate XAML describing the user interface and the user interactions. We link that in with native C++ code in the back-end, and that allows for extremely powerful interfaces while still delivering high performance, since we use native code and there’s nothing between us and the operating system, and nothing between us and the hardware, so we have much better performance, from a real-time perspective as well, not just a general performance perspective. … There is increased functionality [in this release] in terms of data binding and data context. … We’ve got new triggers that are supported. … This is still a very, very important area for Microsoft, and frankly from an embedded perspective XAML gives you in many ways a superior user interface description environment compared to HTML5. … [27:30]

        [36:00] … all-up general SKU … NR SKU for personal navigation devices … and we are coming with a brand new SKU, „Windows Embedded Compact 2013 Entry”. This is the SKU for smaller devices that don’t need XAML capability. … We haven’t announced our pricing for the 3 SKUs yet. That will be announced around the general availability [GA] timeline. … Windows Embedded Compact 2013 is still on schedule to ship in the first half of 2013. That means June. So we will be shipping and announcing the product in June. What we are giving now is a kind of sneak preview which will give you a technical introduction to the product. [27:53]

        From Q&A

        Using the same rendering engine as before

        XAML for Windows Embedded does not support C#

        More information:
        Windows Embedded Compact 2013 [MSDN Library, April 26, 2013]
        from which of particular interest are:
        What’s New (Compact 2013) [MSDN Library, April 26, 2013]
        Expression Blend and XAML for Windows Embedded (Compact 2013) [MSDN Library, April 26, 2013]
        XAML for Windows Embedded Application Development (Compact 2013) [MSDN Library, April 26, 2013]
        Developer Guides (Compact 2013) [MSDN Library, April 26, 2013]
        from which of particular interest is:
        .NET Compact Framework (Compact 2013) [MSDN Library, April 26, 2013]

        Windows Embedded Compact 7 [Windows Embedded Products Overview > Windows Embedded Compact 7 Product Details page, May 7, 2013]

        Windows Embedded Compact 7 is a componentized, real-time operating system designed for small-footprint devices at the edge of enterprise networks. With support for x86 and ARM architectures, Windows Embedded Compact 7 allows devices to leverage the latest innovations in hardware, and equips developers and device manufacturers with the tools they need to create nimble, enterprise-class intelligent system solutions, while reducing time to market.

        Top features 

        Rich user interface
        Includes XAML for Windows Embedded, a powerful technology that allows you to build interfaces that incorporate touch and gesture support.
        Flexible architecture
        Real-time operating system supports an array of hardware requirements and key processor architectures, including x86 and ARM, to power everything from tiny controls to fully automated factories.
        Secure and reliable
        One-tier security model feature is SDL compliant and helps ensure that only authenticated applications can run on an industry device, with reliable wireless connectivity and networking performance.
        Ease of development
        Familiar tools like Visual Studio and Expression Blend allow you to create attractive and intuitive user interfaces, and bring differentiated devices to market faster than ever before.

        Things you can do

        For Enterprises

          One trusted platform
          Devices running on Windows Embedded Compact 7 are covered under a 10-year support program from Microsoft. You can deploy industry devices with the assurance that technical support will be there, when and if it’s needed. And because you can continue using your existing applications based on Windows Embedded CE 6.0, there is a smooth upgrade path for using current applications while moving to the next generation of touch-enabled apps that provide an easy-to-use experience for getting things done more quickly.
          Meets your needs
          Arm your employees with a new breed of business applications harnessing touch and gesture input that showcase your company’s work and give employees better tools to get things done with intuitive access to information. Windows Embedded Compact 7 also provides a flexible device platform that can run on the smallest of devices, or power rich device experiences. And with the capabilities of a real-time operating system, you can be confident of its ability to meet the most exacting of industry requirements.
          Extend business intelligence
          Windows Embedded Compact 7 supports a variety of connectivity options, providing more flexibility for connecting industry devices to your company’s network. Support for enhanced WiFi, Ethernet, Bluetooth and USB enables you to deploy devices across your corporate network, where they can help automate business processes and generate data that leads to greater insight. Collectively, these devices can provide you with greater visibility into what’s happening throughout your company. As critical components of an intelligent system, these devices can help you make decisions in real-time, as well as formulate long-term plans for the growth of your business.

          For OEMs

          One trusted platform
          With Windows Embedded Compact 7, you can develop industry devices within the integrated environment of Platform Builder, to allow adjustments on the hard, real-time operating system while working on specific projects simultaneously. And support for Visual Studio 2008, Expression Blend 3.0 and the .NET Compact Framework 3.5 provides access to the tools that OEMs rely on. Windows Embedded Compact 7 also ensures a consistency of APIs and SDKs, making it possible to leverage past investments and current skillsets to create products that are supported by a 10-year support program from Microsoft, along with the assurance that Windows Embedded Compact 7 will be available for 15 years from the time it was first released.
          Create differentiated devices
          Windows Embedded Compact 7 includes a development framework based on XAML and supports a range of architecture options, including ARM, MIPS and x86. As a result, you have greater flexibility to create devices that match your customers’ specifications. Creating these experiences is simplified with tools such as 3D transformations and Pixel Shader effects. Your devices will give customers the ability to seamlessly share content on business networks, as well as network devices. And the introduction of a touch gesture interface allows developers to create a more natural, interactive experience.
          Extend business intelligence
          Create an experience that helps companies get more done. With Windows Embedded Compact 7, you can design a solution that’s more seamless, making it easier for companies to synchronize content with their Windows PCs. And with the Connection Manager feature and multiple connectivity options, you can ensure that businesses have the optimum level of connectivity across the workplace. Support for enhanced WiFi, Ethernet, USB and Bluetooth virtually guarantees that your device will connect with the other devices, PCs and servers already running in the enterprise. With this connectivity in place, employees will be able to remotely access Microsoft Outlook via Microsoft Air Sync. And the ability to view Adobe and Microsoft Office files will help them stay current on business developments.

            Sample device types

            Human machine interface (HMI)
            These devices provide the ability to monitor automated processes, such as manufacturing, to safeguard against diminished product quality or equipment breakdowns.
            RFID scanners
            Speed the completion of common tasks such as inventory, shipping and receiving with these devices
            Medical devices
            Sonograms and other medical devices enable doctors to monitor a baby’s health in utero and send images to researchers in real time via a wireless network
            GPS devices
            Help people stay on course to their destination with these navigation devices

            News: Building the intelligent car of the future
            [Microsoft feature story, May 7, 2013]
            Microsoft: Working with automotive industry to design an updateable car that’s easier to use and responds to the driver’s needs.

            In the 1920’s, carmakers started offering an accessory that would revolutionize the driving experience: the radio. While tooling down the road you could tune into the nightly newscast, a live jazz performance or the seventh game in the series. It provided a connected experience that replaced the steady drone of the four liter under the hood with the soaring notes of Duke Ellington’s bugle or the crack of Babe Ruth’s bat as the ball hurtled toward the right-field stands.
            Since then, the notion of the connected car has changed. Features such as streaming music from your smartphone and using voice commands to control the stereo and environment are standard equipment in many models. And Microsoft has a vision for in-car technology that takes us beyond the confines of the cockpit to what they call the intelligent car — a scenario in which telematics data can help improve the driving experience, and the design of the vehicle.
            Led by Group Program Manager Pranish Kumar, the Windows Embedded Automotive team is focused on fulfilling this vision and, in the process, developing an upgradeable technology solution that extends the useful life of the vehicle.
            Says Kumar: “The automotive industry faces a lot of unique challenges, perhaps first of which is that cars must be supportable for much longer than consumer electronics devices — 10 or 20 years, in most cases. I think we’ve developed a solid understanding of some of these challenges and how technology can address them, while providing drivers with a better experience.”
            Microsoft’s Pranish Kumar and his team work to develop reliable in-car experiences, not by sitting at a desk but by working behind the wheel of a fleet of test vehicles.

            A relationship built on experience and trust
            Microsoft’s involvement in the automotive industry stretches back 15 years to 1998 when the company partnered with Clarion to announce the Auto PC, a first-of-its-kind solution that gave drivers access to email, driving directions, paging and traffic alerts, and their entertainment system. And in 2003 Microsoft developed the Microsoft TBox, a telematics device that went on to power infotainment systems for a variety of carmakers.
            When it came to working directly with carmakers, Kumar says it was an uphill battle to gain their trust. Many had tried to design their own infotainment system and were convinced that it couldn’t be done in a shorter time than seven or eight years. Microsoft has since proven itself by reducing development time down to just two to three years.
            Kumar’s team also adopted the same level of rigor and many of the testing methodologies that carmakers use when conducting customer road tests. Making this change gave the team a “greater degree of confidence” that their development and reporting processes met the carmakers’ needs and that the finished product would meet or exceed drivers’ expectations.
            From the connected car to the intelligent car
            For carmakers, the Promised Land lies in giving drivers the ability to access information and services anywhere they live, whether an app on their smartphones, a music file on their tablet at home, or customer contact information on their computer at work or in the cloud. Over time, members of the Windows Embedded Automotive team have earned a reputation for providing solid insight to help make these experiences a reality.
            Together with Kumar, Creative Director John Hendricks, Principal Program Manager Jay Loney, Partner Development Manager David Kelley, and Experience Designers David Walker and Melissa Quintanilha are part of a larger team developing and designing the future of Microsoft’s automotive technologies.
            Top Gear U.S.’s Tanner Foust talks with Microsoft engineers and designers about their vision for the future intelligent car.
            In doing so, they are moving away from a focus on creating in-dash technologies, such as the entertainment or navigation systems, to an emphasis on creating a solution that would power these technologies as part of an overall user experience. Taking this approach has given carmakers the ability to provide periodic updates that refresh the driving experience and extend compatibility to the latest consumer devices.
            In the future Microsoft wants to take that experience a step further. Whereas today consumers demand a car that’s more connected — to their phones, their music and their services — Windows Embedded Automotive is focused on designing intelligent cars that respond to the driver’s needs.
            One example that Kumar cites involves the difficulty of pairing new phones, which is one of the most frequent problems facing car owners. According to IDC, 722 million smartphones were shipped globally in 2012, a 46.1 percent increase over the previous year.[1] As demand for smartphones continues, ensuring compatibility between new models and infotainment systems will remain a challenge.
            A Windows Embedded-based system could transmit data about the unsuccessful pairing to Microsoft and overnight a solution could be identified and downloaded to the car. When the owner gets in his car the next morning, his phone would automatically pair. Over time, that same data could be used to design a user experience that’s not only easier to use but that performs tasks on your behalf, such as tuning to your favorite station or rescheduling a meeting due to traffic delays.
            Drivers also stand to gain from the availability of data. Many vehicles contain sensors that monitor factors such as speed, braking, fuel consumption, tire pressure and environmental conditions. Drivers can already use this information to assess their performance and get recommendations on how to improve fuel efficiency or vehicle maintenance.
            Using the same data, carmakers could augment the existing battery of tests that are part of their proving process. So in addition to putting a vehicle through the environmental extremes of Northern Sweden or California’s Death Valley, they could evaluate its performance in day-to-day conditions. Engineers and product planners could get a head start on the next year’s model through insights around where design improvements are needed or where a car has been over-engineered. They could even fine tune an engine over-the-air to improve fuel economy of the current model year.
            Kumar believes that many of the systems are already in place to make this vision a reality. Using technologies such as Windows Update, cars could be automatically updated — in much the same way as smartphones automatically update when you activate them. And the combination of big data and machine learning could lead to cars that develop an understanding of your preferences and driving behavior to become more responsive to your needs.
            “We’ve come a long way in terms of creating a product that works reliably and meets the quality standards of the automotive industry. And we’re continuing our work with carmakers to reach the full potential of in-car technology,” says Kumar. “Through a combination of software, hardware and user-centric design, we believe that car owners will experience driving like never before possible.”
            [1] IDC Worldwide Mobile Phone Tracker, Jan. 24, 2013

            See also: Maximizing Internet Explorer in Windows Embedded Compact 7 [Windows Embedded blog, June 11, 2012]

            Windows Embedded Compact has a customized version of Microsoft’s Internet Explorer named Internet Explorer (IE) for Embedded. This powerful browser can be used in a number of ways in an embedded system to enhance the functionality of the system. This post will discuss the various ways to tune, customize and even embed IE for Embedded inside embedded applications.
            IE for Embedded is a customized version of Internet Explorer 7 for the desktop, with performance enhancements from IE 8 added as well. Specifically, the JScript engine brought over from IE 8 provides a 400% performance improvement over the original IE 7 scripting engine. In addition, gesture support, along with zoom and pan support, is included in this browser.
            Internet Explorer for Embedded is fundamentally an HTML rendering engine. As such, the user input surrounding the engine (the “chrome”) isn’t really part of IE for Embedded. Windows Embedded Compact comes with two examples of IE for Embedded: one with classic “Windows” controls and the other with the chrome rendered with the XAML-driven Silverlight for Embedded framework. Both of these examples come with source code that demonstrates how to host the IE control. They also both illustrate that almost all the functionality of these browsers is contained within the control itself. The chrome only provides input from the user and a platform for returning feedback.
            The classic browser example, IESample, supports a favorites list, browser history and URL completion. It incorporates an internet control panel that can tune how the browser connects to the web as well as adjust security settings. The XAML-based browser, IEExr, has a vastly different look and feel. However, it too handles a favorites list, history and a control panel. IEExr even supports tabbed browsing, using a thumbnail page to switch between pages. The reason the two examples have similar features is that most of the functionality is incorporated in the IE ActiveX control itself.
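The engine/chrome split that IESample and IEExr demonstrate is plain composition: one engine owns navigation and history, and each chrome is a thin shell that forwards user input to it. A minimal sketch of that structure (invented class names, Python used only for illustration; the real samples host the IE ActiveX control from C++):

```python
# Sketch of the engine-vs-chrome split (invented names, illustration only).
# In Windows Embedded Compact, IESample and IEExr both host the same
# IE ActiveX control; only the surrounding "chrome" differs.

class BrowserEngine:
    """Owns the real functionality: navigation, history, settings."""
    def __init__(self):
        self.history = []

    def navigate(self, url):
        self.history.append(url)
        return f"rendered {url}"

class ClassicChrome:
    """Windows-controls shell: forwards input to the shared engine."""
    def __init__(self, engine):
        self.engine = engine

    def address_bar_enter(self, url):
        return self.engine.navigate(url)

class XamlChrome:
    """XAML-driven shell with tabbed thumbnails; same engine underneath."""
    def __init__(self, engine):
        self.engine = engine

    def tap_thumbnail(self, url):
        return self.engine.navigate(url)

engine = BrowserEngine()
classic = ClassicChrome(engine)
xaml = XamlChrome(engine)
classic.address_bar_enter("http://example.com")
xaml.tap_thumbnail("http://example.org")
print(engine.history)   # both chromes drive one shared engine
```

Because the chromes hold no browsing logic of their own, either one can be replaced or restyled without touching navigation, history, or security handling, which is why the two samples end up with such similar feature sets.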

            Silverlight for Windows Embedded (Windows Embedded Compact 7) [MSDN Library, Jan 23, 2013]

            Microsoft Silverlight for Windows Embedded is a native (C++) UI development framework for Windows Embedded Compact powered devices that is founded on Microsoft Silverlight 3. You can use Silverlight for Windows Embedded to do the following:

            • Separate programming logic and UI design.
            • Define visual UIs for applications in XAML.
            • Add, modify, and customize the UI at run time.
            • Create interactive multimedia UIs.
            • Collaborate with designers who use Microsoft Expression Blend 3 projects.
            • Simultaneously develop applications for Microsoft Silverlight 3 and Silverlight for Windows Embedded with a common UI defined in XAML files.
            Silverlight for Windows Embedded is compatible with Silverlight 3 XAML and provides a set of equivalent classes for supported XAML elements. For information about Silverlight 3, see http://www.silverlight.net/.
            Silverlight for Windows Embedded is also compatible with existing Windows Embedded Compact window controls, so you can use your existing window controls.
            To add this feature to your OS, see Silverlight for Windows Embedded Catalog Items and Sysgen Variables.
            For reference information, see Silverlight for Windows Embedded Reference.

            For step-by-step guidelines and code examples to help you learn how to create a UI by using Silverlight for Windows Embedded, see Silverlight for Windows Embedded Application Development.
            For recommendations on which hardware to use with Silverlight for Windows Embedded, see Silverlight for Windows Embedded Hardware Recommendations.

            More information:
            Differences Between Microsoft Silverlight 3 and Silverlight for Windows Embedded [MSDN Library, Jan 23, 2013]
            Silverlight for Windows Embedded Application Development [MSDN Library, Jan 23, 2013]

            Microsoft Silverlight for Windows Embedded is a “UI development framework” for “embedded devices” and is based on Microsoft Silverlight for the desktop browser. By using Silverlight for Windows Embedded, you can create an application that supports features such as storyboard animations, transformations, interactive controls, a layout system, and a visual tree.
            Silverlight for Windows Embedded is a native C++ development framework in which you can design a UI for the shell and applications for a Windows Embedded Compact device. You can use Microsoft Expression Blend 3 to quickly design a UI in Extensible Application Markup Language (XAML), which you can then convert, or you can build your application from scratch in Microsoft Visual Studio 2008 by using one of the Smart Device project templates. In the native C++ source files for your application, you can use the rest of the features of Windows Embedded Compact 7, including any existing window controls.
            By using Silverlight for Windows Embedded, you can create a UI that provides advanced visual effects for your Windows Embedded Compact device shell and applications. Silverlight for Windows Embedded makes this possible by supporting a subset of Silverlight XAML elements and by supplying a set of C++ classes that provide access to these elements.
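The workflow described above (a designer produces XAML in Expression Blend; code then locates named elements and attaches behavior) can be sketched with ordinary XML parsing. This is Python for illustration only, with an invented find_name helper; Silverlight for Windows Embedded does this with native C++ classes mapped to the XAML elements:

```python
import xml.etree.ElementTree as ET

# Sketch of the XAML split: markup defines the UI, code finds named
# elements and attaches behavior. (Illustration only; Silverlight for
# Windows Embedded uses native C++ classes, not Python.)
XAML = """
<Canvas>
  <TextBlock Name="Title" Text="Hello"/>
  <Button Name="OkButton" Content="OK"/>
</Canvas>
"""

def find_name(root, name):
    """Locate an element by its Name attribute (invented helper)."""
    for elem in root.iter():
        if elem.get("Name") == name:
            return elem
    return None

root = ET.fromstring(XAML)
button = find_name(root, "OkButton")

clicks = []
handlers = {"OkButton": lambda: clicks.append("clicked")}  # logic, kept apart
handlers[button.get("Name")]()     # wire the markup element to behavior
print(button.get("Content"), clicks)
```

The markup carries no behavior and the code carries no layout, which is the separation that lets a Blend designer and a C++ developer work on the same UI in parallel.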

            Graphics and Performance in Silverlight for Windows Embedded (Windows Embedded Compact 7) [MSDN Library, Jan 23, 2013]
            Hardware Acceleration in Silverlight for Windows Embedded (Windows Embedded Compact 7) [MSDN Library, Jan 23, 2013]

            Many modern device platforms include on-board graphics processing units (GPUs) with two-dimensional or three-dimensional capabilities or both. Microsoft Silverlight for Windows Embedded provides support for using a GPU to accelerate certain types of animations. Hardware acceleration is accomplished by using the GPU (rather than the CPU) to do some critical composition steps in the rendering process. Silverlight for Windows Embedded supports hardware-based acceleration of graphics for both Microsoft DirectDraw and OpenGL.
            For information on how to implement hardware acceleration, see Implement Hardware Acceleration for Graphics in Silverlight for Windows Embedded [Reference].

            With Visual Studio 2012 Update 2 Now Available [Somasegar’s blog on MSDN, April 4, 2013]

            It includes support in Blend for SketchFlow, WPF 4.5, and Silverlight 5.

            which according to ANNOUNCING VISUAL STUDIO 2012 UPDATE 2 CTP 2 [Blend Insider, Jan 30, 2013]

            Blend for Visual Studio [as part of a consolidated designer/developer offering retained only from previous Expression products that were phased out with Visual Studio 2012] now supports WPF, Silverlight and SketchFlow projects in the same version of Blend (support for these was previously available only as a standalone Preview release of Blend). With this CTP release, Blend now supports developing Windows Store, Windows Phone, WPF and Silverlight apps without needing to have multiple versions of Blend on the same machine. The table below highlights the various platforms that are now supported in Blend for Visual Studio 2012:

            TARGET PLATFORM | VERSIONS SUPPORTED | SPECIFIC REQUIREMENTS
            Windows Store XAML and HTML | Windows 8 | Windows 8
            Windows Phone | Windows Phone 8, Windows Phone 7.5 | Windows Phone 8 SDK
            WPF | 3.5, 4.0, 4.5 | (none)
            Silverlight | 4, 5 | (none)
            SketchFlow | WPF 4.0 and Silverlight 4 | Visual Studio 2012 Premium or higher

            Additional details:
            – in Silverlight 5 Beta – available now! [Silverlight Team blog on MSDN, April 22, 2011] and Silverlight 5 Available for Download Today [Silverlight Team blog on MSDN, Dec 9, 2011]
            – in What’s New in Silverlight for Windows Phone [MSDN Library] (which is part of Silverlight for Windows Phone [MSDN Library])

            Silverlight for Windows Phone OS 7.1 is based on Silverlight 4. That means if you create a new Silverlight for Windows Phone application that targets Windows Phone OS 7.1, you can take advantage of several new features. You can still write applications that target Windows Phone OS 7.0, but to take advantage of the new features, you must target Windows Phone OS 7.1. Applications that target Windows Phone OS 7.0 will run on devices running Windows Phone OS 7.1. This topic introduces some of the new features and improvements in Silverlight for Windows Phone.

            – in What Version is Windows Phone Mango? [Shawn Wildermuth blog, Aug 19, 2011]

            In finishing up my new Windows Phone book, I had to deal with the confusing version problem. There are three version numbers to be aware of:

            • Windows Phone 7.5
            • Windows Phone OS 7.1
            • Windows Phone SDK 7.1

            So what is Mango? It comes down to this:

            • Windows Phone 7.5: The marketing name of the phone. This is the phrase you’ll see in the ads to consumers.
            • Windows Phone OS 7.1: The name of the actual operating system. When you create a new application in Visual Studio (or upgrade an existing one), you’ll see this version.
            • Windows Phone SDK 7.1: The name of the Mango tools.

            So get your nomenclature right and stop being confused.

            Features Differences Between Silverlight and Silverlight for Windows Phone [MSDN Library]
            Implementation Differences Between Silverlight and Silverlight for Windows Phone [MSDN Library]

            Microsoft betting on boosting Windows RT demand with top level ARM SoCs from its SoC partners, Windows 8.1 enhancements, Outlook addition to the Office 2013 RT and very deep tactical discounts to its OEM partners for tablet offerings of more value and capability

            … especially valuable for small businesses, and even enterprises of different, larger sizes thanks to new enhancements in manageability, networking, and security announced at TechEd North America 2013 (see “Cloud first” from Microsoft is ready to change enterprise computing in all of its facets [this same ‘Experiencing the Cloud’ blog, June 4, 2013]).

            Relevant excerpts from Nick Parker, Tami Reller, Antoine Leblond and Steve Guggenheimer: COMPUTEX 2013 Keynote Transcript [Microsoft, June 5, 2013]

            The full record of the keynote is available from Notebookitalia; it contains the excerpts below, between [10:49] and [19:50], as indicated.

            Tami Reller, Chief Marketing Officer and Chief Financial Officer, Windows:

            [10:49] Bringing the power of Windows to tablets is a really big part of the vision of Windows 8 and of Windows RT, really a new class of tablets that offers more value and capability than today’s tablets. […]

            [15:00] Windows tablets are an important part of the Windows 8 vision, and Windows tablets do more.

            Completing that promise of do more, I’m pleased to announce that starting with the back-to-school lineup, and in some cases even earlier, Windows x86 tablets will come with Office. That’s Word, that’s Excel, PowerPoint, and OneNote in the box. We’re making that possible through new OEM offerings that were introduced earlier this spring.

            Even with the value of Office built-in to these Windows tablets, these new offerings are going to allow our partners to build opening price point tablets, as well as great premium tablets.

            Additionally, we’ve opened up support for small tablets with Windows 8, and we’ll do more with Windows 8.1. You’ve seen the first of those tablets here at COMPUTEX. Congratulations to Acer on their announcements earlier this week.

            And coming with 8.1, building on our support for small tablets, we’re really committed to completing the scenario, including full portrait support.

            One of the top requests from Windows RT customers has been Outlook. I’m very pleased to announce that with the Windows RT 8.1 update Microsoft Outlook will be in-box.

            With 8.1 we’re again embracing the very latest technology, and the very latest on the silicon roadmap. Specifically this includes Bay Trail-T, Qualcomm 8974 [one of Snapdragon 800 SoCs coming in commercial devices of H2 2013, see more details in Snapdragon 800 Product Brief], and NVIDIA T40 [or Tegra 4 first in the already announced HP SlateBook x2 to be available in August 2013].

            And we’re expanding our ARM program to provide more component flexibility, creating more opportunities for partners to build competitive ARM tablets running Windows. [17:15 …]

            [19:10] Windows 8.1 is easy for our customers to get. It’s free to Windows RT and Windows 8 customers so that whether a customer has Windows 8 today or is buying a PC or a tablet or any other device in the near future, it will be one click away and very easy to get Windows 8.1. We’ll deliver it through the Windows Store, including the preview, which will come at the end of June. And the final product will be available later this calendar year. [19:50 …]

            New ecosystem opportunities, Windows 8.1 updates shared at Computex [Blogging Windows from Microsoft, June 5, 2013]

            Antoine Leblond, corporate vice president of Windows program management joined Tami and other top Microsoft executives on stage to give our very first public demo of the upcoming Windows 8.1 update – touching upon many of the exciting improvements Antoine highlighted in his blog post from last week. You can see some of the highlights of what to expect in Windows 8.1 for yourself in this short demo video featuring Jensen Harris from the Windows User Experience Team:

            Jensen Harris from the Windows Team shows some highlights of what to expect in Windows 8.1 coming later this year as a free update for Windows 8 customers. http://bit.ly/10OM2Th

            Additionally, Tami announced that Outlook 2013 RT will be coming to Windows RT tablets as part of Windows 8.1. Windows running on ARM architectures has enabled an exciting new category of mobile-first, instant-on tablets that are thin and lightweight, with amazing battery life. We know that the addition of Outlook for those using ARM-based Windows devices such as the Surface RT, Dell XPS 10, Lenovo Yoga 11, and ASUS VivoTab RT as well as new tablets to come in the future has been a popular request from consumers and businesses alike. As Tami said in her keynote address, we’ve listened and Outlook will be joining the other Office applications currently available on Windows RT, including Word, Excel, PowerPoint, and OneNote.

            Our commitment to Windows on ARM doesn’t stop with the addition of Outlook 2013 RT. We announced a number of other enhancements with Windows 8.1, earlier this week at TechEd North America, including new manageability, networking, and security capabilities that will make Windows RT an even more compelling option for enterprises.

            Eight questions about Windows 8 for Microsoft manufacturing chief Nick Parker [PCWorld, June 5, 2013] 

            IDG: So you just announced you’ll be including Outlook with the next version of Windows RT, what was the thinking behind that?

            NP: Outlook is one of those apps people love, and when you start thinking about RT in the small business environment, or for heavy email users, Outlook is one of those high value solutions. That was the one we got the most feedback about.

            IDG: The reception for Windows RT has been a bit lukewarm, what are some of the reasons for that and to what extent will adding Outlook will improve the situation?

            NP: If you look at what we did with RT—it’s completely new silicon, a new hardware platform, and Windows 8 is a new OS. So first you just have a natural growth curve when you’re starting at zero. Then you start seeing new apps appear, the killer apps that people want, like Outlook. And the ecosystem gets more familiar with it—they learn how to code to it and how to certify parts for it.

            We get so used to the tremendous success we’ve had on PCs for years, you just think you can flip a switch and the platform’s going to change. I think it’s just the incremental growth of a new platform. And we should be a bit humble about how we go to market and talk about the new capabilities. I think we could maybe have inspired people a bit more with some of the RT devices and some of our marketing.

            IDG: There’s a lot of downward pressure on tablet pricing—Asus showed an Android tablet this week for $129. Do you expect to see Windows 8 tablets getting down to those sort of prices?

            NP: That’s a question to ask our OEMs [original equipment manufacturers, or basically PC makers]. I think people are prepared to pay for value and we see tablets with higher price points having better capabilities and features. I think buyers are getting smart about what’s good quality. But OEMs will choose their own prices.

            [Image: The Acer Iconia W3-810 tablet]

            IDG: We saw the first 8-inch Windows tablet launch this week from Acer. What are some of the things you’re doing to provide a better Windows experience on those smaller devices?

            NP: For any device you can hold in one hand, one of the things you need is portrait mode—so, the ability for the apps to work in the same way, to move and to flow nicely. And for our OEMs, we’re giving them the ability to have buttons on the side of the device, because when you’re holding it in one hand you might want to push a button on the side. You have to make the OS extensible. So those are the types of things.

            IDG: Will that all be part of Windows 8.1?

            NP: Yes, we talked about that today.

            IDG: I’ve never thought of Windows as being designed for smaller screens; the netbook experience wasn’t particularly great. What are you doing to improve the software experience?

            NP: In terms of how the display scales up and down, and in terms of the zooming capabilities—as soon as the preview [of Windows 8.1] comes out you should play with it.

            IDG: There’s a tremendous variety of form factors out there right now—all kinds of laptops and tablets and convertibles. When you look ahead a few years, do you expect them to coalesce around a few winning designs or will there always be that much variety?

            NP: In terms of capabilities, I think touch is going to be the new standard. People aren’t going to want to carry around hundreds of devices. You’ll have a phone, and I think the phablet is an interesting space. But for two-in-one detachables—I’m seeing the interest in those ramp. People want the best of both worlds. You can have a tablet and sit there and surf, then you plug it into a keyboard and you’re off working.

            IDG: Is the keyboard here to stay, or will people eventually get used to typing on touchscreens?

            NP: I think the keyboard is here to stay, you’ve got that physical feedback. You may see a lot of innovation around keyboards but I think they’re here to stay.

            A Google search on “Computex Windows ARM discount” between June 5 and 6 yielded the following items:
            One year after debut, Windows RT is a Computex no-show | The Verge | OSNews | I4U News
            New ecosystem opportunities, Windows 8.1 updates shared at Computex | Blogging Windows [from Microsoft]
            Microsoft to include Outlook app with update to Windows 8 RT | ARN [Australia]
            Microsoft Aims to Lure More Users to Windows | WSJ.com
            Microsoft To Give More Tablet Makers Windows 8 Discounts | NASDAQ.com | 4-Traders | Capital.gr
            Microsoft to Offer Discounted Windows and Office for Small Tablets | AllThingsD | CELLIFONE.com
            AMD breaks from Windows exclusivity, adopts Android and Chrome OS | Facepunch.com
            Forget Haswell: Why tablet processors mean more to Intel at Computex | The USA News Online 
            Computex 2013: low-cost tablets, high-res laptops steal the show | Techgoondu
            Microsoft says Outlook is coming to Windows RT this year | ZDNet
            Microsoft demonstrates Windows as a platform for small tablets, touch and mobility at Computex 2013 | Virtualization Journal [replica of Microsoft press release]

            Windows RT is a Computex no-show:

            Three days into Computex Taipei, Asia’s biggest computer show, not a single manufacturer has announced a Windows RT device. … The Computex show floor has been dominated by devices running Windows 8 on Haswell and other chips from Intel, but ARM-powered units have been conspicuous in their absence.

            However, the upcoming Windows 8.1 update and its RT counterpart could provide a shot in the arm to the fledgling OS. Qualcomm has pledged support for RT 8.1 with its new Snapdragon 800 processor, which president and COO Steve Mollenkopf described in a presentation today as offering “about 75 percent better performance than the S4 Pro.”

            The Verge has heard that manufacturers may be holding back RT devices for Qualcomm’s new chip and the 8.1 update, which is also designed to improve the experience on smaller-screened devices.

            include Outlook app with update to Windows 8 RT:

            Outlook will be included with version 8.1 of Windows RT, previously dubbed Windows Blue, Microsoft announced at the Computex trade show in Taipei on Wednesday. The 8.1 update is scheduled for release later this year as a free update to Windows 8.

            “We’re always listening to our customers and one piece of feedback was that people want the power of Outlook on all their Windows PCs and tablets,” Microsoft said. […]

            Support for RT from hardware makers has been limited, however, with several PC makers, such as Acer, Asustek Computer and Hewlett-Packard, not yet supporting the OS.

            Microsoft hopes to change that by addressing one of the criticisms of Windows RT — that it doesn’t include a version of its popular Outlook email client. Nvidia CEO Jen Hsun Huang has been vocal about the importance of adding Outlook to RT.

            “If Outlook were to show up on RT, my life would be complete,” he said recently, lamenting the slow sales of Windows RT tablets. “I am one Outlook away from computing nirvana. Outlook god, please…”

            Lure More Users to Windows:

            Until now, people with Windows RT devices—which use different kinds of computer chips than those common in personal computers—have only been able to use a new type of email app that has been panned by users.

            A Microsoft executive, speaking at the Computex computer trade show in Taiwan, also acknowledged the company is cutting the prices it charges computer makers for Microsoft software.

            The executive, Nick Parker, didn’t detail the size of the software discounts. But people familiar with Microsoft’s pricing strategy have said for Windows RT devices, Microsoft is cutting by two-thirds the cost to license Windows and Office software, or roughly $100 before marketing rebates Microsoft offers to PC makers.

            Microsoft’s discounts apply to tablets smaller than 10.1 inches, Mr. Parker said. The company said it started offering discounts to some tablet makers in April.

            The discounts and addition of Outlook underscore how hard Microsoft is trying to boost the appeal of devices that run Windows RT, a product whose development marked a major break from company tradition. […]

            “This is an exciting development that we believe will deliver a much more robust and full-featured experience to Windows RT users,” wrote Mark Aevermann, an Nvidia product manager, in a blog post.

            Microsoft executives have said they would push harder to bolster sales by explaining more clearly the attributes of Windows RT and ARM chips.

            “We are very committed to ARM,” said Tami Reller, the Windows chief financial officer and chief marketing officer, in an interview last month.

            Windows executives also recently suggested Windows RT devices might in the future lose the dual modes that have been a polarizing feature of the new Windows.

            Windows 8 and Windows RT devices operate in both a traditional Windows “desktop” and a new mode that looks and functions more like a smartphone screen. The Windows executives, Jensen Harris and Antoine Leblond, suggested in a May interview that it might be appropriate to junk desktop mode entirely on Windows RT devices.

            Windows 8 Discounts:

            Nick Parker, vice president of Microsoft’s OEM division, said at the Computex trade show in Taipei Wednesday that the Redmond, Wash. company is now expanding its discount program to include tablets that run on Windows RT, a version of Windows 8 running on ARM Holdings PLC (ARMH, ARM.LN) chips. The discount will also apply to an upgraded version of its Windows 8 system dubbed Windows 8.1. The discounts will only apply on tablets that are between 7 and 10.1 inches. The executive declined to comment on the size of the discounts but Mr. Parker said they will come in the form of a cut in licensing fees and free Office software for hardware makers.

            Microsoft said it started offering discounts to some tablet makers in April and there is no specific time frame for when the discounts might end.

            The Wall Street Journal reported in early March, citing people familiar with the situation, that Microsoft had been offering price breaks on its Windows 8 and Office software to help spur the development of small, touch-enabled laptop computers.

            In the latest discount program, tablets with screens bigger than 10.1 inches will not be eligible for the discount, Mr. Parker said. But he didn’t elaborate.

            Analysts said the discounts could help bring down retail prices of smaller Windows tablets and help Microsoft better compete with Apple Inc. (AAPL) and Google Inc. (GOOG).

            Discounted Windows and Office for Small Tablets:

            Second, Microsoft is cutting some sort of deal with computer makers that want to bundle Windows 8 and Office Home and Student onto a seven- or eight-inch tablet. Microsoft isn’t going into detail on what it is charging PC manufacturers, but it is clearly low enough to enable some pretty inexpensive tablets.

            The first of these tablets to be announced, Acer’s Iconia W3, has a $379 sticker price. That’s pretty darn cheap for a machine that includes full-blown Windows and Office.

            Microsoft isn’t saying which other computer makers may also be working on small tablets, but with the PC market struggling, it seems reasonable to think we will see a number of such tablets in short order.

            And while Microsoft’s bundle program appears limited to small tablets, one could conceivably hook up the tiny tablet to a monitor and keyboard and use it as a home PC.

            low-cost tablets, high-res laptops steal the show:

            Since Amazon’s Kindle Fire and Asus’ Nexus 7 came out last year, the idea of a cheap, small tablet has taken hold like few expected. This year, the cheap is going to get cheaper, with Asus’ MeMo Pad HD7 starting from just US$129 for an 8GB version.


            Now, this may not be as cheap as some models you’d find in Shenzhen, but this model from Asus will win over many users looking for an affordable but well-made tablet.

            The new MeMo Pad HD7 also seems like an updated version of the successful Nexus 7. There is the 1,280 x 800 screen, now coupled with a quad-core Arm Cortex A7 CPU, and a microSD card slot to pop in memory cards, which the Nexus 7 did not have. No idea of when this is coming, but expect to save some money for a budget tablet this holiday season. […]

            An interesting idea, which may not turn out to be a major trend, is small Windows tablets. Acer surprised many visitors with its 8-inch Windows 8 tablet, probably the first such mobile option.

            The Iconia W3 runs an Intel Atom chip, has 2GB RAM and either 32GB or 64GB storage. The 1,280 x 800 resolution is not too bad on the small screen.


            What’s a little hard to see is the Windows desktop, when you fire up your traditional Windows programs, like Excel. During a quick hands-on, I can tell that the screen was too small for serious editing. Don’t even think of sharing programs on the screen. It’s just too small.

            Which leaves you in mostly the Metro touch interface on Windows 8. Sadly, there aren’t many apps here yet, compared to either an Apple iPad mini or an Android tablet.

            Not just that, while the US$379 asking price isn’t unreasonable for the hardware, the question is on usage. If you’re using the machine mainly as a small tablet, Android tablets are getting cheaper all the time, as Asus’ MeMo Pad HD7 shows.

            Outlook is coming to Windows RT:

            Owners of existing RT devices will receive the updates for free.

            Despite weak sales of its own ARM-powered Surface and even more tepid support from hardware partners, Microsoft doesn’t appear to be backing away from Windows RT. The addition of Outlook will undoubtedly convince some previously recalcitrant business buyers that Windows RT tablets make sense, as will the announcement at the Tech-Ed conference this week of management tools that allow greater control over Windows RT devices. And Microsoft also announced support for additional types of Virtual Private Networks (VPNs) on Windows RT.

            But there are still dealbreakers that stand in the way of widespread deployments of Windows RT. Office 2013 RT has many of the same features as its x86/x64 counterpart, but it lacks the ability to handle custom macro code. In addition, some features are missing from the RT programs, including the ability to embed audio and video in OneNote notebooks.

            And Office is the only desktop app that Microsoft has officially ported to Windows RT. Third-party developers don’t have that option, which means any business that requires a third-party desktop app or a browser plugin other than Adobe Flash is out of luck. Likewise, Windows RT still doesn’t support some widely used third-party VPN clients.

            There’s also the pesky issue of licensing. The version of Office included with Windows RT is Office Home and Student 2013, which is licensed for noncommercial use only. If you want to stay in the good graces of Microsoft’s licensing agreement, you need to add commercial use rights, through a volume license or by way of a subscription to a business edition of Office 365.

            Today’s announcement is also noticeably silent on the question of when Microsoft plans to release native tablet versions of its Office programs, for both Windows 8.1/RT as well as alternative platforms like the iPad and Android tablets. The fact that the desktop version of Outlook is a key part of this fall’s update suggests that Office for tablets won’t appear until 2014, and one recent rumor says late 2014 is the likely target date for those apps.

            Windows as a platform for small tablets [Microsoft press release replicated]:

            “We want to be the best partner to all hardware manufacturers, from the way we engage and invest on new product designs to the experience we jointly deliver to customers,” Parker said. “This new wave of Windows devices from our partners, combined with our software, apps and services, reflects that commitment.”

            Most notable of the devices Parker showed were the new 7-inch and 8-inch Windows tablets: the Acer Iconia W3 that launched on June 3 in Taipei, and three other small tablets from top original device manufacturer (ODM) and original equipment manufacturer (OEM) partners expected to ship for the holiday season. These small tablets provide a Windows experience with Office Home & Student 2013 delivering even more options to experience all that Windows can offer in a smaller form factor. […]

            Tami Reller, chief financial officer and chief marketing officer of Microsoft’s Windows Division, joined Parker onstage … “Windows 8.1 furthers the bold vision of Windows 8 by responding to customer feedback and adding new features and functionality that advance the touch experience and mobile computing’s potential,” Reller said.

            As part of this commitment, Reller announced that Outlook 2013 RT will be available on Windows-based ARM tablets with the Windows 8.1 update later this year. “Windows on ARM is a core part of our strategy today and moving forward, and the addition of Outlook further enriches this world of new on-the-go opportunities for partners and customers,” Reller said.

            “Cloud first” from Microsoft is ready to change enterprise computing in all of its facets

            … represented by these alternative/partial titles explained later on in this composite post:
            OR Choosing the Cloud Roadmap That’s Right for Your Business [MSCloudOS YouTube channel, June 3, 2013]
            OR Microsoft transformation to a “cloud-first” (as a design principle to) business as described by Satya Nadella’s (*) Leading the Enterprise Cloud Era [The Official Microsoft Blog, June 3, 2013] post
            OR Faster development, global scale, unmatched economics… Windows Azure delivers [Windows Azure MSDN blog, June 3, 2013] which is best summarized by Scott Guthrie (*) as the following enhancements to Windows Azure
            OR as described by Brian Harry (*) in Visual Studio 2013 [Brian Harry’s MSDN blog, June 3, 2013]
            OR as described by Brad Anderson (*) in TechEd 2013: After Today, Cloud Computing is No Longer a Spectator Sport [TechNet Blogs, June 3, 2013]
            OR as described by Quentin Clark (*) in SQL Server 2014: Unlocking Real-Time Insights [TechNet Blogs, June 3, 2013]
            OR as described by Antoine Leblond (*) in Continuing the Windows 8 vision with Windows 8.1 [Blogging Windows, May 30, 2013], and continued by Modern Business in Mind: Windows 8.1 at TechEd 2013 [June 3, 2013] from Erwin Visser (*) describing some of the features that businesses can look forward to in Windows 8.1
            OR putting all this together: Microsoft unveils what’s next for enterprise IT [press release, June 3, 2013]

            First watch how this whole story was presented in the keynote to TechEd North America 2013 on June 3, 2013:

            Brad Anderson was the keynote speaker, so besides the overall topic and his own two particular topics he also handled the introductions and recaps for the detailed parts delivered by other executives from the Microsoft Server & Tools Business. His keynote starts at [3:18]. He first invites Iain McDonald to deliver the Windows 8.1 Enterprise presentation, starting at [6:36] in the video. After that, at [28:57], Brad talks about “Empower people-centric IT” based on “Personalized experience”, “Any device, anywhere” and “Secure and Protected”, leading to System Center Configuration Manager 2012 R2 + Windows Intune, which are demonstrated from [38:00] to [46:50] by Molly Brown, Principal Development Lead for those products: bringing a consistent experience across PCs, iOS devices and Android devices, supporting the BYOD trend for client device managers.
            Then he starts talking about “Enable modern business applications” based on “Time to market”, “Revolutionary technology” and “Organizational readiness”, focusing on “Rapid lifecycle”, “Multi-device”, “Any data, any size” and “Secure and available”. Then comes (at [50:00]) a current state-of-the-art overview of the Windows Azure business and, at [52:10], a customer testimonial from the budget airline easyJet about moving to allocated seating, for which they indeed required the peak-load support capability of Windows Azure to meet the sudden rush of customer reservations when, for example, everything is put on sale at slashed prices. At [56:45] he invites Scott Guthrie to talk about the Windows Azure application platform, leading to announcements like “Windows Azure per minute pricing” and the “Windows Azure MSDN offer”. Brian Harry replaces Guthrie on the stage (at [1:05:02]) to continue the same topic with the upcoming Visual Studio 2013, which offers a number of new additions for team development.
            Brad is back at [1:14:40] to talk about “Unlock insight from any data” based on “Data explosion”, “New types and sources of data” and “Increasing user expectations”, focusing on “Easy access to data”, “Powerful analytics for all” and “Complete data platform”. To shed light on the specifics he invites Quentin Clark at [1:16:44], who talks about the upcoming SQL Server 2014, joined by his marketing partner Eron Kelly demonstrating the new things coming with that product. At [1:36:15] Brad Anderson is back to talk about how to “Transform the datacenter” based on “Cloud options on demand”, “Reduced cost and complexity” and “Rapid response to the business”. First he talks about the cloud platform itself (as an infrastructure), drawing on a customer testimonial from Trek Corporation at [1:39:24]. Then at [1:41:24] he announces Windows Server 2012 R2 and System Center 2012 R2, followed by the Windows Azure Pack announcement encompassing a number of things, which are demonstrated at [1:44:44] by Clare Henry, Director of Product Marketing. At [1:49:25] Brad is back to talk about the fabric of this infrastructure, for which he also invites, at [1:51:30], Jeff Woolsey, Principal Program Manager, to look into storage, live migration, Hyper-V Replica, etc. From [2:01:27] Brad delivers the final recap.

            The final recap by Brad Anderson well represented the story shown in the keynote:

            1. [2:03:20] Microsoft’s cloud vision is the Cloud OS, in which they have 4 promises [shown on a slide], which were fully covered in the keynote (actually in that order), and
            2. [2:03:50] the new announcements [shown on a slide] demonstrating execution on those promises.

            Then here is the alternative/partial information which also became available:

            OR Choosing the Cloud Roadmap That’s Right for Your Business [MSCloudOS YouTube channel, June 3, 2013]

            Introductory information: Built From the Cloud Up [MSFTws2012 YouTube channel, Nov 20, 2012]

            Experience Microsoft’s vision for the Cloud OS with Satya Nadella (*) and see how it is made real today with Windows Server 2012 and Windows Azure. Learn more at http://microsoft.com/ws2012

            OR Microsoft transformation to a “cloud-first” (as a design principle to) business as described by Satya Nadella’s (*) Leading the Enterprise Cloud Era [The Official Microsoft Blog, June 3, 2013] post:

            Two years ago we bet our future on the cloud and quietly refocused our 19 billion-dollar software business by completely transforming our products, culture and practices to be cloud-first. We knew the journey would be long and challenging with plenty of doubters. But we forged ahead knowing that the cloud transition would change the face of enterprise computing. […]

            To enable this transformation we had to make deep changes to our organizational culture, overhauling how we build and deliver products. Every one of our division’s nearly 10,000 people now think and build for the cloud – first. […]

            We are already seeing this bet deliver substantial returns. Windows Azure is going through hyper-growth. Half the Fortune 500 companies are using Windows Azure. We have over 1,000 new customers signing up every day and over 30,000 organizations have started using our IaaS offering since it became available in April. We are the first multinational company to bring public cloud services to China. Ultimately we support enormous scale, powering some of the largest SaaS offerings on the planet.

            (*) Satya Nadella is President, Server & Tools Business, a US$19 billion division that builds and runs the company’s computing platforms, developer tools and cloud services. The whole above mentioned post contains the email he sent to employees about the progress they’ve made completely transforming Microsoft products, culture and practices to be cloud-first.

            Introductory information: Enable Modern Apps [MSFTws2012 YouTube channel, Nov 20, 2012]

            Scott Guthrie (*) demonstrates how Windows Server 2012 and Windows Azure provide the world’s best platform for modern apps. Learn more at http://microsoft.com/ws2012

            OR Faster development, global scale, unmatched economics… Windows Azure delivers [Windows Azure MSDN blog, June 3, 2013] which is best summarized by Scott Guthrie (*) as the following enhancements to Windows Azure:

            Windows Azure: Announcing New Dev/Test Offering, BizTalk Services, SSL Support with Web Sites, AD Improvements, Per Minute Billing [ScottGu’s blog, June 3, 2013]

            • Dev/Test in the Cloud: MSDN Use Rights, Unbeatable MSDN Discount Rates, MSDN Monetary Credits
            • BizTalk Services: Great new service for Windows Azure that enables EDI and EAI integration in the cloud
            • Per-Minute Billing and No Charge for Stopped VMs: Now only get charged for the exact minutes of compute you use, no compute charges for stopped VMs
            • SSL Support with Web Sites: Support for both IP Address and SNI based SSL bindings on custom web-site domains
            • Active Directory: Updated directory sync utility, ability to manage Office 365 directory tenants from the Windows Azure Management Portal (regarding this read also: Making it simple to connect Windows Server AD to Windows Azure AD with password hash sync [Active Directory Team Blog, June 3, 2013])
            • Free Trial: More flexible Free Trial offer
            (*) Scott Guthrie, Corporate Vice President (CVP) of Program Management leading the Windows Azure Application Platform Team in the Server & Tools Business
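The impact of the per-minute billing change is easiest to see with a small back-of-the-envelope calculation. The sketch below contrasts the old hour-rounded model with the new per-minute model; the hourly rate used is purely illustrative, not an actual Windows Azure price:

```python
# Illustration of hour-rounded vs. per-minute VM billing.
# HOURLY_RATE is an assumed figure for illustration only.

HOURLY_RATE = 0.12  # USD per hour (hypothetical)

def hourly_billed(minutes_used: int) -> float:
    """Old model: every started hour is billed in full."""
    hours = -(-minutes_used // 60)  # ceiling division
    return hours * HOURLY_RATE

def per_minute_billed(minutes_used: int) -> float:
    """New model: pay only for the exact minutes of compute used."""
    return minutes_used * (HOURLY_RATE / 60)

# A burst job that runs for 95 minutes:
print(hourly_billed(95))      # billed as 2 full hours -> 0.24
print(per_minute_billed(95))  # billed as exactly 95 minutes -> ~0.19
```

Combined with the “no charge for stopped VMs” change, short-lived dev/test workloads stop paying for the unused remainder of each hour.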

            OR as described by Brian Harry (*) in Visual Studio 2013 [Brian Harry’s MSDN blog, June 3, 2013]

            Today at TechEd, I announced Visual Studio 2013 and Team Foundation Server 2013 and many of the Application Lifecycle Management features that they include. … I will not, in this post, be talking about many of the new VS 2013 features that are unrelated to the Application Lifecycle workflows. Stay tuned for more about the rest of the VS 2013 capabilities at the Build conference. […]

            We are continuing to build on the Agile project management features (backlog and sprint management) we introduced in TFS 2012 and the Kanban support we added in the TFS 2012 Updates. With TFS 2013, we are tackling the problem of how to enable larger organizations to manage their projects with teams using a variety of different approaches. … The first problem we are tackling is work breakdown. … We are also enabling multiple Scrum teams to each manage their own backlog of user stories/tasks that then contributes to the same higher-level backlog. […]

            We’ve been hard at work improving our version control solution. … We’ve added a “Connect” page to Team Explorer that makes it easier than ever to manage the different Team Projects/repos you connect to – local, enterprise or cloud. …We’ve also built a new Team Explorer home page. …The #1 TFS request on User Voice. … So, we have introduced “Pop-out Team Explorer pages”. …  Another new feature that I announced today is “lightweight code commenting”. […]

            As always, we’ve also done a bunch of stuff to help people slogging code every day. The biggest thing is a new “heads up display” feature in Visual Studio that provides you key insights into your code as you are working. We’ve got a bunch of “indicators” now and we’ll be adding more over time. It’s a novel way for you to learn more about your code as you read/edit. … Another big new capability is memory diagnostics – particularly with a focus on enabling you to find memory leaks in production. […]

            In addition to the next round of improvements to our web based test case management solution, today I announced a preview of a brand new service – cloud load testing. […]

            At TechEd today, perhaps my biggest announcement was our agreement to acquire the InRelease release management product from InCycle Software. I’m incredibly excited about adding this to our overall lifecycle solution. It fills an important gap that can really slow down teams. InRelease is a great solution that’s been natively built to work well with TFS. […]

            With TFS 2013 we are trying a new tack to facilitate that called “Team Rooms”. A Team Room is a durable collaboration space that records everything happening in your team. You can configure notifications – checkins, builds, code reviews, etc to go into the Team Room and it becomes a living record of the activity in the project. You can also have conversations with the rest of your team in the room. It’s always “on” and “permanently” recorded, allowing people to catch up on what’s happened while they were out, go back and find previous conversations, etc. […]

            (*) Brian Harry, Microsoft Technical Fellow working as the Product Unit Manager for Team Foundation Server (TFS).

            Introductory information: Empower People Centric IT [MSFTws2012 YouTube channel, Nov 20, 2012]

            Brad Anderson (*) shows how Windows Server 2012 helps enable personalized experiences across devices. Learn more at http://microsoft.com/ws2012

            OR as described by Brad Anderson (*) in TechEd 2013: After Today, Cloud Computing is No Longer a Spectator Sport [TechNet Blogs, June 3, 2013]

            We are now delivering on our vision with a wave of enterprise products built with this cloud-first approach: Windows Server & System Center 2012 R2 and the update to Windows Intune bring cloud-inspired innovation to the enterprise, and enable hybrid scenarios that cannot be duplicated anywhere in the industry.

            With this new wave, our partners and customers can do four key things:

            • Build a world-class datacenter without barriers, boundaries, or limitations.
            • Use a Cloud OS to innovate faster and better than ever before.
            • Embrace and control the countless ways users circumvent IT, but still enable productivity.
            • Get serious about the cloud with a partner who takes the cloud seriously.

            These developments shatter the obstacles which once stood in the way of turning traditional datacenters into modern datacenters, and which inhibited the natural progression to hybrid clouds. These hybrid scenarios are especially exciting – and Microsoft’s comprehensive support for them sets us apart from each and every other competitor in the tech industry.

            (*) Brad Anderson, Corporate Vice President (CVP) of Program Management leading the Windows Server and System Center Group (WSSC) in the Server & Tools Business. The rest of his above post will shed more light on the Microsoft achievements delivered in his sphere of activity. See also his In the Cloud blog for more details.

            Follow-up information: Transform the Datacenter [MSFTws2012 YouTube channel, Nov 20, 2012]

            Bill Laing (*) shows how Windows Server 2012 helps increase agility and efficiency in the datacenter. Learn more at http://microsoft.com/ws2012
            (*) Bill Laing, Corporate Vice President (CVP) for Server and Cloud [Development]. Read also his Announcing New Windows Azure Services to Deliver “Hybrid Cloud” [Windows Azure blog, June 6, 2012] post.

            Introductory information: Webcast: From Data to Insights [sqlserver YouTube channel, April 2, 2013]

            To better understand the impact of big data on the future of global business, Microsoft hosted an exclusive webcast briefing, “From data to insights”, produced in association with the Economist. In the webcast, you’ll hear from Tom Standage, digital editor of the Economist, on the social and economic benefits of mining data, followed by a moderated discussion featuring two Microsoft data experts, VP/Technical Fellow for Microsoft SQL Server Product Suite, Dave Campbell, and Technical Fellow, Server and Tools, Raghu Ramakrishnan, for an insider’s view of the trends and technologies driving the business of big data, as well as Microsoft’s big data strategy. To learn more about Microsoft big data solutions, visit http://www.microsoft.com/bigdata

            OR as described by Quentin Clark (*) in SQL Server 2014: Unlocking Real-Time Insights [TechNet Blogs, June 3, 2013]

            The next version of our data platform – SQL Server 2014 – is a key part of the day’s news. Designed and developed with our cloud-first principles in mind, it delivers built-in in-memory capabilities, new hybrid cloud scenarios and enables even faster data insights. […]

            Today, we’re delivering Hekaton’s in-memory OLTP in the box with SQL Server 2014. For our customers, “in the box” means they don’t need to buy specialized hardware or software and can migrate existing applications to benefit from performance gains. … SQL Server 2014 is helping businesses manage their data in nearly real-time. The ability to interact with your data and the system supporting business activities is truly transformative. […]

            Insert: Edgenet Gain Real-Time Access to Retail Product Data with In-Memory Technology [MSCloudOS YouTube channel, June 3, 2013]

            To ensure that its customers received timely, accurate product data, Edgenet decided to enhance its online selling guide with In-Memory OLTP in Microsoft SQL Server 2014.

            End of Insert

            Delivering mission-critical capabilities through new hybrid scenarios

            SQL Server 2014 includes comprehensive high-availability technologies that now extend seamlessly into Windows Azure, making the highest levels of service level agreement possible for every application while also reducing CAPEX and OPEX for mission-critical applications. Simplified cloud backup, cloud disaster recovery and easy migration to Windows Azure Virtual Machines are empowering new, easy-to-use, out-of-the-box hybrid capabilities.

            We’ve also improved the AlwaysOn features of the RDBMS with support for new scenarios, scale of deployment and ease of adoption. We continue to make major investments in our in-memory columnstore for performance and now compression, and this is deeply married to our business intelligence servers and Excel tools for faster business insights.
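To see why a columnstore compresses so well, here is a minimal conceptual sketch (this is not SQL Server’s actual columnstore format, just the underlying idea): storing values column by column groups similar values together, so even simple run-length encoding shrinks them dramatically:

```python
# Conceptual illustration of columnar storage + run-length encoding (RLE).
# Toy data, not SQL Server internals.

rows = [("EU", 2013), ("EU", 2013), ("EU", 2013), ("US", 2013), ("US", 2014)]

# A row store keeps whole rows together; a column store keeps each column together.
region_col = [r[0] for r in rows]   # ['EU', 'EU', 'EU', 'US', 'US']
year_col = [r[1] for r in rows]     # [2013, 2013, 2013, 2013, 2014]

def rle(values):
    """Run-length encode a list as [(value, run_length), ...]."""
    out = []
    for v in values:
        if out and out[-1][0] == v:
            out[-1] = (v, out[-1][1] + 1)
        else:
            out.append((v, 1))
    return out

print(rle(region_col))  # [('EU', 3), ('US', 2)]
print(rle(year_col))    # [(2013, 4), (2014, 1)]
```

Repetitive columns collapse into a handful of runs, which is also why columnar scans for analytics can skip so much I/O.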

            Unlocking real-time insights

            Our big data strategy to unlock real-time insights continues with SQL Server 2014. We are embracing the role of data – it dramatically changes how business happens. Real-time data integration, new and large data sets, data signals from outside LOB systems, evolving analytics techniques and more fluid visualization and collaboration experiences are significant components of that change. Another foundational component of this is embracing cloud computing: nearly infinite scale, dramatically lowered cost for compute and storage, and data exchange between businesses. Data changes everything, and across the data platform we continue to democratize technology to bring new business value to our customers.

            (*) Quentin Clark, Corporate Vice President of Program Management leading the Data Platform Group. The rest of his above post emphasizes the great progress of the Microsoft SQL Server for which he also includes the below diagram:
            image

            Introductory information: Selling Windows 8 | Windows 8 business apps as big bet [msPartner YouTube channel, March 1, 2013]

            We recently sat down to talk Windows 8 with partners Scott Gosling from Data#3, Danny Burlage from Wortell and Carl Mazzanti from eMazzanti Technologies. In a conversation led by Erwin Visser (*), Windows Commercial, and our own Jon Roskill and Kat Tillman we discussed the business potential of Windows 8 and why apps are key. In this segment, learn why Windows 8 business apps are a big bet.

            TechEd North America 2013 – Windows 8.1 Enterprise Build 9415 [lyraull [Microsoft Spain] YouTube channel, June 4, 2013]

            During the keynote address, Iain McDonald, partner director of program management for Windows, [starting at [6:36]] detailed key business features in the recently announced Windows 8.1 update — including advances in security, management, mobility and networking — to offer the best business tablets with the most powerful operating system for today’s modern business needs.

            OR as described by Antoine Leblond (*) in Continuing the Windows 8 vision with Windows 8.1 [Blogging Windows, May 30, 2013]

            Windows 8.1 will advance the bold vision set forward with Windows 8 to deliver the next generation of PCs, tablets, and a range of industry devices, and the experiences customers — both consumers and businesses alike — need and will just expect moving forward. It’s Windows 8 even better. Not only will Windows 8.1 respond to customer feedback, but it will add new features and functionality that advance the touch experience and mobile computing’s potential.

            Windows 8.1 will deliver improvements and enhancements in key areas like personalization, search, the built-in apps, the Windows Store experience, and cloud connectivity. Windows 8.1 will also include big bets for business in areas such as management and security; we’ll have more to say on these next week at TechEd North America. Today, I am happy to share a “first look” at Windows 8.1 and outline some of the improvements, enhancements and changes customers will see. […]

            (*) Antoine Leblond, Corporate Vice President (CVP) of Windows Program Management. His above post from last Thursday was continued by Modern Business in Mind: Windows 8.1 at TechEd 2013 [June 3, 2013] from Erwin Visser (*) describing some of the features that businesses can look forward to in Windows 8.1 such as

            Networking features optimized for mobile productivity. Windows 8.1 improves mobile productivity for today’s workforce with new networking capabilities that take advantage of NFC-tagged and Wi-Fi [Miracast etc.] connected devices […]

            Security enhancements for device proliferation and mobility. Security continues to be a top priority for companies across the world, so we’re making sure we continue to invest resources to help you protect your corporate data, applications and devices […]

            Improved management solutions to make BYOD a reality. As BYOD scenarios continue to grow in popularity among businesses, Windows 8.1 will make managing mobile devices even easier for IT Pros […]

            More control over business devices. Businesses can more effectively deliver an intended experience to their end users – whether that be employees or customers. … Windows Embedded 8.1 Industry: our offering for industry devices like POS systems, ATMs, and digital signage that provides a broader set of device lockdown capabilities. […]

            On June 26, at the Build developer conference in San Francisco, Microsoft will release a public preview of Windows 8.1 for Windows 8, Windows RT and Windows Embedded 8.1 Industry. Upgrading to Windows 8.1 is simple, as the update does not introduce any new hardware requirements and all existing Windows Store apps are compatible. […]

            (*) Erwin Visser, Senior Director of Windows Commercial Business Group

            OR putting all this together: Microsoft unveils what’s next for enterprise IT [press release, June 3, 2013]

            New wave of 2013 products brings it all together for hybrid cloud, mobile employees and modern application development.

            NEW ORLEANS — June 3, 2013 — At TechEd North America 2013, Microsoft Corp. introduced a portfolio of new solutions to help businesses thrive in the era of cloud computing and connected devices. In today’s keynote address, Server & Tools Corporate Vice President Brad Anderson and fellow executives showcased how new offerings across client, datacenter infrastructure, public cloud and application development help deliver the most comprehensive, connected enterprise platform.

            “The products and services introduced today illustrate how Microsoft is the company that businesses can bet on as they embrace cloud computing, deliver critical applications, and empower employee productivity in new and exciting ways,” Anderson said. “Only Microsoft connects the dots for the enterprise from ‘client to cloud.’”

            Today’s keynote featured several customers, including luxury car manufacturer Aston Martin. The company is an example of the many enterprises that use the full range of Microsoft products and cloud platforms for IT success.

            Driving Strategy and Innovation with the Power of the Microsoft Cloud OS Vision [MSCloudOS YouTube channel, June 3, 2013]

            Behind every luxury sports car produced by Aston Martin is a sophisticated IT Infrastructure. The goal of the Aston Martin IT team is to optimize that infrastructure so it performs as efficiently as the production line it supports. This video describes how Aston Martin has used cloud and hybrid-based solutions to deliver innovation and strategy to the business.

            “Our staff’s sole purpose is to provide advanced technology that enables Aston Martin to build the most beautiful, iconic sports cars in the world,” said Daniel Roach-Rooke, IT infrastructure manager, Aston Martin. “From corporate desktops and software development to private and public cloud, Microsoft is our IT vendor of choice.”

            Fueling hybrid cloud

            At TechEd, Microsoft introduced upcoming releases of its key enterprise IT solutions for hybrid cloud: Windows Server 2012 R2, System Center 2012 R2 and SQL Server 2014. Available in preview later this month, the products break down boundaries between customer datacenters, service provider datacenters and Windows Azure. Using them, enterprises can make IT services and applications available across clouds and scale them up or down according to business needs. Windows Server 2012 R2 and System Center 2012 R2 are slated to release by the end of calendar year 2013, with SQL Server 2014 slated for release shortly thereafter.

            With advances in virtualization, software-defined networking, data storage and recovery, in-memory transaction processing, and more, these solutions were engineered with Microsoft’s “cloud-first” focus, including a faster pace of development and release to market. They incorporate Microsoft’s experience running large-scale cloud services, connect to Windows Azure and work together to provide a consistent platform for powerful hybrid cloud scenarios. More information can be found at blog posts by Anderson about Windows Server and System Center and by Quentin Clark about SQL Server.

            Further showcasing Microsoft’s hybrid cloud advantage, today the company also announced the public preview of Windows Azure BizTalk Services for enterprise integration solutions, both on-premises and in the cloud. In addition, Windows Azure now offers industry-leading, per-minute billing for virtual machines, Web roles and worker roles that improves cloud economics for customers. More information is available at the Windows Azure blog.

            Windows 8.1: Empowering modern business

            During the keynote address, Iain McDonald, partner director of program management for Windows, detailed key business features in the recently announced Windows 8.1 update — including advances in security, management, mobility and networking — to offer the best business tablets with the most powerful operating system for today’s modern business needs.

            New networking features in Windows 8.1 aim to improve mobile productivity for today’s workforce, with system-on-a-chip (SoC)-integrated mobile broadband, native Miracast wireless display and near field communication (NFC)-based pairing with enterprise printers. Security is also enhanced in the new update to address device proliferation and to protect corporate data and applications with fingerprint-based biometrics, multifactor authentication on tablets and remote business data removal to securely wipe company data from a device. And improved management capabilities in Windows 8.1 give customers more flexibility with supported options such as System Center Configuration Manager 2012 R2 and new mobile device management (MDM) solutions with third-party MDM partners, in addition to updated Windows Intune support.

            On June 26, at the Build 2013 developer conference in San Francisco, Microsoft will release a public preview of the Windows 8.1 update for Windows 8 and Windows RT customers. More information on new features found in Windows 8.1 for businesses, including updated Windows deployment guidance for businesses, is available on the Windows for your Business blog.

            Fostering modern application development

            Microsoft today also introduced Visual Studio 2013 and demonstrated new capabilities for improving the application lifecycle, both on-premises and in the cloud. A preview of Visual Studio 2013, with its new enhancements for agile portfolio planning, developer productivity, team collaboration, quality enablement and DevOps, is slated for release in the coming weeks, timed with the Build conference.

            Furthermore, Microsoft today announced an agreement to acquire InCycle Software Inc.’s InRelease Business Unit. InRelease is a leading release management solution for Microsoft .NET and Windows Server applications. This acquisition will extend Microsoft’s offerings in the application lifecycle management and DevOps market. More information is available on S. Somasegar’s blog.

            In addition, the company today announced new benefits that enable Microsoft Developer Network (MSDN) subscribers to more easily develop and test more applications with Windows Azure. New enhancements include up to $150 worth of Windows Azure platform services per month at no additional cost for Visual Studio Professional, Premium or Ultimate MSDN subscribers and new use rights to run select MSDN software in the cloud.

            Founded in 1975, Microsoft (Nasdaq “MSFT”) is the worldwide leader in software, services and solutions that help people and businesses realize their full potential.


            Deep technical evangelism and development team inside the DPE (Developer and Platform Evangelism) unit of Microsoft

            It is a fantastic gig – we’re working with developers, designers, and IT pros from across the industry – from the consumer to enterprise to startups to hobbyists – helping them create amazing next generation apps, build the frameworks that make all this easier, and share our experiences with the community.

            [John Shewchuk, Technical Fellow at Microsoft, Chief Technology Officer for the Microsoft Developer Platform]

            Source: My New Gig [JohnShew‘s MSDN Blog, May 12, 2013] from which the following excerpts will add more information to the above mission statement:

            To do this work I have an incredible team with people like Eric Schmidt, who leads our consumer applications efforts and has done ground-breaking work on projects like [NBC’s] Sunday Night Football (which is up for a Sports Emmy for Outstanding Live Sports Series).

            [In fact, on May 7 the Sports Emmy was awarded for the 5th time, of which the last four awards were won with the program using technology that started with Silverlight 3.0 and IIS Smooth Streaming in 2009 for Sunday Night Football live streaming with a highly advanced and customized viewing experience. This led to a continuously evolving and expanding cooperation which culminated on April 9, 2013 in the announcement that Microsoft Corp. and NBC Sports Group are partnering to use Windows Azure Media Services across NBC Sports’ digital platforms, including NBCSports.com, NBCOlympics.com and GolfChannel.com. The new alliance aims to deliver live and on-demand programming of more than 5,000 hours of sporting events plus the Sochi 2014 Olympic Games for NBC Sports’ digital platforms. More details about that appear later on.]

            Patrick Chanezon just joined us from VMware where he was driving their cloud and tools developer relations – he has a ton of expertise in the open source space which will be increasingly important given our new Azure IaaS support for Linux.

            … we also get to play with all the newest and coolest technologies we’re delivering to developers these days – everything from Windows to Xbox to Windows Phone – and we connect it to the latest cloud services from Azure, Office, and Bing.

            James Whittaker [now as Partner Technical Evangelist at Microsoft] – a known industry disruptor and incredible speaker joins us from Bing where he has been leading the development team making Bing knowledge available programmatically – many people may know him from his viral blog post on why he left Google for Microsoft.

            As for John Shewchuk himself, he describes his latest achievement in the same post as follows:

            As many of you know, for the last few years I’ve been plugging away deep in the plumbing of enterprise identity and Reimagining Active Directory for the cloud.  It’s been a great experience and I couldn’t be more proud of all the cool stuff that has gone on across the industry to enable the world of claims-based identity and identity as a service.  Over the years I’ve gotten to know many identity leaders including Kim Cameron, Craig Burton, and Andre Durand and have worked with many other great people at companies like Shell, Sun, IBM, Google, and Facebook.
            Building on all this collaboration, just a few weeks ago here at Microsoft we reached a major milestone with the official release of Windows Azure Active Directory (AAD). Today all of Microsoft’s major organizational cloud services build on AAD – this includes Azure, Office 365, and Dynamics. AAD supports almost 3 million organizations through 14 global data centers with 99.97% availability.  This level of scale and availability is unprecedented for a turnkey identity management service – it’s a huge accomplishment.  Although I love the SaaS and scale aspects of AAD, I’ve spent my career working with developers – so I’m stoked that we have made all this available to developers through new technologies like the AAD Graph API.
            It is always sad to move on from a great project, but with the release of AAD it is an ideal time to transition and start a new role.  So I’m happy to announce that I’m headed to Microsoft’s Developer & Platform Evangelism (DPE) team, working for Steve Guggenheimer.  My role is to lead the team doing the deep technical evangelism and development here in DPE.

            If one adds to that all of John Shewchuk’s contributions listed in the Experience section of his LinkedIn profile:

            Technical Fellow
            Microsoft
            March 2008 – Present (5 years 2 months)
            Current responsibilities include delivering Windows Azure Identity, Access, and Directory Services and defining platform strategy for Microsoft’s Business Productivity Online Services (BPOS).
            Recent deliverables include Windows Azure Access Control and Application Messaging / Service Bus Services, SQL Azure, and Active Directory.
            Member of the Server and Tools Business (STB) Technical Leadership Team. Key participant in the definition of overall technical and business strategy for several divisions across STB.
            Distinguished Engineer
            Microsoft
            2005 – 2008 (3 years)
            Delivered Windows Communication Foundation (WCF).
            Responsible for Active Directory technical strategy. Worked to unify Active Directory product suite. Released Active Directory Federation Services (ADFS).
            Software Engineer
            Microsoft
            1996 – 2005 (9 years)
            Member of architecture team that drove the first and subsequent releases of .NET.
            Drove transformation of Visual Studio to enable web development.
            Authored and drove technical strategy for Web standards. Responsible for key cross-industry collaborations with IBM, Sun, and many others. Key participant in defining strategy for enterprise development
            Group Program Manager
            Microsoft
            1993 – 1996 (3 years)
            Drove the first release of Visual Studio.
            Delivered web development tools including Visual InterDev. Later these became the basis for Visual Studio web tools and web execution platform.
            Delivered advanced browser features including 2D layout and progressive rendering. Broad range of patents covering many core web technologies.
            Vice President and Founder
            Daily Planet Software
            1990 – 1993 (3 years)
            Microsoft acquired Daily Planet Software in Q4CY93 [and morphed it into “Blackbird,” the online-content authoring system for MSN].

            so after adding up all those contributions, not only to Microsoft but to software engineering in general, one can really understand what a larger-than-life figure John Shewchuk is. Also note that Microsoft’s DPE unit has never had such an outstanding contributor on its staff, not even in the units organizationally preceding it (the DRG (Developer Relations Group) formed in 1984, the ADCU (Application Developer Customer Unit) introduced in 1997, which evolved into DPE in October 2011). It is also the first time that Microsoft DPE has had a developer-focused CTO organization properly staffed with excellent contributors. Unofficial information puts the size of this team, central to DPE, at over 100 people and growing. At the moment we know only the leadership figures of the CTO organization:
            James Whittaker for the partner activities (as coming from his new LinkedIn title given above)
            Patrick Chanezon “initially focused on the enterprise market” (as described by Chanezon in the below details)
            Eric Schmidt leading the consumer applications efforts (as explicitly stated by Shewchuk above)

            So at this point we can understand this extremely important, we might even say strategic, addition to the DPE unit only through the professional stances of its leadership figures, including the leader of the team, Shewchuk himself. This is why, instead of the usual details sections, I am providing here the following one:

            More light on the leaders of the new deep technical evangelism and development team:

            James Whittaker’s Quality Software Crusade from Academia to Microsoft, then Google and now back to Microsoft [this same ‘Experiencing the Cloud’ blog, March 14 – April 12, 2012]
            James Whittaker‏ @docjamesw 8:19 AM – 8 Apr 13

            I gave a blunt, incendiary talk at MS. My punishment: they made it my day job. Watch out world, Microsoft just gave me a speaking role.

            James Whittaker‏ @docjamesw 3:54 PM – 8 May 13

            I finally “met” the famous @maryjofoley …nice talking to you today.

            from which Mary Jo Foley published the following in her Microsoft builds a deep-tech team to attract next-gen developers [ZDNet, May 13, 2013]

            Whittaker’s most recent gig at Microsoft was development manager for the Microsoft knowledge platform as part of the Bing team. 

            “When Microsoft talks about devices and services, that’s a two-legged stool,” said Whittaker. “The third leg is knowledge. We’re embedding knowledge into everything from Xbox, to Office, to third-party products.”

            Whittaker said “dev platform” is no longer simply the operating system and related application programming interfaces (APIs). It’s the whole ecosystem, he said, including information that Bing extracts from the Web, like catalogs, weather, and maps. The goal is to make this available inside applications built by both Microsoft and third-party developers. 

            “Actions can be performed on these entities. We have hundreds of millions of things we can provide that go beyond the blue links (in search engines),” Whittaker said.

            A New Era of Computing [Channel 9 video of the ALM Summit 3 plenary session by James Whittaker, Jan 30, 2013], click on the image to watch (highly recommended)

            History will look back and identify September 2012 as the dawn of a new computing paradigm and the official end of the “Search-and-Browse” era [of the 2000s] that Google dominated. James Whittaker talks about this momentous event, shares some history about prior eras, and looks ahead to what this new era brings.

            image

            Explanation from the video:

            [19:58] September 2012 is “when total search volume went down first. We don’t need to search anymore. It turns out that if you search long enough you find a bunch of stuff, and you don’t have to search for it anymore.”

            [21:00] “Apps are ingesting the web too. Apps are better at searching than browsers and search engines.”

            [22:08] “Apps are fundamentally a better way to search because they’re only looking at the part of the web you’ve been interested in. How do we know what you are interested in? Because you are using the app.

            So our habits are changing and this era has ended.”

            Past the middle of the talk [38:26 – 40:00] he emphasizes the 3 “Experiences” among Google’s current Top 10 revenue earners, rather than “Apps”, in the era “when the web goes away”, as leading to “Data is currency” for the new era:

            image

            At the very end of his presentation (from [46:09] to [52:20]), as a forward-looking “Know & Do” experience, he describes, with a kind of demonstrating “screenshot”, the “I need a vacation” experience, which should naturally start in one’s calendar and end there as well.

            Hello Microsoft! [Patrick Chanezon’s blog, May 13, 2013]

            On April 29th 2013, I joined Microsoft’s legendary Developer and Platform Evangelism team, where I will initially focus on the Enterprise market. I will report to Technical Fellow John Shewchuk, joining his new team of top-notch technical evangelists, like Xoogler James Whittaker and Microsoft veteran Eric Schmidt. Mary Jo Foley wrote a nice piece about our team on ZDNet today. I will be based in the Microsoft San Francisco office.

            How did it happen?
            I spent most of my career competing with Microsoft, at Netscape, Sun, Google and VMware. Competition builds respect, competitors force you to question your assumptions and to constantly evolve. For many of my friends, this move came as a total shock. What made me open to the idea of joining Microsoft is a presentation from Scott Guthrie about Windows Azure at NodeConf 2012 last summer. He presented from a Mac laptop, launched Google Chrome, went to the Cloud9 IDE, edited a Node app pulled from Github, and pushed it to Azure from the cloud IDE: to me this indicated a real change of mentality at Microsoft, and a new openness. Clearly they had listened to what developers ask from a cloud platform. Later on, when my friend Srikanth Satyanarayana pinged me to start conversations with Microsoft, I was open to it. I met with Satya Nadella, and realized that our visions for where the cloud was going were very aligned. Further conversations with Scott Guthrie about Azure, John Shewchuk and Steve Guggenheimer about developer evangelism convinced me this was an adventure I had to take!
            Why Microsoft?
            Joining Microsoft boils down to 4 reasons: People, Learning, Technology, Impact.
            People: in my late 30′s I realized that the people you work with, for and around are as important as what you’re working on. Microsoft has many people I have admired from the outside, like Dare Obasanjo, Eric Meijer, Scott Guthrie, Jon Udell, Scott Hanselman, Jeff Sandquist, Andrew Shuman or Anders Hejlsberg. The team I join has a fantastic roster of A-players with whom I’ll have fun and from whom I will learn.

            image

            Learning: I’m a learner at heart. I am curious, I read a lot, and I like to learn from people I work with. I also love to share what I learned with others. My kids loved this book called My Friends, by Taro Gomi, which goes like this: “I learned to walk from my friend the cat, I learned to jump from my friend the dog…”.
            In my career it worked the same way: I learned algorithmics from my teacher Christian Vial, I learned internet protocols from my friend Nicolas Pioch, I learned open source from my friend Alejandro Abdelnur, I learned social media from my friend Loic Lemeur, I learned developer relations from my friend Vic Gundotra, I learned platform strategy and storytelling from my friend Charles Fitzgerald… I love doing developer relations, and my two mentors in this area over the past 8 years, Vic and Charles, both came from the Microsoft DPE team. I’m coming to the source for more learning. This team is more than a 1000 people worldwide, and over the past 10 years they defined what tech evangelism is about: they operate at a larger scale and cover a wider scope than any of the teams I worked with. I am very excited to join them.
            Technology: Windows Azure is Enterprise ready, more open than people think, and is a complete platform, from infrastructure to services, mobile and Big Data. Azure has matured a lot in the past few years, it covers IaaS, PaaS and SaaS, their PaaS service is multi-framework and multi-service, with a marketplace of add-ons, it has a mobile backend as a service for Windows Phone, iOS, Android and HTML5, and includes Hadoop and Big Data services. It is in production today, has been battle tested for years as the base for many Microsoft first party apps and services, and is ready for the Enterprise, with a true public/private/hybrid solution: with Windows Server 2013, System Center and Azure you can start building your hybrid cloud today. The team ships important new features regularly, my favorite being the point to site and software vpn features announced a few weeks ago, which will drastically lower the barrier to create hybrid clouds. Azure is not a Windows/.NET only platform, it is more open than people give it credit for: you can provision Linux VMs, and the PaaS supports .NET, Java, PHP, Node, Python, Ruby, with open source (Apache 2 license) SDKs on Github and an Eclipse plugin, built by the Microsoft Open Technologies team. Scott Guthrie gives a very good overview of Windows Azure in this video from the Windows Azure Conf 3 weeks ago.

            image

            Impact: as a kid, I was reading a lot of science fiction, and got my first computer (a TRS-80) when I was 10 years old. As I explain in many of my presentations (like Portrait of the developer as The Artist), my childhood dreams were to change the world through technology, and more specifically computers. My dreams are far from being fulfilled today: it is true that we have more powerful machines and software tools, and technology changed the world in many aspects, but machines are still hard to program, and software engineering needs to evolve to let us work at a higher level of abstraction.
            The move to a devices and services world is an important architecture change like we see every 20 years in the software industry. Cloud platforms have the potential to help developers build smarter applications faster, and change entire areas of the human experience. It has started to happen in the consumer applications space, but the next big wave of change is the consumerization of Enterprise IT, where developers and IT professionals can completely transform the way enterprises work, driving business value faster, enabling new capabilities and business models. My goal is to help them in this transformation, and Microsoft is the place where I can have the most impact.
            Here’s a quick video to summarize it all: developers, developers, developers, think big and look up at the sky, its color is Azure!
            Developers, Developers, Developers A homage to you, developers I interacted with around the world, in the past 8 years doing developer relations at Google and VMware. http://wordpress.chanezon.com/2013/05/10/goodbye-vmware/
            If you have never tried Azure, or have tried it a year ago, sign up for a free trial and give it a go! I hope to see many of you at the Build conference in June in San Francisco.

            – Mary Jo Foley published the following about Chanezon in her Microsoft builds a deep-tech team to attract next-gen developers [ZDNet, May 13, 2013]:

            “We’re at a deep architectural inflection point right now in the enterprise,” said Chanezon. “Devs need new ways of working, new apps and new frameworks. There’s the whole dev-ops movement, plus the move to become more agile.”

            Chanezon said he joined Microsoft because he felt the company’s new devices plus services strategy really embraces these changes. He said while Google had devices and services, too, it didn’t have the private/hybrid cloud component which Microsoft also brings to the enterprise-dev table. As a big believer in the power and potential contribution of open source, he said he was encouraged to see that Azure has become a very open-source-friendly platform.

            – Mary Jo Foley published the following about Schmidt in her Microsoft builds a deep-tech team to attract next-gen developers [ZDNet, May 13, 2013]:

            Schmidt joined DPE six years ago [as director of DPE’s Media and Advertising Initiatives team], bringing his media specialization to the media and entertainment, social and gaming verticals. These are “where people are thinking about attaching devices to a lifestyle,” he said. 

            A big target for Schmidt is mobile developers, specifically those writing for iOS and Android who may not know how their skills can be transferred to Windows 8 and Windows Phone 8. “We’re showing them how what they already know is correlated,” he said, while playing up the message that the iOS and Android gold mines are drying up.

            Silverlight delivers online viewing experience for Sunday Night Football [Silverlight and Windows Phone SDK blog, Sept 10, 2009]

            The NFL and NBC will be delivering the entire Sunday Night Football season by using Silverlight 3.0 and IIS Smooth Streaming. The first game of the season will be broadcast tonight, with the Tennessee Titans vs. the Pittsburgh Steelers. Game starts at 5:00pm PST and you can watch online for free: http://snfextra.nbcsports.com/.

            image

            Here are a few of the benefits Silverlight delivers:

            • A full screen video player that is capable of delivering 720p HD video. TV quality on the web.
            • A main HD video feed, plus 4 user-selectable alternate synchronized camera feeds that allows users to switch camera angles themselves. Your TV can’t do that.
            • Adaptive smooth streaming of live HD video, which enables the video player to automatically switch bitrates on the fly depending on networking/CPU conditions. No buffering/stuttering experience.
            • DVR support of the live video, including Pause, Instant Replay, Slow Motion, Skip Forward/Back. You can pause and rewind on live video.
            • Play-by-play data (touchdowns, fumbles, etc) inserted as tooltip chapter markers on the scrubber at the bottom allowing you to quickly seek to key moments. A smarter, contextual DVR.
            • Highlights of major plays created within minutes of the play. NBC is cutting on-demand highlights and publishing them on-the-fly with Smooth Streaming.
            • Sideline interviews with the players. No more channel surfing, you are one click away from additional content.
            • Game statistics. These are live stats coming directly in real-time from the NFL.
            • Game commentary and Q&A with the SNF hosts. Chat with the live TV broadcasters.

            Enjoy! http://snfextra.nbcsports.com/
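
            The adaptive Smooth Streaming behavior in the feature list above (the player automatically switching bitrates on the fly depending on network/CPU conditions) can be illustrated with a minimal sketch. The bitrate ladder and safety margin below are hypothetical illustration values, not NBC's or Microsoft's actual Smooth Streaming heuristics:

            ```python
            # Minimal sketch of adaptive bitrate selection, in the spirit of the
            # Smooth Streaming behavior described above. The rendition ladder and
            # the safety margin are hypothetical values chosen for illustration.

            # Available renditions in bits per second, ascending (lowest .. 720p HD).
            BITRATE_LADDER = [350_000, 800_000, 1_500_000, 2_250_000, 3_450_000]

            def pick_bitrate(measured_bandwidth_bps, safety_margin=0.8):
                """Choose the highest rendition whose bitrate fits within a
                conservative fraction of the measured network throughput, so the
                player downshifts instead of stalling when bandwidth drops."""
                budget = measured_bandwidth_bps * safety_margin
                chosen = BITRATE_LADDER[0]  # always fall back to the lowest rendition
                for rate in BITRATE_LADDER:
                    if rate <= budget:
                        chosen = rate
                return chosen

            if __name__ == "__main__":
                # Throughput drops mid-stream: the player steps down rather than buffering.
                for bw in (4_000_000, 1_200_000, 300_000):
                    print(bw, "->", pick_bitrate(bw))
            ```

            A real player would re-run a selection like this for every few-second video fragment, which is what makes the "no buffering/stuttering experience" claim above possible.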

            Microsoft Silverlight and NBC Bring Winter Games to the Web in High Definition [Microsoft feature story, Feb 12, 2010]

            Microsoft Silverlight is the player of choice for NBC’s online viewing experience of the 2010 Winter Games in Vancouver.

            REDMOND, Wash. — Feb. 12, 2010 — NBC and Silverlight have once again teamed up to bring Winter Games coverage to the Web – this time in high definition.
            For the next 16 days, people all over the world will watch the Winter Games on television. Increasingly, they’ll be tuning in online as the world’s top athletes compete for gold and glory.
            NBC will once again use Silverlight, Microsoft’s fast-growing, smooth-streaming video and animation plug-in for browsers, to bring full coverage and highlights to NBCOlympics.com. In 2008 for Beijing, the NBC-Silverlight partnership yielded not only revolutionary Web coverage of a sporting event, but a record number of viewers: 52.1 million people logged on to watch 9.9 million hours of video.
            At that time the Silverlight platform was so new that NBC also offered Windows Media Player alongside it. After the success of Beijing and with nearly 50 percent of Internet-connected devices running Silverlight, NBC decided to consolidate on Silverlight for the Vancouver Games.

            image

            Microsoft employees Jason Suess (left) and Eric Schmidt take a break in an NBC production studio.

            In addition, NBC and Silverlight teams are working together on other major sporting events such as Wimbledon and NFL Sunday Night Football.

            “It’s really been amazing to see that partnership and friendship with NBC grow over the last year and a half,” says Jason Suess, principal technical evangelist for Silverlight. “I expect many more events as our partnership gets tighter and tighter.”
            With Silverlight, viewers can rewind and fast forward the action, or use pause and slow-motion. The player also scales the quality of the video to whatever a user’s machine can handle, delivering up to 720p – the highest resolution possible under current digital television standards.
            “After Beijing, what we heard loud and clear was if you can provide a higher quality experience, users will definitely spend more time in that experience,” Suess says.
            The Silverlight team also worked with NBC to provide special behind-the-scenes tools for the network, including the ability to insert mid-stream advertising, and a rough cut editor that allows NBC personnel to quickly edit and post highlights on the Web.
            “With Michael Phelps going for eight gold medals in Beijing, every time he’d win there would be a massive rush to the site to see him winning the latest gold,” Suess says. “The challenge there was for NBC to have the content on the site in time to meet the demand. Now editors can go in literally while a (video) stream is happening and cut a highlight.”
            Suess said the Winter Games are at a different scale from the massive Summer Games, with far fewer events and more niche sports. Still, Microsoft has worked hard to provide the most engaging photo and video experience possible, he says.

            Silverlight Powered Emmy Nominated Sunday Night Football [Silverlight Team on Silverlight Blog, April 19, 2010]

            This NFL season, NBC thrilled football fans by broadcasting Sunday Night Football on 2 screens – television and online. And now, as a result of this great work, Sunday Night Football Extra and NBC Sports have been nominated for a 2010 Sports Emmy® Nomination! NBC Sports teamed with Microsoft Silverlight and Vertigo to design and develop a visually stunning, interactive online video experience. The Sunday Night Football Extra Player featured Microsoft Smooth Streaming technology providing a customized viewing experience that smoothly and automatically adjusted to individual users’ bandwidth and computer’s performance in real time. The SNF Extra Player also touted an interactive user experience featuring an unprecedented five synchronized camera angles all in true 720p HD, slow-motion replay, full DVR controls, real time key plays integration, real-time statistics, and live interaction with commentators.

            The Sports Emmy® Awards will be held in New York City on Monday, April 26, 2010, and will recognize outstanding achievement in sports television coverage. This nomination is really the culmination of the innovative thinking, hard work and dedication demonstrated by the team that NBC Sports, Vertigo and a select team of key partners brought together for Sunday Night Football Extra — and Silverlight is the engine that made it possible. If you want to learn more about the nomination, you can also visit Vertigo’s site at http://bit.ly/vertigo-snf.
            The Result?
            • Number of Games: 17 football games streamed via Silverlight
            • Average time tuned in: 29 minutes (about 24 minutes longer than average time spent tuning in on broadcast TV)
            • Number of Viewers: Over 2.2 million football fans tuned in on NBCSports.com to watch the Season live and in full HD
            • Hours of Video: Approximately 1 million hours of video streamed
            • Peak users: 38,500 total peak concurrent users
            • What technology made this possible? IIS 7, IIS Media Services and Silverlight Rough Cut Editor
            Tons of great information about how SNF came together online can be found in the case study and whitepaper live on Microsoft.com.

            Interactive Media Player to Bring PDC to Developers Worldwide [Microsoft feature story, Oct 27, 2010]

            A new interactive media player will enable developers worldwide to virtually attend this week’s Professional Developers Conference at microsoftpdc.com. Using Silverlight and Windows Azure, Microsoft is providing many of the features NBC used when broadcasting the Olympics online.

            With the player, Microsoft is introducing a new way of bringing a live, in-person event to a much broader audience, said Eric Schmidt, Microsoft’s senior director of Developer Platform Evangelism. “The goal is to narrow the gap between audience and speaker,” he said.

            Schmidt heads up the team that has helped stream a number of major events recently, including the 2010 U.S. Open Golf Championship, the 2010 Wimbledon Championship, and NBC’s Sunday Night Football. The team’s objective has been to reach large online audiences with immersive and interactive experiences. Along the way, they developed new ways of delivering multi-camera video and built new interactive models inside what has traditionally been just a video player. The team also built out frameworks so that customers and partners can create similar experiences leveraging Microsoft’s platform technologies in a turnkey manner.

            With the PDC10 virtual player, Microsoft is doing things it couldn’t have done just a few years ago, said Schmidt. All session content will be available live and on-demand in HD quality, and viewers will have the ability to pause and rewind the video at any point. They also can toggle back and forth between different camera feeds, allowing a viewer to cut between a presenter and the presentation material.

            The PDC player has a number of built-in interactive features. Real-time polling will enable speakers to query both the online and in-person audience for live feedback. Live Q&A will help the audience interact with the presenters while they’re delivering a session. And an inline Twitter feed will extend the conversation beyond the online player and into the Twitter domain.

            NBC SPORTS GROUP COLLECTS 11 SPORTS EMMY AWARDS, MOST OF ANY SPORTS MEDIA COMPANY [press release, May 7, 2013]

            London Olympics Garners Five Awards, Including Outstanding Live Event Turnaround

            Sunday Night Football Wins Fifth Consecutive Emmy for Outstanding Live Sports Series; Super Bowl XLVI Wins for Outstanding Live Sports Special

            Bob Costas, Al Michaels, Cris Collinsworth and Pierre McGuire Honored

            NEW YORK – May 7, 2013 – NBC Sports Group won 11 Sports Emmy Awards, the most of any sports media company for the third straight year; the London Olympics received five Emmys, including Outstanding Live Event Turnaround; Super Bowl XLVI won for Outstanding Live Sports Special; Sunday Night Football won its fifth consecutive award for Outstanding Live Sports Series; and Bob Costas, Al Michaels, Cris Collinsworth and Pierre McGuire were all honored in their respective categories at the 34th Annual Sports Emmy Awards, presented tonight by the National Academy of Television Arts and Sciences at Frederick P. Rose Hall, Home of Jazz at Lincoln Center.
            MARK LAZARUS, NBC SPORTS GROUP CHAIRMAN: “We could not be more proud of our dedicated team. Tonight is particularly special because we were recognized for our coverage of the London Olympics and the NFL, two properties that touch virtually everyone in the NBC Sports Group – and our on-air commentators. It’s rewarding to know that our talent continues to be recognized year in and year out by our peers.”
            Formed in January, 2011, the NBC Sports Group consists of NBC Sports, NBC Sports Network, Golf Channel, NBC Olympics, 11 NBC Sports Regional Networks, two regional news networks, NBC Sports Radio and NBCSports.com.
            NBCUniversal’s coverage of the London Olympics was honored with a total of five Emmy Awards in the following categories:
            • Outstanding Live Event Turnaround;
            • The George Wensel Technical Achievement Award – NBC, NBC Sports Network, NBCOlympics.com, Bravo, CNBC, MSNBC, Telemundo;
            • Outstanding Technical Team Studio;
            • The Dick Schaap Outstanding Writing Award;
            • Outstanding New Approaches, Sports Programming – NBCOlympics.com.

            For the fifth consecutive year, NBC Sports won Outstanding Live Sports Series for Sunday Night Football. NBC Sports has now won the award in six of the last seven years, also winning in 2007 for its NASCAR coverage.

            NBC Sports was also honored with the Emmy for Outstanding Live Sports Special for its coverage of Super Bowl XLVI. NBC Sports also received the Emmy in this category for its coverage of Super Bowl XLIII.
            Bob Costas was awarded his 25th career Emmy and fifth consecutive for Outstanding Sports Personality-Studio Host. Costas hosted the London Olympics, is the host of Football Night in America, NBC Sports’ acclaimed NFL studio show, and Costas Tonight, which airs on NBC Sports Network. He won the Emmy in the same category last year for his work on Football Night.

            Al Michaels was awarded the Emmy Award for Outstanding Sports Personality – Play-by-Play, for his work on Sunday Night Football. For Michaels, who received the Lifetime Achievement Award at the 32nd Annual Sports Emmy Awards in 2011, this marks his seventh career Emmy Award.

            Cris Collinsworth was awarded his fifth consecutive Emmy for Outstanding Sports Personality-Sports Event Analyst. This marks Collinsworth’s 14th career Emmy, which includes wins in 2007 and 2008 in the Studio Analyst category for work on Football Night in America.
            Pierre McGuire, NBC Sports Group’s “Inside the Glass” analyst for its NHL coverage, was awarded his first career Emmy for Outstanding Sports Personality – Sports Reporter.

            Microsoft Teams Up With NBC Sports Group to Deliver Compelling Sports Programming Across Digital Platforms Using Windows Azure [press release, April 9, 2013]

            New alliance aims to deliver live and on-demand programming of more than 5,000 hours of sporting events plus Sochi 2014 Olympic Games for NBC Sports’ digital platforms.

            LAS VEGAS — April 9, 2013 — Today at the National Association of Broadcasters Show, Microsoft Corp. and NBC Sports Group announced they are partnering to use Windows Azure Media Services across NBC Sports’ digital platforms, including NBCSports.com, NBCOlympics.com and GolfChannel.com.

            Through the agreement, which rolls out this summer, Microsoft will provide both live-streaming and on-demand viewing services for more than 5,000 hours of games and events on devices, such as smartphones, tablets and PCs. These services will allow sports fans to be able to relive or catch up on their favorite events and highlights that aired on NBC Sports Group platforms.
            Rick Cordella, senior vice president and general manager of digital media at NBC Sports Group, discusses how they use Windows Azure across their digital platforms.
            “NBC Sports Group is thrilled to be working with Microsoft,” said Rick Cordella, senior vice president and general manager of digital media at NBC Sports Group. “More and more of our audience is viewing our programming on Internet-enabled devices, so quality of service is important. Also, our programming reaches a national audience and needs to be available under challenging network conditions. We chose Microsoft because of its reputation for delivering an end-to-end experience that allows for seamless, high-quality video for both live and video-on-demand streaming.”
            NBC Sports Group’s unique portfolio of properties includes the Sochi 2014 Winter Olympic Games, “Sunday Night Football,” Notre Dame Football, Premier League soccer, Major League Soccer, Formula One and IndyCar racing, PGA TOUR, U.S. Open golf, French Open tennis, Triple Crown horse racing, and more.
            “Microsoft is constantly looking for innovative ways to utilize the power of the cloud, and we see Windows Azure Media Services as a high-demand offering,” said Scott Guthrie, corporate vice president at Microsoft. “As consumer demand for viewing media online on any available device grows, our partnership with NBC Sports Group gives us the opportunity to provide the best of cloud technology and bring world-class sporting events to audiences when and where they want them.”
            Microsoft has a broad partner ecosystem, which extends to the cloud. To bring the NBC Sports Group viewing experience to life, Microsoft is working with iStreamPlanet Co. and its live video workflow management product Aventus. Aventus will integrate with Windows Azure Media Services to provide a scalable, reliable, live video workflow solution to help bring NBC Sports Group programming to the cloud.
            NBC Sports Group and iStreamPlanet join a growing list of companies, including European Tour, deltatre, Dolby Laboratories Inc. and Digital Rapids Corp., which are working with Windows Azure to bring their broadcasting audiences or technologies to the cloud.
            In addition to Media Services, Windows Azure core services include Mobile Services, Cloud Services, Virtual Machines, Websites and Big Data. Customers can go to http://www.windowsazure.com for more information and to start their free trial.

            – Mary Jo Foley published the following about Shewchuk, the head of the team in her Microsoft builds a deep-tech team to attract next-gen developers [ZDNet, May 13, 2013]:

            “‘The platform’ is now a collection of capabilities across all of our products,” said John Shewchuk, the head of the recently formed technical evangelism and dev team. Our job is “helping devs stitch together solutions with these technologies.”

            “Devs” also is a much broader target audience for Microsoft than it once was. Back in the early DPE days, devs meant professional, full-time programmers. The target audience for Microsoft’s new deep-tech team includes anyone who writes a consumer, business or hybrid application. That means startups, enterprise customers and top consumer and business independent software vendors (ISVs).

            The Microsoft toolbox from which devs can choose to mix and match includes many technologies that didn’t exist a decade, or even just a few years, ago. They include everything from Windows Azure technologies, to Bing programming interfaces and datasets, to the WinRT framework underlying Windows 8 and Windows Server 2012. Microsoft’s next Xbox, Kinect, Windows Phones, Surfaces, Perceptive Pixel multitouch displays are among the targets for these technologies.

            “This is a playground. We get to work with stuff from all the different Microsoft business groups,” said Shewchuk. “It’s like geek heaven.”

            The idea of creating this kind of deep-tech team has been percolating since October 2012, when Microsoft veteran Steve Guggenheimer returned to Microsoft to head up DPE, according to Microsoft execs. Guggenheimer, in conjunction with Server and Tools Business chief Satya Nadella and with the blessing of CEO Steve Ballmer, set out to recruit some deeply technical evangelists with far-flung specializations.

            Shewchuk, a 20-year Microsoft veteran and one of the company’s Technical Fellows, agreed to spearhead the team. (Microsoft isn’t saying how large the new team is, but I’ve heard it could be over 100 people in size and growing.) Shewchuk, who is now the Chief Technology Officer for the Microsoft Developer Platform, was working for the last several years on Windows Azure, where he helped the company build Windows Azure Active Directory, Service Bus and SQL services. Shewchuk also was a key contributor to a number of other Microsoft dev technologies, including .Net, Visual Studio, Windows Communication Foundation and the Windows Identity Foundation.

            “The idea is to bridge our inside developers to outside developers,” Shewchuk said. “We want to get the top developers to adopt our platform.”

            Shewchuk described the new deep-tech team as a place where Microsoft pulls together its own “world-class” developers to exchange ideas among themselves and with the outside world. Because the technologies in Microsoft’s new stack are all at different points in their maturity cycles, the Microsoft tech team will do everything from building new frameworks, to developing code that ties together disparate products, to making code and templates available for external use through services like GitHub or CodePlex. In some cases, the “developers” who take advantage of these pieces may be Microsoft’s own product teams, who may want to incorporate code (and even the developers who wrote it) directly into their units.

            More information:
            John Shewchuk’s Profile [MSDN, May 2013]

            John Shewchuk is a Technical Fellow and the CTO for the Microsoft Developer Platform. John leads the team responsible for technical evangelism and development in DPE; his team partners with developers, designers, and IT pros to build next-gen applications using Microsoft’s devices and services, and they share those experiences with the developer community. John has been with Microsoft for almost 20 years. Most recently John focused on Azure, developing key platform services including Windows Azure Active Directory, Service Bus, and SQL services. He has been a key contributor to a wide range of technologies, including Visual Studio, .NET, WCF, WIF, IE, and AD. John is an advocate of and contributor to open source and Web standards – most recently he drove many of the contributions Microsoft made to OAuth 2. John has a BS in Electrical Engineering from Union College and an MS in Computer Science from Brown University. He lives in Redmond with his wife and four children.

            Microsoft Big Brains: John Shewchuk [Mary Jo Foley for All About Microsoft blog of ZDNet, Nov 20, 2008]

            Claim to Fame: One of the masterminds behind “Zurich,” a key component of Microsoft’s Azure cloud infrastructure, and a key player in Microsoft’s Federated Identity work [see also: Ozzie foreshadows ‘Zurich,’ Microsoft’s elastic cloud [same author, same place, July 24, 2008]]

            Bytes by MSDN: John Shewchuk and Rob Bagby discuss “Project Dallas” [on YouTube MrAbdoul9 channel, Jan 29, 2010; on Channel 9, Aug 29, 2010] – this is where OAuth is first mentioned

            John Shewchuk, a Microsoft Technical Fellow, leads the Project Zurich architecture and strategy teams, which are focused on extending Microsoft’s .NET application development technologies to the Internet “cloud.” Shewchuk works in Microsoft’s Connected Systems Division (CSD) where he leads the technical strategy team. Over the last several years Shewchuk and his team have developed a wide range of Internet-based application messaging and identity federation technologies. Additionally, he was a co-founder of the Windows Communication Foundation (WCF) team and has been a key contributor to cross-industry interoperability initiatives. Working in conjunction with others on his team, Shewchuk developed Web services specifications and managed technical collaborations with IBM, Sun, SAP, and many others. He also has been a key leader and contributor to Microsoft’s efforts in federated identity, access control, and privacy. Previously, Shewchuk worked on Microsoft’s development tools and runtimes and played a key role in the development of Visual Studio and .NET. Earlier in his career, he played a role in the development of many Internet technologies including stylesheets, browser behaviors, and Web server controls.

            Microsoft unveils AD Azure strategy, ID management reset [John Fontana for Identity Matters blog of ZDNet, May 25, 2012]

            After two years of work, Microsoft has unveiled details and its strategy around Active Directory for the cloud, anointing it the centerpiece of a comprehensive online identity management services strategy it thinks will profoundly alter the ID landscape.
            The company said changes to the current concepts around identity management need a “reset” to handle the “social enterprise.” Microsoft says it is “reimagining” how its Windows Azure Active Directory (WAAD) service helps developers create apps that connect the directory to SaaS apps and cloud platforms, corporate customers and social networks.
            “The term ‘identity management’ will be redefined to include everything needed to provide and consume identity in our increasingly networked and federated world,” Kim Cameron, an icon in the identity field and now a distinguished engineer working on identity at Microsoft, said on his blog. “This is so profound that it constitutes a ‘reset’.”
            At the center is WAAD, which is in use today mostly with Office 365 and Windows Intune customers. WAAD is a multitenant service designed for high availability and Internet scale.
            In a companion blog post to Cameron’s, John Shewchuk [see also Part 2 of that], a Microsoft Technical Fellow and key cog in the company’s cloud identity engineering, provided some details on WAAD, including new Internet-focused connectivity, mobility and collaboration features to support applications that run in the cloud.
            Shewchuk said the aim is to support technologies such as Java, and apps running on mobile devices including the iPhone or other cloud platforms such as Amazon’s AWS.
            Shewchuk said WAAD will be the cloud extension to on-premises Active Directory deployments enterprises have already made. The two are married using identity federation and directory synchronization.
            He said Microsoft made “significant changes to the internal architecture of Active Directory” in order to create WAAD.
            As an example, he said, “Instead of having an individual server operate as the Active Directory store and issue credentials, we split these capabilities into independent roles. We made issuing tokens a scale-out role in Windows Azure, and we partitioned the Active Directory store to operate across many servers and between data centers.”
            Some analysts are already noting the challenges Microsoft will have with its cloud directory.
            Mark Diodati, a research vice president at Gartner focusing on identity issues, told me in a conversation about changes the cloud is forcing on enterprise ID management that, “the addition of tablets and smartphones into the enterprise device mix exceeds Active Directory’s management capabilities and there is an impedance mismatch using Kerberos across the cloud.”
            While Shewchuk laid out the set-up for a Part 2 [see here: Part 2 where OAuth 2 is first mentioned as: “we currently support WS-Federation to enable SSO between the application and the directory. We also see the SAML/P, OAuth 2, and OpenID Connect protocols as a strategic focus and will be increasing support for these protocols”] of his blog that will focus on enhancements to WAAD, Kim Cameron painted the bigger picture on cloud identity going forward.
            He said companies adopting cloud technology will see dramatic changes over the next decade in the way identity management is delivered. “We all need to understand this change,” he stressed.
            Cameron said identity management as a service “will use the cloud to master the cloud”, and will provide the most reliable and cost-effective options.
            “Enterprises will use these services to manage authentication and authorization of internal employees, the supply chain, and customers (including individuals), leads and prospects. Governments will use them when interacting with other government agencies, enterprises and citizens.”
            And he added that enterprises will have to move beyond concepts that have guided their thinking to date.
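
            To make the protocol discussion above more concrete, here is a minimal sketch of what an OAuth 2 “client credentials” token request against Windows Azure Active Directory might look like, since OAuth 2 is named in the article as a strategic focus for WAAD. The tenant name, client id, secret, and endpoint shape below are illustrative assumptions, not details taken from the article:

            ```python
            # Hedged sketch: composing an OAuth 2 client-credentials token request
            # of the kind a service application might send to a directory tenant.
            # TENANT, the client id/secret, and the endpoint path are hypothetical.
            from urllib.parse import urlencode

            TENANT = "contoso.onmicrosoft.com"  # hypothetical tenant domain

            def build_token_request(client_id, client_secret, resource):
                """Return the (url, form_body) pair for an OAuth 2 client-credentials grant."""
                url = f"https://login.windows.net/{TENANT}/oauth2/token"
                body = urlencode({
                    "grant_type": "client_credentials",  # app-to-app flow, no user present
                    "client_id": client_id,
                    "client_secret": client_secret,
                    "resource": resource,  # the protected API the token is for
                })
                return url, body

            url, body = build_token_request("my-app-id", "my-secret", "https://graph.windows.net")
            print(url)
            ```

            The form body would be POSTed to the token endpoint, and the JSON response would carry the bearer token used on subsequent API calls; that round trip is omitted here to keep the sketch self-contained.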

            Identity & Access [MSFTws2012 YouTube channel, Nov 20, 2012]

            John Shewchuk talks about how to overcome identity and access challenges brought about by continuous services and connected devices. Learn more about Active Directory, Direct Access, and Dynamic Access Control at http://aka.ms/Ytidentity
            Current state-of-the-art:
            Welcome to the Active Directory Team Blog [MSDN blogs, April 15, 2013]
            Announcing some new capabilities in Azure Active Directory Graph Service [Windows Azure Active Directory Graph Team blog on MSDN, May 15, 2013]
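
            As a rough illustration of what “Graph Service” access looks like in practice, the following sketch composes a REST query URL against a directory tenant. The tenant domain and the api-version value are illustrative assumptions rather than details from the announcement:

            ```python
            # Hedged sketch: building a query URL for a directory Graph REST service.
            # The base host, tenant, and api-version below are hypothetical examples.
            from urllib.parse import quote, urlencode

            GRAPH_BASE = "https://graph.windows.net"

            def graph_url(tenant, entity_set, api_version="2013-04-05", odata=None):
                """Build a Graph REST URL, e.g. for listing users or groups in a tenant."""
                params = {"api-version": api_version}
                params.update(odata or {})  # OData options such as $top or $filter
                return f"{GRAPH_BASE}/{quote(tenant)}/{entity_set}?{urlencode(params)}"

            # List the first 10 users of a (hypothetical) tenant:
            print(graph_url("contoso.onmicrosoft.com", "users", odata={"$top": "10"}))
            ```

            The returned URL would then be fetched with a bearer token in the Authorization header; the response is a JSON feed of directory objects.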

            BUILD 2013, Windows 8.1, and Microsoft’s Deep-Tech Team: Hopeful News for Devs [Tim Huckaby on DevPro, May 16, 2013]

            It’s hard to change a culture. Having worked for or with Microsoft for over 20 years, I can tell you that I have a myriad of colleagues who are Microsoft employees, most of whom I call my friends and respect very much. Over the last several months, I’ve had several discouraging private conversations about where Microsoft’s developer goals, mission, and strategy were headed. I could see the problems and mistakes. Microsoft employees could see them, too. You probably saw them, too. It’s been frustrating. When the head guy in charge of Microsoft development ignores feedback, both internal feedback from Microsoft and external feedback from folks such as me and you, that builds a culture of secrecy and fear. Although that head guy is gone now [an obvious reference to Steven Sinofsky, ex Microsoft: The victim of an extremely complex web of the “western world” high-tech interests [‘Experiencing the Cloud’, Nov 13-20, 2012]], it’s still taken a long time to change that culture back to where it should be.
            In all honesty, I can tell you that I haven’t been encouraged about the developer platform at Microsoft in a while. However, today I’m encouraged for the first time in a long time. I see the culture changing. I hear people at Microsoft saying that the culture is changing. And there are several encouraging announcements emerging. Suddenly, I’m excited about Microsoft’s BUILD 2013 developer conference that’s being held in San Francisco from June 26 to June 28, and I’m not the old guy saying, “Get off my lawn!” However, I’d first like to present you all with some background that made me discouraged in the first place.
            Microsoft’s Development Woes
            I painfully read a recent blog post about Microsoft’s developer issues. I don’t even know who wrote it; the author didn’t put his or her name on the post. It’s painful because this person makes a ton of good points. Within the post, the author goes far enough back to put Win16 into perspective. It’s a very interesting read if you want the context of Microsoft’s developer problems through time and the speculation surrounding those problems. One of the author’s main points is that Microsoft has hung onto an obsolete Win32 API even though Intel took a completely different tack a decade ago with the GPU and multi-core processors; Microsoft could have picked any of several versions of Windows over time to start over, but it didn’t, which has caused developers a lot of pain.
            Related: Windows 8 Start Button Shenanigans
            Most recently, that developer pain has manifested with the introduction of the modern API in Windows 8, which has left many developers confused and angered. Much of the anger stems from the fact that the most successfully adopted and beloved developer technology in Microsoft history, Silverlight, was seemingly killed by the new modern API. Also seemingly killed was XNA. Several developers are also confused because Microsoft seems to be pushing the message that they should build enterprise applications in HTML5 and deliver them through the Windows Store.
            But, alas, there is hope! Recent announcements and speculations have me really encouraged.
            Encouraging Announcements from Microsoft
            On May 14, Microsoft officially announced the long-rumored Windows Blue, which is officially called Windows 8.1. It will be a free update to Windows 8. Windows 8.1 promises to fix several different problems that folks have been complaining about. It’s important to note that Windows 8.1 isn’t a service pack. It’s a full-blown upgrade to the OS. Microsoft promises several exciting things for developers to be announced at BUILD, which includes the public release of Windows 8.1.
            This month a minor Internet hysteria phenomenon occurred with the revelation of the Microsoft deep-tech team. Mary Jo Foley described it best as Microsoft’s new plan for reaching out to top-tier developers of all sizes to get them to take a look at the new and expanded Microsoft toolbox. There are several “big guns” who will be leading the effort.
            John Shewchuk is one of those “big guns.” I know John from a prior life at Microsoft. He’s a 20-year Microsoft veteran and one of the company’s Technical Fellows. He’s leading the team and serving as the Chief Technology Officer for the Microsoft Developer Platform. This is good news.
            My guess is that the deep-tech team was the brainchild of Microsoft veteran Steve Guggenheimer, who took the reins of heading the Developer and Platform Evangelism (DPE) team in October 2012. Affectionately known as “Guggs,” Steve Guggenheimer has a long and storied career at Microsoft.
            Patrick Chanezon is a new hire to Microsoft who will lead the enterprise evangelism efforts in Microsoft’s DPE unit from San Francisco. He joined Microsoft from VMware just weeks ago. This is a key hire that also seems to be really good news.
            More about those Microsoft people I respect; the people who get it; the people who effect change. Scott Guthrie is one of them. But everyone who knows the Microsoft platform knows who Scott Guthrie is. Another one of them is Gabor Fari. You probably don’t know his name. But Gabor is one of the many Microsoft folks who “get it.” Internally, he’s willing to criticize the company he works for and loves when it deserves it. He’s also the first to offer praise where Microsoft deserves it. Gabor’s title is Director of Life Sciences Solutions, and the developer platform at Microsoft is his passion. When I discussed the problems of the past and the excitement of the future with Gabor, he left me with this, and I believe it’s the perfect way to end this article:
            “I am very excited about the latest developments and news that has been released, and I am eagerly anticipating additional news from the BUILD conference. The slumbering lion still has spectacular fangs and teeth; and now he has woken up and is ready to roar.”

            Regarding Gabor Fari, I will include here the following link:
            Sanofi: Global Healthcare Leader Deploys Intelligent Content Framework, Speeds Time-to-Market [Microsoft Case Study, April 16, 2013], from which the following excerpts best describe Fari’s involvement and role in strategic developments:

            In January 2011, Sanofi launched a program called CRUISE—Content Re-Use Information System for Electronic Health. Through CRUISE, the company set out to develop a content management solution that transverses the company’s research and development efforts. The program charter of CRUISE is to implement processes and tools that enable stakeholders to author, assemble, review, approve, reuse, publish, and deliver high-quality, consistent, and compliant content and documentation throughout the product development life cycle—aiding the submission to regulatory agencies and other industry audiences. “The idea is to find ways to intelligently and seamlessly manage content authoring and production,” says Bhanu Bahl, Senior Manager of Clinical Sciences and Operation Platform at Sanofi. “The key business objective is to reduce the effort required to prepare documents through a synergy of optimized processes and enabling technologies.”
            CRUISE has three pillars. One pillar involves simplifying the documentation process in a way that makes it possible to reuse content in various materials. Another pillar revolves around services that involve the many different documentation deliverables. The third pillar focuses on the technology solution, which is designed as a content library that tags and classifies information so that it can be easily assembled and searched. “With CRUISE, we are not doing a process redesign,” says Bahl. “We’re building something more tangible, more simplified, and more standardized.”
            To address the CRUISE mandate, Sanofi worked closely with Microsoft as well as two members of the Microsoft Partner Network, DITA Exchange and the ArborSys Group. Microsoft provided the Intelligent Content Framework (ICF) and underlying technologies based on Microsoft SharePoint Server and Microsoft Office. DITA Exchange delivered a solution that enables organizations to establish and maintain a “single source of truth” for their strategic content, and to deliver that content consistently across outputs. The ArborSys Group consulted on the tool and process redesign and helped achieve an end-to-end business and technology implementation for regulated industries.

            Gabor Fari, Director of Life Sciences Solutions at Microsoft, served as an evangelist in helping to put together the CRUISE team. DITA Exchange had been working closely with Microsoft since 2008 to develop the ICF for regulated industries. It completed the first version of the XML-based solution in February 2009.
            As the technology pillar of CRUISE and the engine of EnCORE, DITA Exchange software elevates SharePoint to an XML-based component content management and single-source publishing solution. It enables its customers to comply with regulatory requirements with tools for reusing content in a consistent and accurate way throughout the product development life cycle in the life sciences space. “Microsoft promoted our work to several pharmaceutical companies,” says Andersen. “It led the way in terms of bringing innovative ideas around SCM solutions.”
            DITA Exchange began working on the CRUISE implementation in April 2011. The partner participated in planning and supplied the solution used to manage the document output maps, topics, and linking of topics to the maps. “DITA Exchange helped us with content design and the governance structures of information design,” says Allred. “The people at DITA Exchange are masters of their technological domain. They have experience in regulated industries and the knowledge required to get our vision into an operational model.”
            The ArborSys Group joined the effort in April 2011. This partner provides business consultancy and technical implementation and helped Sanofi achieve measurable and sustainable results through the implementation of flexible IT solutions that can be adapted for change in a dynamic business climate.
            The two partners collaborated on developing the EnCORE platform. The ArborSys Group scoped processes, integrated service management roles and extensions, and trained internal resources.
            “Microsoft, DITA Exchange, and the ArborSys Group all provided expertise and leadership in terms of how we define processes and address the three pillars of CRUISE,” says Bahl. “The various disciplines they provided really helped us strategize our best opportunity in terms of development. We share a common vision that has resulted in a very rich, cutting-edge offering that other pharmaceutical companies will probably adopt three to five years from now.”
            While many other regulated industries have embraced SCM in recent years, life science organizations have lagged. “It’s no secret that the pharmaceutical industry is conservative,” says Andersen. “People think very carefully before they start anything. Sanofi is absolutely the leader in innovating in the pharmaceutical content management space.”

            Microsoft: With cloud services investments starting to pay off, Windows 8 and Windows Blue will bring more competitive devices, particularly in new smaller form factors targeting the tablet market

            This statement is the essential summary of Microsoft’s current performance, according to its results for the first three months of 2013, and of the near-term actions the company has now announced to further improve its stance, particularly in the market which Microsoft’s critics, but not Microsoft itself, call the “PC market”. Indeed, the earnings call discussion started with the following:

            Before I dive into more details on our progress …, I want to address what’s top of mind for many of you, which is our Windows business.
            There is no doubt that the device market is evolving. Consumers and businesses are increasingly shifting their focus to touch and mobility, and as a result, they want touch-enabled computing devices that are ultrathin, lightweight, and have long battery life. While Windows revenue has been impacted by the transition from the traditional PC to a new era of computing devices, the overall addressable markets are growing, and we are excited by the opportunities ahead of us.
            We built Windows 8 with touch and mobility at the center of the experience, which positions us well in this new era. However, the transition is complicated, given the size of our hardware and software ecosystem. We still have an immense amount of work to do, yet we feel good about the foundation we have laid and are optimistic about the long term success of Windows.
            I want to take some time now to be clear about where we are in this journey, and what we are doing to help drive this change. With Windows 8, we are setting a new, accelerated pace for updates and innovations, as we focus on making the Windows experience richer and better.
            Since launch we have delivered several important updates to improve our mail, storage, search, music, and video services. During the quarter, we also added to the Surface family of devices with Surface Pro, which combines the performance capabilities of a PC with a modern tablet design.

            This means that Microsoft is showing clear signs of staying relevant, contrary to some recent conclusions drawn merely from the initial market difficulties of the new Windows 8 platform:

            When Gartner issued its “Forecast: Devices by Operating System and User Type, Worldwide, 2010-2017, 1Q13 Update.” on April 4 and stated in its related press release that:
            Traditional PC Market Predicted to Decline 7.6 Percent as Change in Consumers’ Behavior Drives Transition to Tablets and Ultramobiles
            The proliferation of lower-priced tablets and their growing capability is accelerating the shift from PCs to tablets. “While there will be some individuals who retain both a personal PC and a tablet, especially those who use either or both for work and play, most will be satisfied with the experience they get from a tablet as their main computing device,” said Carolina Milanesi, research vice president at Gartner. “As consumers shift their time away from their PC to tablets and smartphones, they will no longer see their PC as a device that they need to replace on a regular basis.”
            the Daily Ticker of Yahoo! Finance came to the conclusion that Microsoft Could Be Obsolete By 2017: Gartner Report.

            So let’s examine all this in detail:

            Microsoft Reports Third-Quarter Results [press release, April 18, 2013]; the historical sales revenue chart is from qz.com, while the Online Services Division operating income chart is from businessinsider.com

            Microsoft Corp. today announced quarterly revenue of $20.49 billion for the quarter ended March 31, 2013. Operating income, net income, and diluted earnings per share for the quarter were $7.61 billion, $6.06 billion, and $0.72 per share.

            [image: Microsoft historical sales revenue, via qz.com]

            “The bold bets we made on cloud services are paying off as people increasingly choose Microsoft services including Office 365, Windows Azure, Xbox LIVE, and Skype,” said Steve Ballmer, chief executive officer at Microsoft. “While there is still work to do, we are optimistic that the bets we’ve made on Windows devices position us well for the long-term.”

            The Microsoft Business Division posted $6.32 billion of revenue, an 8% increase from the prior year period. Adjusting for the net recognition of revenue related to the Office Upgrade Offer and Pre-Sales, Microsoft Business Division non-GAAP revenue increased 5%. During the quarter, we launched the new Office, enhancing productivity and the user experience through new mobility, social, and cloud features.

            The Server & Tools business reported $5.04 billion of revenue, an 11% increase from the prior year period, driven by double-digit percentage revenue growth in SQL Server and System Center.
            “Our enterprise business continues to thrive,” said Kevin Turner, chief operating officer at Microsoft. “Enterprise customers are increasingly turning to Microsoft for their IT solutions and as a result, we continue to take share from our competitors in key areas including hybrid cloud, data platform, and virtualization.”

            The Windows Division posted revenue of $5.70 billion, a 23% increase from the prior year period. Adjusting for the recognition of revenue related to the Windows Upgrade Offer [see: How Microsoft got Windows revenue to go up despite PC sales going down [The Guardian, Feb 19, 2013]], Windows Division non-GAAP revenue was flat. During the quarter, we added to the Surface family of devices with Surface Pro.

            The Online Services Division reported revenue of $832 million, an 18% increase from the prior year period. Online advertising revenue grew 22% driven by an increase in revenue per search.
            [image: Online Services Division operating income, via businessinsider.com]
            The Entertainment and Devices Division posted revenue of $2.53 billion, an increase of 56% from the prior year period. Adjusting for the recognition of revenue related to the Video Game Deferral, the division’s non-GAAP revenue increased 33% for the third quarter. Xbox LIVE now has over 46 million members worldwide, an 18% increase from the prior year period.
            “Our diverse business continues to deliver solid financial results, even as we navigate the evolving device market,” said Peter Klein, chief financial officer at Microsoft. “Looking ahead, we will continue to invest in long-term growth opportunities to drive our devices and services strategy forward and deliver ongoing value to shareholders.”
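
            The year-over-year percentages above can be sanity-checked with a quick back-of-the-envelope calculation: given this quarter’s revenue and the reported growth rate, the implied prior-year-quarter revenue for each division follows directly. The figures below come only from the press release excerpts; the derived prior-year values are my own arithmetic:

            ```python
            # Back-of-the-envelope check of the quoted YoY growth figures: derive the
            # implied prior-year-quarter revenue for each division (GAAP, in $ billions).
            def prior_year_revenue(current, growth_pct):
                """Implied revenue a year earlier, given current revenue and YoY growth %."""
                return current / (1 + growth_pct / 100)

            divisions = {
                "Microsoft Business Division": (6.32, 8),
                "Server & Tools": (5.04, 11),
                "Windows Division": (5.70, 23),
                "Online Services Division": (0.832, 18),
                "Entertainment and Devices": (2.53, 56),
            }
            for name, (rev, growth) in divisions.items():
                print(f"{name}: ~${prior_year_revenue(rev, growth):.2f}B in the prior-year quarter")
            ```

            Note these are the GAAP growth rates; the non-GAAP adjustments the release describes (upgrade-offer and deferral recognition) would shift the Windows and Entertainment and Devices comparisons considerably.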

            Microsoft’s Management Discusses F3Q 2013 Results – Earnings Call Transcript [Seeking Alpha, April 18, 2013] from which I extracted the following excerpts as the most notable ones:

            Peter Klein – Chief Financial Officer

            I think one of the main takeaways for me is, particularly in some of our cloud services, we’re really starting to get scale. Bing continues to improve its margins. And Office 365 is really starting to get to scale. So those things are really encouraging.

            Ed Maguire – CLSA

            You just launched your Azure infrastructure as a service. Just went generally available this week. And I’d love to get some color on how much that’s figuring in the growth in long term contracts, and what your expectations might be for more traditional infrastructure as a service uptake over the next several quarters.

            Peter Klein – Chief Financial Officer

            Great question. It’s clearly a key enabler of our [unintelligible] cloud OS story, and how we’re driving what we’re doing with enterprise in the data center. We have infrastructure as a service, we have now the most complete end-to-end offering through platform, and software, identity, and access.

            But having the infrastructure is a key enabler, and I think a real accelerator for the Windows Azure strategy, and really more broadly the cloud OS strategy. We now have a complete end-to-end story through the data center, from private, to hosted, to public, from infrastructure to platform, so I think it’s, again, a key enabler of that [all up] strategy, and an accelerator.

            Gregg Moskowitz – Cowen & Company

            In recent periods, we’ve seen your MBD [Microsoft Business Division] growth significantly outpace PC unit growth, although we now have a dynamic where your Office subscription is really resonating with customers. So the question is, just looking at it on a directional basis, is MBD revenue outperformance relative to PC units something that you think is sustainable going forward?

            Peter Klein – Chief Financial Officer

            The answer is that it will depend, and certainly an offset between attach gains we’re making against the market, offset against some deferrals as revenue moves to a subscription. And so it will kind of depend each quarter. Long term, it’s a great trend, because we’re building up a banked book of business on the subscription side, which will become less and less connected to the PC market.

            Then I focused on four subjects, for which I made further extracts from the Earnings Call:

            1. The Windows and Office (productivity) markets
            2. Q3 FY13 performance for that
            3. Q4 FY13 outlook for that
            4. PC market

            1. The Windows and Office (productivity) markets: Peter Klein – Chief Financial Officer

            Looking ahead, we will release the next version of Windows, codenamed Windows Blue, which further advances the vision of Windows 8 as well as responds to customer feedback. The assortment of touch-enabled devices that are built for Windows 8 by our OEM partners is also improving.

            Over the last couple of months, we’ve started seeing devices that take full advantage of Windows 8, and we expect to see more devices across more attractive price points over the coming months. As part of this, we are also working closely with OEMs on a new suite of small touch devices powered by Windows. These devices will have competitive price points, partly enabled by our latest OEM offerings designed specifically for these smaller devices, and will be available in the coming months.

            In the upcoming back to school selling season, we expect to see devices that incorporate advances from throughout the supply chain, including chipsets. As well, Intel’s fourth generation Core processor will help enable new devices that combine performance benefits with power savings. Later in the year, we expect to see devices based on Intel’s upcoming Bay Trail Atom processor, which promises to deliver tablets and hybrid PCs with extended battery life at competitive prices.

            Today in the Windows Store, there are six times as many apps since launch, and we expect more to be added as we gain traction with Windows 8 adoption. In June, we will host Build, our developer conference, where we will provide more tools and information for developers to build great Windows 8 apps.

            In retail, we are working to improve the consumer purchasing experience. Our initiatives include focused efforts to further educate and incentivize retail sales professionals and to have better in-store product differentiation.

            In summary, Windows is transforming to the new era of computing. As I said on our last earnings call, growth in Windows depends on our ability to give customers the exciting hardware they want at the price points they demand, and a wider range of apps and services to meet their diverse needs. We are hard at work with our partners to meet these goals, and we’re confident we are moving in the right direction.

            Now, switching gears to productivity, this quarter we launched the latest version of Office, which brings mobility, social, and cloud features to the world’s most popular productivity app. Importantly, the new Office represents a fundamental shift in our model. Now both businesses and consumers can access Office through subscription.

            With this shift, we expect to grow our customer base, increase customer satisfaction via continuous updates, and reduce piracy. As our enterprise customers modernize their productivity infrastructures, we are confident that they will continue to deploy Office 365. We also expect our transactional customers to increasingly transition to the cloud with Office 365.

            It’s been a while now that we’ve been talking about our investments in the cloud, and I’m pleased to share that we are starting to realize the benefits of those investments in a meaningful way.

            Office 365 lights up with this latest release, as evidenced by our growing customer adoption. This quarter was our strongest ever, with net seat additions up five times over the prior year. One in four of our enterprise customers now has Office 365, and the business is on a $1 billion annual revenue run rate.

            2. Q3 FY13 performance: Chris Suh – General Manager of Investor Relations

            In the Windows Division, revenue was flat this quarter. Within that, OEM revenue performance was in line with the underlying x86 PC market, which continues to be challenged as the PC market evolves beyond the traditional PC to touch and mobile devices. This quarter, inventory levels were drawn down as the channel awaits new Windows 8 devices.

            Non-OEM revenue grew 40% this quarter, driven by sales of Surface and continued double digit growth in volume licensing. Businesses continue to value the Windows platform, and volume licensing of Windows is on track to deliver almost $4 billion in revenue this year, and nearly three-quarters of enterprise agreements that we signed this year include Windows.

            Additionally, this quarter we saw continued progress in the transition of Windows XP to Windows 7, and now two-thirds of enterprise desktops are running Windows 7.

            Now, I’ll move on to the Microsoft Business Division, where revenue grew 5%. Within that, business revenue grew 10%, driven by 16% growth in multiyear licensing. In January, we launched the new Office for consumers. The new Office introduces touch, social, and mobile scenarios as well as tight integration with SkyDrive, enabling access to documents from any device.

            The new Office is also available as a subscription, which benefits customers as they are always using the most modern version of Office. As Peter stated, we expect the shift to grow our customer base, and we saw strong early adoption of the subscription service.

            I would like to remind you that with subscription, the revenue is earned ratably over the length of the subscription, rather than at the initial purchase. All up, consumer revenue was roughly in line with the consumer PC market, influenced by the shift to subscription and strong [attach] gain.

            Peter Klein – Chief Financial Officer

            We are seeing [near term impact from going to subscription revenues on our revenues], particularly in our transactional business, in MBD [Microsoft Business Division], as people move from what may have been a transactional, perpetual license, where the revenue is recognized up front, to a subscription service, where it’s recognized ratably. So you’re basically deferring revenue over the rest of the term of the subscription. So in the short term, you’ll be deferring revenues that, had they not been in a subscription, would have been recognized immediately.

            And as the subscription business is growing, you’ll see that impact growing, but over time, what you’ll get is what looks like an annuity revenue stream, that’s more predictable and has higher customer satisfaction and probably higher retention rates going forward. But in the short term, that will impact mostly in the transactional side of the MBD business.

            It is a fact that we are starting to get scale in our cloud services, and so the growth that we’re seeing in Office 365 is really coming at an improved margin as we scale that out.

            3. Q4 FY13 outlook: Peter Klein – Chief Financial Officer

            In the Windows Division, similar to this quarter, revenue will continue to reflect sales of Surface and strong volume licensing, while OEM revenue will be impacted by the declining traditional PC market as we work to increase our share in tablets.

            In the Microsoft Business Division, multiyear licensing revenue, which is approximately 60% of the division’s total, should grow low double digits. Excluding the recognition of revenue from the Office upgrade offer, transactional revenue, which is the remaining 40% of the division total, should be in line with the x86 PC market.

            As a reminder, when updating your Q4 models, we expect to recognize approximately $780 million of revenue related to the Office upgrade offer.

            … we are expanding both the product set and distribution, and that is, broadly, all devices, inclusive of Surface. We are expanding distribution of Surface. We are now in 22 countries, 70 retailers. And we’ll continue to look to expand that. Not only just expanding, but improving the experience. And that’s true not just for Surface, but broadly for Windows 8 devices. And so we’ll be investing against that for both Surface and a broader array of Windows 8 devices at multiple price points, including lower price points going forward.

            4. PC market: Peter Klein – Chief Financial Officer

            On the PC market, I would look to some of the third parties, IDC and Gartner. They’re sort of in the 12-13-14 [%] down range this quarter. And in terms of the chipsets, we’ve always felt that with Windows 8, it was a process of the ecosystem of innovating across the board, and really starting to see that on the chips. And we’re very encouraged by both Haswell and some of the Atom processors to really improve the overall user experience that Windows 8 delivers. And over the coming selling season, I think that’s very encouraging and we’re optimistic about that.

            I think broadly, in improving our position in tablets, and generally in devices, there are five or six dimensions, ranging from what we’re doing with OEMs on the devices and the range of devices, and how they can have a range of price points. What the chips can do, because I think that’s a part of it. Both first party and third-party apps, and we’ve seen improvements across the board there. The user interface, and how we’re innovating across the user experience. And then distribution.

            So if you start sort of from the bottom up, all the way to when you buy the product, we’re working across all those dimensions. And on the device side, we are working closely with the OEMs to help them take Windows 8 and show it off in all its glory, across different form factors. I talked about new smaller form factors and how Windows 8 can innovate to improve that experience.

            So I think the biggest thing we’re doing is helping them develop new and improved user experiences across the board, across size, across price point, and deliver a really compelling Windows 8 experience. And it’s not just the devices, like I said, it’s chips, it’s the apps, it’s the buying experience, it’s the user interface. So we’re really focused on all five or six of those dimensions going forward.

            Software defined server without Microsoft: HP Moonshot

            Updates as of Dec 6, 2013 (8 months after the original post):

            image

            Martin Fink, CTO and Director of HP Labs, Hewlett-Packard Company [Oct 29, 2013]:

            This Cloud, Social, Big Data and Mobile we are referring to as this “New Style of IT” [when talking about the slide shown above]

            Through the Telescope: 3 Minutes on HP Moonshot [HewlettPackardVideos YouTube channel, July 24, 2013]

            Steven Hagler (Senior Director, HP Americas Moonshot) provides insight on Moonshot, why it’s right for the market, and what it means for your business. http://hp.com/go/moonshot

            HIGHLY RECOMMENDED READING:
            HP Offers Exclusive Peek Inside Impending Moonshot Servers [Enterprise Tech, Nov 26, 2013]: “The company is getting ready to launch a bunch of new server nodes for Moonshot in a few weeks”.
            – So far, the simplest and most understandable info is provided in the Visual Configuration Moonshot diagram set: http://www.goldeneggs.fi/documents/GE-HP-MOONSHOT-A.pdf  The site also includes full visualizations of all x86 rack, desktop and blade servers.

            From HP Launches Investment Solutions to Ease Organizations’ Transitions to “New Style of IT” [press release, Dec 6, 2013]

            The HP accelerated migration program for cloud—helps …

            The HP Pre-Provisioning Solution—lets …

            New investment solutions for HP Moonshot servers and HP Converged Systems—provide customers and channel partners with quick access to the latest HP products through a simple, scalable and predictable monthly payment that aligns technology and financial requirements to business needs.   

            Access the world’s first software defined server [HP offering, Nov 27, 2013]
            With predictable and scalable monthly payments

            HP Moonshot Financing
            Cloud, Mobility, Security and Big Data require a different level of technology efficiency and scalability. Traditional systems may no longer be able to handle the increasing internet workloads with optimal performance. Having an investment strategy that gives you access to newer technology such as HP Moonshot allows you to meet the requirements for the New Style of IT.
            A simple and flexible payment structure can help you access the latest technology on your terms.
            Why leverage a predictable monthly payment?
            • Provides financial flexibility to scale up your business
            • May help mitigate the financial risk of your IT transformation
            • Enables IT refresh cycles to keep up with the latest technology
            • May help improve your cash flow
            • Offers predictable monthly payments which can help you stay within budget
            How does it work?
            • Talk to your HP Sales Rep about acquiring HP Moonshot using a predictable monthly payment
            • Expand your capacity easily with a simple add-on payment
            • Add spare capacity needed for even greater agility
            • Set your payment terms based on your business needs
            • After an agreed term, you’ll be able to refresh your technology

            From The HP Moonshot team provides answers to your questions about the datacenter of the future [The HP Blog Hub, as of Aug 29, 2013]

            Q: WHAT IS THE FUNDAMENTAL IDEA BEHIND THE HP MOONSHOT SYSTEM?

            A: The idea is simple—use energy-efficient CPUs attuned to a particular application to achieve radical power, space and cost savings. Stated another way: creating software defined servers for specific applications that run at scale.

            Q: WHAT IS INNOVATIVE ABOUT THE HP MOONSHOT ARCHITECTURE?

            A: The most innovative characteristic of HP Moonshot is the architecture. Everything that is a common resource in a traditional server has been converged into the chassis. The power, cooling, management, fabric, switches and uplinks are all shared across 45 hot-pluggable cartridges in a 4.3U chassis.

            Q: EXPLAIN WHAT IS MEANT BY “SOFTWARE DEFINED” SERVER

            A: Software defined servers achieve optimal useful work per watt by specializing for a given workload: matching a software application with available technology that can provide the most optimal performance. For example, the first Moonshot server is tuned for the web front end LAMP (Linux/Apache/MySQL/PHP) stack. In the most extreme case of a future FPGA (Field Programmable Gate Array) cartridge, the hardware truly reflects the exact algorithm required.

            Q: DESCRIBE THE FABRIC THAT HAS BEEN INTEGRATED INTO THE CHASSIS

            A: The HP Moonshot 1500 Chassis has been built for future SOC designs that will require a range of network capabilities including cartridge to cartridge interconnect. Additionally, different workloads will have a range of storage needs. 

            There are four separate and independent fabrics that support a range of current and future capabilities: 8 lanes of Ethernet; a storage fabric (6Gb SATA) that enables shared storage amongst cartridges or storage expansion to a single cartridge; a dedicated iLO management network to manage all the servers as one; and a cluster fabric with point-to-point connectivity and low-latency interconnect between servers.
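The consolidation arithmetic implied by this design is easy to sketch. The cartridge count and chassis height come from the Q&A itself; the 42U rack height is an assumption of mine, not an HP figure, so treat this as illustrative only.

```python
# Back-of-the-envelope density sketch for the HP Moonshot 1500 chassis.
# Cartridge count (45) and chassis height (4.3U) are quoted in the Q&A
# above; the 42U rack height is an assumption, not an HP figure.

CARTRIDGES_PER_CHASSIS = 45   # hot-pluggable cartridges per chassis
CHASSIS_HEIGHT_U = 4.3        # chassis height in rack units
RACK_HEIGHT_U = 42            # assumed standard rack

chassis_per_rack = int(RACK_HEIGHT_U // CHASSIS_HEIGHT_U)
cartridges_per_rack = chassis_per_rack * CARTRIDGES_PER_CHASSIS

print(f"{chassis_per_rack} chassis/rack -> {cartridges_per_rack} cartridges/rack")
```

Under these assumptions a single rack holds 9 chassis, i.e. 405 server cartridges, which is the scale at which the shared power, cooling, and fabric resources start to pay off.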

            image

            Martin Fink, CTO and Director of HP Labs, Hewlett-Packard Company [Oct 29, 2013]:

            We’ve actually announced three ARM-based cartridges. These are available in our Discovery Labs now, and they’ll be shipping next year with new processor technology. [When talking about the slide shown above.]

            Calxeda Midway in HP Moonshot [Janet Bartleson YouTube channel, Oct 28, 2013]

            HP’s Paul Santeler encourages you to test Calxeda’s Midway-based Moonshot server cartridges in the HP Discovery Labs. http://www.hp.com/go/moonshot http://www.calxeda.com

            See details about the latest and future Calxeda SoCs in the closing part of this Dec 7 update

            @SC13: HP Moonshot ProLiant m800 Server Cartridge with Texas Instruments [Janet Bartleson YouTube channel, Nov 26, 2013]

            @SC13, Texas Instruments’ Arnon Friedmann shows the HP ProLiant m800 Server Cartridge with 4 66K2H12 Keystone II SoCs, each with 4 ARM Cortex A15 cores and 8 C66x DSP cores – altogether providing 500 gigaflops of DSP performance and 8 gigabytes of memory on the server cartridge. It’s lower power, lower cost than traditional servers.

            See details about the latest Texas Instruments DSP+ARM SoCs after the Calxeda section in the closing part of this Dec 7 update

            The New Style of IT & HP Moonshot: Keynote by HP’s Martin Fink at ARM TechCon ’13 [ARMflix YouTube channel, recorded on Oct 29, published on Nov 11, 2013]

            Keynote Presentation: The New Style of IT
            Speaker: Martin Fink, CTO and Director of HP Labs, Hewlett-Packard Company
            It’s an exciting time to be in technology. The IT industry is at a major inflection point driven by four generation-defining trends: the cloud, social, Big Data, and mobile. These trends are forever changing how consumers and businesses communicate, collaborate, and access information. And to accommodate these changes, enterprises, governments and fast growing companies desperately need a “New Style of IT.” Shaping the future of IT starts with a radically different approach to how we think about compute — for example, in servers, HP has a game-changing new category that requires 80% less space, uses 89% less energy, costs 77% less–and is 97% less complex. There’s never been a better time to be part of the ecosystem and usher in the next-generation of innovation.

            From Big Data and the future of computing – A conversation with John Sontag [HP Enterprise 20/20 Blog, October 28, 2013]

            20/20 Team: Where is HP today in terms of helping everyone become a data scientist?
            John Sontag: For that to happen we need a set of tools that allow us to be data scientists in more than the ad hoc way I just described. These tools should let us operate productively and repeatably, using vocabulary that we can share – so that each of us doesn’t have to learn the same lessons over and over again. Currently at HP, we’re building a software tool set that is helping people find value in the data they’re already surrounded by. We have HAVEn for data management, which includes the Vertica data store, and Autonomy for analysis. For enterprise security we have ArcSight and ThreatCentral. We have our work around StoreOnce to compress things, and Express Query to allow us to consume data in huge volumes. Then we have hardware initiatives like Moonshot, which is bringing different kinds of accelerators to bear so we can actually change how fast – and how effectively – we can chew on data.
            20/20 Team: And how is HP Labs helping shape where we are going?
            John Sontag: One thing we’re doing on the software front is creating new ways to interrogate data in real time through an interface that doesn’t require you to be a computer scientist.  We’re also looking at how we present the answers you get in a way that brings attention to the things you most need to be aware of. And then we’re thinking about how to let people who don’t have massive compute resources at their disposal also become data scientists.
            20/20 Team: What’s the answer to that?
            John Sontag: For that, we need to rethink the nature of the computer itself. If Moonshot is helping us make computers smaller and less energy-hungry, then our work on memristors will allow us to collapse the old processor/memory/storage hierarchy, and put processing right next to the data. Next, our work on photonics will help collapse the communication fabric and bring these very large scales into closer proximity. That lets us combine systems in new and interesting ways. And then we’re thinking about how to package these re-imagined computers into boxes of different sizes that match the needs of everyone from the individual to the massive, multinational entity. On top of all that, we need to reduce costs – if we tried to process all the data that we’re predicting we’ll want to at today’s prices, we’d collapse the world economy – and we need to think about how we secure and manage that data, and how we deliver algorithms that let us transform it fast enough so that you, your colleagues, and partners across the world can conduct experiments on this data literally as fast as we can think them up.
            About John Sontag:
            John Sontag is vice president and director of systems research at HP Labs. The systems research organization is responsible for research in memristor, photonics, physical and system architectures, storing data at high volume, velocity and variety, and operating systems. Together with HP business units and partners, the team reaches from basic research to advanced development of key technologies.
            With more than 30 years of experience at HP in systems and operating system design and research, Sontag has had a variety of leadership roles in the development of HP-UX on PA-RISC and IPF, including 64-bit systems, support for multiple input/output systems, multi-system availability and Symmetric Multi-Processing scaling for OLTP and web servers.
            Sontag received a bachelor of science degree in electrical engineering from Carnegie Mellon University.

            Meet the Innovators [HewlettPackardVideos YouTube channel, May 23, 2013]

            Meet those behind the innovative technology that is HP Project Moonshot http://www.hp.com/go/moonshot

            From Meet the innovators behind the design and development of Project Moonshot [The HP Blog Hub, June 6, 2013]

            This video introduces you to key HP team members who were part of the team that brings you the innovative technology that fundamentally changes how hyperscale servers are built and operated such as:
            • Chandrakant Patel – HP Senior Fellow and HP Labs Chief Engineer
            • Paul Santeler  – Senior Vice President and General Manager of the HyperScale Business Unit
            • Kelly Pracht – Moonshot Hardware Platform Manager, HyperScale Business Unit
            • Dwight Barron – HP Fellow, Chief Technologist, HyperScale Business Unit

            From Six IT technologies to watch [HP Enterprise 20/20 Blog, Sept 5, 2013]

            1. Software-defined everything
            Over the last couple of years we have heard a lot about software defined networks (SDN) and more recently, software defined data center (SDDC). There are fundamentally two ways to implement a cloud. Either you take the approach of the major public cloud providers, combining low-cost skinless servers with commodity storage, linked through cheap networking. You establish racks and racks of them. It’s probably the cheapest solution, but you have to implement all the management and optimization yourself. You can use software tools to do so, but you will have to develop the policies, the workflows and the automation.
            Alternatively you can use what is becoming known as “converged infrastructure,” a term originally coined by HP, but now used by all our competitors. Servers, storage and networking are integrated in a single rack, or a series of interconnected ones, and the management and orchestration software included in the offering, provides an optimal use of the environment. You get increased flexibility and are able to respond faster to requests and opportunities.
            We all know that different workloads require different characteristics. Infrastructures are typically implemented using general purpose configurations that have been optimized to address a very large variety of workloads. So, they do an average job for each. What if we could change the configuration automatically whenever the workload changes to ensure optimal usage of the infrastructure for each workload? This is precisely the concept of software defined environments. Configurations are no longer stored in the hardware, but adapted as and when required. Obviously this requires more advanced software that is capable of reconfiguring the resources.
            A software-defined data center is described as a data center where the infrastructure is virtualized and also delivered as a service. Control of the data center is automated by software – meaning hardware configuration is maintained through intelligent software systems. Three core components comprise the SDDC: server virtualization, network virtualization and storage virtualization. It should be noted that some workloads still require physical systems (often referred to as bare metal), hence the importance of projects such as OpenStack’s Ironic, which could be defined as a hypervisor for physical environments.

            2. Specialized servers

            As I mentioned, all workloads are not equal, but they run on the same, general purpose servers (typically x86). What if we create servers that are optimized for specific workloads? In particular, when developing cloud environments delivering multi-tenant SaaS services, one could well envisage the use of servers specialized for a specific task, for example video manipulation or dynamic web service management. Developing efficient, low energy specialized servers that can be configured through software is what HP’s Project Moonshot is all about. Today, although the project is still in its infancy, there is much more to come. Imagine about 45 server/storage cartridges linked through three fabrics (for networking, storage and high speed cartridge to cartridge interconnections), sharing common elements such as network controllers, management functions and power management. If you then build the cartridges using low energy servers, you reduce energy consumption by nearly 90%. If you build SaaS type environments, using multi-tenant application modules, do you still need virtualization? This simplifies the environment, reduces the cost of running it and optimizes the use of server technology for every workload.

            Particularly for environments that constantly run certain types of workloads, such as analyzing social or sensor data, the use of specialized servers can make the difference. This is definitely an evolution to watch.

            3. Photonics

            Let’s now complement those specialized servers with photonic based connections enabling flat, hyper-efficient networks boosting bandwidth, and we have an environment that is optimized to deliver the complex tasks of analyzing and acting upon signals provided by the environment in its largest sense.

            But technology is going even further. I talked about the three fabrics; over time, why not use photonics to improve the speed of the fabrics themselves, increasing the overall compute speed? We are not there yet, but early experiments with photonic backplanes for blade systems have shown overall compute speed increased up to a factor of seven. That should be the second step.

            The third step takes things further. The specialized servers I talked about are typically system on a chip (SoC) servers, in other words, complete computers on a single chip. Why not use photonics to link those chips with their outside world? On-chip lasers have been developed in prototypes, so we are not that far out. We could even bring things one step further and use photonics within the chip itself, but that is still a little further out. I can’t tell you the increase in compute power that such evolutions will provide you, but I would expect it to be huge.

            4. Storage
            Storage is at a crossroads. On the one hand, hard disk drives (HDD) have improved drastically over the last 20 years, both in reading speed and in density. I still remember the 20MB hard disk drive, weighing 125 kg, of the early ’80s. When I compare that with the 3TB drive I bought a couple months ago for my home PC, I can easily depict this evolution. But then the SSD (solid state disk) has appeared. Where an HDD read will take you 4 ms, the SSD read is down at 0.05 ms.

            Using nanotechnologies, HP Labs has developed prototypes of the memristor, a new approach to data storage that is faster than flash memory and consumes far less energy. Such a device could store up to 1 petabit of information per square centimeter and could replace both memory and storage, speeding up access to data and allowing an order-of-magnitude increase in the amount of data stored. Since then, HP has been busy preparing production of these devices. First production units should be available toward the end of 2013 or early in 2014. It will transform our storage approaches completely.
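The figures in this storage section are easy to sanity-check. The sketch below uses only the numbers quoted above (4 ms HDD reads, 0.05 ms SSD reads, 1 petabit per square centimeter) and is purely illustrative.

```python
# Sanity-checking the storage figures quoted in the section above.

HDD_READ_MS = 4.0     # HDD read latency from the text
SSD_READ_MS = 0.05    # SSD read latency from the text

speedup = HDD_READ_MS / SSD_READ_MS
print(f"SSD reads are ~{speedup:.0f}x faster than HDD reads")

# Memristor density claim: up to 1 petabit per square centimeter.
PETABIT_BITS = 10**15
density_bytes_per_cm2 = PETABIT_BITS // 8   # bits -> bytes
print(f"~{density_bytes_per_cm2 // 10**12} TB per square centimeter")
```

The quoted latencies work out to an 80x read speedup for SSD over HDD, and the memristor density claim corresponds to roughly 125 TB per square centimeter.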


            Details about the latest and future Calxeda SoCs:

            Calxeda EnergyCore ECX-2000 family – ARM TechCon ’13 [ARMflix YouTube channel, recorded on Oct 30, 2013]

            Calxeda tells us about their new EnergyCore ECX-2000 product line based on ARM Cortex-A15. http://www.calxeda.com/ecx-2000-family/

            From ECX-2000 Product Brief [October, 2013]

            The Calxeda EnergyCore ECX-2000 Series is a family of SoC (Server-on-Chip) products that delivers the power efficiency of ARM® processors, and the OpenStack, Linux, and virtualization software needed for modern cloud infrastructures. Using the ARM Cortex A15 quad-core processor, the ECX-2000 delivers roughly twice the performance, three times the memory bandwidth, and four times the memory capacity of the ground-breaking ECX-1000. It is extremely scalable due to the integrated Fleet Fabric Switch, while the embedded Fleet Engine simultaneously provides out-of-band control and intelligence for autonomic operation.

            In addition to enhanced performance, the ECX-2000 provides hardware virtualization support via KVM and Xen hypervisors. Coupled with certified support for Ubuntu 13.10 and the Havana OpenStack release, this marks the first time an ARM SoC is ready for cloud computing. The Fleet Fabric enables the highest network and interconnect bandwidth in the MicroServer space, making this an ideal platform for streaming media and network-intensive applications.

            The net result of the EnergyCore SoC architecture is a dramatic reduction in power and space requirements, allowing rapidly growing data centers to quickly realize operating and capital cost savings.

            image

            Scalability you can grow into. An integrated EnergyCore Fabric Switch within every SoC provides up to five 10 Gigabit lanes for connecting thousands of ECX-2000 server nodes into clusters capable of handling distributed applications at extreme scale. Completely topology agnostic, each SoC can be deployed to work in a variety of mesh, grid, or tree network structures, providing opportunities to find the right balance of network throughput and fault resiliency for any given workload.

            Fleet Fabric Switch
            • Integrated 80Gb (8×8) crossbar switch with through-traffic support
            • Five (5) 10Gb external channels, three (3) 10Gb internal channels
            • Configurable topology capable of connecting up to 4096 nodes
            • Dynamic Link Speed Control from 1Gb to 10Gb to minimize power and maximize performance
            • Network Proxy Support maintains network presence even with node powered off
            • In-order flow delivery
            • MAC learning provides support for virtualization
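The lane counts in this brief are internally consistent, which the following sketch checks; all numbers come from the bullet list above.

```python
# Cross-checking the Fleet Fabric Switch figures from the bullet list above.

LANE_GBPS = 10          # each lane runs at 10Gb
EXTERNAL_CHANNELS = 5   # five external 10Gb channels
INTERNAL_CHANNELS = 3   # three internal 10Gb channels

total_lanes = EXTERNAL_CHANNELS + INTERNAL_CHANNELS
crossbar_gbps = total_lanes * LANE_GBPS

# 5 external + 3 internal lanes at 10Gb each matches the
# "80Gb (8x8) crossbar switch" figure quoted above.
print(f"{total_lanes} lanes x {LANE_GBPS}Gb = {crossbar_gbps}Gb")
```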

            ARM Servers and Xen — Hypervisor Support at Hyperscale – Larry Wikelius, [Co-Founder of] Calxeda [TheLinuxFoundation YouTube channel, Oct 1, 2013]

            [Xen User Summit 2013] The emergence of power optimized hyperscale servers is leading to a revolution in Data Center design. The intersection of this revolution with the growth of Cloud Computing, Big Data and Scale Out Storage solutions is resulting in innovation at rate and pace in the Server Industry that has not been seen for years. One particular example of this innovation is the deployment of ARM based servers in the Data Center and the impact these servers have on Power, Density and Scale. In this presentation we will look at the role that Xen is playing in the Revolution of ARM based server design and deployment and the impact on applications, systems management and provisioning.

            Calxeda Launches Midway ARM Server Chips, Extends Roadmap [EnterpriseTech, Oct 28, 2013]

            ARM server chip supplier Calxeda is just about to ship its second generation of EnergyCore processors for hyperscale systems and most of its competitors are still working on their first products. Calxeda is also tweaking its roadmap to add a new chip to its lineup, which will bridge between the current 32-bit ARM chips and its future 64-bit processors.
            There is going to be a lot of talk about server-class ARM processors this week, particularly with ARM Holdings hosting its TechCon conference in Santa Clara.
            A month ago, EnterpriseTech told you about the “Midway” chip that Calxeda had in the works, as well as its roadmap to get beefier 64-bit cores and extend its Fleet Services fabric to allow for more than 100,000 nodes to be linked together.
            The details were a little thin on the Midway chip, but we now know that it will be commercialized as the ECX-2000, and that Calxeda is sending out samples to server makers right now. The plan is to have the ECX-2000 generally available by the end of the year, and that is why the company is ready to talk about some feeds and speeds. Karl Freund, vice president of marketing at Calxeda, walked EnterpriseTech through the details.

            image

            The Midway chip is fabricated in the same 40 nanometer process as the existing “High Bank” ECX-1000 chip that Calxeda first put into the field in November 2011 in the experimental “Redstone” hyperscale servers from Hewlett-Packard. That 32-bit chip, based on the ARM Cortex-A9 core, was subsequently adopted in systems from Penguin Computing, Boston, and a number of other hyperscale datacenter operators who did proofs of concept with the chips. The ECX-1000 has four cores and was somewhat limited in its performance and was definitely limited in its main memory, which topped out at 4 GB across the four-core processor. But the ECX-2000 addresses these issues.
            The ECX-2000 is based on ARM Holdings’ Cortex-A15 core and has the 40-bit physical memory extensions, which allows for up to 16 GB of memory to be physically attached to each socket. With the 40-bit physical addressing added with the Cortex-A15, the memory controller can, in theory, address up to 1 TB of main memory; this is called Large Physical Address Extension (LPAE) in the ARM lingo, and it maps the 32-bit physical addressing on the core to a 40-bit virtual address space. Each core on the ECX-2000 has 32 KB of L1 instruction cache and 32 KB of L1 data cache, and ARM licensees are allowed to scale the L2 cache as they see fit. The ECX-2000 has 4 MB of L2 cache shared across the four cores on the die. These are exactly the same L1 and L2 cache sizes as used in the prior ECX-1000 chips.
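The addressing arithmetic behind those two ceilings is easy to verify. A quick sketch (the 4 GB and 1 TB figures come from the article; the rest is just powers of two):

```python
# Physical address space with and without LPAE (illustrative arithmetic).
BITS_CLASSIC = 32   # classic ARMv7 physical addressing
BITS_LPAE = 40      # with the Large Physical Address Extension

max_classic = 2 ** BITS_CLASSIC   # bytes addressable without LPAE
max_lpae = 2 ** BITS_LPAE         # bytes addressable with LPAE

print(max_classic // 2 ** 30)  # → 4   (GB, the ECX-1000's memory ceiling)
print(max_lpae // 2 ** 40)     # → 1   (TB, the theoretical LPAE ceiling)
```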
            The Cortex-A15 design was created to scale to 2.5 GHz, but as you crank up the clocks on any chip, the amount of energy consumed and heat radiated grows progressively larger. At a certain point, it just doesn’t make sense to push clock speeds. Moreover, every drop in clock speed gives a proportionately larger increase in thermal efficiency, and this is why, says Freund, Calxeda is making its implementation of the Cortex-A15 top out at 1.8 GHz. The company will offer lower-speed parts running at 1.1 GHz and 1.4 GHz for customers that need an even better thermal profile or a cheaper part where low cost is more important than raw performance or thermals.
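The "proportionately larger" claim follows from the usual first-order CMOS dynamic-power model, P ≈ C·V²·f, where supply voltage has to rise roughly with frequency. A small sketch, using a purely hypothetical voltage/frequency curve (not Calxeda's data), shows power climbing faster than clock speed:

```python
# First-order dynamic-power model: P ~ V^2 * f (capacitance folded into units).
# The voltage/frequency slope below is a hypothetical illustration only.
def relative_power(freq_ghz, base_freq=1.1, base_volt=1.0, volt_per_ghz=0.15):
    volts = base_volt + volt_per_ghz * (freq_ghz - base_freq)
    return volts ** 2 * freq_ghz

for f in (1.1, 1.4, 1.8, 2.5):
    # power grows faster than the clock: the ratio to 1.1 GHz beats f / 1.1
    print(f, round(relative_power(f) / relative_power(1.1), 2))
```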
            What Calxeda and its server and storage array customers are focused on is the fact that the Midway chip running at 1.8 GHz has twice the integer, floating point, and Java performance of a 1.1 GHz High Bank chip. That is possible, in part, because the new chip has four times the main memory and three times the memory bandwidth as the old chip in addition to a 64 percent boost in clock speed. Calxeda is not yet done benchmarking systems using the chips to get a measure of their thermal efficiency, but is saying that there is as much as a 33 percent boost in performance per watt comparing old to new ECX chips.
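Those two claims, taken together, also bound the power increase: doubling performance while improving performance per watt by a third implies the new chip draws roughly 1.5 times the power of the old one. A back-of-envelope check (the ratio is derived here, not a Calxeda figure):

```python
# Implied power ratio from the article's two performance claims.
perf_gain = 2.0             # 2x integer/FP/Java throughput vs. the ECX-1000
perf_per_watt_gain = 1.33   # "as much as a 33 percent boost" in perf/watt

implied_power_ratio = perf_gain / perf_per_watt_gain
print(round(implied_power_ratio, 2))  # → 1.5
```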
            The new ECX-2000 chip has a dual-core Cortex-A7 chip on the die that is used as a controller for the system BIOS as well as a baseboard management controller and a power management controller for the servers that use them. These Fleet Engines, as Calxeda calls them, eliminate yet another set of components, and therefore their cost, in the system. These engines also control the topology of the Fleet Services fabric, which can be set up in 2D torus, mesh, butterfly tree, and fat tree network configurations.
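To make the topology options concrete, here is a minimal sketch of neighbor addressing in a 2D torus, one of the fabric configurations named above (the node numbering and 4x4 grid size are illustrative, not Calxeda's actual scheme):

```python
# Each node in a 2D torus links to four neighbors, with edges wrapping around,
# so even "corner" nodes keep full connectivity (unlike a plain mesh).
def torus_neighbors(node, width, height):
    x, y = node % width, node // width
    return [
        (x + 1) % width + y * width,       # east (wraps at the row's edge)
        (x - 1) % width + y * width,       # west (wraps)
        x + ((y + 1) % height) * width,    # south (wraps at the column's edge)
        x + ((y - 1) % height) * width,    # north (wraps)
    ]

print(torus_neighbors(0, 4, 4))  # → [1, 3, 4, 12]
```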
            The Fleet Services fabric has 80 Gb/sec of aggregate bandwidth and offers multiple 10 Gb/sec Ethernet links coming off the die to interconnect server nodes on a single card, multiple cards in an enclosure, multiple enclosures in a rack, and multiple racks in a data center. The Ethernet links are also used to allow users to get to applications running on the machines.
            Freund says that the ECX-2000 chip is aimed at distributed, stateless server workloads, such as web server front ends, caching servers, and content distribution. It is also suitable for analytics workloads like Hadoop and distributed NoSQL data stores like Cassandra, all of which tend to run on Linux. Both Red Hat and Canonical are cooking up commercial-grade Linuxes for the Calxeda chips, and SUSE Linux is probably not going to be far behind. The new chips are also expected to see action in scale-out storage systems such as OpenStack Swift object storage or the more elaborate Gluster and Ceph clustered file systems. The OpenStack cloud controller embedded in the just-announced Ubuntu Server 13.10 is also certified to run on the Midway chip.
            Hewlett-Packard has confirmed that it is creating a quad-node server cartridge for its “Moonshot” hyperscale servers, which should ship to customers sometime in the first or second quarter of 2014. (It all depends on how long HP takes to certify the system board.) Penguin Computing, Foxconn, Aaeon, and Boston are expected to get beta systems out the door this year using the Midway chip and will have them in production in the first half of next year. Yes, that’s pretty vague, but that is the server business, and vagueness is to be expected in such a young market as the ARM server market is.
            Looking ahead, Calxeda is adding a new processor to its roadmap, code-named “Sarita.” Here’s what the latest system-on-chip roadmap looks like now:

            image

            The future “Lago” chip is the first 64-bit chip that will come out of Calxeda, and it is based on the Cortex-A57 design from ARM Holdings – one of several ARMv8 designs, in fact. (The existing Calxeda chips are based on the ARMv7 architecture.)
            Both Sarita and Lago will be implemented in TSMC’s 28 nanometer processes, and that shrink from the current 40 nanometer to 28 nanometer processes is going to allow for a lot more cores and other features to be added to the die and also likely a decent jump in clock speed, too. Freund is not saying at the moment which way it will go.
            But what Freund will confirm is that Sarita will be pin-compatible with the existing Midway chip, meaning that server makers who adopt Midway will have a processor bump they can offer in a relatively easy fashion. It will also be based on the Cortex-A57 cores from ARM Holdings, and will sport four cores on a die that deliver about a 50 percent performance increase compared to the Midway chips.
            The Lago chips, we now know, will scale to eight cores on a die and deliver about twice the performance of the Midway chips. Both Lago and Sarita are on the same schedule, in fact, and they are expected to tape out this quarter. Calxeda expects to start sampling them to customers in the second quarter of 2014, with production quantities being available at the end of 2014.
            Not Just Compute, But Networking, Too
            As important as the processing is to a system, the Fleet Services fabric interconnect is perhaps the key differentiator in its design. The current iteration of that interconnect, which is a distributed Layer 2 switch fabric that is spread across each chip in a cluster, can scale across 4,096 nodes without requiring top-of-rack and aggregation switches.

            image

            Both of the Lago and Sarita chips will be using the Fleet Services 2.0 interconnect that is now being launched with Midway. This iteration of the interconnect has all kinds of tweaks and nips and tucks but no scalability enhancements beyond the 4,096 nodes in the original fabric.
            Freund says that the Fleet Services 3.0 fabric, which allows the distributed switch architecture to scale above 100,000 nodes in a flat network, will probably now come with the “Ratamosa” chips in 2015. It was originally – and loosely – scheduled for Lago next year. The circuits that do the fabric interconnect are not substantially different, says Freund, but the scalability is enabled through software. It could be that customers are not going to need such scalability as rapidly as Calxeda originally thought.
            The “Navarro” kicker to the Ratamosa chip is presumably based on the ARMv8 architecture, and Calxeda is not saying anything about when we might see that and what properties it might have. All that it has said thus far is that it is aimed at the “enterprise server era.”


            Details about the latest Texas Instruments DSP+ARM SoCs:

            A Better Way to Cloud [MultiVuOnlineVideo YouTube channel, Nov 13, 2012]

            To most technologists, cloud computing is about applications, servers, storage and connectivity. To Texas Instruments Incorporated (TI) (NASDAQ: TXN) it means much more. Today, TI is unveiling a BETTER way to cloud with six new multicore System-on-Chips (SoCs). Based on its award winning KeyStone architecture, TI’s SoCs are designed to revitalize cloud computing, inject new verve and excitement into pivotal infrastructure systems and, despite their feature rich specifications and superior performance, actually reduce energy consumption. To view Multimedia News Release, go to http://www.multivu.com/mnr/54044-texas-instruments-keystone-multicore-socs-revitalize-cloud-applications

            Infinite Scalability in Multicore Processors [Texas Instruments YouTube channel, Aug 27, 2012]

            Over the years, our industry has preached how different types of end equipments and applications are best served by distinctive multicore architectures tailored to each. There are even those applications, such as high performance computing, which can be addressed by more than one type of multicore architecture. Yet most multicore devices today tend to be suited for a specific approach or a particular set of markets. This keynote address, from the 2012 Multicore Developer’s Conference, touches upon why the market needs an “infinitely scalable” multicore architecture which is both scalable and flexible enough to support disparate markets and the varied ways in which certain applications are addressed. The speaker presents examples of how a single multicore architecture can be scalable enough to address the needs of various high performance markets, including cloud RAN, networking, imaging and high performance computing. Ramesh Kumar manages the worldwide business for TI’s multicore growth markets organization. The organization develops multicore processors and software that are targeted for the communication infrastructure space, including multimedia and networking infrastructure equipment, as well as end equipment that requires multicore processors like public safety, medical imaging, high performance computing and test and measurement. Ramesh is a graduate of Northeastern University, where he obtained an executive MBA, and Purdue University where he received a master of science in electrical engineering.

            From Imagine the impact…TI’s KeyStone SoC + HP Moonshot [TI’s Multicore Mix Blog, April 19, 2013]

            TI’s participation in HP’s Pathfinder Innovation Ecosystem is the first step towards arming HP’s customers with optimized server systems that are ideally suited for workloads such as oil and gas exploration, Cloud Radio Access Networks (C-RAN), voice over LTE and video transcoding. This collaboration between TI and HP is a bold step forward, enabling flexible, optimized servers to bring differentiated technologies, such as TI’s DSPs, to a broader set of application providers. TI’s KeyStone II-based SoCs, which integrate fixed- and floating- point DSP cores with multiple ARM® Cortex™A-15 MPCore processors, packet and security processing, and high speed interconnect, give customers the performance, scalability and programmability needed to build software-defined servers. HP’s Moonshot system integrates storage, networking and compute cards with a flexible interconnect, allowing customers to choose the optimized ratio enabling the industry’s first software-defined server platform. Bringing TI’s KeyStone II-based SoCs into HP’s Moonshot system opens up several tantalizing possibilities for the future. Let’s look at a few examples:
            Think about the number of voice conversations happening over mobile devices every day. These conversations are independent of each other, and each will need transcoding from one voice format to another as voice travels from one mobile device, through the network infrastructure and to the other mobile device. The sheer number of such conversations demand that the servers used for voice transcoding be optimized for this function. Voice is just one example. Now think about video and music, and you can imagine the vast amount of processing required. Using TI’s KeyStone II-based SoCs with DSP technology provides optimized server architecture for these applications because our SoCs are specifically tuned for signal processing workloads.
            Another example can be with C-RAN. We have seen a huge push for mobile operators to move most of the mobile radio processing to the data center. There are several approaches to achieve this goal, and each has pros and cons associated with them. But one thing is certain – each approach has to do wireless symbol processing to achieve optimum 3G or 4G communications with smart mobile devices. TI’s KeyStone II-based SoCs are leading the wireless communication infrastructure market and combine key accelerators such as BCP (Bit Rate Co-Processor), VCP (Viterbi Co-Processor) and others to enable 3G/4G standards-compliant wireless processing. These key accelerators offload standards-based wireless processing from the ARM and/or DSP cores, freeing the cores for value-added processing. The combination of ARM/DSP with these accelerators provides an optimum SoC for 3G/4G wireless processing. By combining TI’s KeyStone II-based SoC with HP’s Moonshot system, operators and network equipment providers can now build customized servers for C-RAN to achieve higher performance systems at lower cost and ultimately provide better experiences to their customers.

            A better way to cloud: TI’s new KeyStone multicore SoCs [embeddednewstv YouTube channel, published on Jan 12, 2013 (YouTube: Oct 21, 2013)]

            Brian Glinsman, vice president of multicore processors at Texas Instruments, discusses TI’s new KeyStone multicore SoCs for cloud infrastructure applications. TI announced six new SoCs, based on their 28-nm KeyStone architecture, featuring the Industry’s first implementation of quad ARM Cortex-A15 MPCore processors and TMS320C66x DSPs for purpose built servers, networking, high performance computing, gaming and media processing applications.

            Texas Instruments Offers System on a Chip for HPC Applications [RichReport YouTube channel, Nov 20, 2012]

            In this video from SC12, Arnon Friedmann from Texas Instruments describes the company’s new multicore System-on-Chips (SoCs). Based on its award winning KeyStone architecture, TI’s SoCs are designed to revitalize cloud computing, inject new verve and excitement into pivotal infrastructure systems and, despite their feature rich specifications and superior performance, actually reduce energy consumption. “Using multicore DSPs in a cloud environment enables significant performance and operational advantages with accelerated compute intensive cloud applications,” said Rob Sherrard, VP of Service Delivery, Nimbix. “When selecting DSP technology for our accelerated cloud compute environment, TI’s KeyStone multicore SoCs were the obvious choice. TI’s multicore software enables easy integration for a variety of high performance cloud workloads like video, imaging, analytics and computing and we look forward to working with TI to help bring significant OPEX savings to high performance compute users.”

            A better way to cloud: TI’s new KeyStone multicore SoCs revitalize cloud applications, enabling new capabilities and a quantum leap in performance at significantly reduced power consumption

              • Industry’s first implementation of quad ARM® Cortex™-A15 MPCore™ processors in infrastructure-class embedded SoC offers developers exceptional capacity & performance at significantly reduced power for networking, high performance computing and more
              • Unmatched combination of Cortex-A15 processors, C66x DSPs, packet processing, security processing and Ethernet switching, transforms the real-time cloud into an optimized high performance, power efficient processing platform
              • Scalable KeyStone architecture now features 20+ software compatible devices, enabling customers to more easily design integrated, power and cost-efficient products for high-performance markets from a range of devices

            ELECTRONICA – MUNICH (Nov. 13, 2012) /PRNewswire/ — To most technologists, cloud computing is about applications, servers, storage and connectivity. To Texas Instruments Incorporated (TI) (NASDAQ: TXN) it means much more. Today, TI is unveiling a BETTER way to cloud with six new multicore System-on-Chips (SoCs). Based on its award winning KeyStone architecture, TI’s SoCs are designed to revitalize cloud computing, inject new verve and excitement into pivotal infrastructure systems and, despite their feature rich specifications and superior performance, actually reduce energy consumption.

            To TI, a BETTER way to cloud means:

              • Safer communities thanks to enhanced weather modeling;
              • Higher returns from time sensitive financial analysis;
              • Improved productivity and safety in energy exploration;
              • Faster commuting on safer highways in safer cars;
              • Exceptional video on any screen, anywhere, any time;
              • More productive and environmentally friendly factories; and
              • An overall reduction in energy consumption for a greener planet.
              • TI’s new KeyStone multicore SoCs are enabling this – and much more. These 28-nm devices integrate TI’s fixed- and floating-point TMS320C66x digital signal processor (DSP) generation cores – yielding the best performance per watt ratio in the DSP industry – with multiple ARM® Cortex™-A15 MPCore™ processors – delivering unprecedented processing capability combined with low power consumption – facilitating the development of a wide range of infrastructure applications that can enable more efficient cloud experiences. The unique combination of Cortex-A15 processors and C66x DSP cores, with built-in packet processing and Ethernet switching, is designed to efficiently offload and enhance the cloud’s first generation general purpose servers; servers that struggle with big data applications like high performance computing and video processing.
              “Using multicore DSPs in a cloud environment enables significant performance and operational advantages with accelerated compute intensive cloud applications,” said Rob Sherrard, VP of Service Delivery, Nimbix. “When selecting DSP technology for our accelerated cloud compute environment, TI’s KeyStone multicore SoCs were the obvious choice. TI’s multicore software enables easy integration for a variety of high performance cloud workloads like video, imaging, analytics and computing and we look forward to working with TI to help bring significant OPEX savings to high performance compute users.”
              TI’s six new high-performance SoCs include the 66AK2E02, 66AK2E05, 66AK2H06, 66AK2H12, AM5K2E02 and AM5K2E04, all based on the KeyStone multicore architecture. With KeyStone’s low latency high bandwidth multicore shared memory controller (MSMC), these new SoCs yield 50 percent higher memory throughput when compared to other RISC-based SoCs. Together, these processing elements, with the integration of security processing, networking and switching, reduce system cost and power consumption, allowing developers to support the development of more cost-efficient, green applications and workloads, including high performance computing, video delivery and media and image processing. With the matchless combination TI has integrated into its newest multicore SoCs, developers of media and image processing applications will also create highly dense media solutions.

              image

              “Visionary and innovative are two words that come to mind when working with TI’s KeyStone devices,” said Joe Ye, CEO, CyWee. “Our goal is to offer solutions that merge the digital and physical worlds, and with TI’s new SoCs we are one step closer to making this a reality by pushing state-of-the-art video to virtualized server environments. Our collaboration with TI should enable developers to deliver richer multimedia experiences in a variety of cloud-based markets, including cloud gaming, virtual office, video conferencing and remote education.”
              Simplified development with complete tools and support
              TI continues to ease development with its scalable KeyStone architecture, comprehensive software platform and low-cost tools. In the past two years, TI has developed over 20 software compatible multicore devices, including variations of DSP-based solutions, ARM-based solutions and hybrid solutions with both DSP and ARM-based processing, all based on two generations of the KeyStone architecture. With compatible platforms across TI’s multicore DSPs and SoCs, customers can more easily design integrated, power and cost-efficient products for high-performance markets from a range of devices, starting at just $30 and scaling from a clock rate of 850MHz all the way to 15GHz of total processing power.
              TI is also making it easier for developers to quickly get started with its KeyStone multicore solutions by offering easy-to-use, evaluation modules (EVMs) for less than $1K, reducing developers’ programming burdens and speeding development time with a robust ecosystem of multicore tools and software.
              In addition, TI’s Design Network features a worldwide community of respected and well established companies offering products and services that support TI multicore solutions. Companies offering supporting solutions to TI’s newest KeyStone-based multicore SoCs include 3L Ltd., 6WIND, Advantech, Aricent, Azcom Technology, Canonical, CriticalBlue, Enea, Ittiam Systems, Mentor Graphics, mimoOn, MontaVista Software, Nash Technologies, PolyCore Software and Wind River.
              Availability and pricing
              TI’s 66AK2Hx SoCs are currently available for sampling, with broader device availability in 1Q13 and EVM availability in 2Q13. AM5K2Ex and 66AK2Ex samples and EVMs will be available in the second half of 2013. Pricing for these devices will start at $49 for 1 KU.

              66AK2H14 (ACTIVE) Multicore DSP+ARM KeyStone II System-on-Chip (SoC) [TI.com, Nov 10, 2013]
              The same as the 66AK2H12 SoC below, with the addition of:

              More Literature:

              From that literature, the excerpt below is essential to understanding the added value over the 66AK2H12 SoC:

              image

              Figure 1. TI’s KeyStone™ 66AK2H14 SoC

              The 66AK2H14 SoC shown in Figure 1, with the raw computing power of eight C66x processors and quad ARM Cortex-A15s at over 1GHz performance, enables applications such as very large fast Fourier transforms (FFTs) in radar and multiple camera image analytics where a 10Gbit/s networking connection is needed. There are, and have been, several sophisticated technologies that have offered the bandwidth and additional features to fill this role. Some, such as Serial RapidIO® and InfiniBand, have been successful in application domains that Gigabit Ethernet could not address, and continue to make sense, but 10Gbit/s Ethernet will challenge their existence.
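For a rough sense of scale on those "very large FFT" workloads: using the standard ~5·N·log2(N) flop count for a radix-2 complex FFT and the part's quoted single-precision peak (real kernels sustain only a fraction of peak, so this is a lower bound, not a benchmark):

```python
import math

# Rough cost model for a radix-2 complex FFT: ~5 * N * log2(N) flops.
# 153.6 GFLOPS is the chip's quoted single-precision peak (rounded to 153
# in the press release); sustained throughput is lower in practice.
N = 2 ** 20                       # a 1M-point transform
flops = 5 * N * math.log2(N)      # ≈ 1.05e8 floating-point operations
peak_flops_per_sec = 153.6e9

print(flops / peak_flops_per_sec * 1e3)  # time at peak, in milliseconds
```

Even allowing for realistic efficiency, a million-point transform lands in the low-millisecond range, which is why the 10Gbit/s pipes matter: feeding data in fast enough becomes the bottleneck.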

              66AK2H12 (ACTIVE) Multicore DSP+ARM KeyStone II System-on-Chip (SoC) [TI.com, created on Nov 8, 2012]

              Datasheet manual [351 pages]:

              More Literature:

              Description

              The 66AK2Hx platform is TI’s first to combine the quad ARM® Cortex™-A15 MPCore™ processors with up to eight TMS320C66x high-performance DSPs using the KeyStone II architecture. Unlike previous ARM Cortex-A15 devices that were designed for consumer products, the 66AK2Hx platform provides up to 5.6 GHz of ARM and 11.2 GHz of DSP processing coupled with security and packet processing and Ethernet switching, all at lower power than multi-chip solutions making it optimal for embedded infrastructure applications like cloud computing, media processing, high-performance computing, transcoding, security, gaming, analytics and virtual desktop. Using TI’s heterogeneous programming runtime software and tools, customers can easily develop differentiated products with 66AK2Hx SoCs.

              image

              Taking Multicore to the Next Level: KeyStone II Architecture [Texas Instruments YouTube channel, Feb 26, 2012]

              TI’s scalable KeyStone II multicore architecture includes support for both TMS320C66x DSP cores and multiple cache coherent quad ARM Cortex™-A15 clusters, for a mixture of up to 32 DSP and RISC cores. With significant updates to its award-winning KeyStone architecture, TI is now paving the way for a new era of high performance 28-nm devices that meld signal processing, networking, security and control functionality, with KeyStone II. Ideal for applications that demand superior performance and low power, devices based on the KeyStone architecture are optimized for high performance markets including communications infrastructure, mission critical, test and automation, medical imaging and high performance and cloud computing. For more information, please visit http://www.ti.com/multicore.

              Introducing the EVMK2H [Texas Instruments YouTube channel, Nov 15, 2013]

              Introducing the EVMK2H evaluation module, the cost-efficient development tool from Texas Instruments that enables developers to quickly get started working on designs for the 66AK2H06, 66AK2H12, and 66AK2H14 multicore DSP + ARM devices based on the KeyStone architecture.

              Kick start development of high performance compute systems with TI’s new KeyStone™ SoC and evaluation module [TI press release, Nov 14, 2013]

              Combination of DSP + ARM® cores and high-speed peripherals offer developers an optimal compute solution at low power consumption

              DALLAS, Nov. 14, 2013 /PRNewswire/ — Further easing the development of processing-intensive applications, Texas Instruments (TI) (NASDAQ: TXN) is unveiling a new system-on-chip (SoC), the 66AK2H14, and evaluation module (EVM) for its KeyStoneTM-based 66AK2Hx family of SoCs. With the new 66AK2H14 device, developers designing high-performance compute systems now have access to a 10Gbps Ethernet switch-on-chip. The inclusion of the 10GigE switch, along with the other high-speed, on-chip interfaces, saves overall board space, reduces chip count and ultimately lowers system cost and power. The EVM enables developers to evaluate and benchmark faster and easier. The 66AK2H14 SoC provides industry-leading computational DSP performance at 307 GMACS/153 GFLOPS and 19600 DMIPS of ARM performance, making it ideal for a wide variety of applications such as video surveillance, radar processing, medical imaging, machine vision and geological exploration.
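The headline DSP numbers are consistent with simple per-core arithmetic, assuming the commonly cited C66x rates of 32 16-bit MACs and 16 single-precision flops per core per cycle at a 1.2 GHz DSP clock (the per-core rates and the clock are assumptions for illustration, not stated in the press release):

```python
# Back-of-envelope check of the quoted "307 GMACS / 153 GFLOPS".
dsp_cores = 8
clock_ghz = 1.2        # assumed DSP clock
macs_per_cycle = 32    # assumed C66x fixed-point rate per core
flops_per_cycle = 16   # assumed C66x single-precision rate per core

gmacs = dsp_cores * macs_per_cycle * clock_ghz    # → 307.2
gflops = dsp_cores * flops_per_cycle * clock_ghz  # → 153.6
print(gmacs, gflops)
```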

              “Customers today require increased performance to process compute-intensive workloads using less energy in a smaller footprint,” said Paul Santeler, vice president and general manager, Hyperscale Business, HP. “As a partner in HP’s Moonshot ecosystem dedicated to the rapid development of new Moonshot servers, we believe TI’s KeyStone design will provide new capabilities across multiple disciplines to accelerate the pace of telecommunication innovations and geological exploration.”

              Meet TI’s new 10Gbps Ethernet DSP + ARM SoC
              TI’s newest silicon variant, the 66AK2H14, is the latest addition to its high-performance 66AK2Hx SoC family which integrates multiple ARM Cortex™-A15 MPCore™ processors and TI’s fixed- and floating-point TMS320C66x digital signal processor (DSP) generation cores. The 66AK2H14 offers developers exceptional capacity and performance (up to 9.6 GHz of cumulative DSP processing) at industry-leading size, weight, and power. In addition, the new SoC features a wide array of unique high-speed interfaces, including PCIe, RapidIO, Hyperlink, 1Gbps and 10Gbps Ethernet, achieving total I/O throughput of up to 154Gbps. These interfaces are all distinct and not multiplexed, allowing designers tremendous flexibility with uncompromising performance in their designs.
              Ease development and debugging with TI’s tools and software
              TI helps simplify the design process by offering developers highly optimized software for embedded HPC systems along with development and debugging tools for the EVMK2H – all for under $1,000. The EVMK2H features a single 66AK2H14 SoC, a status LCD, two 1Gbps Ethernet RJ-45 interfaces and on-board emulation. An optional EVM breakout card (available separately) also provides two 10Gbps Ethernet optical interfaces for 20Gbps backplane connectivity and optional wire rate switching in high density systems.
              The EVMK2H is bundled with TI’s Multicore Software Development Kit (MCSDK), enabling faster development with production ready foundational software. The MCSDK eases development and reduces time to market by providing highly-optimized bundles of foundational, platform-specific drivers, optimized libraries and demos.
              Complementary analog products to increase system performance
              TI offers a wide range of power management and analog signal chain components to increase the system performance of 66AK2H14 SoC-based designs. For example, the TPS53xx integrated FET DC/DC converters provide the highest level of power conversion efficiency even at light loads, while the LM10011 VID converter with dynamic voltage control helps reduce system power consumption. The CDCM6208 low-jitter clock generator also eliminates the need for external buffers, jitter cleaners and level translators.
              Availability and pricing
              TI’s EVMK2H is available now through TI distribution partners or TI.com for $995. In addition to TI’s Linux distribution provided in the MCSDK, Wind River® Linux is available now for the 66AK2Hxx family of SoCs. Green Hills® INTEGRITY® RTOS and Wind River VxWorks® RTOS support will each be available before the end of the year. Pricing for the 66AK2H14 SoC will start at $330 for 1 KU. The 10Gbps Ethernet breakout card will be available from Mistral.

              Ask the Expert: How can developers accelerate scientific computing with TI’s multicore DSPs? [Texas Instruments YouTube channel, Feb 7, 2012]

              Dr. Arnon Friedmann is the business manager for TI’s high performance computing products in the multicore and media infrastructure business. In this video, he explains how TI’s multicore DSPs are well suited for computing applications in oil and gas exploration, financial modeling and molecular dynamics, where ultra- high performance, low power and easy programmability are critical requirements.

              Ask the Expert: Arnon Friedmann [Texas Instruments YouTube channel, Sept 6, 2012]

              How are TI’s latest multicore devices a fit for video surveillance and smart analytic camera applications? Dr. Arnon Friedmann, PhD, is a business manager for multicore processors at Texas Instruments. In this role, he is responsible for growing TI’s business in high performance computing, mission critical, test and measurement and imaging markets. Prior to his current role, Dr. Friedmann served as the marketing director for TI’s wireless base station infrastructure group, where he was responsible for all marketing and design activities. Throughout his 14 years of experience in digital communications research and development, Dr. Friedmann has accumulated patents in the areas of disk drive systems, ADSL modems and 3G/4G wireless communications. He holds a PhD in electrical engineering and bachelor of science in engineering physics, both from the University of California, San Diego.

              End of Updates as of Dec 6, 2013


              The original post (8 months ago):

              HP Moonshot: Designed for the Data Center, Built for the Planet [HP press kit, April 8, 2013]

              On April 8, 2013, HP unveiled the world’s first commercially available HP Moonshot system, delivering compelling new infrastructure economics by using up to 89 percent less energy, 80 percent less space and costing 77 percent less, compared to traditional servers. Today’s mega data centers are nearing a breaking point where further growth is restricted due to the current economics of traditional infrastructure. HP Moonshot servers are a first step organizations can take to address these constraints.

              For more details on the disruptive potential of HP Moonshot, visit TheDisruption.com

              Introducing HP Moonshot [HewlettPackardVideos April 11, 2013]

              See how HP is defining disruption with the introduction of HP Moonshot.

              HP’s Cutting Edge Data Center Innovation [Ramón Baez, Senior Vice President and Chief Information Officer (CIO) of HP, HP Next [launched on April 2], April 10, 2013]

              This is an exciting time to be in the IT industry right now. For those of you who have been around for a while — as I have — there have been dramatic shifts that have changed how businesses operate.
              From the early days of the mainframes, to the explosion of the Internet and now social networks, every so often very important game-changing innovation comes along. We’re in the midst of another sea change in technology.
Inside HP IT, we are testing the company’s Moonshot servers. Because these servers run the same chips found in smartphones and tablets, they use far less power, require considerably less cooling and have a smaller footprint.

              We currently are running some of our intensive hp.com applications on Moonshot and are seeing very encouraging results. Over half a billion people will visit hp.com this year, and the new Moonshot technology will run at a fraction of the space, power and cost – basically we expect to run HP.com off of the same amount of energy needed for a dozen 60-watt light bulbs.

              This technology will revolutionize data centers.
Within HP IT, we are fortunate in that over the past several years we have built a solid data center foundation to run our company. Like many companies, we were a victim of IT sprawl, with more than 85 data centers in 29 countries. We decided to make a change and took on a total network redesign, cutting our principal worldwide data centers down to six and housing all of them in the United States.
              With the addition of four new EcoPODs to our infrastructure and these new Moonshot servers, we are in the perfect position to build out our private cloud and provide our businesses with the speed and quality of innovation they need.
Moonshot is just the beginning. The product roadmap for Moonshot is extremely promising and I am excited to see what we can do with it within HP IT, and what benefits our customers will see.
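Taken literally, the “dozen 60-watt light bulbs” figure quoted above is a 720 W power budget for hp.com. A quick calculation (my own arithmetic, not an HP number beyond what is quoted) shows what that means over a year:

```python
# "A dozen 60-watt light bulbs" (quoted above) as a continuous power budget.
BULBS = 12
WATTS_PER_BULB = 60

budget_w = BULBS * WATTS_PER_BULB        # 720 W continuous draw
kwh_per_year = budget_w / 1000 * 24 * 365
print(budget_w, round(kwh_per_year))     # 720 W, about 6307 kWh per year
```

For a site claimed to serve over half a billion visitors a year, that is a remarkably small energy footprint.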

What Calxeda is saying about HP Moonshot [HewlettPackardVideos YouTube channel, April 8, 2013], which is best to start with for its simple and efficient message, as well as Intel targeting ARM based microservers: the Calxeda case [‘Experiencing the Cloud’ blog, Dec 14, 2012], already covered earlier on this blog:

              Calxeda discusses HP’s Project Moonshot and the cost, space, and efficiency innovations being enabled through the Pathfinder Innovation Ecosystem. http://hp.com/go/moonshot

              Then we can turn to the Moonshot product launch by HP 2 days ago:

Note that the first three videos following here were released 3 days later, so don’t be surprised by the YouTube dates; in fact the same 3 videos (as well as the “Introducing HP Moonshot” embedded above) were delivered in the April 8 live webcast. See the first 18 minutes of that, and then follow HP’s flow of the presentation if you like. I would certainly recommend my own presentation compiled here.

              HP president and CEO Meg Whitman on the emergence of a new style of IT [HewlettPackardVideos YouTube channel, April 11, 2013]

              HP president and CEO Meg Whitman outlines the four megatrends causing strain on current infrastructure and how HP Project Moonshot servers are built to withstand data center challenges.

              EVP and GM of HP’s Enterprise Group Dave Donatelli discusses HP Moonshot [HewlettPackardVideos YouTube channel, April 11, 2013]

              EVP and GM of HP’s Enterprise Group Dave Donatelli details how HP Moonshot redefines the server market.

              Tour the Houston Discovery Lab — where the next generation of innovation is created [HewlettPackardVideos YouTube channel, April 11, 2013]

              SVP and GM of HP’s Industry Standard Servers and Software Mark Potter and VP and GM of HP’s Hyperscale Business Unit Paul Santeler tour HP’s Discovery Lab in Houston, Texas. HP’s Discovery Lab allows customers to test, tune and port their applications on HP Moonshot servers in-person and remotely.

              A new era of accelerated innovation [HP Moonshot minisite, April 8, 2013]

Cloud, Mobility, Security, and Big Data are transforming what the business expects from IT, resulting in a “New Style of IT.” The result of alternative thinking from a proven industry leader, HP Moonshot is the world’s first software defined server that will accelerate innovation while delivering breakthrough efficiency and scale.

Watch the unveiling [link to HP Moonshot – The Disruption, HP Event registration page at ‘thedisruption.com’]

On the right is the Moonshot System with the very first Moonshot servers (“microservers/server appliances” as called by the industry), based on Intel® Atom S1200 processors and supporting web-hosting workloads (see also the right part of the image below). Currently there is also a storage cartridge (on the left of the image below) and a multinode cartridge for highly dense computing solutions (seen in the hands of the presenter in the image below). Many more are to come later on.

              image

With up to 180 servers inside the box (45 now), it was necessary to integrate network switching. There are two sockets (see left) for the network switch, so you can configure for redundancy. The downlink module, which talks to the cartridges, is on the left of the image below. This module is paired with an uplink module (shown taken out in the middle of the image below, and then together with the uplink module on the right) that sits in the back of the server. There will be more options available.

              More information:
              Enterprise Information Library for Moonshot
              HP Moonshot System [Technical white paper from HP, April 5, 2013] from which I will include here the following excerpts for more information:

              HP Moonshot 1500 Chassis

              The HP Moonshot 1500 Chassis is a 4.3U form factor and slides out of the rack on a set of rails like a file cabinet drawer. It supports 45 HP ProLiant Moonshot Servers and an HP Moonshot-45G Switch Module that are serviceable from the top.
              It is a modern architecture engineered for the new style of IT that can support server cartridges, server and storage cartridges, storage only cartridges and a range of x86, ARM or accelerator based processor technologies.
As an initial offering, the HP Moonshot 1500 Chassis is fully populated with 45 HP ProLiant Moonshot Servers and one HP Moonshot-45G Switch Module; a second HP Moonshot-45G Switch Module can be purchased as an option. Future offerings will include quad-server cartridges, resulting in up to 180 servers per chassis. The 4.3U form factor allows for 10 chassis per rack, which with the quad-server cartridge amounts to 1,800 servers in a single rack.
              The Moonshot 1500 Chassis simplifies management with four iLO processors that share management responsibility for the 45 servers, power, cooling, and switches.
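The density arithmetic quoted above is easy to verify with a few lines of code (the cartridge and per-rack chassis figures are from the white paper; nothing else is assumed):

```python
# Moonshot density figures from the HP white paper quoted above.
SERVERS_PER_CARTRIDGE_NOW = 1    # initial single-server cartridge
SERVERS_PER_CARTRIDGE_QUAD = 4   # planned quad-server cartridge
CARTRIDGES_PER_CHASSIS = 45
CHASSIS_PER_RACK = 10            # 4.3U chassis, 10 per rack

def servers_per_rack(per_cartridge: int) -> int:
    return per_cartridge * CARTRIDGES_PER_CHASSIS * CHASSIS_PER_RACK

print(servers_per_rack(SERVERS_PER_CARTRIDGE_NOW))   # 450 servers today
print(servers_per_rack(SERVERS_PER_CARTRIDGE_QUAD))  # 1800 with quad cartridges
```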

              Highly flexible fabric

              Built into the HP Moonshot 1500 Chassis architecture are four separate and independent fabrics that support a range of current and future capabilities:
              • Network fabric
              • Storage fabric
              • Management fabric
              • Integrated cluster fabric
              Network fabric
              The Network fabric provides the primary external communication path for the HP Moonshot 1500 Chassis.
              For communication within the chassis, the network switch has four communication channels to each of the 45 servers. Each channel supports a 1-GbE or 10-GbE interface. Each HP Moonshot-45G Switch Module supports 6 channels of 10GbE interface to the HP Moonshot-6SFP network uplink modules located in the rear of the chassis.
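Plugging the channel counts above into a short script shows the aggregate fabric numbers per switch module (assuming every server channel runs at 1GbE; with 10GbE channels the ratio changes accordingly):

```python
# Back-of-envelope fabric arithmetic from the white-paper figures above.
# Assumption: all 4 channels per server run at 1 Gb/s (they can also be 10GbE).
SERVERS = 45
CHANNELS_PER_SERVER = 4
CHANNEL_GBPS = 1
UPLINKS = 6                  # 6 x 10GbE channels to the Moonshot-6SFP uplink module
UPLINK_GBPS = 10

ingress = SERVERS * CHANNELS_PER_SERVER * CHANNEL_GBPS   # 180 Gb/s from servers
egress = UPLINKS * UPLINK_GBPS                           # 60 Gb/s out of the chassis
print(f"oversubscription {ingress / egress:.1f}:1")      # 3.0:1
```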
              Storage fabric
              The Storage fabric provides dedicated SAS lanes between server and storage cartridges. We utilize HP Smart Storage firmware found in the ProLiant family of servers to enable multiple core to spindle ratios for specific solutions. A hard drive can be shared among multiple server cartridges to enable low cost boot, logging, or attached to a node to provide storage expansion.
The current HP Moonshot System configuration targets light scale-out applications. To provide the best operating environment for these applications, it includes HP ProLiant Moonshot Servers with a hard disk drive (HDD) as part of the server architecture. Shared storage is not an advantage for these environments. Future releases of the servers that target different solutions will take advantage of the storage fabric.
              Management fabric
              We utilize the Integrated Lights-Out (iLO) application-specific integrated circuit (ASIC) standard in the HP ProLiant family of servers to provide the innovative management features in the HP Moonshot System. To handle the range of extreme low energy processors we provide a device neutral approach to management, which can be easily consumed by data center operators to deploy at scale.
              The Management fabric enables management of the HP Moonshot System components as one platform with a dedicated iLO network. Benefits of the management fabric include:
              • The iLO Chassis Manager aggregates data to a common set of management interfaces.
              • The HP Moonshot 1500 Chassis has a single Ethernet port gateway that is the single point of access for the Moonshot Chassis manager.
              • Intelligent Platform Management Interface (IPMI) and Serial Console for each server
              • True out-of-band firmware update services
              • SL-APM Rack Management spans rack or multiple racks
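As a rough illustration of what the dedicated iLO management network with per-server IPMI makes possible, the sketch below builds one out-of-band power query per cartridge. The gateway address, credentials and bridged target addressing are all hypothetical placeholders, not HP-documented values:

```python
# Sketch only: the chassis-manager IP, user and bridged target addresses
# below are hypothetical, not HP-documented values.
CHASSIS_MANAGER = "10.0.0.1"   # the chassis's single Ethernet gateway
USER = "admin"

def power_status_cmd(node: int) -> list:
    # One common ipmitool pattern for nodes behind a shared management
    # processor is bridged addressing via -t; exact flags depend on firmware.
    target = 0x82 + 2 * node   # made-up addressing scheme for illustration
    return ["ipmitool", "-H", CHASSIS_MANAGER, "-U", USER,
            "-t", f"0x{target:02x}", "chassis", "power", "status"]

commands = [power_status_cmd(n) for n in range(45)]   # one query per server
print(len(commands))
```

The point is the management model: one gateway, 45 servers, all reachable out-of-band without touching the production network fabric.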
              Integrated Cluster fabric
              The Integrated Cluster fabric provides a high-speed interface among future server cartridge technologies that will benefit from high bandwidth node-to-node communication. North, south, east, and west lanes are provided between individual server cartridges.
The current HP ProLiant Moonshot Server targets light scale-out applications. These applications do not benefit from node-to-node communications, so the Integrated Cluster fabric is not utilized. Future cartridges targeting workloads that require low-latency interconnects will take advantage of the Integrated Cluster fabric.

              HP ProLiant Moonshot Server

              HP will bring a growing library of cartridges, utilizing cutting-edge technology from industry leading partners. Each server will target specific solutions that support emerging Web, Cloud, and Massive-Scale Environments, as well as Analytics and Telecommunications. We are continuing server development for other applications, including Big Data, High-Performance Computing, Gaming, Financial Services, Genomics, Facial Recognition, Video Analysis, and more.
              Figure 4. Cartridges target specific solutions

              image

The first server cartridge now available is the HP ProLiant Moonshot Server, which includes the Intel® Atom Processor S1260. This is a low-power processor that is right-sized for light workloads. It has dedicated memory and storage, with discrete resources. This server design is ideal for light scale-out applications. Light scale-out applications require relatively little processing but moderately high I/O and include environments that perform the following functions:
              • Dedicated web hosting
              • Simple content delivery
The HP ProLiant Moonshot Server is hot-pluggable in the HP Moonshot 1500 Chassis. If service is necessary, it can be removed without affecting the other servers in the chassis. Table 1 defines the HP ProLiant Moonshot Server specifications.
              Table 1. HP ProLiant Moonshot Server specifications

              Processor
              One Intel® Atom Processor S1260
              Memory
              8 GB DDR3 ECC 1333 MHz
              Networking
              Integrated dual-port 1Gb Ethernet NIC
              Storage
              500 GB or 1 TB HDD or SSD, non-hot-plug, small form factor
              Operating systems
              Canonical Ubuntu 12.04
              Red Hat Enterprise Linux 6.4
              SUSE Linux Enterprise Server 11 SP2

With that, HP CEO Seeks Turnaround Unveiling ‘Moonshot’ Super-Server: Tech [Bloomberg, April 2013] as well as HP Moonshot: Say Goodbye to the Vanilla Server [Forbes, April 8, 2013]. HP, however, is much more eyeing the ARM-based Moonshot servers expected to come later, because of the trends reflected on the left (source: HP). The software defined server concept is very general.

There are a number of quite different server cartridges expected to come, each specialised by the server software installed on it. Typical specialised servers, for example, are the ones CyWee from Taiwan is working on with Texas Instruments’ new KeyStone II architecture, which features both ARM Cortex-A15 CPU cores and TI’s own C66x DSP cores for a mixture of up to 32 DSP and RISC cores in TI’s new 66AK2Hx family of SoCs, the first of which is the TMS320TCI6636, implemented in 28nm foundry technology. Based on that, CyWee will deliver multimedia Moonshot server cartridges for cloud gaming, virtual office, video conferencing and remote education (see even the first KeyStone announcement). This CyWee involvement in the HP Moonshot effort is part of HP’s Pathfinder Partner Program, which Texas Instruments also joined recently to exploit a larger opportunity as:

              TI’s 66AK2Hx family and its integrated c66x multicore DSPs are applicable for workloads ranging from high performance computing, media processing, video conferencing, off-line image processing & analytics, video recorders (DVR/NVR), gaming, virtual desktop infrastructure and medical imaging.

              But Intel was able to win the central piece of the Moonshot System launch (originally initiated by HP as the “Moonshot Project” in November 2011 for disruption in terms of power and TCO for servers, actually with a Calxeda board used for research and development with other partners), at least as it was productized just two days ago:
              Raejeanne Skillern from Intel – HP Moonshot 2013 – theCUBE [siliconangle YouTube channel]

              Raejeanne Skillern, Intel Director of Marketing for Cloud Computing, at HP Moonshot 2013 with John Furrier and Dave Vellante

However, ARM was not left out either, just relegated in the beginning to highly advanced and/or specialised server roles with its SoC partners, coming later in the year:

• Applied Micro with networking and connectivity background, now also having the X-Gene ARM 64-bit Server on a Chip platform, which features 8 ARM 64-bit high-performance cores developed from scratch under an architecture license (i.e. not ARM’s own Cortex-A50 series core), clocked at up to 2.4GHz, and also has 4 smaller cores for network and storage offloads (see AppliedMicro on the X-Gene ARM Server Platform and HP Moonshot [SiliconANGLE blog, April 9, 2013]). Sample reference boards were shipped to key customers in March (see Applied Micro’s cloud chip is an ARM-based, switch-killing machine [GigaOM, April 3, 2013]). In the latest X-Gene Arrives in Silicon [Open Compute Summit Winter 2013 presentation, Jan 16, 2013] video you can find the most recent strategic details (up to 2014 with a FinFET implementation of “Software defined X-Gene based data center components”, presumably at 16nm). Here I will include a more product-oriented AppliedMicro Shows ARM 64-bit X-Gene Server on a Chip Hardware and Software [Charbax YouTube channel, Nov 3, 2012] overview video:
                Vinay Ravuri, Vice President and General Manager, Server Products at AppliedMicro gives an update on the 64bit ARM X-Gene Server Platform. At ARM Techcon 2012, AppliedMicro, ARM and several open-source software providers gave updates on their support of the ARM 64-bit X-Gene Server on a Chip Platform.

                More information: A 2013 Resolution for the Data Center [Applied Micro on Smart Connected Devices blog from ARM, Feb 4, 2013] about “plans from Oracle, Red Hat, Citrix and Cloudera to support this revolutionary architecture … Dell’s “Iron” server concept with X-Gene … an X-Gene based ARM server managed by the Dell DCS Software suite …” etc.

              • Texas Instruments with digital signal processing (DSP) background, as it was already presented above. 
              • Calxeda with integration of storage fabric and Internet switching background, with details coming later, etc.:

This is what is emphasized by Lakshmi Mandyam from ARM – HP Moonshot 2013 – theCUBE [siliconangle YouTube channel, April 8, 2013]

              Lakshmi Mandyam, Director of Server Systems and Ecosystems, ARM, at HP Moonshot 2013, with John Furrier and Dave Vellante

She also mentions in the talk the achievements which could put ARM and its SoC partners into the role Intel now has with its general Atom S1200 based server cartridge product fitting into the Moonshot system. Perspective information on that is already available on my ‘Experiencing the Cloud’ blog here:
              The state of big.LITTLE processing [April 7, 2013]
              The future of mobile gaming at GDC 2013 and elsewhere [April 6, 2013]
              TSMC’s 16nm FinFET process to be further optimised with Imagination’s PowerVR Series6 GPUs and Cadence design infrastructure [April 8, 2013]
              With 28nm non-exclusive in 2013 TSMC tested first tape-out of an ARM Cortex™-A57 processor on 16nm FinFET process technology [April 3, 2013]

              The absence of Microsoft is even more interesting as AMD is also on this Moonshot bandwagon: Suresh Gopalakrishnan from AMD – HP Moonshot 2013 – theCUBE [siliconangle YouTube channel, April 8, 2013]

              Suresh Gopalakrishnan, Vice President and General Manager, Server Business, AMD, at HP Moonshot 2013, with John Furrier and Dave Vellante

already showing a Moonshot-fitting server cartridge with four of AMD’s next-generation SoCs (while Intel’s already productized cartridge is not yet at an SoC level). We know from CES 2013 that AMD Unveils Innovative New APUs and SoCs that Give Consumers a More Exciting and Immersive Experience [press release, Jan 7, 2013] with the:

              Temash” … elite low-power mobility processor for Windows 8 tablets and hybrids … to be the highest-performance SoC for tablets in the market, with 100 percent more graphics processing performance2 than its predecessor (codenamed “Hondo.”)
              Kabini” [SoC which] targets ultrathin notebooks with exceptional battery life and offers impressive levels of performance in both dual- and quad-core options. “Kabini” is expected to deliver an increase of more than 50 percent in performance3 over the previous generation of AMD essential computing APUs (codenamed “Brazos 2.0.”)
              Both APUs are scheduled to ship in the first half of 2013

              so AMD is really close to a server SoC to be delivered soon as well.

The “more information” sections which follow here are:

              1. The Announcement
              2. Software Partners
              3. Hardware Partners


              1. The Announcement

              HP Moonshot [MultiVuOnlineVideo YouTube channel, April 8, 2013]

              HP today unveiled the world’s first commercially available HP Moonshot system, delivering compelling new infrastructure economics by using up to 89 percent less energy, 80 percent less space and costing 77 percent less, compared to traditional servers. Today’s mega data centers are nearing a breaking point where further growth is restricted due to the current economics of traditional infrastructure. HP Moonshot servers are a first step organizations can take to address these constraints.

              HP Launches New Class of Server for Social, Mobile, Cloud and Big Data [press release, April 8, 2013]

              Software defined servers designed for the data center and built for the planet
              … Built from HP’s industry-leading server intellectual property (IP) and 10 years of extensive research from HP Labs, the company’s central research arm, HP Moonshot delivers a significant improvement in energy, space, cost and simplicity. …
              The HP Moonshot system consists of the HP Moonshot 1500 enclosure and application-optimized HP ProLiant Moonshot servers. These servers will offer processors from multiple HP partners, each targeting a specific workload.
              With support for up to 1,800 servers per rack, HP Moonshot servers occupy one-eighth of the space required by traditional servers. This offers a compelling solution to the problem of physical data center space.(3) Each chassis shares traditional components including the fabric, HP Integrated Lights-Out (iLo) management, power supply and cooling fans. These shared components reduce complexity as well as add to the reduction in energy use and space.  
              The first HP ProLiant Moonshot server is available with the Intel® Atom S1200 processor and supports web-hosting workloads. HP Moonshot 1500, a 4.3u server enclosure, is fully equipped with 45 Intel-based servers, one network switch and supporting components.
              HP also announced a comprehensive roadmap of workload-optimized HP ProLiant Moonshot servers incorporating processors from a broad ecosystem of HP partners including AMD, AppliedMicro, Calxeda, Intel and Texas Instruments Incorporated.

              Scheduled to be released in the second half of 2013, the new HP ProLiant Moonshot servers will support emerging web, cloud and massive scale environments, as well as analytics and telecommunications. Future servers will be delivered for big data, high-performance computing, gaming, financial services, genomics, facial recognition, video analysis and other applications.

              The HP Moonshot system is immediately available in the United States and Canada and will be available in Europe, Asia and Latin America beginning next month.
              Pricing begins at $61,875 for the enclosure, 45 HP ProLiant Moonshot servers and an integrated switch.(4)
              (4) Estimated U.S. street prices. Actual prices may vary.
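Dividing the quoted list price by the server count (an illustrative calculation of my own, not an HP figure) puts the entry configuration at $1,375 per server, with the enclosure and switch amortized in:

```python
# List price of the entry HP Moonshot System configuration (quoted above):
# enclosure + 45 HP ProLiant Moonshot servers + integrated switch.
PRICE_USD = 61_875
SERVERS = 45

per_server = PRICE_USD / SERVERS
print(per_server)   # 1375.0 USD per server, enclosure and switch included
```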

              More information:
              HP Moonshot System [Family data sheet, April 8, 2013]
              HP Moonshot – The Disruption [HP Event registration page at ‘thedisruption.com’ with embedded video gallery, press kit and more, originally created on April 12, 2010, obviously updated for the April 8, 2013 event]

              Moonshot 101 [HewlettPackardVideos YouTube channel, April 8, 2013]

Paul Santeler, Vice President & GM of Hyperscale Business Unit at HP, discusses how HP Project Moonshot creates the new style of IT. http://hp.com/go/moonshot

              Alert for Microsoft:

[4:42] We defined the industry standard server market [reference to HP’s Compaq heritage] and we’ve been the leader for years. With Moonshot we’re redefining the market and taking it to the next level. [4:53]

              People Behind HP Moonshot [HP YouTube channel, April 10, 2013]

              HP Moonshot is a groundbreaking new class of server that requires less energy, less space and less cost. Built from HP’s industry-leading server IP and 10 years of research from HP Labs, HP Moonshot is an example of the best of HP working together. In the video: Gerald Kleyn, Director of Platform Research and Development, Hyperscale Business Unit, Industry Standard Servers; Scott Herbel, Worldwide Product Marketing Manager, Hyperscale Business Unit, Industry Standard Servers; Ron Mann, Director of Engineering, Industry Standard Servers; Kelly Pracht, Hardware Platform Manager R&D, Hyperscale Business Unit, Industry Standard Servers; Mike Sabotta, Distinguished Technologist, Hyperscale Business Unit, Industry Standard Servers; Dwight Barron, HP Fellow, Chief Technologist, Hyperscale Business Unit, Industry Standard Servers. For more information, visit http://www.hpnext.com.

              HP Moonshot System Tour [HewlettPackardVideos YouTube channel, April 8, 2013]

Kelly Pracht, Moonshot Hardware Platform Program Manager, HP, takes you on a private tour of the HP Moonshot System and introduces the foundational HW components of HP Project Moonshot. This video guides you around the entire system highlighting the cartridges and switches. http://hp.com/go/moonshot

              HP Moonshot System is Hot Pluggable [HewlettPackardVideos YouTube channel, April 8, 2013]

“Show me around the HP Moonshot System!” Vicki Doehring, Moonshot Hardware Engineer, HP, shows us just how simple and intuitive it is to remove components in the HP Moonshot System. This video explains how HP’s hot pluggable technology works with the HP Moonshot System. http://hp.com/go/moonshot

Alert for Microsoft: how and when will you have a system like this, with all the bells and whistles presented above, as well as the rich ecosystem of hardware and software partners given below?

              HP Pathfinder Innovation Ecosystem [HewlettPackardVideos YouTube channel, April 8, 2013]

A key element of HP Moonshot, the HP Pathfinder Innovation Ecosystem brings together industry leading software and hardware partners to accelerate the development of workload optimized applications. http://hp.com/go/moonshot

              Software partners:

              What Linaro is saying about HP Moonshot [HewlettPackardVideos YouTube channel, April 8, 2013]

              Linaro discusses HP’s Project Moonshot and the cost, space, and efficiency innovations being enabled through the Pathfinder Innovation Ecosystem. http://hp.com/go/moonshot

              Alert for Microsoft:

[0:11] In the HP approach, Linaro is about forming an enterprise group. What they were hoping for, and what’s happened, is to get a bunch of companies who are interested in taking the ARM architecture into the server space. [0:26]

              Canonical joins Linaro Enterprise Group (LEG) and commits Ubuntu Hyperscale Availability for ARM V8 in 2013 [press release, Nov 1, 2012]

                • Canonical continues its leadership of commercial deployment for ARM-based servers through membership of Linaro Enterprise Group (LEG)
                • Ubuntu, the only commercially supported OS for ARM v7 today, commits to support ARM v8 server next year
• Ubuntu extends its position as the natural choice for hyperscale server computing with long-term support

              … “Canonical has been supporting our work optimising and consolidating the Linux kernel since our founding in June 2010”, said George Grey, CEO of Linaro. “We’re very happy to welcome them as a member of the Linaro Enterprise Group, building on our relationship to help accelerate development of the ARM server software ecosystem.” …

              … “Calxeda has been thrilled with Canonical’s leadership in developing the ARM ecosystem”,  said Karl Freund, VP marketing at Calxeda. “These guys get it. They are driving hard and fast, already delivering enterprise-class code and support for Calxeda’s 32-bit product today to our mutual clients.  Working together in LEG will enable us to continue to build on the momentum we have already created.” …

              What Canonical is saying about HP Moonshot [HewlettPackardVideos YouTube channel, April 8, 2013]

              HP Moonshot and Ubuntu work together [Ubuntu partner site, April 9, 2013]

              … Ubuntu, as the lead operating system platform for x86 and ARM-based HP Moonshot Systems, featured extensively at the launch of the program in April 2013. …
              Ubuntu Server is the only OS fully operational today across HP Moonshot x86 and ARM servers, launched in April 2013.
              Ubuntu is recognised as the leader in scale out and Hyperscale. Together, Canonical and HP are delivering massive reductions in data-center energy, space and costs. …

“Canonical has been working with HP for the past two years on HP Moonshot, and with Ubuntu, customers can achieve higher performance with greater manageability across both x86 and ARM chip sets” Paul Santeler, VP & GM, Hyperscale Business Unit, HP

              Ubuntu & HP’s project Moonshot [Canonical blog, Nov 2, 2011]

              Today HP announced Project Moonshot  – a programme to accelerate the use of low power processors in the data centre.
The three elements of the announcement are the launch of Redstone – a development platform that harnesses low-power processors (both ARM & x86), the opening of the HP Discovery lab in Houston and the Pathfinder partnership programme.
              Canonical is delighted to be involved in all three elements of HP’s Moonshot programme to reduce both power and complexity in data centres.
The HP Redstone platform unveiled in Palo Alto showcases HP’s thinking around highly federated environments and Calxeda’s EnergyCore ARM processors. The Calxeda system on chip (SoC) design is powered by Calxeda’s own ARM based processor and combines mobile-phone-like power consumption with the attributes required to run a tangible proportion of hyperscale data centre workloads.
The promise of server-grade SoCs running at less than 5W and achieving per-rack density of 2800+ nodes is impressive, but what about the software stacks that are used to run the web and analyse big data – when will they be ready for this new architecture?
              Ubuntu Server is increasingly the operating system of choice for web, big data and cloud infrastructure workloads. Films like Avatar are rendered on Ubuntu, Hadoop is run on it and companies like Rackspace and HP are using Ubuntu Server as the foundation of their public cloud offerings.
              The good news is that Canonical has been working with ARM and Calxeda for several years now and we released the first version of Ubuntu Server ported for ARM Cortex A9 class  processors last month.
The Ubuntu 11.10 release (download) is a functioning port, and over the next six months we will be working hard to benchmark and optimize Ubuntu Server and the workloads that our users prioritize on ARM. This work, by us and by upstream open source projects, is going to be accelerated by today’s announcement and access to hardware in the HP Discovery lab.
              As HP stated today, this is beginning of a journey to re-inventing a power efficient and less complex data center. We look forward to working with HP and Calxeda on that journey.

The biggest enterprise alert for Microsoft, because of what was discussed in Will Microsoft Stand Out In the Big Data Fray? [Redmondmag.com, March 22, 2013]: What NuoDB is saying about HP Moonshot [HewlettPackardVideos YouTube channel, April 9, 2013], especially as it is a brand new offering; see NuoDB Announces General Availability of Industry’s First & Only Cloud Data Management System at Live-Streamed Event [press release, Jan 15, 2013], now available in archive at this link: http://go.nuodb.com/cdms-2013-register-e.html

              Barry Morris, founder and CEO of NuoDB discusses HP’s Project Moonshot and the database innovations delivered by the combined offering

              Extreme density on HP’s Project Moonshot [NuoDB Techblog, April 9, 2013]

              A few months ago HP came to us with something very cool. It’s called Project Moonshot, and it’s a new way of thinking about how you design infrastructure. Essentially, it’s a composable system that gives you serious flexibility and density.

              A single Moonshot System is 4.3u tall and holds 45 independent servers connected to each other via 1-Gig Ethernet. There’s a 10-Gig Ethernet interface to the system as a whole, and management interfaces for the system and each individual server. The long-term design is to have servers that provide specific capabilities (compute, storage, memory, etc.) and can scale to up to 180 nodes in a single 4.3u chassis.
              The initial system, announced this week, comes with a single server configuration: an Intel Atom S1260 processor, 8 Gigabytes of memory and either a 200GB SSD or a 500GB HDD. On its own, that’s not a powerful server, but when you put 45 of these into a 4.3 rack-unit space you get something in aggregate that has a lot of capacity while still drawing very little power (see below). The challenge, then, is how to really take advantage of this collection of servers.

              NuoDB on Project Moonshot: Density and Efficiency

              We’ve shown how NuoDB can scale a single database to large transaction rates. For this new system, however, we decided to try a different approach. Rather than make a single database scale to large volume, we decided to see how many individual, smaller databases we could support at the same time. Essentially, could we take a fully-configured HP Project Moonshot System and turn it into a high-density, low-power, easy-to-manage hosting appliance?

              To put this in context, think about a web site that hosts blogs. Typically, each blog is going to have a single database supporting it (just like this blog you’re reading). The problem is that while a few blogs will be active all the time, most of them see relatively light traffic. This is known as a long-tail pattern. Still, because the blogs always need to be available, the backing databases always need to be running.

              This leads to a design trade-off. Do you map the blogs to a single database (breaking isolation and making management harder) or somehow try to juggle multiple database instances (which is hard to automate, expensive in resource-usage and makes migration difficult)? And what happens when a blog suddenly takes off in popularity? In other words, how do you make it easy to manage the databases and make resource-utilization as efficient as possible so you don’t over-spend on hardware?

              As I’ve discussed on this blog, NuoDB is a multi-tenant system that manages individual databases dynamically and efficiently. That should mean that we’re a perfect fit for this very cool (pun intended) new system from HP.

              The Design

              After some initial profiling on a single server, we came up with a goal: support 7,200 active databases. You can read all about how we did the math, but essentially this was a balance between available CPU, Memory, Disk and bandwidth. In this case a “database” is a single Transaction Engine and Storage Manager pair, running on one of the 45 available servers.

              When we need to start a database, we pick the server that’s least-utilized. We choose this based on local monitoring at each server that is rolled up through the management tier to the Connection Brokers. It’s simple to do given all that NuoDB already provides, and because we know what each server supports it lets us calculate a single capacity percentage.
              It gets better. Because a NuoDB database is made of an agile collection of processes, it’s very inexpensive to start or stop a database. So, in addition to monitoring for server capacity we also watch what’s going on inside each database, and if we think it’s been idle long enough that something else could use the associated resources more effectively we shut it down. In other words, if a database isn’t doing anything active we stop it to make room for other databases.
              When an SQL client needs to access that database, we simply re-start it where there are available resources. We call this mechanism hibernating and waking a database. This on-demand resource management means that while some number of databases are actively running, we can really support a much larger number in total (remember, we’re talking about applications that exhibit a long-tail access pattern). With this capability, our original goal of 7,200 active databases translates into 72,000 total supported databases. On a single 4.3u System.
              The final piece we added is what we call database bursting. If a single database gets really popular it will start to take up too many resources on a single server. If you provision another server, separate from the Moonshot System, then we’ll temporarily “burst” a high-activity database to that new host until activity dies down. It’s automatic, quick and gives you on-demand capacity support when something gets suddenly hot.
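The placement, hibernation and bursting mechanics described above can be sketched as a simple capacity-based scheduler. This is an illustrative sketch only, assuming a rolled-up per-server load percentage as the post describes; the class names, thresholds and load model are hypothetical, not NuoDB's actual implementation.

```python
# Hypothetical sketch of the density scheduling described above:
# place a database on the least-utilized server, hibernate it when
# idle, and "burst" a hot server's work to an external host.

class Server:
    def __init__(self, name, capacity=100.0):
        self.name = name
        self.capacity = capacity
        self.load = 0.0          # rolled-up utilization percentage

class DensityScheduler:
    IDLE_SECONDS = 300           # assumed hibernation threshold
    BURST_LOAD = 80.0            # assumed per-server bursting threshold

    def __init__(self, servers, burst_host):
        self.servers = servers
        self.burst_host = burst_host   # separate, more capable host

    def place(self, db_cost):
        """Start a database on the least-utilized server."""
        target = min(self.servers, key=lambda s: s.load / s.capacity)
        target.load += db_cost
        return target

    def maybe_hibernate(self, server, idle_seconds, db_cost):
        """Stop an idle database so something else can use its resources."""
        if idle_seconds >= self.IDLE_SECONDS:
            server.load -= db_cost
            return True
        return False

    def maybe_burst(self, server):
        """Redirect a high-activity database to the external host."""
        return self.burst_host if server.load > self.BURST_LOAD else server
```

The key design point the post makes is that all three operations are cheap because a database is just an agile collection of processes, so the scheduler can afford to run them continuously.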
              The Tests
              I’m not going to repeat too much here about how we drove our tests. That’s already covered in the discussion on how we’re trying to design a new kind of benchmark focused on density and efficiency. You should go check that out … it’s pretty neat. Suffice it to say, the really critical thing to us in all of this was that we were demonstrating something that solves a real-world problem under real-world load.
              You should also go read about how we setup and ran on a Moonshot System. The bottom-line is that the system worked just like you’d expect, and gave us the kinds of management and monitoring features to go beyond basic load testing.
              The Results
              We were really lucky to be given access to a full Moonshot System. It gave us a chance to test out our ideas, and we actually were able to do better than our target. You can see this in the view from our management interface running against a real system under our benchmark load. You can see there that when we hit 7,200 active databases we were only at about 70% utilization, so there was a lot more room to grow. Huge thanks to HP for giving us time on a real Moonshot System to see all those ideas work!

              Something that’s easy to lose track of in all this discussion is the question of power. Part of the value proposition from Project Moonshot is in energy efficiency, and we saw that in spades. Under load a single server only draws 18 Watts, and the system infrastructure is closer to 250 Watts. Taken together, that’s a seriously dense system that is using very little energy for each database.

              Bottom Line
              We were psyched to have the chance to test on a Moonshot System. It gave us the chance to prove out ideas around automation and efficiency that we’ll be folding into NuoDB over the next few releases. It also gave us the perfect platform to put our architecture through its paces and validate a lot about the flexibility of our core architecture.
              We’re also seriously impressed by what we experienced from Project Moonshot itself. We were able to create something self-contained and easy to manage that solves a real-world problem. Couple that with the fact that a Moonshot System draws so little power, and the Total Cost of Ownership is impressively low.  That’s probably the last point to make about all this: the combination of our two technologies gave us something where we could talk concretely about capacity and TCO, something that’s usually hard to do in such clear terms.
              In case it’s not obvious, we’re excited. We’ve already been posting this week about some ideas that came out of this work, and we’ll keep posting as the week goes on. Look for the moonshot tag and please follow-up with comments if you’re curious about anything specific and would like to hear more!

              Project Moonshot by the Numbers [NuoDB Techblog, April 9, 2013]

              To really understand the value from HP Project Moonshot you need to think beyond the list price of one system and focus instead on the Total Cost of Ownership. Figuring out the TCO for a server running arbitrary software is often a hard (and thankless?) task, so one of the things we’ve tried to do is not just demonstrate great technology but something that naturally lets you think about TCO in a simple way. We think the final metrics are pretty simple, but to get there requires a little math.

              Executive Summary

              If you’re a CIO, and just want to know the bottom line, then we’ll ruin the suspense and cut to the chase. It will cost you about $70,500 up-front, $1,800 in your first year’s electricity bills and take 8.3 rack-units to support the web-front end and database back-end for 72,000 blogs under real-world load.

              Cost of a Single Database
              Recall that we set the goal at 72,000 databases within a single system. At launch the list price for a fully-configured Moonshot System is around $60,000, so we start out at 83 cents per database. In practice we’re seeing much higher capacity in our tests, but let’s start with this conservative number.
              Now consider the power used by the system. From what we’ve measured through the iLO interfaces a single server draws no more than 18 Watts at peak load (measured against CPU and IO activity). The System itself (fans, switches etc.) draws around 250 Watts in our tests. That means that under full load each database is drawing about .015 Watts.
              NuoDB is a commercial software offering, which means that you pay up-front to deploy the software (and get support as part of that fee). For anyone who wants to run a Moonshot System in production as a super-dense NuoDB appliance we’ll offer you a flat-rate license.
              Put together, we can say that the cost per database-watt is 1.22 cents. That’s on a 4.3 rack-unit system. Awesome.
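The per-database figures above follow directly from the post's own numbers ($60,000 list price, 45 servers at 18 Watts each plus roughly 250 Watts of chassis overhead, 72,000 total databases); a quick check:

```python
# Reproducing the per-database cost arithmetic from the post.
SYSTEM_PRICE_USD = 60_000    # approximate launch list price
SERVERS = 45
WATTS_PER_SERVER = 18        # measured peak draw per server
CHASSIS_WATTS = 250          # fans, switches, etc.
TOTAL_DATABASES = 72_000

cost_per_db = SYSTEM_PRICE_USD / TOTAL_DATABASES           # dollars
total_watts = SERVERS * WATTS_PER_SERVER + CHASSIS_WATTS   # under full load
watts_per_db = total_watts / TOTAL_DATABASES

print(f"cost per database:  {cost_per_db * 100:.0f} cents")   # ~83 cents
print(f"watts per database: {watts_per_db:.3f} W")            # ~0.015 W
```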
              Quantify the Supported Load
              As we discussed in our post on benchmarking, we’re trying to test under real-world load. As a simple starting-point we chose a profile based on WordPress because it’s fairly ubiquitous and has somewhat serious transactional requirements. In our benchmarking discussion we explain that a typical application action (post, read, comment) does around 20 SQL operations.
              Given 72,000 databases most of these are fairly inactive, so on average we’ll say that each database gets about 250 hits a day (generous by most reports I’ve seen). That’s 18,000,000 hits a day or 208 hits per-second. 4,166 SQL statements a second isn’t much for a single database, but it’s pretty significant given that we’re spreading it across many databases some of which might have to be “woken” on-demand.
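The workload arithmetic above is easy to verify (72,000 databases, ~250 hits per day each, ~20 SQL operations per hit, all figures from the post):

```python
# Checking the aggregate load figures quoted above.
DATABASES = 72_000
HITS_PER_DB_PER_DAY = 250
SQL_OPS_PER_HIT = 20
SECONDS_PER_DAY = 24 * 60 * 60

hits_per_day = DATABASES * HITS_PER_DB_PER_DAY        # 18,000,000
hits_per_second = hits_per_day / SECONDS_PER_DAY      # ~208
sql_per_second = hits_per_second * SQL_OPS_PER_HIT    # ~4,166
```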
              HP was generous enough not only to give us time on a Moonshot System but also access to some co-located servers for driving our load tests. In this case, 16 lower-powered ARM-based Calxeda systems that all went through the same 1-Gig ethernet connection to our Moonshot System. These came from HP’s Discovery Lab; check out our post about working with the Moonshot System for more details.
              From these load-drivers we were able to run our benchmark application with up to 16 threads per server, simulating 128 simultaneous clients. In this case a typical “client” would be a web server trying to respond to a web client request. We averaged around 320 hits per-second, well above the target of 208. From what we could observe, we expect that given more capable network and client drivers we would be able to get 3 or 4 times that rate easily.
              Tangible Cost
              We have the cost of the Moonshot System itself. We also know that it can support expected load from a fairly small collection of low-end servers. In our own labs we use systems that cost around $10,000, fit in 3 rack-units and would be able to drive at least the same kind of load we’re citing here. Add a single switch at around $500 and you have a full system ready to serve blogs. That’s $70,500 total in 8.3 rack units, still under $1 per database.
              I don’t know what power costs you have in your data center, but I’ve seen numbers ranging from 2.5 to 25 cents per Kilowatt-Hour. In our tests, where we saw .015 Watts per-database, if you assume an average rate of 13.75 cents per KwH that comes out to .00020625 cents per-hour per-database in energy costs. In one year, with no down-time, that would cost you $1,276.77 in total electricity fees.
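The annual electricity figure can be reproduced from the measured power draw at the post's assumed average rate of 13.75 cents per kWh:

```python
# Annual electricity cost for the whole system, per the post's figures.
TOTAL_WATTS = 45 * 18 + 250          # 1,060 W under full load
RATE_USD_PER_KWH = 0.1375            # assumed average rate
HOURS_PER_YEAR = 24 * 365            # no down-time

annual_cost = TOTAL_WATTS / 1000 * HOURS_PER_YEAR * RATE_USD_PER_KWH
print(f"${annual_cost:,.2f}")        # $1,276.77
```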
              Just as an aside, according to the New York Times, Facebook draws around 60,000,000 Watts!
              One of the great things about a Moonshot System is that the 45 servers are already being switched inside the chassis. This means that you don’t need to buy switches & cabling, and you don’t need to allocate all the associated space in your racks. For our systems administrator that alone would make him very happy.
              Intangible Cost
              What I haven’t been talking about in all of this are the intangible costs. This is where figuring out TCO becomes harder.
              For instance, one of the value-propositions here is that the Moonshot System is a self-contained, automated component. That means that systems administrators are freed up from the tasks of figuring out how to allocate and monitor databases, and how to size the data-center for growth. Database developers can focus more easily on their target applications. CIOs can spend less time staring at spreadsheets … or, at least, can allocate more time to spreadsheets on different topics.
              Providing a single number in terms of capacity makes it easy to figure out what you need in your datacenter. When a single server within a Moonshot System fails you can simply replace it, and in the meantime you know that the system will still run smoothly just with slightly lower capacity. From a provisioning point of view, all you need to figure out is where your ceiling is and how much stand-by capacity you need to have at the ready.
              NuoDB by its nature is dynamic, even when you’re doing upgrades. This means that you can roll through a running Moonshot System applying patches or new versions with no down-time. I don’t know how you calculate the value in saved cost here, but you probably do!
              Comparisons and Planned Optimizations
              It’s hard to do an “apples-to-apples” comparison against other database software here. Mostly, this is because other databases aren’t designed to be dynamic enough to support hibernation, bursting and capacity-based automated balancing. So, you can’t really get the same levels of density, and a lot of the “intangible” cost benefits would go away.
              Still, to be fair, we tried running MySQL on the same system and under the same benchmarks. We could indeed run 7200 instances, although that was already hitting the upper-bounds of memory/swap. In order to get the same density you would need 10 Moonshot Systems, or you would need larger-powered expensive servers. Either way, the power, density, automation and efficiency savings go out the window, and obviously there’s no support for bursting to more capable systems on-demand.
              Unsurprisingly, the response time was faster on-average (about half the time) from MySQL instances. I say “unsurprisingly” for two reasons. First, we tried to use schema/queries directly from WordPress to be fair in our comparison, and these are doing things that are still known to be less-optimized in NuoDB. They’re also in the path of what we’re currently optimizing and expect to be much faster in the near-term.
              The second is that NuoDB clients were originally designed assuming longer-running connections (or pooled connections) to databases that always run with security & encryption enabled. We ran all of our tests in our default modes to be fair. That means we’re spending more time on each action setting up & tearing down a connection. We’ve already been working on optimizations here that would shrink the gap pretty substantially.
              In the end, however, our response time is still on the order of a few hundred milliseconds worst-case, and is less important than the overall density and efficiency metrics that we proved out. We think the value in terms of ease of use, density, flexibility on load spikes and low-cost speaks for itself. This setup is inexpensive by comparison to deploying multiple servers and supports what we believe is real-world load. Just wait until the next generation of HP Project Moonshot servers roll out and we can start scaling out individual databases at the same time!

              More information:
              Benchmarking Density & Efficiency [NuoDB Techblog, April 9, 2013]
              Database Hibernation and Bursting [NuoDB Techblog, April 8, 2013]
              An Enterprise Management UI for Project Moonshot [NuoDB Techblog, April 9, 2013]
              Regarding the cloud based version of NuoDB see:
              NuoDB Partners with Amazon [press release, March 26, 2013]
              NuoDB Extends Database Leadership in Scalability & Performance on a Private Cloud [press release, March 14, 2013] “… the industry’s first and only patented, elastically scalable Cloud Data Management System (CDMS), announced performance of 1.84 million transactions per second (TPS) running on 32 machines. … With NuoDB Starlings release 1.0.1, available as of March 1, 2013, the company has made advancements in performance and scalability and customers can now experience 26% improvement in TPS per machine.
              Google Compute Engine: interview with NuoDB [GoogleDevelopers YouTube channel, March 21, 2013]

              Meet engineers from NuoDB: an elastically scalable SQL database built for the cloud. We will learn about their approach to distributed SQL databases and get a live demo. We’ll cover the steps they took to get NuoDB running on Google Compute Engine, talk about how they evaluate infrastructure (both physical hardware and cloud), and reveal the results of their evaluation of Compute Engine performance.

              Actually, Calxeda explained best the preeminence of software over the SoC itself:
              Karl Freund from Calxeda – HP Moonshot 2013 – theCUBE [siliconangle YouTube channel, April 8, 2013], see also HP Moonshot: It’s a lot closer than it looks! [Calxeda’s ‘ARM Servers, Now!’ blog, April 8, 2013]

              Karl Freund, VP of Marketing, Calxeda, at HP Moonshot 2013 with John Furrier and Dave Vellante.

              as well as ending with Calxeda’s very practical, gradual approach to the ARM based server market, with things like:

              [16:03] Our 2nd generation platform called Midway, which will be out later this year [in the 2nd half of the year], that’s probably the target for Big Data. Our current product is great for web serving, it’s great for media serving, it’s great for storage. It doesn’t have enough memory for Big Data … in a large. So we’ll be getting that 2nd generation product out, and that should be a really good Big Data platform. Why? Because it’s low power, it’s low cost, but it’s also got a lot of I/O. Big Data is all about moving a lot of data around. And if you do that more cost effectively you save a lot of money. [16:38]

              mentioning also that their strategy is using standard ARM cores like the Cortex-A57 for their H1 2014 product, and focus on things like the fabric and the management, which actually allows them to work with a streamlined staff of around 150 people.

              Detailed background about Calxeda in a concise form:
              Redefining Datacenter Efficiency: An Overview of Calxeda’s architecture and early performance measurements [Karl Freund, Nov 12, 2012] from where the core info is:

                • Founded in 2008   
                • $103M Funding       
                • 1st Product Announced with HP,  Nov  2011   
                • Initial Shipments in Q2 2012   
                • Volume production in Q4 2012

              image

              image
              * The power consumed under normal operating conditions under full application load (i.e., 100% CPU utilization)

              image
              A small Calxeda Cluster: a Simple Example
              • Start with four ServerNodes
              • Consumes only 20W total power   
              • Connected via distributed fabric switches   
              • Connect up to 4 SATA drives per node   
              • Then scale this to thousands of ServerNodes

              EnergyCard: a Quad-Node Reference Design

                • Four-node reference platform from Calxeda
                • Available as product and/or design
                • Plugs into OEM system board with passive fabric, no additional switch HW
                • EnergyCard delivers 80Gb bandwidth to the system board (8 x 10Gb links)

              image

              image

              It is also important to have a look at what were the Open Source Software Packages for Initial Calxeda Shipments [Calxeda’s ‘ARM Servers, Now!’ blog, May 24, 2012]

              We are often asked what open-source software packages are available for initial shipments of Calxeda-based servers.

              Here’s the current list (changing frequently).  Let us know what else you need!

              image

              Then Perspectives From Linaro Connect [Calxeda’s ‘ARM Servers, Now!’ blog, March 20, 2013] sheds more light on the recent software alliances which make Calxeda to deliver:

              – From Larry Wikelius,   Co-Founder and VP Ecosystems,  Calxeda:

              The most recent Linaro Connect (Linaro Connect Asia 2013 – LCA), held in Hong Kong the first week of March, really put a spotlight on the incredible momentum around ARM based technology and products moving into the Data Center.  Yes – you read that correctly – the DATA CENTER!

              When Linaro was originally launched almost three years ago the focus was exclusively on the mobile and client market – where ARM has and continues to be dominant.  However, as Calxeda has demonstrated, the opportunity for the ARM architecture goes well beyond devices that you carry in your pocket.  Calxeda was a key driver in the formation of the Linaro Enterprise Group (LEG), which was publicly launched at the previous LinaroConnect event in Copenhagen in early November, 2012.

              LEG has been an exciting development for Linaro and now has 13 member companies that include server vendors such as Calxeda, Linux distribution companies Red Hat and Canonical, OEM representation from HP and even Hyperscale Data Center end user Facebook.  There were many sessions throughout the week that focused on Server specific topics such as UEFI, ACPI, Virtualization, Hyperscale Testing with LAVA and Distributed Storage.  Calxeda was very active throughout the week with the team participating directly in a number of roadmap definition sessions, presenting on Server RAS and providing guidance in key areas such as application optimization and compiler focus for Servers.

              Linaro Connect is proving to be a tremendous catalyst for the growing eco-system around the ARM software community as a whole and the server segment in particular.  A great example of this was the keynote presentation given jointly by Mark Heath and Lars Kurth from Citrix on Tuesday morning.  Mark is the VP of XenServer at Citrix and Lars is well known in the OpenSource community for his work with Xen.  The most exciting announcement coming out of Mark’s presentation is that Citrix will be joining Linaro as a member of LEG.  Citrix will certainly prove to be another valuable member of the Linaro team, and during the week attendees were able to appreciate how serious Citrix is about supporting ARM servers.  The Xen team has not only added full support for ARM V7 systems in the Xen 4.3 release but they have accomplished some very impressive optimizations for the ARM platform.  The Xen team has leveraged Device Tree for optimal device discovery.  Combined with a number of other code optimizations they showed a dramatically smaller code base for the ARM platform.  We at Calxeda are thrilled to welcome Citrix into LEG!

              As an indication of the draw that the Linaro Connect conference is already having on the broader industry, the Open Compute Project (OCP) held their first International Event coincident with LCA at the same venue.  The synergy between Linaro and OCP is significant, with the emphasis in both organizations on Open Source development (one software and one hardware) along with the dramatically changing design points for today’s Hyperscale Data Center.  In fact the keynote at LCA on Wednesday morning really put a spotlight on how significant this is likely to be.  Jason Taylor, Director of Capacity Engineering and Analysis at Facebook, presented on Facebook’s approach to ARM based servers.   Facebook’s consumption of Data Center equipment is quite stunning – Jason quoted from Facebook’s 10-Q filed in October 2012 which stated that “The first nine months of 2012 … $1.0 billion for capital expenditures” related to data center equipment and infrastructure.  Clearly with this level of investment Facebook is extremely motivated to optimize where possible.  Jason focused on the strategic opportunity for ARM based servers in a disaggregated Data Center of the future to provide lower cost computing capabilities with much greater flexibility.

              Calxeda has been very active in building the Server Eco-System for ARM based servers.  This week in Hong Kong really underscored how important that investment has become – not just for Calxeda but for the industry as a whole. Our commitment to Open Source software development in general and Linaro in particular has resulted in a thriving Linux Infrastructure for ARM servers that allows Calxeda to leverage and focus on key differentiation for our end users.  The Open Compute Project, which we are an active member in and have contributed to key projects such as the Knockout Storage design as well as the Open Slot Specification, demonstrates how the combination of an Open Source approach for both Software and Hardware can complement each other and drive Data Center innovation.  We are early in this journey but it is very exciting!

              Calxeda will continue to invest aggressively in forums and industry groups such as these to drive the ARM based server market.  We look forward to continue to work with the incredibly innovative partners that are members in these groups and we are confident that more will join this exciting revolution.  If you are interested in more information on these events and activities please reach out to us directly at info@calxeda.com.

              The next Linaro Connect is scheduled for early July in Dublin. We expect more exciting events and topics there and hope to see you there!

              They also refer on their blog to Mobile, cloud computing spur tripling of micro server shipments this year [IHS iSuppli press release, Feb 6, 2013], which shows the general market situation well into the future:

              Driven by booming demand for new data center services for mobile platforms and cloud computing, shipments of micro servers are expected to more than triple this year, according to an IHS iSuppli Compute Platforms Topical Report from information and analytics provider IHS (NYSE: IHS).
              Shipments this year of micro servers are forecast to reach 291,000 units, up 230 percent from 88,000 units in 2012. Shipments of micro servers commenced in 2011 with just 19,000 units. However, shipments by the end of 2016 will rise to some 1.2 million units, as shown in the attached figure.

              image

              The penetration of micro servers compared to total server shipments amounted to a negligible 0.2 percent in 2011. But by 2016, the machines will claim a penetration rate of more than 10 percent—a stunning fiftyfold jump.
              Micro servers are general-purpose computers, housing single or multiple low-power microprocessors and usually consuming less than 45 watts in a single motherboard. The machines employ shared infrastructure such as power, cooling and cabling with other similar devices, allowing for an extremely dense configuration when micro servers are cascaded together.
              “Micro servers provide a solution to the challenge of increasing data-center usage driven by mobile platforms,” said Peter Lin, senior analyst for compute platforms at IHS. “With cloud computing and data centers in high demand in order to serve more smartphones, tablets and mobile PCs online, specific aspects of server design are becoming increasingly important, including maintenance, expandability, energy efficiency and low cost. Such factors are among the advantages delivered by micro servers compared to higher-end machines like mainframes, supercomputers and enterprise servers—all of which emphasize performance and reliability instead.”
              Server Salad Days
              Micro servers are not the only type of server that will experience rapid expansion in 2013 and the years to come. Other high-growth segments of the server market are cloud servers, blade servers and virtualization servers.
              The distinction of fastest-growing server segment, however, belongs solely to micro servers.
              The compound annual growth rate for micro servers from 2011 to 2016 stands at a remarkable 130 percent—higher than that of the entire server market by a factor of 26. Shipments will rise by double- and even triple-digit percentages for each year during the period.
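The roughly 130 percent CAGR cited above follows from the shipment figures earlier in the release (19,000 units in 2011 growing to about 1.2 million by 2016):

```python
# Deriving the micro server CAGR from the IHS shipment figures.
SHIPMENTS_2011 = 19_000
SHIPMENTS_2016 = 1_200_000
YEARS = 2016 - 2011

cagr = (SHIPMENTS_2016 / SHIPMENTS_2011) ** (1 / YEARS) - 1
print(f"CAGR: {cagr:.0%}")   # ~129%, i.e. the report's "remarkable 130 percent"
```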
              Key Players Stand to Benefit
              Given the dazzling outlook for micro servers, makers with strong product portfolios of the machines will be well-positioned during the next five years—as will their component suppliers and contract manufacturers.
              A slew of hardware providers are in line to reap benefits, including microprocessor vendors like Intel, ARM and AMD; server original equipment manufacturers such as Dell and Hewlett-Packard; and server original development manufacturers including Taiwanese firms Quanta Computer and Wistron.
              Among software providers, the list of potential beneficiaries from the micro server boom extends to Microsoft, Red Hat, Citrix and Oracle. For the group of application or service providers that offer micro servers to the public, entities like Amazon, eBay, Google and Yahoo are foremost.
              The most aggressive bid for the micro server space comes from Intel and ARM.
              Intel first unveiled the micro server concept and reference design in 2009, ostensibly to block rival ARM from entering the field.
              ARM, the leader for many years in the mobile world with smartphone and tablet chips because of the low-power design of its central processing units, has been just as eager to enter the server arena—dominated by x86 chip architecture from the likes of Intel and a third chip player, AMD. ARM faces an uphill battle, as the majority of server software is written for x86 architecture. Shifting from x86 to ARM will also be difficult for legacy products.
              ARM, however, is gaining greater support from software and OS vendors, which could potentially put pressure on Intel in the coming years.
              Read More > Micro Servers: When Small is the Next Big Thing
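The growth rate quoted above can be sanity-checked with a one-liner: a compound annual growth rate (CAGR) of about 130 percent means shipments multiply by roughly 2.3x every year. A minimal Python sketch, where the shipment figures are illustrative placeholders chosen to yield a ~130% CAGR, not numbers from the IHS report:

```python
def cagr(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate as a fraction (1.30 == 130%)."""
    return (end_value / start_value) ** (1 / years) - 1

# Illustrative placeholder figures (NOT from the IHS report):
# micro server shipments growing from 100k units in 2011 to 6.4M in 2016.
growth = cagr(100_000, 6_400_000, 5)
print(f"CAGR: {growth:.0%}")  # -> CAGR: 130%, i.e. shipments multiply
                              # by about 2.3x each year over five years
```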


              Then there are a number of Intel competitive posts on Calxeda’s ‘ARM Servers, Now!’ blog:
              What is a “Server-Class” SOC? [Dec 12, 2012]
              Comparing Calxeda ECX1000 to Intel’s new S1200 Centerton chip [Dec 11, 2012]
both of which you can also find in my Intel targeting ARM based microservers: the Calxeda case [‘Experiencing the Cloud’ blog, Dec 14, 2012], with significantly wider additional information, up to and including binary translation from x86 to ARM with Linux

              See also:
              ARM Powered Servers: 2013 is off to a great start & it is only March! [Smart Connected Devices blog of ARM, March 6, 2013]
              Moonshot – a shot in the ARM for the 21st century data center [Smart Connected Devices blog of ARM, April 9, 2013]
              Are you running out of data center space? It may be time for a new server architecture: HP Moonshot [Hyperscale Computing Blog of HP, April 8, 2013]
              HP Moonshot: the HP Labs team that did some of the groundbreaking research [Innovation @ HP Labs blog of HP, April 9, 2013]
              HP Moonshot: An Accelerator for Hyperscale Workloads [Moor Insights White Paper, April 8, 2013]
Comparing Pattern Mining on a Billion Records with HP Vertica and Hadoop [HP Vertica blog, April 9, 2013], in which a team of HP Labs researchers shows how the Vertica Analytics Platform can be used to find patterns in a billion records in a couple of minutes, about 9x faster than Hadoop.
              PCs and cloud clients are not parts of Hewlett-Packard’s strategy anymore [‘Experiencing the Cloud’, Aug 11, 2011 – Jan 17, 2012] see the Autonomy IDOL related content there
              ENCO Systems Selects HP Autonomy for Audio and Video Processing [HP Autonomy press release, April 8, 2013]

              HP Autonomy today announced that ENCO Systems, a global provider of radio automation and live television audio solutions, has selected Autonomy’s Intelligent Data Operating Layer (IDOL) to upgrade ENCO’s latest-generation enCaption product.

              ENCO Systems provides live automated captioning solutions to the broadcast industry, leveraging technology to deliver closed captioning by taking live audio data and turning it into text. ENCO Systems is capitalizing on IDOL’s unique ability to understand meaning, concepts and patterns within massive volumes of spoken and visual content to deliver more accurate speech analytics as part of enCaption3.

              “Many television stations count on ENCO to provide real-time closed captioning so that all of their viewers get news and information as it happens, regardless of their auditory limitations,” said Ken Frommert, director, Marketing, ENCO Systems. “Autonomy IDOL helps us provide industry-leading automated closed captioning for a fraction of the cost of traditional services.”
enCaption3 is the only fully automated speech recognition-based closed captioning system for live television that does not require speaker training. It gives broadcasters the ability to caption their programming, including breaking news and weather, any time, day or night, since it is always on and always available. enCaption3 provides captioning in near real time (with only a 3 to 6 second delay) in nearly 30 languages.
“Television networks are under increasing pressure to provide real-time closed captioning services; they face fines if they don’t, and their growing and diverse viewers demand it,” said Rohit de Souza, general manager, Power, HP Autonomy. “This is another example of a technology company integrating Autonomy IDOL to create a stronger, faster and more accurate product offering, and demonstrates yet another powerful way in which IDOL can be applied to help organizations succeed in the human information era.”

              Using Big Data to change the game in the Energy industry [Enterprise Services Blog of HP, Oct 24, 2012]

              … Tools like HP’s Autonomy that analyzes the unstructured data found in call recordings, survey responses, chat logs, e-mails, social media posts and more. Autonomy’s Intelligent Data Operating Layer (IDOL) technology uses sophisticated pattern-matching techniques and probabilistic modeling to interpret information in much the same way that humans do. …

              Stouffer Egan turns the tables on computers in keynote address at HP Discover [Enterprise Services Blog of HP, June 8, 2012]

For decades now, the human mind has adjusted itself to computers by providing and retrieving structured data in two-dimensional worksheets with constraints on format, data types, lists of values, etc. But this is not the way the human mind has been architected to work. Our minds have the uncanny ability to capture the essence of what is being conveyed in a facial expression in a photograph, the tone of voice or inflection in an audio clip, and the body language in a video. At the HP Discover conference, Autonomy’s VP for the United States, Stouffer Egan, showed the audience how software can begin to do what the human mind has been doing since the dawn of time. In a demonstration where Iron Man came alive out of a two-dimensional photograph, Egan turned the tables on computers. It is about time computers started thinking like us rather than forcing us to think like them.
              Egan states that the “I” in IT is where the change is happening. We have a newfound wealth of data through various channels including video, social, click stream, audio, etc. However, data unprocessed without any analysis is just that — raw data. For enterprises to realize business value from this unstructured data, we need tools that can process it across multiple media. Imagine software that recognizes the picture in a photograph and searches for a video matching the person in the picture. The cover page of a newspaper showing a basketball star doing a slam dunk suddenly turns live pulling up the video of this superstar’s winning shot in last night’s game. …


              2. Software Partners

              image
HP Moonshot is setting the roadmap for next generation data centers by changing the model for density, power, cost and innovation. Ubuntu has been designed to meet the needs of Hyperscale customers and, combined with its management tools, is ideally suited to be the operating system platform for HP Moonshot. Canonical has been working with HP since the beginning of the Moonshot Project, and Ubuntu is the only OS integrated and fully operational across the complete Moonshot System covering x86 and ARM chip technologies.
              What Canonical is saying about HP Moonshot
              image
              As mobile workstyles become the norm, the scalability needs of today’s applications and devices are increasingly challenging what traditional infrastructures can support. With HP’s Moonshot System, customers will be able to rapidly deploy, scale, and manage any workload with dramatically lower space and energy constraints. The HP Pathfinder Innovation Ecosystem is a prime opportunity for Citrix to help accelerate the development of innovative solutions that will benefit our enterprise cloud, virtualization and mobility customers.
              image
              We’re committed to helping enterprises achieve the most from their Big Data initiatives. Our partnership with HP enables joint customers to keep and query their data at scale so they can ask bigger questions and get bigger answers. By using HP’s Moonshot System, our customers can benefit from the improved resource utilization of next generation data center solutions that are workload optimized for specific applications.
               
image
Today’s interactive applications are accessed 24×365 by millions of web and mobile users, and the volume and velocity of data they generate is growing at an unprecedented rate. Traditional technologies are hard pressed to keep up with the scalability and performance demands of these new applications. Couchbase NoSQL database technology combined with HP’s Moonshot System is a powerful offering for customers who want to easily develop interactive web and mobile applications and run them reliably at scale.
image
              Our partnership with HP facilitates CyWee’s goal of offering solutions that merge the digital and physical worlds. With TI’s new SoCs, we are one step closer to making this a reality by pushing state-of-the-art video to specialized server environments. Together, CyWee and HP will deliver richer multimedia experiences in a variety of cloud-based markets, including cloud gaming, virtual office, video conferencing and remote education.
              image
              HP’s new Moonshot System will enable organizations to increase the energy efficiency of their data centers while reducing costs. Our Cassandra-based database platform provides the massive scalability and multi-datacenter capabilities that are a perfect complement to this initiative, and we are excited to be working with HP to bring this solution to a wide range of customers.
              image
Big data comes in a wide range of formats and types and is a result of the connected-everything world we live in. Through Project Moonshot, HP has enabled a new class of infrastructure to run workloads like Apache Hadoop more efficiently, and to meet the market demand for more performance for less.
              image
              The unprecedented volume and variety of data introduces unique challenges to organizations today… By combining the HP Moonshot system with Autonomy IDOL’s unique ability to understand concepts in information, organizations can dramatically reduce the cost, space, and energy requirements for their big data initiatives, and at the same time gain insights that grow revenue, reduce risk, and increase their overall Return on Information.
              image
              Big Data is not just for Big Companies – or Big Servers – anymore – it’s affecting all sectors of the market. At HP Vertica we’re very excited about the work we’ve been doing with the Moonshot team on innovative configurations and types of analytic appliances which will allow us to bring the benefits of real-time Big Data analytics to new segments of the market. The combination of the HP Vertica Analytics Platform and Moonshot is going to be a game-changer for many.
              image
              HP worked closely with Linaro to establish the Linaro Enterprise Group (LEG). This will help accelerate the development of the software ecosystem around ARM Powered servers. HP’s Moonshot System is a great platform for innovation – encouraging a wide range of silicon vendors to offer competing ‘plug-and-play’ server solutions, which will give end users maximum choice for all their different workloads.
What Linaro is saying about HP Moonshot [HewlettPackardVideos YouTube channel, April 8, 2013]
              image
              Organizations are looking for ways to rapidly deploy, scale, and manage their infrastructure, with an architecture that is optimized for today’s application workloads. HP Moonshot System is an energy efficient, space saving, workload-optimized solution to meet these needs, and HP has partnered with MapR Technologies, a Hadoop technology leader, to accelerate innovation and deployment of Big Data solutions.
              image
              NuoDB and HP are shattering the scalability and density barriers of a traditional database server. NuoDB on the HP Moonshot System delivers unparalleled database density, where customers can now run their applications across thousands of databases on a single box, significantly reducing the total cost across hardware, software, and power consumption. The flexible architecture of HP Moonshot coupled with NuoDB’s hyper-pluggable database design and its innovative “database hibernation” technology makes it possible to bring this unprecedented hardware and software combination to market.
              What NuoDB is saying about HP Moonshot [HewlettPackardVideos YouTube channel, April 9, 2013]
              image
              As the leading solution provider for the hosting market, Parallels is excited to be collaborating in the HP Pathfinder Innovation Ecosystem. The HP Moonshot System in concert with Parallels Plesk Panel and Parallels Containers provides a flexible and efficient solution for cloud computing and hosting.
              image
              Red Hat Enterprise Linux on HP’s converged infrastructure means predictability, consistency and stability. Companies around the globe rely on these attributes when deploying applications every day, and our value proposition is just as important in the Hyperscale segment. When customers require a standard operating environment based on Red Hat Enterprise Linux, I believe they will look to the HP Moonshot System as a strong platform for high-density Hyperscale implementations.
              What Red Hat is saying about HP Moonshot [HewlettPackardVideos YouTube channel, April 8, 2013]
              image
              HP Project Moonshot’s promise of extreme low-energy servers is a game changer, and SUSE is pleased to partner with HP to bring this new innovation to market. For more than twenty years, SUSE has adapted its enterprise-grade Linux operating system to achieve ever-increasing performance needs that succeed both today and tomorrow in areas such as Big Data and cloud computing.
              What SUSE is saying about HP Moonshot [HewlettPackardVideos YouTube channel, April 8, 2013]


              3. Hardware Partners

              image
              AMD is excited to continue our deep collaboration with HP to bring extreme low-energy, ultra dense, specialized server solutions to the market. Both companies share a passion to bring innovative workload optimized solutions to the market, enabling customers to scale-out to new levels within existing energy and space constraints. The new low-power x86 AMD Opteron™ APU is optimized in the HP Moonshot System to dramatically lower TCO in quickly emerging media oriented workloads.
              What AMD is saying about HP Moonshot
              image

              It is exciting to see HP take the lead in innovating low-energy servers for the cloud. Applied Micro’s ARM 64-bit X-Gene Server on a Chip will enable performance levels seen in today’s deployments while offering higher densities, greatly improved I/O, and substantial reductions in the total cost of ownership. Together, we will unleash innovation unlike anything we’ve seen in the server market for decades.

              What Applied Micro is saying about HP Moonshot

              image
              In the current economic and power realities, today’s server infrastructure cannot meet the needs of the next billion data users, or the evolving needs of currently supported users. Customers need innovative SoC solutions which deliver more integration and optimization than has historically been required by traditional enterprise workloads. HP’s Moonshot System is a departure from the one size fits all approach of traditional enterprise and embraces a range of ARM partner solutions that address different performance, workloads and cost points.
What ARM is saying about HP Moonshot
              image
Calxeda and HP’s new Moonshot System are a powerful combination and set a new standard for ultra-efficient web and application serving. Fulfilling a journey started together in November 2011, Project Moonshot creates the foundation for the new age of application-specific computing.
              What Calxeda is saying about HP Moonshot
              image
HP Moonshot System is a game changer for delivering optimized server solutions. It beautifully balances the need for mixing different processor solutions optimized for different workloads under a standard hardware and software framework. Cavium’s Project Thunder will provide a family of 64-bit ARM v8 processors with dense and scalable server-class performance at extremely attractive power and cost metrics. We are doing this by blending performance and power efficient compute, high performance memory and networking into a single, highly integrated SoC.
              What Cavium is saying about HP Moonshot
              image
              Intel is proud to deliver the only server class, 64-bit SoC technology that powers the first and only production shipping HP ProLiant Moonshot Server today. 64-bit Intel Atom processor S1200 family features extreme low power combined with required datacenter class capabilities for lightweight web scale workloads, such as low end dedicated hosting and static web serving. In collaboration with HP, we have a strong roadmap of additional server solutions shipping later this year, including Intel’s 2nd generation 64-bit SoC, “Avoton” based on leading 22nm manufacturing technology, that will deliver best in class energy efficiency and density for HP Moonshot System.
              What Intel is saying about HP Moonshot
image
What Marvell is saying about HP Moonshot
              image
              HP Moonshot System’s high density packaging coupled with integrated network capability provides the perfect platform to enable HP Pathfinder Innovation Ecosystem partners to deliver cutting edge technology to the hyper-scale market. SRC Computers is excited to bring its history of delivering paradigm shifting high-performance, low-power, reconfigurable processors to HP Project Moonshot’s vision of optimizing hardware for maximum application performance at lowest TCO.
              What SRC Computers is saying about HP Moonshot
              image
              The scalability and high performance at low power offered through HP’s Moonshot System gives customers an unmatched ability to adapt their solutions to the ever-changing and demanding market needs in the high performance computing, cloud computing and communications infrastructure markets. The strong collaboration efforts between HP and TI through the HP Pathfinder Innovation Ecosystem ensure that customers understand and get the most benefit from the processors at a system-level.
              What TI is saying about HP Moonshot

              Windows RT Buzz: only the naming will disappear?

              Microsoft defends Windows RT as necessary disruption [CNET, March 21, 2013]
              vs.
              Microsoft to merge Windows RT into next-generation Windows OS [DIGITIMES, March 27, 2013]

These headlines tell everything. And don’t forget, the end of March is the end of PRISM, by which time all top-level decisions for the next fiscal year have already been taken. Now put these two media reports against each other:

              [Michael] Angiulo [corporate vice president, Windows Planning, Hardware & PC Ecosystem] says Microsoft has good reason to stick with the platform.
              “It was a ton of work for us and we didn’t do the work and endure the disruption for any reason other than the fact that there’s a strategy there that just gets stronger over time.
              Looking at things now like power performance and standby time and passive [fanless] form factors. When we launched Windows 8, it was really competitive with a full-sized iPad. A lot of that was made possible by the ARM [chip] architecture.
              If you look forward a year or two and you look at the performance output of ARM chips, those are some really capable chips. I think it has a very bright future.
              People are talking about legacy desktop software not running, but they don’t think about the customer benefit of only running modern apps. The only apps that you install from the Windows store are the kind, that as a customer, you can manage your rights to.
              Let’s say you drop that PC in a pool. Well, you get a new one and then you just redownload [the apps]. That’s the kind of model people are used to with a phone or tablet today. I can maintain all the apps in the [Microsoft] store and reset with a single switch.
              So, on Windows RT, the user experience stays consistent over time. That’s a big benefit. And as the number of apps grow in the store, that value promise only gets stronger.
              And on the ARM side, there is a propensity for a much higher percentage of PCs that are going to ship with mobile broadband [3G/4G], precisely because ARM PCs have even longer battery life [than Intel PCs] on connected standby [when a device is in standby mode but still connected to e-mail, social networking sites, and the Internet in general].”
              Microsoft will no longer launch products under its Windows RT line and will instead merge the product line into the software giant’s next-generation Windows, codenamed Blue, according to sources from the upstream supply chain.
Although the PC supply chain had pushed the Windows on ARM (WoA) platform aggressively, Windows RT’s name, which has misled most consumers into believing that the operating system can run all existing x86 Windows programs, the lack of apps, and compatibility issues have all significantly damaged demand.
              The next-generation of Windows is expected to make its first appearance at the Microsoft Build Developer Conference 2013, hosted from June 26-28 in San Francisco, the US.
              The sources believe that Wintel PC demand is likely to drop significantly before Intel and Microsoft’s next-generation products show up in the second half of the year.

With that, the strategy to stick with Windows RT as a product, but not as a name, is crystal clear. Nevertheless, between these two news dates other news articles around the world cast doubts on the future of Windows RT as a product.

Look at the bulk of news headlines between March 21 and March 28 to see the kind of mixed reporting. These headlines come from the relevant Google search:

              Microsoft’s Future Vision: Live, Work, Play [March 1, 2013]

              http://aka.ms/envision – Technology could transform our life at work, on the go, and at home. This is a snapshot of what the future will look like five to ten years from now. In the years ahead, technology will amplify our senses; help us stay connected to the people we care about and transform the way we live, work and play.

              Inside Microsoft’s house of the future [BBC, March 4, 2013]

              Microsoft invited BBC News to take a first look at its revamped Space of the Future at its headquarters in Redmond, Washington. The facility is used to portray what the firm thinks life might be like five to 10 years in the future.

              A lot is riding on its vision being correct.

              In a recent interview when chairman Bill Gates was asked if he was happy with the performance of the firm under chief executive Steve Ballmer’s rule, he replied: “There are a lot of amazing things that Steve’s leadership got done at the company over the last year… but is it enough? No. He and I are not satisfied in terms of breakthrough things that we’re doing everything possible.”

              The firm’s stock price is roughly where it was five years ago while rivals Apple, Amazon and Samsung have all seen theirs more than double.

              So, launching best-selling products for the home could help bolster Microsoft’s reputation for innovation and reinvigorate investors.

              And its engineers revealed a host of ideas including desks that recognise users and match their ergonomic requirements, widespread gesture control and online content that queries itself.

              Video produced by the BBC’s Matthew Danzico

Linux client market share gains outside of Android? Instead of gains, will it shrink to 5% in the next 3 years?

The Linux Foundation quite proudly referred to ReadWriteMobile: The ‘Year of the Linux Desktop’? That’s So 2012 [Feb 3, 2013]

              For those Linux enthusiasts still pining for the mythical “Year of the Linux Desktop,” the wait is over. In fact, it already happened. In 2012 Microsoft’s share of computing devices fell to 20% from a high of 97% as recently as 2000, as a Goldman Sachs report reveals [”Clash of the titans” downloadable from here, dated Dec 7, 2012]. While Apple has taken a big chunk of Microsoft’s Windows lead, it’s actually Google that plays Robin Hood in the operating system market, now claiming 42% of all computing devices with its free “Linux desktop” OS, Android.

              Read more at ReadWriteMobile.

              from which I will include here the following chart:

              image

              for which Goldman Sachs commented as:

              The compute landscape has undergone a dramatic transformation over the last decade with consumers responsible for the massive market realignment. While PCs were the primary internet connected device in 2000 (139mn shipped that year), today they represent just 29% of all internet connected devices (1.2bn devices to ship in 2012), while smartphones and tablets comprise 66% of the total. Further, although Microsoft was the leading OS provider for compute devices in 2000 at 97% share, today the consumer compute market (1.07bn devices) is led by Android at 42% share, followed by Apple at 24%, Microsoft at 20% and other vendors at 14%.

Note from Goldman Sachs: Microsoft has gone from 97 percent share of compute market to 20 percent [The Seattle Times, Dec 7, 2012]:
              I asked Goldman Sachs about what happened in the 2004-2005 time frame — as seen in the above chart — that made Apple’s vendor share jump, Microsoft’s share plummet and the “other” category to go from zero to 29 percent. Goldman Sachs replied that it has to do with more mainstream adoption of non-PC consumer computing devices but declined to elaborate beyond that.

Microsoft was put into the “Challenged” category (along with Google, BTW) by Goldman Sachs, noting that:

              … we estimate that Microsoft would have to sell roughly 5 Windows Phones or roughly two Windows 8 RT tablets to offset the loss of one traditional Windows PC sale, which we estimate has an overall blended selling price of $60 for business and consumer.

but a more positive than negative outlook was also predicted for the company:

              … we expect the recent launches of Windows Phone 8 and Windows 8 tablets to help the company reclaim some share in coming years.
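The offset estimate quoted above implies some simple per-unit figures. A back-of-the-envelope Python sketch; the $60 blended PC figure and the 5-phone/2-tablet ratios come from the report as quoted, while the derived per-unit revenues are my own arithmetic, not numbers Goldman Sachs stated:

```python
# Figures as quoted from the Goldman Sachs report:
PC_BLENDED_PRICE = 60.0   # blended selling price per traditional Windows PC sale
PHONES_PER_PC = 5         # Windows Phones needed to offset one PC sale
TABLETS_PER_PC = 2        # Windows 8 RT tablets needed to offset one PC sale

# Hypothetical derivation (my arithmetic, not stated in the report):
implied_phone_revenue = PC_BLENDED_PRICE / PHONES_PER_PC    # -> 12.0
implied_tablet_revenue = PC_BLENDED_PRICE / TABLETS_PER_PC  # -> 30.0
print(f"implied per-phone revenue:  ${implied_phone_revenue:.2f}")
print(f"implied per-tablet revenue: ${implied_tablet_revenue:.2f}")
```

In other words, the estimate amounts to Microsoft earning roughly $12 per Windows Phone and $30 per Windows RT tablet against $60 per PC, which is why the mix shift away from PCs is dilutive.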

Apple, at the same time, was put into the “Beneficiaries” category (along with Facebook and Samsung, BTW) by Goldman Sachs for the reason that:

… we believe loyalty to the company’s ecosystem is only increasing and this should translate into continued growth going forward. In particular, we see the potential for Apple to capture additional growth as existing iOS users move to multiple device ownership and as the company penetrates emerging regions with new devices such as the iPad mini and lower priced iPhones. As a result, we believe Apple’s market share in phones has room to rise much further, and that its dominant tablet market share appears to be more resilient than most expect. We expect these factors to continue to drive the stock higher.

This is, however, not going to happen judging from the stock market’s reaction since then: a 13.7% drop in Apple’s share price vs. Dec 7 (the report’s publishing date) and a whopping 34.5% drop vs. its last peak on Sept 19, 2012 (at $702.1):
image
              source: Yahoo! Finance
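The two drop percentages above are straightforward percent changes. A minimal Python sketch; the Sept 19, 2012 peak of $702.1 is from the post, while the Dec 7 and later prices below are back-solved from the quoted percentages, i.e. hypothetical illustrations rather than independently sourced quotes:

```python
def pct_change(old: float, new: float) -> float:
    """Percentage change from old to new; negative means a decline."""
    return (new - old) / old * 100

PEAK_SEP_19_2012 = 702.1  # AAPL's last peak, as stated in the post
price_dec_7 = 532.9       # hypothetical: back-solved from the 13.7% drop cited
price_now = 459.9         # hypothetical: back-solved from the 34.5% drop cited

print(f"{pct_change(price_dec_7, price_now):+.1f}% vs. Dec 7")        # -13.7%
print(f"{pct_change(PEAK_SEP_19_2012, price_now):+.1f}% vs. peak")    # -34.5%
```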

              Why Did $AAPL Stock Go Down After Beating Earnings Estimates And $AMZN Stock Go Up After Missing? [Techcrunch, Jan 29, 2013] had the following explanation:

The moves in different directions for Amazon and Apple have been about expectations and guidance. Wall Street has higher expectations for Apple and ‘different’ expectations for Amazon. Wall Street wants Apple’s ‘gross margins’ to grow. They don’t expect Amazon’s ‘profits’ to grow. It sounds silly, but if Apple had reported lower profits and a huge gross margin increase the stock might have shot up. If Amazon had reported record profits today on decreasing margins, Wall Street might have panicked.

Wall Street has stopped caring about Apple’s profits today. They were displeased with forward guidance. Growth rates have slowed measurably at Apple, which is understandable for a company of its size. Wall Street is worried that growth is slowing and competition from Google and Samsung are taking a toll. Apple has given Wall Street so many wonderful surprises so magic has become the norm. Now that Apple is boring, they have run for the hills.

That mood didn’t change even after Apple CEO Tim Cook tried to assure investors at the Goldman Sachs Internet and Technology Conference on Feb 12, just a week ago. Read the Wrap up: Apple CEO Tim Cook’s Goldman Sachs Conference keynote [AppleInsider, Feb 13, 2013], from which I will quote only the following excerpts as the most notable ones:

              Cook went on to say that introducing a “budget device” was not something Apple would be comfortable with, and instead pointed to the strategy seen with the iPhone lineup. In that model, new variants like the iPhone 5 are sold at the highest price while preceding versions like the iPhone 4S and iPhone 4 are sold at discounted rates.

According to Cook, the iPad is “the poster child of the post-PC revolution” and has driven the push to tablets since its introduction in 2010.

              While Apple’s tablet has been the downfall for a number of PC alternatives, such as netbooks, the device is also said to be hurting the company’s own Mac computer sales. During the last quarter of 2012, Mac sales dropped 22 percent year-to-year on low demand and supply constraints. Apple’s iPad business, however, grew by nearly 50 percent over the same period.

“The cannibalization question raises its head a lot,” Cook said. “The truth is: we don’t really think about it that much. Our basic belief is: if we don’t cannibalize, someone else will. In the case of iPad particularly, I would argue that the Windows PC market is huge and there’s a lot more there to cannibalize than there is of Mac, or of iPad.”

              Cook noted that burgeoning markets like China and Brazil will be major players in future growth, and the company is banking on its ability to draw customers in to the Apple ecosystem with “halo products.”

              “Through the years, we’ve found a very clear correlation between people getting in and buying their first Apple product and some percentage of them buying other Apple products.”

At the same conference Microsoft, similarly to Apple, declared a ‘no change’ strategy despite the obvious failure of its Windows 8 and Windows Phone efforts so far. In the No “Plan B” for Microsoft’s mobile ambitions: CFO [Reuters, Feb 13, 2013] report one can read:

              “We’re very focused on continuing the success we have with PCs and taking that to tablets and phones,” Microsoft’s Chief Financial Officer Peter Klein said

              “It’s less ‘Plan B’ than how you execute on the current plan,” said Klein. “We aim to evolve this generation of Windows to make sure we have the right set of experiences at the right price points for all customers.”

              Gartner estimates that Microsoft sold fewer than 900,000 Surface tablets in the fourth quarter, which is a fraction of the 23 million iPads sold by Apple. Microsoft has not released its own figures but has not disputed Gartner’s.

              Windows phones now account for 3 percent of the global smartphone market, Gartner says, which is almost double their share a year ago but way behind Google’s Android with 70 percent and Apple with 21 percent.

              To grab more share, Klein said Microsoft was working with hardware makers to make sure Windows software is available on devices ranging from phones to tablets to larger all-in-one PCs.

              “It’s probably more nuanced than just you lower prices or raise prices,” said Klein. “It’s less a Plan B and more, how do you tweak your plan, how do you bring these things to market to make sure you have the right offerings at the right price points?”

So the last 3 months went against Goldman Sachs’ November 2012 predictions. The only remaining question is whether those 3 months brought any changes in the non-Apple and non-Microsoft territories that would call other parts of the Goldman Sachs forecast into question as well.

There were no negative changes, just a strengthening of the already established dominant positions against both Apple and Microsoft:

1. Mainstream tablets 7-inch at US$199, say Taiwan makers [DIGITIMES, Feb 19, 2013]

              Google’s Nexus 7 and Amazon’s Kindle Fire HD have reshuffled the global tablet market and consequently 7-inch with a price cap of US$199 has become the mainstream standard for tablets, according to Taiwan-based supply chain makers.

              Cumulative sales of the Nexus 7 have reached six million and are expected to reach eight million units before the expected launch of the second-generation model in June 2013, the sources said. The Nexus 7 and Kindle Fire have driven vendors to develop inexpensive 7-inch tablet models instead of 10-inch ones, the sources indicated.

In order to reach US$199, 7-inch tablets are equipped with only basic required functions such as Internet access and video playback, the sources noted. While Google, Amazon, Samsung Electronics and Asustek Computer are competitive at US$199 for 7-inch tablets, white-box or other vendors need to launch 7-inch models at lower prices such as US$149, the sources said. For example, China-based graphics card vendor Galaxy Microsystems has cooperated with Nvidia to launch a 7-inch tablet in the China market at CNY999 (US$160).

2. Digitimes Research: 68.6% of touch panels shipped in 4Q12 from the Greater China area [DIGITIMES, Feb 19, 2013], meaning that in supply chain terms there is a growing concentration of suppliers not only from Greater China but especially from mainland China:

              Taiwan- and China-based touch panel makers held a 68.6% global market share for touch panels shipped during the fourth quarter of 2012, according to Digitimes Research.

              China-based panel makers saw the biggest share in the handset touch panel market during the fourth quarter due to smartphone demand in China, while Taiwan-based panel makers only held a 27.5% share in the market largely due to lower-than-expected sales of the iPhone 5, said Digitimes Research.

In terms of touch panels used in tablets, Taiwan-based panel makers saw their global market share drop to 59.9% during the period, largely due to the iPad mini using DITO thin-film type touch screens provided by Japan-based touch panel makers. China-based panel makers meanwhile held an 18.6% share of that market due to demand for white-box tablets in China, added Digitimes Research.

              Meanwhile, Digitimes Research found that Taiwan-based TPK provided 70.9% of all touch panels used in notebook applications in 2012.

3. Touch Panel Market Projected for a 34% Growth in 2013 from 2012 [Displaybank, sent in newsletter form, Feb 19, 2013], published to promote Touch Panel Market Forecast and Cost/Issue/Industry Analysis for 2013 [Jan 30, 2013]

              The touch panel market is growing rapidly due to the increasing sale of smartphones and tablet PCs. The touch panel market size in 2012 was 1.3 billion units, a 39.4% growth over 2011. The market is projected to grow 34% in 2013, growing to more than 1.8 billion units.

[chart] Touch Panel Market Forecast (Unit: Million) (Source: Displaybank, “Touch Panel Market Forecast and Cost/Issue/Industry Analysis for 2013”)

Smartphones and tablet PCs, the major applications that use touch panels, are expected to continue to grow at a high rate. In addition, most IT devices that use display panels have either switched to or will soon start using touch panels. Therefore the touch panel market will show double-digit annual growth in unit terms until 2016. The market size is expected to reach more than 2.75 billion units by 2016.

With the explosion in sales of smartphones and tablet PCs during the past few years, our lives have changed dramatically. They are now commonplace, and have a huge influence on the IT industry in general. With the introduction of the Windows 8 OS in October 2012, the upsizing of touch panels has begun. The impact of this event on the immediate growth of the touch panel market, and its long-term effect, is so immense that it cannot be estimated at the moment.

The financial crisis that started in 2008 left much of the IT industry hobbling worldwide, but the touch panel market alone is enjoying a boom. Many new players are pouring into the industry, and those on the sidelines are waiting for the opportune moment to enter. As more players enter the competitive landscape, touch panel prices are falling rapidly. In addition, the drive to gain competitiveness and to differentiate in the market has led players to develop and improve structures, techniques and processes, and to seek out new materials.

The introduction of Windows 8 is leading the increase in touch-capable notebook and AIO PCs. It is still too early for the touch interface to completely displace the keyboard and mouse, but touch functionality does add convenience to some operations. We are sure to see an increase in specialized apps that capitalize on such functions. Therefore, touch functions will complement traditional input methods. As the technology is still in early implementation stages, it is used only in select high-end Ultrabooks, but it is only a matter of time before touch functions make their way to mid-range products.

Forecasting the future of the touch panel industry is not only difficult but outright confusing in the current landscape, due to the rapid expansion: the increase in the number of devices that use touch panels, more players in the market, and the rapid development of new products and new processes. In serving clients, Displaybank has released “Touch Panel Market Forecast and Cost/Issue/Industry Analysis for 2013” to provide an industry outlook by application, product, and capacitive touch structure. The report also includes the supply chain of set makers and touch panel manufacturers, and a cost analysis of major capacitive touch panels by size and type. This report will serve as a guide to bring clarity and understanding to the rapidly transforming touch panel industry.
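As a quick back-of-the-envelope check on the Displaybank figures quoted above (2012 at 1.3 billion units with 39.4% growth over 2011, a projected 34% growth in 2013, and more than 2.75 billion units by 2016), the implied 2011 base and 2013–2016 growth rate can be sketched as follows; the derived values are my own arithmetic, not figures from the report:

```python
# Back-of-the-envelope check of the Displaybank touch panel forecast
# (all figures in billions of units; derived values are approximations).

units_2012 = 1.30      # quoted 2012 shipments
growth_2012 = 0.394    # 39.4% growth over 2011
growth_2013 = 0.34     # projected 34% growth in 2013
units_2016 = 2.75      # "more than 2.75 billion units by 2016"

units_2011 = units_2012 / (1 + growth_2012)   # implied 2011 base
units_2013 = units_2012 * (1 + growth_2013)   # implied 2013 shipments

# Implied compound annual growth rate over the three years 2013 -> 2016
cagr_2013_2016 = (units_2016 / units_2013) ** (1 / 3) - 1

print(f"2011 base:      {units_2011:.2f}B units")   # ~0.93B
print(f"2013 projected: {units_2013:.2f}B units")   # ~1.74B
print(f"2013-2016 CAGR: {cagr_2013_2016:.1%}")      # ~16.4%
```

The implied 2013–2016 rate works out to roughly 16% a year, consistent with the report’s claim of double-digit annual unit growth through 2016.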

              4. Cheaper components could allow 7-inch tablets priced below US$150, says TrendForce [DIGITIMES, Dec 14, 2012]

Given that Google and Amazon have launched 7-inch tablets at US$199, other vendors can offer 7-inch tablets at below US$150 only by adopting cheaper components, according to Taiwan-based TrendForce.

As panels and touch modules together account for 35-40% of the total material costs of a 7-inch tablet, replacing the commonly used 7-inch FFS panels with 7-inch TN LCD panels accompanied by additional wide-viewing-angle compensation could save over 50% in panel costs, TrendForce indicated. In addition, replacing a G/G (glass/glass) or OGS (one glass solution) touch module with a G/F/F (glass/film/film) one, although inferior in terms of transmittance and touch sensitivity, can cut costs by about 70%. Thus, the adoption of a TN LCD panel and a G/F/F touch module for a 7-inch tablet could reduce material costs by about US$25, TrendForce said.

Given that the type of DRAM affects standby time only as far as user experience is concerned, costs can be reduced by replacing 1GB mobile DRAM priced at about US$10 with 1GB commodity DRAM priced at about US$3.50, TrendForce noted. As for NAND flash, 8GB and 4GB eMMC cost US$6 and US$4, respectively, and therefore the latter should be the preferred choice to save costs.

For CPUs, China-based IC design houses, including Allwinner Technology, Fuzhou Rockchip Electronics, Ingenic Semiconductor, Amlogic and Nufront Software Technology (Beijing), provide 40-55nm-based processors at about US$12 per chip, which could be alternatives to the chips used in high-end tablets that cost about US$24, TrendForce indicated.

While the sales performance of tablets below US$150 is yet to be seen, such cheap models are expected to put pressure on China-based white-box vendors, and in turn intensify price competition in the tablet market in 2013, TrendForce commented.
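A simple tally of the TrendForce component savings cited above shows how a vendor starting from a US$199-class design could get below US$150; the itemized figures come from the article, while the total is my own arithmetic:

```python
# Per-unit component savings for a cheap 7-inch tablet, as cited by
# TrendForce (US$); the total is derived by summing the items.

savings = {
    "TN LCD panel + G/F/F touch module": 25.00,      # vs FFS panel + G/G or OGS
    "1GB commodity DRAM (vs 1GB mobile DRAM)": 10.00 - 3.50,
    "4GB eMMC NAND (vs 8GB eMMC)": 6.00 - 4.00,
    "40-55nm China-designed CPU (vs high-end chip)": 24.00 - 12.00,
}

total = sum(savings.values())
for part, amount in savings.items():
    print(f"{part:48s} -US${amount:5.2f}")
print(f"{'Total potential BOM savings':48s} -US${total:5.2f}")
```

Roughly US$45 of per-unit savings is what makes the sub-US$150 price point plausible relative to the US$199 mainstream designs.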

              5. Strong demand from non-iPad tablet sector to boost short-term performance of IC vendors [DIGITIMES, Jan 28, 2013]

              Demand for IC parts from the tablet industry in China has been stronger than expected in the first quarter of 2013, which could help boost the short-term performance of IC design houses, while offsetting the impact of slow demand from China’s smartphone sector caused by high inventory levels, according to industry sources.

              Entry-level tablets meet market demand in terms of pricing and functionality, particularly in China, said the sources, adding that demand for entry-level tablets in China and other emerging markets could top 4-5 million a month in 2013 compared to 2-3 million in the second half of 2012.

MediaTek, while seeing demand for its handset solutions from China decrease in the first quarter of 2013, has also enjoyed emerging IC demand from the tablet sector, with plans to release chipset solutions for the segment in the second quarter of the year, the sources revealed.

              Since the growth momentum for tablets in 2013 is expected to come from non-iPad vendors in China and other emerging markets, Taiwan-based suppliers of LCD driver, analog and touch-controller ICs as well as those of Wi-Fi, audio and Bluetooth chips will benefit from the trend thanks to cost advantages and strong business ties in these markets, the sources commented.

6. Allwinner A31 SoC is here with products and the A20 SoC, its A10 pin-compatible dual-core is coming in February 2013 [Dec 10, 2012] and The upcoming Chinese tablet and device invasion lead by the Allwinner SoCs [Dec 4, 2012], both from my own separate trend-tracking site devoted to the ‘Allwinner phenomenon’ coming from mainland China, which has the potential to drastically alter the 2013 device market (not taken into account at all by the Goldman Sachs report):

Allwinner Tech tells us about the new features of its A31 product targeted at tablets, smartphones and smart TVs. Based on a quad-core ARM Cortex-A7.

that already resulted in huge growth of mainland China Android tablet manufacturing in 2012, as well shown by this chart:

[chart]

which has already fundamentally affected the worldwide tablet market in 2012:

7. What Allwinner started in 2012 with the single-core A10/A13 SoCs, and what was further boosted by the quad-core Cortex-A7 A31 SoC on Dec 5, 2012 with the release of the Onda V972 and V812 tablets (for US$208 and US$144, respectively), is an incredible strategic inflection point for the whole ICT industry, with which ALL SoC vendors must compete. Rockchip, shown as #2 in the mainland China market, just followed suit:

Rockchip’s new RK3188 chipset: quad-core ARM Cortex-A9 and quad-core ARM Mali-400, 28nm HKMG process. Plus an update on Rockchip’s involvement with products for the education market.

8. Now the most ambitious external challenger: Marvell Announces Industry’s Most Advanced Single-chip Quad-core World Phone Processor to Power High-performance Smartphones and Tablets with Worldwide Automatic Roaming on 3G Networks [press release, Feb 19, 2013], which is going to add SoC-integrated 3.5G modems to the competition:

              Marvell’s PXA1088 is the industry’s most advanced single-chip solution to feature a quad-core processor with support for 3G field-proven cellular modems including High Speed Packet Access Plus (HSPA+), Time division High Speed Packet Access Plus (TD-HSPA+) and Enhanced Data for GSM Environment (EDGE).

              The Marvell PXA1088 solution incorporates the performance of a quad-core ARM Cortex-A7 with Marvell’s mature and proven WCDMA and TD-SCDMA modem technology to provide a low-cost [elsewhere stated by Marvell that this SoC is for the phones space in the “$100 range”] 3G platform for both smartphones and tablets. The advanced application processor technology of the PXA1088 enables a breakthrough end user experience for multimedia and gaming applications with universal connectivity. Marvell’s complete mobile platform solution includes the Avastar® 88W8777 WLAN + Bluetooth 4.0 + FM single-chip SoC and the L2000 GNSS Hybrid Location Processor, and an integrated power management and audio codec IC.

Marvell’s PXA1088 is backward pin-to-pin compatible with its dual-core single-chip Unified 3G Platform, the PXA988/PXA986, enabling device partners to upgrade their next-generation mobile devices to quad-core without additional design cost.

Currently, the PXA1088 platform is sampling with leading global customers. Products based on this platform are expected to be commercially available in 2013 [elsewhere stated by Marvell that “We’ll start seeing PXA1088-based phones in the first half of this year”].

9. Yesterday we had two significant advancements described in the Ubuntu and HTC in lockstep [Feb 19, 2013] post here. The Ubuntu-related part is especially remarkable, as for the first time we have a new platform that can span the whole spectrum of devices: from smartphones, to tablets, to desktops, to TVs. In the extreme case everything runs from a smartphone whose capability is expanded, via docking and other means, with a screen or TV, a keyboard, and a mouse; but the new Ubuntu capability can be implemented in different device classes as well, and even then the source and binary code could be the same. It also cleverly reuses the already well-established Android drivers and the Android Board Support Package (BSP) infrastructure of the most cost-efficient ARM SoC vendors. Note that this is furthest from any “license violation” attack, as the original OHA terms and conditions specify Apache v2 licensing, which:

              The Apache license allows manufacturers and mobile operators to innovate using the platform without the requirement to contribute those innovations back to the open source community. Because these innovations and differentiated features can be kept proprietary … Because the Apache license does not have a copyleft clause, industry players can add proprietary functionality to their products based on Android without needing to contribute anything back to the platform. As the entire platform is open, companies can remove functionality if they choose.

10. Finally, today came Google Glass, showing how radically the user experience might change in the next 2-3 years:

Want to see how Glass actually feels? It’s surprisingly simple. Say “take a picture” to take a picture. Record what you see, hands free. Even share what you see, live. Directions are right in front of you. Speak to send a message, or translate your voice. Get the notifications that matter most. Ask whatever’s on your mind and get answers without having to ask. All video footage captured through Glass. Welcome to a world through Glass. See more at http://www.google.com/glass/start “New Lipstick” by The Kissaway Trail on Google Play - http://goo.gl/v4dUf

More information: Google Glass – Home [Feb 20, 2013], where it is also possible to grasp its wonderful, non-intrusive design, like this:

[image]

Conclusion: There are even more advancements not accounted for by Goldman Sachs in the non-Apple and non-Microsoft spaces than in the Apple and Microsoft ones, and all that in just these 3 months! Therefore it would be ridiculous if Goldman Sachs’ “consumer compute platform share” forecast, as shown in the chart above, were fulfilled!