
Tag Archives: cloud computing

A year of healthy progress along Microsoft's strategic ambitions

Microsoft Stock Price for the last 5 years – July 22, 2016

My earlier posts related specifically to this three-year overall transition history:
– Microsoft partners empowered with ‘cloud first’, high-value and next-gen experiences for big data, enterprise social, and mobility on wide variety of Windows devices and Windows Server + Windows Azure + Visual Studio as the platform as of July 10, 2013
– Microsoft reorg for delivering/supporting high-value experiences/activities as of July 11, 2013
– An ARM-focussed Microsoft spin-off could be the only solution to save Microsoft in the crucial next 3-years period as of August 24, 2013
– Opinion Leaders and Lead Opinions: Reflections on Steven Sinofsky’s “Era of Continuous Productivity” vision as of September 1, 2013
– The question mark over Wintel’s future will hang in the air for two more years as of September 15, 2013
– Microsoft could be acquired in years to come by Amazon? The joke of the day, or a certain possibility (among other ones)? as of September 16, 2013
– Sinofsky’s ‘continuous productivity’ idea to be realised first in Box Notes as of September 21, 2013
– Microsoft is transitioning to a world with more usage and more software driven value add (rather than the old device driven world) in mobility and the cloud, the latter also helping to grow the server business well above its peers as of April 25, 2014
– Satya Nadella on “Digital Work and Life Experiences” supported by “Cloud OS” and “Device OS and Hardware” platforms–all from Microsoft as of July 23, 2014
– Steve Ballmer on leaving Microsoft, relationship with Bill Gates: “We’ve dusted-up many times”, on His Biggest Regret: “doing hardware earlier [for being] more effective in phone business” AND on Amazon: “They Make No Money.” as of October 25, 2014
– The Empire Reboots — Can C.E.O. Satya Nadella Save Microsoft? | Vanity Fair, Oct 27, 2014

WPC Day 1: The Digital Transformation Opportunity from Microsoft Partner Network UK Blog as of July 11, 2016:

“Empower every person and every
organisation on the planet to achieve more”
The Microsoft Mission

At the core of today’s opening Worldwide Partner Conference keynote was ‘Digital Transformation’, aka the desire of CEOs to use technology to change business outcomes – whether it be how they:

  • Engage their customers,
  • Empower employees to make better decisions,
  • Optimise their operations,
  • Build up the predictive power within their organisations so that every operation is intelligent,
  • Transform their products and services.

Digital Transformation = An Unprecedented Partner Opportunity

Every customer, from startup to enterprise, is not only looking to use digital technology, but to build digital technology of their own.


Businesses are looking to drive greater efficiency – automating processes and enhancing productivity, particularly in those areas where there are operating expenses. This poses an unprecedented opportunity for you no matter what partner type you are.

Digital Transformation Opportunity by Microsoft and Partners – July 11, 2016

Microsoft Ambitions to Drive Digital Transformation

Microsoft has three core ambitions which play a fundamental part in digitally transforming businesses:

  • Re-inventing Productivity and Business Process
  • Building the Intelligent Cloud
  • Creating More Personal Computing

These will be covered in more detail over the next two days’ keynotes; however, Satya provided some great examples of what these three ambitions entail.

1) Re-inventing Productivity and Business Process

This is all about removing the barriers between productivity tools and business applications. Satya focused on two key areas:

  • ‘Conversations as a Platform’: Using human-language understanding, personal assistants, and bots (conversational interfaces) to augment our connection with technologies. (Watch the demo 48 minutes into the Day 1 keynote.)

2) Building out the Intelligent Cloud

To showcase how intelligent cloud is helping transformation, Satya invited General Electric CEO, Jeff Immelt, on stage to discuss how he has digitally transformed the GE business.

Though GE is over 140 years old, it is a company that has embraced transformation, digital transformation included. You can read more about their story and find out about Microsoft’s new partnership with GE to bring Predix to Azure, accelerating digital transformation for industrial customers.

Satya then went on to talk about ‘the next phase of building the Intelligent Cloud’ with ‘cognitive services’. We’re seeing the beginnings of a new platform for cognitive services. Microsoft has taken decades of research from Microsoft Research encapsulating speech, computer vision, and natural-language text understanding, and made these available as APIs. These APIs are being used to infuse perception into apps – the ability for apps and bots to understand speech and to see (i.e., computer vision). These cognitive capabilities are capable of transforming business by bringing productivity gains. A great example of this is how McDonald’s is creating efficiency in its drive-thrus with speech/order recognition (watch the demo 1 hour 10 minutes into the Day 1 keynote).
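To make the “available as APIs” point concrete, here is a minimal sketch of what calling one of these cognitive services looked like at the time: a single HTTPS POST returning JSON. The region in the endpoint, the API version, and the key are placeholders/assumptions on my part, not details from the keynote; consult the service documentation for current values.

```python
# Minimal sketch: asking a Cognitive Services vision API to describe an image.
# Endpoint region/version and the subscription key are placeholders (assumed).
import requests

ENDPOINT = "https://westus.api.cognitive.microsoft.com/vision/v1.0/analyze"  # assumed
SUBSCRIPTION_KEY = "<your-cognitive-services-key>"  # placeholder


def describe_image(image_url: str) -> dict:
    """POST an image URL and get back a description and tags as JSON."""
    response = requests.post(
        ENDPOINT,
        params={"visualFeatures": "Description,Tags"},
        headers={
            "Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
            "Content-Type": "application/json",
        },
        json={"url": image_url},
    )
    response.raise_for_status()
    return response.json()


if __name__ == "__main__":
    # Hypothetical image URL, for illustration only.
    result = describe_image("https://example.com/drive-thru-menu.jpg")
    print(result["description"]["captions"][0]["text"])
```

The design point is that the “perception” lives behind a plain REST call, so any app or bot that can make an HTTPS request can be given sight or speech understanding without hosting any models itself.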

3) Create More Personal Computing

Create more personal computing was the third and final ambition covered. Satya discussed Windows 10 – an OS spanning multiple devices from Raspberry Pi to HoloLens – bringing centralised infrastructure benefits and cost savings to business.

It was on the topic of HoloLens that he discussed how personal computing is shaped by category-creation moments – moments where input and output change. ‘Mixed Reality’ is such a moment. With HoloLens, Microsoft has created an interface-changing moment: mixing the real with the virtual, enabling us to be anywhere and everywhere – fully untethered and mobile.

What followed was a great demo showcasing how Japan Airlines are using Microsoft HoloLens to change how they train flight crews and mechanics (watch the demo 1 hour 17 minutes into the Day 1 keynote).

Mixed reality offers huge opportunities for partners with so many applications across so many sectors.

Expect more details on Digital Transformation and Microsoft’s three ambitions in WPC Day 2 and 3 keynotes.

News From WPC2016 Day 1

The three ambitions announced a year ago and the proof-points of healthy progress along them in FY16:

  1. Office 365, Dynamics 365, AppSource, and LinkedIn as all being part of one overarching strategy in Productivity and Business Process:
    – core part of an overarching strategy
    – digital transformation both for us and our partnerships with customers
  2. Significant differentiation vs. Amazon AWS in Intelligent Cloud:
    – enterprise cloud leadership
    – every customer is also an ISV
    – hyperscale-plus-hybrid approach with annuity focus enabling cloud-led conversation with customers
    – meeting cloud needs of customers where they are
  3. Windows strategy to achieve progress in More Personal Computing:
    – deliver more value and innovation, particularly for enterprise customers
    – grow new monetization through services across our unified Windows platform
    – innovate in new device categories in partnership with our OEMs

The Q1FY16 progress was presented in my post Microsoft is ready to become a dominant force in cloud computing with superior cloud offerings, a Windows ecosystem under complete renewal, first signs of Surface-Lumia-Xbox successes on the market, and strong interest in technology partnerships by other industry leaders as of October 24, 2015.

“Reinvent Productivity and Business Processes”, “Build the Intelligent Cloud” and “Create More Personal Computing” were the original three “interlocking ambitions” the Microsoft CEO talked about at Microsoft Ignite, held on May 4-8, 2015 in Chicago. The proof-points of FY16 progress are shown along that list, and explained in detail by remarks from Microsoft (MSFT) Satya Nadella on Q4 2016 Results – Earnings Call Transcript as of July 18, 2016.

For more information see also the Q4 2015 Earnings Call Transcript, the 2015 Annual Report or—even better—my earlier posts indicated here under each ambition. For the deeper strategic intent underlying these ambitions see my earlier post Julia Liuson: “Microsoft must transform from a company that throws a box with software into the market … into a company that offers pure services”. These ambitions also became reporting segments in FY16; see Earnings Release FY16 Q1 as of October 22, 2015. The major corporate groups were also organised along these lines: ASG = Application & Services Group for the “Reinvent productivity and business processes” ambition, C&E = Cloud & Enterprise for the “Build the intelligent cloud platform” ambition, and OSG = Operating Systems Group for the “Create more personal computing” ambition.

Note that the overall strategic approach was developed two years ago and was described in my post Satya Nadella on “Digital Work and Life Experiences” supported by “Cloud OS” and “Device OS and Hardware” platforms–all from Microsoft of July 23, 2014.


Here are the detailed remarks from Microsoft (MSFT) Satya Nadella on Q4 2016 Results – Earnings Call Transcript as of July 18, 2016:

1. Office 365, Dynamics 365, AppSource, and LinkedIn as all being part of one overarching strategy in Productivity and Business Process:

For initial and additional details, see my earlier posts:
– The first “post-Ballmer” offering launched: with Power BI for Office 365 everyone can analyze, visualize and share data in the cloud as of February 10, 2014
– OneNote is available now on every platform (+free!!) and supported by cloud services API for application and device builders as of March 18, 2014
– An upcoming new era: personalised, pro-active search and discovery experiences for Office 365 (Oslo) as of April 2, 2014
– Microsoft Azure: Marketable machine learning components capability for “a new data science economy”, and real-time analytics for Azure HDInsight service as of October 22, 2014

In fact, this last quarter, some of the most strategic announcements were all around our application platform. At our partner conference, there was a significant amount of excitement with the tools that we announced like PowerApps and Power BI, Azure functions and Flow. These are tools that our developers and system integrators and solution partners will use in order to be able to customize applications around Azure. And so to me that’s another huge advantage and a competitive differentiation for us.

1.1 Core part of an overarching strategy

The move to the cloud for our customers and for us is not just about a new way of delivering the same value just as a SaaS service. It’s really the transformation from having applications that are silos to becoming more services in the cloud where you can reason about the activity and the data underneath these services to benefit the customers who are using these services. So that’s what this notion of a graph [by Microsoft Graph] represents.

So when somebody moves to Office 365, their graph [by Microsoft Graph], their people, their relationships with other people inside the organization, their work artifacts all move to the cloud. You can connect them with all the business process data that’s in Dynamics 365, but not just in Dynamics 365 but all the applications in AppSource because business process will always be a much more fragmented market as opposed to just one market share leader by industry, by vertical, by country. And so that’s our strategy there.

And now the professional cloud or the professional network helps usage across all of that professional usage. Whether it’s in Office 365 or whether you’re a salesperson using any application related to sales, you want your professional network there. Of course, it’s relevant in recruiting, it’s relevant in training, it’s relevant in marketing. So that’s really our strategy with LinkedIn as the professional network meeting the professional cloud. And these are all part of one overarching strategy, and ultimately it’s about adding value to customers.
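Since the “graph” idea carries so much weight in these remarks, here is a minimal sketch of what “reasoning about the activity and the data underneath these services” looks like to a developer: the Microsoft Graph REST API exposes the signed-in user, the people they work with, and their work artifacts behind one endpoint. Acquiring the OAuth 2.0 bearer token from Azure AD is omitted and assumed, and some of these endpoints were still in preview in 2016.

```python
# Minimal sketch: reading a user's "graph" via the Microsoft Graph REST API.
# ACCESS_TOKEN is a placeholder; obtaining it (OAuth 2.0 / Azure AD) is assumed.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
ACCESS_TOKEN = "<oauth2-bearer-token>"  # placeholder
HEADERS = {"Authorization": f"Bearer {ACCESS_TOKEN}"}


def get(path: str) -> dict:
    """GET a Graph resource and return the parsed JSON."""
    response = requests.get(f"{GRAPH}/{path}", headers=HEADERS)
    response.raise_for_status()
    return response.json()


me = get("me")                         # the signed-in user's profile
colleagues = get("me/people")          # people the user interacts with most
recent_files = get("me/drive/recent")  # recently used work artifacts

print(me.get("displayName"))
print(len(colleagues.get("value", [])), "relevant people")
print(len(recent_files.get("value", [])), "recent documents")
```

The point of the sketch is the strategy Nadella describes: people, relationships, and work artifacts become queryable resources under a single API surface once a tenant moves to Office 365, rather than data locked inside siloed applications.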

1.2 Digital transformation both for us and our partnerships with customers

This past year was a pivotal one in both our transformation and in our partnerships with customers who are also driving their own digital transformation. Our progress is best captured in the results of our three ambitions, starting with Productivity and Business Process. In a world of infinite information but finite attention and time, we aim to change the nature of work with digital technology. In pursuit of this ambition, we continue to add value to our products, grow usage, and increase our addressable market. Along these lines, let me start with Office 365 and then move to Dynamics 365.

In the last quarter, we advanced our collaboration tools. We launched Microsoft Planner, which helps teams manage operations, as well as Skype Meetings, which is aimed at helping small businesses collaborate. In June, we further strengthened our security value proposition with the release of Advanced Security Management.

Lastly, we continue to add intelligence and machine learning to Office to help people automate their tasks and glean insights from data. These advancements helped to drive increased usage across enterprises, small and medium businesses, and consumers. In the enterprise, Office 365 Commercial seats grew 45% year over year, and revenue grew 59% in constant currency. Also 70% of our Office Enterprise agreement renewals are in the cloud. Innovative companies like Facebook, Hershey’s, Discovery Communications, and Cushman & Wakefield all adopted Office 365 and now see how transformative this service can be for their own business.

We are enthusiastic about the early feedback and growth opportunity from companies using our newly released Office 365 E5, which includes powerful security controls, advanced analytics, and cloud voice. These customers tell us that they love the simplification that comes with standardizing across all of our productivity workloads.

We will continue to grow our install base and drive premium mix through offers like Office 365 E5, but they’re very, very early days of E5. And E5 value proposition across all three of the areas, whether it’s cloud voice or analytics or security are all three massive areas for us. And I would say if anything, the initial data from our customers around security is gaining a lot of traction. But at the same time, one of the things that customers are looking for is making an enterprise-wide architectural decision across all of the workloads.

We see momentum in small and medium businesses, with a growing number of partners selling Office 365, now up to nearly 90,000, a 25% increase year over year. We continue to grab share, adding over 50,000 customers each month for 28 consecutive months.

We also see momentum amongst consumers, with now more than 23 million Office 365 subscribers. Across segments, customers increasingly experience the power of Office on their iOS and Android mobile devices. In fact, we now have more than 50 million iOS and Android monthly active devices, up more than four times over last year.

Now let’s talk about progress with the other pillar of this ambition, Dynamics 365. We are removing any impedance that exists between productivity, collaboration, and business process. This month we took a major step forward with the introduction of Microsoft Dynamics 365 and Microsoft AppSource. Dynamics 365 provides business users with purpose-built SaaS applications. These applications have intelligence built in. They integrate deeply with communications and collaboration capabilities of Office 365.

Dynamics 365 along with AppSource and our rich application platform introduces a disruptive and customer-centric business model so customers can build what they want and use just the capabilities they need. The launch of Dynamics 365 builds on the momentum we’re already seeing in this business. Customers around the globe are harnessing the power of Dynamics in their own transformation, including 24 Hour Fitness and AccuWeather. Overall, Dynamics now has nearly 10 million monthly paid seats, up more than 20% year over year, and Q4 billings grew more than 20% year over year.

Overall, Business Processes represent an enormous addressable market, projected to be more than $100 billion by 2020. It’s a market we are increasingly focused on, and I believe we are poised with both Dynamics 365 and Microsoft AppSource to grow and drive opportunity for our partners.

Across Office 365 and Dynamics 365, developers increasingly see the opportunity to build innovative apps and experiences with the Microsoft Graph, and we now have over 27,000 apps connected to it. Microsoft AppSource will be a new way for developers to offer their services and reach customers worldwide.

Lastly, with Office 365 and Dynamics 365, we have the opportunity to connect the world’s professional cloud and the world’s professional network with our pending LinkedIn deal. Overall, the Microsoft Cloud is winning significant customer support. With more than $12 billion in Commercial Cloud annualized revenue run rate, we are on track to achieve our goal of $20 billion in fiscal year 2018. Also, nearly 60% of the Fortune 500 companies have at least three of our cloud offerings. And we continue to grow our annuity mix of our business. In fact, commercial annuity mix increased year over year to 83%.
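As a back-of-the-envelope check of that trajectory (my own arithmetic, not from the transcript): growing from the roughly $12 billion annualized run rate in mid-2016 to $20 billion by fiscal year 2018, i.e. over about two years, requires a compound annual growth rate of only

$$\left(\frac{20}{12}\right)^{1/2} - 1 \approx 0.29,$$

about 29% per year, comfortably below the triple-digit Azure growth and the 59% constant-currency Office 365 commercial revenue growth quoted above, which is why the goal reads as credible rather than aspirational.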

2. Significant differentiation vs. Amazon AWS in Intelligent Cloud 

For initial and additional details, see my earlier posts:
– Windows Azure becoming an unbeatable offering on the cloud computing market as of June 28, 2013
– Microsoft partners empowered with ‘cloud first’, high-value and next-gen experiences for big data, enterprise social, and mobility on wide variety of Windows devices and Windows Server + Windows Azure + Visual Studio as the platform as of July 10, 2013

– 4. Microsoft products for the Cloud OS [‘Experiencing the Cloud’, as of Dec 18, 2013, but published only on Feb 14, 2014] (was separated from the next “half-bakedness” post because of its length)
– 4.5. Microsoft talking about Cloud OS and private clouds: starting with Ray Ozzie in November 2009 [‘Experiencing the Cloud’, as of Dec 18, 2013, but published only on Feb 14, 2014] (was separated from the next “half-bakedness” post because of its length)
– Microsoft’s half-baked cloud computing strategy (H1’FY14) as of February 17, 2014. Note that this “half-bakedness” ended, as documented in Microsoft is ready to become a dominant force in cloud computing with superior cloud offerings, a Windows ecosystem under complete renewal, first signs of Surface-Lumia-Xbox successes on the market, and strong interest in technology partnerships by other industry leaders as of October 24, 2015
– Microsoft is transitioning to a world with more usage and more software driven value add (rather than the old device driven world) in mobility and the cloud, the latter also helping to grow the server business well above its peers as of April 25, 2014
– Microsoft BUILD 2014 Day 2: “rebranding” to Microsoft Azure and moving toward a comprehensive set of fully-integrated backend services as of April 27, 2014
– Scott Guthrie about changes under Nadella, the competition with Amazon, and what differentiates Microsoft’s cloud products as of October 2, 2014
– Sam Guckenheimer on Microsoft Developer Division’s Journey to Cloud Cadence as of October 19, 2014
– Microsoft Azure: Marketable machine learning components capability for “a new data science economy”, and real-time analytics for Azure HDInsight service as of October 22, 2014
– Microsoft Cloud state-of-the-art: Hyper-scale Azure with host SDN — IaaS 2.0 — Hybrid flexibility and freedom as of July 11, 2015
– Microsoft’s first quarter proving its ability to become a dominant force in cloud computing with superior cloud offerings as of January 27, 2015
– DataStax: a fully distributed and highly secure transactional database platform that is “always on” as of February 3, 2016
– Microsoft chairman: The transition to a subscription-based cloud business isn’t fast enough. Revamp the sales force for cloud-based selling as of June 6, 2016

Cloud Growth Helps Microsoft Beat Street in Q4 from TheStreet as of July 19, 2016 

… [0:34] and Microsoft’s Enterprise Mobility [Suite]
customers nearly doubled YoY to 33,000. [0:40] …

Note that the Q1FY16 report said that “Enterprise Mobility [Suite] customers more than doubled year-over-year to over 20,000, and the installed base grew nearly 6x year-over-year”. Enterprise Mobility Suite (EMS) is a service available in the CSP (Cloud Solution Provider) program along with Windows Intune, Office 365, Azure and CRM Online. The reason for that very impressive growth was given by Satya Nadella in the much earlier Q2FY15 report as:

Microsoft Enterprise Mobility Suite is one key product innovation that I would like to highlight, given the growth and uniqueness of our offering. Microsoft offers a comprehensive solution that brings together mobile device management, mobile application management, hybrid identity management and data protection into a unified offering via EMS.

Office 365 now includes new app experiences on all phones and tablets for mobile productivity. Further, we have released completely new scenarios. This includes Office Sway for visualizing and sharing ideas; Delve, to help search and discover content; Office 365 Groups to make it easier to collaborate; and Office 365 Video for secure media streaming for businesses.

Finally, we continue to invest in enterprise value by integrating MDM and the Enterprise Mobility Suite into Office 365; new encryption technologies and compliance certifications; and new eDiscovery capabilities in Exchange.

Overall at the highest level, our strategy here is to make sure that the Microsoft Services i.e. cloud services be it Azure, Office 365, CRM Online or Enterprise Mobility Suite are covering all the devices out there in the marketplace. So that, that way we maximize the opportunity we have for each of these subscription and capacity based services.

2.1 Enterprise cloud leadership

Now let’s get into the specifics of the Intelligent Cloud, an area of massive opportunity, as we are clearly one of the two enterprise cloud leaders. Companies looking to digitally transform need a trusted cloud partner and turn to Microsoft. As a result, Azure revenue and usage again grew by more than 100% this quarter. We see customers choose Microsoft for three reasons. They want a cloud provider that offers solutions that reflect the realities of today’s world and their enterprise-grade needs. They want higher level services to drive digital transformation, and they want a cloud open to developers of all types. Let me expand on each.

To start, a wide variety of customers turn to Azure because of their specific real-world needs. Multinationals choose us because we are the only hybrid and hyperscale cloud spanning multiple jurisdictions. We cover more countries and regions than any other cloud provider, from North America to Asia to Europe to Latin America. Our cloud respects data sovereignty and makes it possible for an enterprise application to work across these regions and jurisdictions. More than 80% of the world’s largest banks are Azure customers because of our leadership support for regulatory requirements, advanced security, and commitment to privacy. Large ISVs like SAP and Citrix as well as startups like Sprinklr also choose Azure because of our global reach and a broad set of platform services. Last week GE announced it will adopt our cloud for its IoT approach.

Next, Azure customers also value our unique higher-level services. Now at 33,000, we nearly doubled in one year the number of companies worldwide that have selected our Enterprise Mobility Solutions. The Dow Chemical Company leverages EMS along with Azure, Office 365, and Dynamics to give its thousands of employees secure real-time access to data and apps from anywhere.

Just yesterday, we announced Boeing will use Azure, our IoT suite, and Cortana Intelligence to drive digital transformation in commercial aviation, with connected airline systems optimization, predictive maintenance, and much more. This builds on great momentum in IoT, including our work with Rolls-Royce, Schneider Electric, and others.

This is great progress, but our ambitions are set even higher. Our Intelligent Cloud also enables cognitive services. Cortana Intelligence Suite offers machine learning capabilities and advanced predictive analytics. Customers like Jabil Circuit, Fruit of the Loom, Land O’Lakes, and Liebherr already realize the benefits of these new capabilities.

Lastly, central to our Intelligent Cloud ambition is providing developers with the tools and capabilities they need to build apps and services for the platforms and devices of their choice. We have the best support for what I would say is the most open platform for all developers. Not only is .NET first class but Linux is first class, Java is first class. The new Azure Container service cuts across both containers running on Windows, running across Linux. So again, it speaks to the enterprise reality. .NET Core 1.0 for open source and our ongoing work with companies such as Red Hat, Docker, and Mesosphere also reflects significant progress on this front. We continue to see traction from open source, with nearly a third of customer virtual machines on Azure running Linux.

So those would be the places where we are fairly differentiated, and that’s what you see us gaining both for enterprise customers and ISVs.

On the server side, premium server revenue grew double digits in constant currency year over year. The new SQL Server 2016 helps us expand into new markets with built-in advanced analytics and unparalleled performance. More than 15,000 customers, including over 50% of the Fortune 500, have registered for the private preview of SQL Server for Linux. And we’re not slowing down. We will launch Windows Server 2016 and System Center 2016 later this year.

2.2 Every customer is also an ISV

One of the phenomena now is that pretty much anyone who is a customer of Azure is also in some form an ISV, and that’s no longer just limited to people who are “in the classic tech industry” or the software business. So every customer who starts off consuming Azure is also turning what is their IP in most cases into an ISV solution, which ultimately will even participate in AppSource. So at least the vision that we have is that every customer is a digital company that will have a digital IP component to it, and that we want to be able to partner with them in pretty unique ways.

That’s the same case with GE. It’s the same case with Boeing. It’s the same case with Schneider Electric or ABB or any one of the customers we are working with because they all are taking some of their assets and converting them into SaaS applications on Azure. And that’s something that we will in fact have distribution agreements with.

And AppSource is a pretty major announcement for us because we essentially created for SaaS applications and infrastructure applications a way to distribute their applications through us and our channel. And I think it makes in fact our cloud more attractive to many of them because of that. So we look – I think going forward, you’ll look to see – or you’ll see us do much more of this with many other customers of ours.

2.3 Hyperscale-plus-hybrid approach with annuity focus enabling cloud-led conversation with customers

The focus for us is in what I describe as this hyperscale-plus-hybrid approach when you think about the current approach, which is pretty unique to us. Overall, I believe this hyperscale plus hybrid architecturally helps us a lot with enterprise customers because we meet them where their realities are today and also the digital transformation needs going forward, so that’s one massive advantage we have.

And the way we track progress is to see how is our annuity growth of our server business, and how is our cloud growth. And if you look at this last quarter, our annuity grew double digits and our cloud grew triple digits. And that’s a pretty healthy growth rate, and that’s something that by design both in terms of the technical architecture as well as the traction we have in the marketplace and our sales efforts and so on are playing out well, and we are very bullish about that going forward.

The Transactional business is much more volatile because of the macro environment, IT budgets, and also the secular shift to the cloud. The question again that gets asked is about the cannibalization. But if you look at Boeing or you look at any of the other examples that I talk about when it comes to the cloud, our servers never did what these customers are now doing in our cloud. So at a fundamental long-term secular basis, we have new growth, new workloads, and that’s what we are focused on, and that’s a much bigger addressable market than anything our Transactional Server business had in the past.

[Amy E. Hood – Chief Financial Officer & Executive Vice President:]
The first thing really that I think Satya and I both focus on every quarter, every month, is how much of our business are we continuing to shift to annuity and specifically to the cloud. We structure all of our motions at this company, from how we engineer to how we do our go-to-markets to how we think about sales engagement to how we do our investments, fundamentally toward that long-term structural transition in the market.

In terms of server products and services, I tend to think of it as the all-up growth. It’s really about growing the cloud, growing the hybrid, and then whatever happens in the Transactional business happens.

And so to your question on Transactional performance, there were some deals that didn’t get done in Q3 that got done in Q4, and there were some deals done in Q4 on the Office side with large companies that I’m thrilled by. But at the same time, we still will focus on those deals moving to the cloud over time. And so this volatility that we are going to see because of macro and because of budget constraints, especially on Transactional, we will focus on because we expect excellent execution and have accountability to do that in the field. But our first priority, every time, is to make sure we are focused on annuity growth and digital transformation at our company, which is best done through that motion.

In terms of the sales motion they are absolutely incented more towards cloud versus Transactional going into this year.

I do believe that every conversation that we’re having with customers is cloud-led. That cloud-led conversation and making a plan for customers to best change and transform their own business certainly is a far more in-depth one than on occasion is required by long-time Transactional purchasers, especially in Office, as an example, because what we’re talking about now is really pivoting your business for the long term.

And so I’m sure there are examples where that has elongated the sales cycle, for good reason. But I would generally point back and say most of these are driven at the structural level, which is – structurally over time, on-premises Transactional business will move to the cloud or to a hybrid structure through an annuity revenue stream.
[End of remarks by Amy E. Hood]

2.4 Meeting cloud needs of customers where they are

The position that we have taken is that we want to serve customers where they are and not assume very simplistically that the digital sovereignty needs of customers can be met out of a fewer-data-center approach. Because right now, given the secular trend to move to the cloud across all of the regulated industries across the globe, we think it’s wiser for us and our investors long term to be able to meet them where they are. And that’s what you see us do. We are the only cloud that operates in China under Chinese law, the only cloud that operates in Germany under German law. And these are very critical competitive advantages to us.

And so we will track that, and we will be very demand driven. So in this case we’re not taking these positions of which regions to open and where to open them well in advance of our demand. If anything, I think our cycle times have significantly come down. So it will be demand-driven, but I don’t want to essentially put a cap because if the opportunity arises, and for us it’s a high ROI decision to open a new region, we will do so.

3. Windows strategy to achieve progress in More Personal Computing

For initial and additional details, see my earlier posts:
– Windows Embedded is an enterprise business now, like the whole Windows business, with Handheld and Compact versions to lead in the overall Internet of Things market as well as of June 8, 2013
– How the device play will unfold in the new Microsoft organization? as of July 14, 2013
– With Android and forked Android smartphones as the industry standard Nokia relegated to a niche market status while Apple should radically alter its previous premium strategy for long term as of August 17, 2013
– Windows [inc. Phone] 8.x chances of becoming the alternative platform to iOS and Android: VERY SLIM as it is even more difficult for Microsoft now than any time before as of August 20, 2013
– Leading PC vendors of the past: Go enterprise or die! as of November 7, 2013
– Xamarin: C# developers of native “business” and “mobile workforce” applications now can easily work cross-platform, for Android and iOS clients as well as of November 15, 2013
– Microsoft is transitioning to a world with more usage and more software driven value add (rather than the old device driven world) in mobility and the cloud, the latter also helping to grow the server business well above its peers as of April 25, 2014
– Microsoft Surface Pro 3 is the ultimate tablet product from Microsoft. What the market response will be? as of May 21, 2014
– Windows 10 Technical Preview: Terry Myerson and Joe Belfiore on the future of Windows as of October 1, 2014
– The Era Of Sub-$90 Windows 8.1 Phones in U.S. as of October 3, 2014
– Windows 10 is here to help regain Microsoft’s leading position in ICT as of July 31, 2015
– Microsoft and partners to capitalize on Continuum for Phones instead of the exited Microsoft phone business as of June 5, 2016

We have increased Windows 10 monthly active devices and are now at more than 350 million. This is the fastest adoption rate of any Windows release. While we are proud of these results, given changes to our phone plan, we changed how we will assess progress. Going forward, we will track progress by regularly reporting the growth of Windows 10 monthly active devices in addition to progress on three aspects of our Windows strategy:

3.1 Deliver more value and innovation, particularly for enterprise customers

We continue to pursue our goal of moving people from needing Windows to choosing Windows to loving Windows. In two weeks, we will launch Windows 10 Anniversary Update, which takes a significant step forward in security. We are also extending Windows Hello to support apps and websites and delivering a range of new features like Windows Ink and updates to Microsoft Edge. We expect these advances will drive increased adoption of Windows 10, particularly in the enterprise, in the coming year. We already have strong traction, with over 96% of our enterprise customers piloting Windows 10.

3.2 Grow new monetization through services across our unified Windows platform

As we grow our install base and engagement, we generate more opportunity for Microsoft and our ecosystem. Bing profitability continues to grow, with greater than 40% of the search revenue in June from Windows 10 devices. Bing PC query share in the United States approached 22% this quarter, not including volume from AOL and Yahoo!. The Cortana search box has over 100 million monthly active users, with 8 billion questions asked to date.

We continue to drive growth in gaming by connecting fans on Xbox Live across Windows 10, iOS, and Android. Just this quarter we launched our Minecraft Realm subscription on Android and iOS. Overall engagement on Xbox Live is at record levels, with more than 49 million monthly active users, up 33% year over year. At E3 we announced our biggest lineup of exclusive games ever for Xbox One and Windows 10 PCs. And we announced Xbox Play Anywhere titles, where gamers can buy a game once and play it on both their Windows 10 PC and Xbox One. We also announced two new members of the Xbox One console family, the Xbox One S and Project Scorpio.

The Windows Store continues to grow, with new universal Windows apps like Bank of America, Roku, SiriusXM, Instagram, Facebook, Vine, Hulu, and popular PC games like Quantum Break.

3.3 Innovate in new device categories in partnership with our OEMs

Our hardware partners are embracing the new personal computing vision, with over 1,500 new devices designed to take advantage of Windows 10 innovations like Touch, Pen, Hello, and better performance and power efficiency.

Microsoft’s family of Surface devices continues to drive category growth, and we are reaching more commercial customers of all sizes with the support of our channel partners. We recently announced new Surface enterprise initiatives with IBM and Booz Allen Hamilton to enable more customer segments. Also in the past year, we grew our commercial Surface partner channel from over 150 to over 10,000.

Lastly this quarter, more and more developers and enterprise customers got to experience two entirely new device categories from Microsoft: Surface Hub and HoloLens. While we are still in the early days of both of these devices, we are seeing great traction with both enterprise customers and developers, making us optimistic about future growth.

Microsoft chairman: The transition to a subscription-based cloud business isn’t fast enough. Revamp the sales force for cloud-based selling.

See also my earlier posts:
– John W. Thompson, Chairman of the Board of Microsoft: the least recognized person in the radical two-men shakeup of the uppermost leadership, ‘Experiencing the Cloud’
– Satya Nadella on “Digital Work and Life Experiences” supported by “Cloud OS” and “Device OS and Hardware” platforms–all from Microsoft, ‘Experiencing the Cloud’, July 23, 2014

May 17, 2016: John Thompson: Microsoft Should Move Faster on Cloud Plan in an interview with Bloomberg’s Emily Chang on “Bloomberg West”

The focus is very, very good right now. We’re focused on cloud, on the hybrid model of the cloud. We’re focused on the application services we can deliver not just in the cloud but on multiple devices. If ever I would like to see something change, it’s more about pace. From my days at IBM [Thompson spent 28 years at IBM before becoming chief executive at Symantec] I can remember we never seemed to be running or moving fast enough. That is always the case in the established enterprise. While you believe that you’re moving fast, in fact you’re not moving as fast as a startup.

June 2, 2016: Microsoft Ramps Up Its Cloud Efforts, in which Bloomberg Intelligence’s Mandeep Singh reports on “Bloomberg Markets”

If you look at their segment revenue, 43% is from Windows and hardware devices. That part is the one where it is hard to come up with a cloud strategy to really migrate that segment to the cloud very quickly. The infrastructure side is 30%, that is taken care of, and Office is the other 30%, where they have a good mix. So it is really that 43% of revenue where they have to figure out how to accelerate the transition to the cloud.

Then Bloomberg’s June 2, 2016 article (written by Dina Bass) came out with the following verdict:

Microsoft Board Mulls Sales Force Revamp to Speed Shift to Cloud 

Board members at Microsoft Corp. are grappling with a growing concern: that the company’s traditional software business, which makes up the majority of its sales, could evaporate in a matter of years — and Chairman John Thompson is pushing for a more aggressive shift into newer cloud-based products.

Thompson said he and the board are pleased with a push by Chief Executive Officer Satya Nadella to make more money from software and services delivered over the internet, but want it to move much faster. They’re considering ideas like increasing spending, overhauling the sales force and managing partnerships differently to step up the pace.

The cloud growth isn’t merely nice to have — it’s critical against the backdrop of declining demand for what’s known as on-premise software programs, the more traditional approach that involves installing software on a company’s own computers and networks. No one knows exactly how quickly sales of those legacy offerings will drop off, Thompson said, but it’s “inevitable that part of our business will be under continued pressure.”

The board members’ concern was born from experience. Thompson recounts how fellow director Chuck Noski, a former chief financial officer of AT&T, watched the telecom carrier’s traditional wireline business evaporate in just three years as the world shifted to mobile. Now, Noski and Thompson are asking whether something similar could happen to Microsoft.

“What’s the likelihood that could happen with on-prem versus cloud? That in three years, we look up and it’s gone?” Thompson said in an interview, snapping his fingers to make the point.

Small, but Growing

Nadella has said the company is on track to make its forecast for $20 billion in annualized sales from commercial cloud products in fiscal 2018. Still, Thompson said, the cloud business could be even further along, and the software maker should have started its push much earlier. Commercial cloud services revenue has posted impressive growth rates — with Azure product sales rising more than 100 percent quarterly — but the total business contributed just $5.8 billion of Microsoft’s $93.6 billion in sales in the latest fiscal year.

Thompson praised the technology behind smaller cloud products, such as Power BI tools for business analysis and data visualization and the enterprise mobile management service, which delivers apps and data to various corporate devices. But the latter, for example, brings in $300 million a year — just a sliver of overall annual revenue, which will soon top $100 billion, Thompson said.

The board is examining whether Microsoft has invested enough in its complete cloud lineup, Thompson said. It’s not just about developing better cloud technology — it’s a question of how the company sells those products and its strategy for recruiting partners to resell Microsoft’s services and build their own offerings on top of them. Persuading partners to develop compatible applications is a strong point for cloud market leader Amazon.com Inc., he said.

Thompson declined to be specific about what the company might change in sales and partnerships, but he said the company may need to “re-imagine” those organizations. “The question is, should it be more?” he said. “If you believe we need to run harder, run faster, be less risk-averse as a mantra, the question is how much more do you do.”

Cloud Partnerships

Analysts say Microsoft should seek to develop a deeper bench of partners making software for Azure and consultants to install and manage those services for customers who need the help. Microsoft is working on this, but is behind Amazon Web Services, said Lydia Leong, an analyst at Gartner Inc.

“They are nowhere near at the same level of sophistication, and the Microsoft partners are mostly new to the Azure ecosystem, so they don’t know it as well,” she said. “If you’re a customer and you want to migrate to AWS, you have this massive army that can help you.”

In the sales force, Microsoft’s representatives need more experience in cloud deals — which are generally subscription-based rather than one-time purchases — and how they differ from traditional software contracts, said Matt McIlwain, managing director at Seattle’s Madrona Venture Partners. “They haven’t made enough of a transition to a cloud-based selling motion,” he said. “It’s still a work in progress.”

Microsoft declined to comment on the company’s cloud strategy or any changes to sales and partnerships for this story, and director Noski couldn’t be reached for comment.

One-Time Purchases

The company’s dependence on demand for traditional software was painfully apparent in its most recent quarterly report, when revenue was weighed down by weakness in its transactional business, or one-time purchases of software that customers store and run on their own PCs and networks. Chief Financial Officer Amy Hood in April said that lackluster transactional sales were likely to continue.

Microsoft’s two biggest cloud businesses are the Azure web-based service, which trails top provider Amazon but leads Google and International Business Machines Corp., and the Office 365 cloud versions of e-mail, collaboration software, word-processing and spreadsheet software. Microsoft’s key on-premise products include Windows Server and traditional versions of Office and the SQL database server.

Slumps like last quarter’s hurt even more amid the company’s shift to the cloud, which has brought a lot of changes to its financial reporting. For cloud deals, revenue is recognized over the term of the deal rather than providing an up-front boost. They’re also lower-margin businesses, squeezed by the cost of building and maintaining data centers to deliver the services. Microsoft’s gross margin dropped from 80 percent in fiscal 2010 to 65 percent in the year that ended June 30, 2015.

“This business is growing incredibly well, but the gross margin of that is substantially lower than their core products of the olden days,” said Anurag Rana, an analyst at Bloomberg Intelligence. “How low do they go?”

‘Different Model’ [of doing business for subscription-based software]

It’s jarring for some investors, but the other option is worse, said Thompson.

“That’s a very different model for Microsoft and one our investors are going to have to suck it up and embrace, because the alternative is don’t embrace the cloud and you wake up one day and you look just like — guess who?” Thompson doesn’t finish the sentence, but makes it clear he’s referring to IBM, the company where he spent more than 27 years, which he says is “not relevant anymore.” IBM declined to comment.

The pressure is good for Microsoft, Thompson said — pressure tends to result in change.

“You can re-imagine things when you’re stressed. It’s a lot easier to do it when you’re stressed because you feel compelled to do something,” Thompson said. “I see a lot of stress at Microsoft.”

The Dawn of the SoC 2.0 Era: The TSMC Perspective

From its companion post The Dawn of the SoC 2.0 Era: The ARM Perspective

[Slide: Cortex-A Roadmap Strategy – April 2015]

Source of the slide: ARM Cortex系列核心介绍 (Core ARM Cortex Series Introduction, 52RD, April 13, 2015)

Regarding TSMC itself, the April 8, 2015 conclusion in the TSMC Outlines 16nm, 10nm Plans article by EE Times is:

“It’s not completely clear who is ahead at 16/14 but I think TSMC is making a major commitment to trying to be ahead at 10,” Jones said. “If that happens and TSMC has closed the gap with Intel, the issue is then if TSMC’s 10 and Intel’s 10 are the same,” he said.

Background from the April 14, 2015 TSMC Symposium: “10nm is Ready for Design Starts at This Moment” article in Cadence Communities Blog:

The 10nm semiconductor process node is no longer in the distant future – it is here today, according to presenters at the recent TSMC 2015 Technology Symposium in San Jose, California. TSMC executives noted that EDA tools have been certified, most of the IP is ready or close to ready, and risk production is expected to begin in the fourth quarter of 2015.

Here are some more details about 10nm at TSMC as presented in talks by Dr. Cliff Hou, vice president of R&D at TSMC, and Dr. BJ Woo, vice president of business development at TSMC. At the TSMC Symposium, speakers also introduced two new process nodes, 16FFC and 28HPC+ (see blog post here).

According to Woo, TSMC is not only keeping up with Moore’s Law – it is running ahead of the law with its 10FF offering. “We have done a lot more aggressive scaling than Moore’s Law demands for our 10nm technology,” she said. A case in point is the fully functional 256Mb SRAM with a cell size that is approximately 50% smaller than the 16FF+ cell size. She called this an “exceptional shrink ratio” that goes beyond traditional scaling.

And it’s not just SRAM. The 10FF node, Woo said, can scale key pitches by more than 70%. Combine that with innovative layout, and 10nm can achieve almost 50% die size scaling compared to 16FF+. “And this is very, very aggressive,” she said.
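The arithmetic behind the “almost 50%” claim is worth spelling out (my gloss, reading “scale key pitches by more than 70%” as a linear shrink factor of roughly 0.7x): die area goes as the square of the linear dimension, so

$$A_{10\mathrm{FF}} \approx (0.7)^{2}\, A_{16\mathrm{FF+}} \approx 0.49\, A_{16\mathrm{FF+}},$$

i.e. about half the 16FF+ die size for the same circuit, consistent with the 2.1X logic density improvement quoted below.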

After noting that the 16FF+ already provides “clear performance leadership,” Woo said that 10FF offers a 22% performance gain over 16FF+ at the same power, or more than 40% power reduction at the same speed. This comparison is based on a TSMC internal ring oscillator benchmark circuit. For the Cortex-A57 test chip used to validate EDA tools, the result was a 19% speed increase at the same power, and a 38% power reduction at the same speed.

New features in 10FF include a unidirectional (1D) layout style and new local interconnect layer. These features help 10FF achieve a 2.1X logic density improvement over 16FF+, whereas normally TSMC gets about a 1.9X density boost for node migration, Woo said. In addition to the density improvement, the 1D Mx architecture can reduce CD (critical dimension) variation by 60%, she said.

And a further remarkable quote from the April 12, 2015 TSMC Symposium: New Low-Power Process, Expanded R&D Will Drive Vast Innovation: TSMC Executive article in the Cadence Communities Blog:

Hock Tan, CEO of Avago, described a symbiotic relationship between TSMC and his company that led to a super high-density switch for a networking customer, implemented in 16FF+. The switch has 96 ports, each running at 100 Gbps and drawing less than 2W. That enables, in a next-generation data center, the tripling of switch performance to more than 10 Tbps.

Moreover, according to the April 12, 2015 TSMC Symposium: New 16FFC and 28HPC+ Processes Target “Mainstream” Designers and Internet of Things (IoT) article from Cadence Communities Blog:

16FFC is a “compact” version of the 16nm FinFET+ (16FF+) process technology that is now in risk production at TSMC. It claims advantages in power, performance, and area compared to the existing 16FF+ process, along with easy migration from 16FF+. It can be used for ultra low-power IoT applications such as wearables, mobile, and consumer.

28HPC+ is an improved version of the 28HPC (High Performance Compact) process, which is itself a fairly recent development. Late last year 28HPC went into volume production, and it provides a 10% smaller die size and 30% power reduction compared to TSMC’s earlier 28LP process. 28HPC+ ups the ante by providing 15% faster speed at the same leakage, or 30-50% reduction in leakage at the same speed, compared to 28HPC.

TSMC also provided updates on other processes on its roadmap, which includes the following:

  • High Performance – 28HP, 28HPM, 20SoC, 16FF+
  • Mainstream – 28LP, 28HPC, 28HPC+, 16FFC
  • Ultra Low Power – 55ULP, 40ULP, 28ULP, 16FFC (16FFC is in both mainstream and low power categories)

In connection with that, remember the September 29, 2014 announcement:
TSMC Launches Ultra-Low Power Technology Platform for IoT and Wearable Device Applications

TSMC (TWSE: 2330, NYSE: TSM) today announced the foundry segment’s first and most comprehensive ultra-low power technology platform aimed at a wide range of applications for the rapidly evolving Internet of Things (IoT) and wearable device markets that require a wide spectrum of technologies to best serve these diverse applications. In this platform, TSMC offers multiple processes to provide significant power reduction benefits for IoT and wearable products and a comprehensive design ecosystem to accelerate time-to-market for customers.

TSMC’s ultra-low power process lineup expands from the existing 0.18-micron extremely low leakage (0.18eLL) and 90-nanometer ultra low leakage (90uLL) nodes, and 16-nanometer FinFET technology, to new offerings of 55-nanometer ultra-low power (55ULP), 40ULP and 28ULP, which support processing speeds of up to 1.2GHz. The wide spectrum of ultra-low power processes from 0.18-micron to 16-nanometer FinFET is ideally suited for a variety of smart and power-efficient applications in the IoT and wearable device markets. Radio frequency and embedded Flash memory capabilities are also available in 0.18um to 40nm ultra-low power technologies, enabling system level integration for smaller form factors as well as facilitating wireless connections among IoT products.

Compared with their previous low power generations, TSMC’s ultra-low power processes can further reduce operating voltages by 20% to 30% to lower both active power and standby power consumption and enable significant increases in battery life — by 2X to 10X — when much smaller batteries are demanded in IoT/wearable applications.
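The physics behind that claim (my gloss, not part of the announcement): active CMOS power depends quadratically on supply voltage, so a 20% to 30% voltage reduction alone cuts dynamic power to roughly half to two-thirds, with standby leakage falling even more steeply with voltage:

$$P_{\mathrm{dyn}} \approx C\,V_{dd}^{2}\,f \quad\Rightarrow\quad \frac{P_{\mathrm{dyn}}'}{P_{\mathrm{dyn}}} \approx (0.7\text{--}0.8)^{2} \approx 0.49\text{--}0.64.$$

Combined with aggressive duty-cycling of mostly-idle IoT workloads and correspondingly smaller batteries, that is how the claimed 2X to 10X battery-life improvements become plausible.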

“This is the first time in the industry that we offer a comprehensive platform to meet the demands and innovation for the versatile Internet of Things market where ultra-low power and ubiquitous connectivity are most critical,” said TSMC President and Co-CEO, Dr. Mark Liu. “Bringing such a wide spectrum of offerings to this emerging market demonstrates TSMC’s technology leadership and commitment to bring great value to our customers and enable design wins with competitive products.”

One valuable advantage offered by TSMC’s ultra-low power technology platform is that customers can leverage TSMC’s existing IP ecosystem through the Open Innovation Platform®. Designers can easily re-use IPs and libraries built on TSMC’s low-power processes for new ultra-low power designs to boost first-silicon success rates and to achieve fast time-to-market product introduction. Some early design engagements with customers using 55ULP, 40ULP and 28ULP nodes are scheduled in 2014 and risk productions are planned in 2015.

“TSMC’s new ultra-low power process technology not only reduces power for always-on devices, but enables the integration of radios and FLASH delivering a significant performance and efficiency gain for next-generation intelligent products,” said Dr. Dipesh Patel, executive vice president and general manager, physical design group, ARM. “Through a collaborative partnership that leverages the energy-efficient ARM® Cortex®-M and Cortex-A CPUs and TSMC’s new process technology platform, we can collectively deliver the ingredients for innovation that will drive the next wave of IoT, wearable, and other connected technologies.”

“Low power is the number one priority for Internet-of-Things and battery-operated mobile devices,” said Martin Lund, Senior Vice President and General Manager of the IP Group at Cadence. “TSMC’s new ULP technology platform coupled with Cadence’s low-power mixed-signal design flow and extensive IP portfolio will better meet the unique always-on, low-power requirements of IoT and other power sensitive devices worldwide.”

“CSR has an unequalled reputation in Bluetooth technology and has been instrumental in its progression, including helping to write the Bluetooth Smart standard that is meeting the demands of today’s rapidly evolving consumer electronics market,” said Joep van Beurden, CEO at CSR. “For many years, CSR has closely collaborated with TSMC, and we are pleased to demonstrate the results of that collaboration with the adoption of the 40ULP platform for our next generation of Bluetooth Smart devices including products for markets like smart home, lighting and wearables that are enabling the growth of the Internet of Things. Our solutions simplify complex customer challenges and help speed their time to market by allowing them to design and deliver breakthrough low power wireless connected products on these powerful new platforms.”

“The imaging SoC solutions of Fujitsu Semiconductor Limited bring the best balance between high imaging quality and low power consumption, to meet the significant demand from our customers and the electronics market,” said Tom Miyake, Corporate Vice President, at System LSI Company of Fujitsu Semiconductor Limited. “We welcome that TSMC is adding the 28ULP technology to its successful 28nm platform. We believe this technology will provide our SoCs with the key feature: low power consumption at low cost.”

"Nordic Semiconductor has been a pioneer and leader in ultra-low power wireless solutions since 2002, and with the launch of its nRF51 Series of Systems-on-Chip (SoCs) in 2012 the company established itself as a leading vendor of Bluetooth Smart wireless technology," said Svenn-Tore Larsen, CEO of Nordic Semiconductor. "We have been collaborating closely with TSMC on the selection of process technology for our upcoming nRF52 Series of ultra-low power RF SoCs. I am happy to announce that we have selected the TSMC 55ULP platform. This process is a key enabler for us to push the envelope on power consumption, performance and level of integration of the nRF52 Series to meet the future requirements of Wearable and Internet of Things applications."

“Built on TSMC’s Ultra-Low Power technology platform and comprehensive design ecosystem, Realtek’s Bluetooth Energy Efficient smart SoC, BEE, supports the latest Bluetooth 4.1 specification featuring Bluetooth Low Energy (BLE) and GATT-based profiles,” said Realtek Vice President and Spokesman, Yee-Wei Huang. “BEE’s power efficient architecture, low power RF, and embedded Flash are ideal both for the IoT and for wearable devices such as smart watches, sport wristbands, smart home automation, remote controls, beacon devices, and wireless charging devices.”

"Silicon Labs welcomes TSMC's ultra-low power initiative because it will enable a range of energy-friendly processing, sensing and connectivity technologies we are actively developing for the Internet of Things," said Tyson Tuttle, Chief Executive Officer, Silicon Labs. "We look forward to continuing our successful collaboration with TSMC to bring our solutions to market."

“Synopsys is fully aligned with TSMC on providing designers with a broad portfolio of high-quality IP for TSMC’s ultra-low power process technology and the Internet of Things applications,” said John Koeter, Vice President of Marketing for IP and Prototyping at Synopsys. “Our wide range of silicon-proven DesignWare® interface, embedded memory, logic library, processor, analog and subsystem IP solutions are already optimized to help designers meet the power, energy and area requirements of wearable device SoCs, enabling them to quickly deliver products to the market.”

As well as the ARM and Cadence Expand Collaboration for IoT and Wearable Device Applications Targeting TSMC’s Ultra-Low Power Technology Platform announcement of Sept 29, 2015:

ARM® and Cadence® today announced an expanded collaboration for IoT and wearable devices targeting TSMC’s ultra-low power technology platform. The collaboration will enable the rapid development of IoT and wearable devices by optimizing the system integration of ARM IP and Cadence’s integrated flow for mixed-signal design and verification, and their leading low-power design and verification flow.

The partnership will deliver reference designs and physical design knowledge to integrate ARM Cortex® processors, ARM CoreLink™ system IP, and ARM Artisan® physical IP along with RF/analog/mixed-signal IP and embedded flash in the Virtuoso®-VDI Mixed-Signal Open Access integrated flow for the new TSMC process technology offerings of 55ULP, 40ULP and 28ULP.

“TSMC’s new ULP technology platform is an important development in addressing the IoT’s low-power requirements,” stated Nimish Modi, senior vice president of Marketing and Business Development at Cadence. “Cadence’s low-power expertise and leadership in mixed-signal design and verification form the most complete solution for implementing IoT applications. These flows, optimized for ARM’s Cortex-M processors including the new Cortex-M7, will enable designers to develop and deliver new and creative IoT applications that take maximum advantage of ULP technologies.”

“The reduction in leakage of TSMC’s new ULP technology platform combined with the proven power-efficiency of Cortex-M processors will enable a vast range of devices to operate in ultra energy-constrained environments,” said Richard York, vice president of embedded segment marketing, ARM. “Our collaboration with Cadence enables designers to continue developing the most innovative IoT devices in the market.”

This new collaboration builds on existing multi-year programs to optimize performance, power and area (PPA) via Cadence’s digital, mixed-signal and verification flows and complementary IP alongside ARM Cortex-A processors and ARM POP™ IP targeting TSMC 40nm, 28nm, and 16nm FinFET process technologies. Similarly, the companies have been optimizing the solution based around the Cortex-M processors in mixed-signal SoCs targeting TSMC 65/55nm and larger geometry nodes. The joint Cortex-M7 Reference Methodology for TSMC 40LP is the latest example of this collaboration.

For the above keep in mind The TSMC Grand Alliance [TSMC, Dec 3, 2013]:

The TSMC Grand Alliance is one of the most powerful forces for innovation in the semiconductor industry, bringing together our customers, EDA partners, IP partners, and key equipment and materials suppliers at a new, higher level of collaboration.

The objectives of the TSMC Grand Alliance are straightforward: to help our customers, the alliance members and ourselves win business and stay competitive.

We know collaboration works. We have seen it in the great strides our customers and ecosystem members have made through the Open Innovation Platform®, where today there are 5,000 qualified IP macros and over 100 EDA tools that support our customers' innovation and help them attain maximum value from TSMC's technology.

Today the Open Innovation Platform is an unmatched design ecosystem and a key part of the Grand Alliance, which will prove much more powerful. Looking at R&D investment alone, we calculate that TSMC and ten of our customers invest more in R&D than the top two semiconductor IDMs combined.

Through the Grand Alliance TSMC will relentlessly pursue our mission and collaborate with customers and partners. We need each other to be competitive. We need each other to win. Such is the power of the Grand Alliance.

[Some more information is in the very end of this post]

A related overview was given in the Kicking off #ARMWearablesWK with an analysts view of the market post of November 17, 2014 on the ARM Connected Community blog by David Blaza:

Today, as we kick off ARM Wearables Week, we hear from Shane Walker of IHS, who is their Wearables and Medical market expert.

Shane's take on this market is that it's for real this time (there was a brief smartwatch wave a few years ago) and will continue to be a hot growth sector through 2015. One of the great benefits of talking with analysts like Shane is that they help you think through what's going on and bust a few myths that may have found their way into our thinking. For example, I asked Shane what the barriers to growth were, and he carefully and patiently pointed out that wearables are already growing at a 21% CAGR and will hit $12b in device sales this year (without services; more on that later in the week). So this is not an emerging or promising market; it's here and growing at an impressive rate. By 2019 Shane's estimate is that it will hit $33.5b in device sales, and services are increasingly going to factor into the wearables experience (Big Data is coming!).
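Shane's numbers are easy to cross-check. A quick, purely illustrative calculation shows the growth rate implied by his two endpoints:

```python
# Implied CAGR from the IHS wearables forecast: $12b (2014) to $33.5b (2019)
start, end, years = 12.0, 33.5, 5
cagr = (end / start) ** (1 / years) - 1
print(f"Implied device-sales CAGR: {cagr:.1%}")   # ~22.8%, near the 21% quoted
```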

Shane breaks the Wearables market down to 5 major categories:

  1. Healthcare and Medical
  2. Fitness and Wellness
  3. Infotainment
  4. Industrial
  5. Military

I'm glad he did this for me, because wearables are incredibly diverse, and this week you are going to see some category-defying products, such as smart jewelry. Where does that fit?

Below you can see a table chart that Shane was willing to share, showing his estimates for market size and units sold; the main takeaway for me is how much of this market is healthcare related. Also attached below are details on the services IHS offers in the wearables market, or you can find them here.

[Chart: World Market for Wearable Technology, Revenue by Application (IHS, November 2014)]

attached is: Wearable Technology Intelligence Service 2014.pdf  [IHS Technology, November 17, 2014]

Note the following table in it:
[Table: Wearable Technology Data Coverage Areas by IHS]

More information:
– A Guide to the $32b Wearables Market [IHS Technology, March 11, 2015]
– which has a free to download whitepaper:
Wearable Technology: The Small Revolution is Making Big Waves

Brief retrospective on the SoC 1.0 Era

[Chart: Shipments of TSMC Advanced Technologies, Q1 2009 to Q1 2015]

Detailed Background from TSMC's quarterly calls

Q1 2015:

Mark Liu – TSMC – President & Co-CEO
[update on new technology]

The continuous demand for more functionality and integration in smartphones drives more silicon content. We expect smartphones to continue to drive our growth in the next several years.

In the meantime, we see IoT presenting us with new growth opportunities. The proliferation of IoT will not only bring us growth in the sensor, connectivity and advanced packaging areas; the associated applications and services, such as big data analytics, will also further our growth in the computation space, including application processors, network processors, image processors, graphics processors, microcontrollers and other various processors. That is the long-term outlook.

I'll now update some of our 10-nanometer development progress. Our 10-nanometer technology development is progressing well, and our technology qualification remains on schedule for Q4 this year.

Recently we successfully achieved fully functional yield on our 256-megabit SRAM. Currently we have more than 10 customers fully engaged with us on 10-nanometer. We still expect the 10-nanometer volume ramp in the fourth quarter of 2016, contributing to billings in early 2017.

This technology adopts our third-generation FinFET transistor and achieves more than one generation of scaling. Its price is fully justified by its value for various applications, including application processors, baseband SoCs, network processors, CPUs and graphics processors. Its cost and price ratio will comply with our structural profitability considerations.

As for new technology development at TSMC, I'd like to start by updating you on our 7-nanometer development. We started our 7-nanometer technology development program early last year, and we have rolled out our 7-nanometer design and technology collaboration activity with several of our major customers. Our 7-nanometer technology development today is well in progress.

TSMC's 7-nanometer technology will leverage most of the tools used in 10-nanometer while achieving a new generation of technology value for our customers. The 7-nanometer risk production date is targeted at early 2017.

Now I would like to give you an update on EUV, where we have been making steady progress. Both of our development tools, two NXE:3300 systems, have been upgraded to a configuration of 80 watts of EUV power, with an average wafer throughput of a few hundred wafers per day. We continue to work with ASML to improve tool stability and availability, and we are also working with ASML and our partners on developing the EUV infrastructure, such as masks and resists.

Although today the process of record for both 10-nanometer and 7-nanometer is on immersion tools with innovative multiple-patterning techniques, we will continue to look for opportunities to further reduce wafer cost and simplify the process flow by inserting EUV layers into the process.

Now I'd like to give you an update on our recently announced ultra-low-power technologies. We offer the industry's most comprehensive ultra-low-power technology portfolio, ranging from 55-nanometer ULP, 40-nanometer ULP and 28-nanometer ULP to the recently announced 16 FFC, a compact version of 16 FinFET Plus, enabling continual reduction of operating voltage and power consumption. Today more than 30 product tape-outs are planned in 2015 from more than 25 customers.

The 55- and 40-nanometer ULP will be the most cost-effective solutions for low- to mid-performance wearable and IoT devices, while 28 ULP and 16 FFC will be the most power-efficient solutions for high-performance IoT applications. In particular, our 16 FFC offers ultra-low-power operation at a supply voltage of 0.55 volts, with higher performance than all of the FD-SOI technologies marketed today.
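To put the 0.55-volt figure in perspective, here is an illustrative comparison against an assumed 0.8-volt nominal supply (the baseline voltage is my assumption; no comparison point is stated above), again using the quadratic dependence of dynamic energy on voltage:

```python
# Dynamic energy per operation scales with Vdd^2 (first-order CMOS model)
v_nominal, v_ulp = 0.8, 0.55     # 0.8 V baseline is assumed, not a TSMC figure
ratio = (v_ulp / v_nominal) ** 2
print(f"Energy per operation at 0.55 V: {ratio:.0%} of the 0.8 V baseline")
# Prints ~47%: roughly half the switching energy, before any leakage savings.
```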

Lastly, I'll give you an update on our recent IoT specialty technology development. We have developed the world's first 1.0-micron-pixel-size 16-megapixel CMOS image sensor with a stacked image signal processor, which was announced in March by our customer for its next-generation smartphone. Secondly, we continue to drive the best low-resistance BCD [Bipolar-CMOS-DMOS for DC-to-DC converters; together with Ultra-High-Voltage (UHV) technology for AC-to-DC converters, these are the key to enabling monolithic integrated PMIC designs] technology roadmap, from 0.18 micron to 0.13 micron and from 8-inch to 12-inch production, for wireless charging and fast wired charging of mobile devices. We continue to extend our 0.13 BCD technology from consumer and industrial applications to automotive-grade electrical system control applications.

Finally, we recently started production of the foundry industry's first 40-nanometer embedded Flash technology for industrial applications, beginning in November last year. In March, this technology also passed automotive-grade qualification for engine control applications.

C.C. Wei – TSMC – President & Co-CEO

I will update you on the 28-nanometer, 20-nanometer and 16 FinFET status, and also on our InFO business.

First, 28-nanometer. This is the fifth year since TSMC’s 28-nanometer entered mass production. 28-nanometer has been a very large and successful node for us. Our market segment share at this node has held up well and is in the mid-70s this year. We expect this to continue in year 2016. In comparison, this is better than what we had in the 40-nanometer node.

The demand for 28-nanometer is expected to grow this year due to the growth of mid- and low-end smartphones, as well as second-wave segments, such as radio-frequency circuit products and Flash controllers, migrating into this node.

However, due to some customers' inventory adjustments, which we believe will only be short-term, the demand for 28-nanometer in the second quarter will be lower than in the previous quarter, resulting in a 28-nanometer capacity utilization rate in the high-80s range. But we expect the 28-nanometer utilization rate to recover soon and to be above 90% in the second half of this year.

While in mass production, we also continue to improve the performance of our technology. Last year we introduced 28-HPC, a compact version of 28-HPM, to help the 64-bit CPU conversion for the mid- to low-end market. This year we further improved 28-HPC into 28-HPC Plus. For comparison, 28-HPC Plus will have 18% lower power consumption at the same speed, or 15% faster speed at the same power.

As for the competitive position, we are confident that we will continue to lead in performance and yield. So far we do not see much effective High-K metal gate capacity at 28-nanometer outside TSMC. And since we have already shipped more than 3m 12-inch 28-nanometer wafers, the learning curve has given us an absolute advantage in cost.

Now let me move to our 20 SoC. TSMC remains the sole provider of a 20-nanometer process in the foundry industry. Our yield has been consistently good after a very successful ramp last year. But recently we have observed that customers' planned schedules for product migration from 20-nanometer to 16 FinFET have started sooner than we forecasted three months ago.

As a result, even though we will continue to grow the 20-nanometer business in the second quarter of this year, our earlier forecast of 20-nanometer contributing above 20% of total wafer revenue this year has to be revised down by a few points, to a level around the mid-teens. That being the case, we still forecast that revenue from 20-nanometer will more than double 2014's level.

Now 16 FinFET. The schedule for 16 FinFET high-volume production remains unchanged: we will begin ramping in the third quarter this year. The ramp rate appears to be faster than we forecasted three months ago, thanks to excellent yield learning that leverages our 20-nanometer experience, and to a faster migration from 20-nanometer to 16 FinFET.

In addition to good yield, our 16 FinFET device performance also meets all products' specs, thanks to very good transistor engineering. So we believe 16 FinFET will be a very long-lived node, due to its good performance and the right cost, very similar to our 28-nanometer node.

We are highly confident that our 16 FinFET is very competitive. As we have said repeatedly, combining 20-nanometer and 16-nanometer, we will have the largest foundry share in 2015. And looking at 16-nanometer alone, we can still say TSMC will have the largest 16- or 14-nanometer foundry share in 2016.

Now let me move to our InFO business update. The schedule to ramp up InFO in the second quarter of next year remains unchanged. We expect InFO to contribute more than $100m in quarterly revenue by the fourth quarter of next year, when it will be fully ramped.

Right now we are building a new facility for ramping up InFO in Longtan, a city very near Hsinchu, where our headquarters are. Today a small pilot line there is almost complete and ready for early engineering experiments. This pilot line will be expanded to accommodate the high-volume ramp in 2016.

Andrew Lu – Barclays – Analyst

… I think Mark, presenting at the Technology Symposium in San Jose, mentioned that 16 FinFET is about 10% better in performance than the competing technology. Can you elaborate on that 10%? If our die size is larger than our competitor's, how can we get 10% better performance?

Mark Liu – TSMC – President & Co-CEO

In the conference we talked about 16 FinFET Plus, our second-generation FinFET transistor, in which we improved transistor performance a great deal. According to our information, that transistor's speed, speed at fixed power, is higher than the competitor's by 10%. That's what I meant. … Because of the transistor structure, the transistor engineering.

Andrew Lu – Barclays – Analyst

Is that compared to the competitor's current solution or their next-generation solution? For example, LPE versus LPP, or something like that?
Mark Liu – TSMC – President & Co-CEO
The fastest one. The fastest.
Andrew Lu – Barclays – Analyst
Their best one?
Mark Liu – TSMC – President & Co-CEO
Yes.

Dan Heyler – BofA Merrill Lynch – Analyst

My second question is relating to 20-nanometer. Here you certainly have a lot of growth in 16, with customers taping out aggressively, especially next year. Given your high share at 28, how do you keep 28 full? You obviously have a lot of technology there. Customers will move forward.

So I'm wondering, could you elaborate on new areas that are actually creating new demand at 28, such that you can continue to grow 28 next year? And do you think you can grow? I think previously you said you could maybe hold it at current levels even with 16 growing. So maybe just revisit that question.

C.C. Wei – TSMC – President & Co-CEO

To answer the question: the high-end smartphones will move to 16 FinFET. However, the mid- and lower-end smartphones will stay on 28-nanometer because it is very cost-effective, and mid- and low-end smartphones continue to grow significantly. So that will keep demand for 28-nanometer very strong. In addition, we still have second-wave products, like the RF and Flash controllers I used as examples, moving into 28-nanometer.

So summing it up, I think 28-nanometer demand continues to grow while we move high-end smartphones to 16 FinFET.

Michael Chou – Deutsche Bank – Analyst

As Mark has highlighted your EUV program, does that imply you may consider using EUV in the second stage of your 10-nanometer ramp-up, potentially in 2018 or 2019?

Mark Liu – TSMC – President & Co-CEO

Yes, we always look for opportunities to insert EUV in both 10-nanometer and 7-nanometer. The EUV technology provides not only some cost benefit, but also simplifies the process: you can replace multiple layers with one layer, which helps yield improvement. So opportunities in both quality and cost always exist, so long as EUV's productivity reaches the threshold point.

And, as you noticed, on 10-nanometer our capacity build will be largely done in 2016 and 2017. So if EUV is inserted in 2018, it will be combined with some other tool upgrades, for example upgrades to 7-nanometer, with immersion tools replaced by EUV tools. At that node it will not be a fresh capacity build with EUV, because that is a little bit late in the schedule for 10-nanometer.

For 7-nanometer, of course, there will be a higher probability of adopting EUV, and the benefit will be bigger, because 7-nanometer has a lot of multiple-patterning, even quadruple-patterning, layers, so EUV can be more effective in reducing cost and improving yield. So that's our current status.

But today EUV is still in engineering mode. Productivity, as you heard, still has some gaps for practical insertion of the technology, so we are still working in that mode. Although we have seen one-day performance of up to 1,000 wafers per day, the average is still a few hundred, and we need to get to more than 1,000 before we can consider a schedule to put it into production.
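Dr. Liu's throughput numbers can be framed as a simple utilization calculation. The raw throughput and uptime figures below are assumptions for illustration, not disclosed values:

```python
# Rough EUV productivity model: wafers/day = wafers/hour * 24 h * availability
wph = 60                           # assumed raw throughput of an ~80 W tool
for availability in (0.3, 0.7):    # assumed uptime fractions
    print(f"{availability:.0%} availability: ~{wph * 24 * availability:.0f} wafers/day")
# ~430/day at 30% uptime matches the "few hundred" average Liu describes;
# sustained ~70% uptime would approach the >1,000/day he says production needs.
```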

Randy Abrams – Credit Suisse – Analyst

As you go to fourth quarter, how broad is the customer base? Is it a single key product or are you seeing broadening out of 16 FinFET as you ramp that in fourth quarter?

Mark Liu – TSMC – President & Co-CEO

… As for the second half, we think, first of all, the inventory adjustment will be largely complete towards the end of the second quarter.

We think the smartphone end market will still show healthy growth this year; therefore growth will resume in the second half. And, more importantly, our 16 FinFET technology will start to ramp in the second half, which will contribute a lot of growth, more than the 20-nanometer shipment reduction. So those are the two factors.

Roland Shu – Citigroup – Analyst

My first question: given the fast ramp of 16-nanometer, are we going to see a meaningful revenue contribution from 16 in 3Q?

C.C. Wei – TSMC – President & Co-CEO

We ramp up in the third quarter this year, but the process has many layers, plus about one month of back-end. So in 3Q we expect the revenue to be very minimal.

Bill Lu – Morgan Stanley – Analyst

This is a follow-up to Randy's question, but I'm going to go over some numbers with you first before I ask it. We did the math; I don't think these are exactly right, but over the last five years we've got IDMs at zero growth, fabless at 8%, but system houses above 20%, right? For system houses I'm excluding memory, just the system LSI, the logic portion. I think that might be slightly conservative.

Now that's a pretty big change, and I'm wondering how you think about it. If you look at TSMC addressing the system houses versus the fabless customers, if you look at, for example, your market share, or your margin for the system houses versus the fabless, how do you think about that?

Mark Liu – TSMC – President & Co-CEO

Yes. Indeed, in the past five years the system houses' foundry sourcing to us has had a much higher growth rate, as you quoted. But remember, that came from a very small base. Okay? We welcome system-house sourcing, because we consider them fabless too: fabless companies, companies without fabs, bringing business to us.

The margin does not necessarily have to do with what type of company is sourcing. It has to do with our value to that company, and also with the size of the business. If the business is bigger, of course, we can probably enjoy a slightly better price. So it depends on the size of the business, and less on whether it is a system company or a non-system company.

Steven Pelayo – HSBC – Analyst

For the last three years or so, TSMC has been growing at 20%, 30% year-on-year revenue growth rates, and the first quarter was 50% year on year. But to Bill's question there, it does look like, if I play around with your full-year guidance, you are heading for low single-digit year-on-year growth rates in the second half of the year. And if we exclude 16-nanometer, maybe it's flat to down. Is that the new normal? What are we talking now for growth rates for both the semi industry and the foundry market this year?

90 days ago you suggested the semi market was going to grow 5% this year, with foundries growing 12%. In light of your new guidance, and of what looks like very slight year-on-year growth in the second half of the year, what do you think that means for the overall industry?

Mark Liu – TSMC – President & Co-CEO

We have indeed adjusted our forecast of semiconductor growth this year down, from 5% earlier to 4% at this time. We think it's really due to the macroeconomic situation around the world today. Therefore the foundry growth rate is adjusted down too; we are looking at about the 10% range. So that's why we revised our view of current semiconductor growth.

Brett Simpson – Arete Research – Analyst

My question is on 10-nanometer. I know it's still 18 months away from ramp-up, but can you talk about how fast this ramp might scale relative to 20-nanometer or 28-nanometer?

And as you ramp up 10-nanometer for high-end smartphones, would you expect low-end smartphones to start migrating from 28 to 16 FinFET in 2017?
Elizabeth Sun – TSMC – Director of Corporate Communications
… Your question is: if we ramp 10-nanometer in the future, targeting the high-end smartphone, will the low-end smartphone be migrating from 28-nanometer to 16-nanometer?
Brett Simpson – Arete Research – Analyst
And just to add to that, Elizabeth: how quickly will 10-nanometer scale up relative to the ramp-up of 20-nanometer and 28? Will it be as fast?
Elizabeth Sun – TSMC – Director of Corporate Communications
So will the profile of the 10-nanometer ramp be steeper than the profile of the 20- or 28-nanometer ramps?

Mark Liu – TSMC – President & Co-CEO

Okay. The first part of the question is whether, as 10-nanometer ramps for the high-end smartphone, the mid/low end will move to 16. This is up to our customers' product portfolios. We definitely know a lot of customers are looking at using 28-nanometer for the low end. But the smartphone processor specification changes constantly. So what portion of those products will move to 16-nanometer? We think some portion definitely will, but how big a portion really depends on their product strategy.

On the 10-nanometer ramp, I wouldn't say it's bigger, but it is at least a similar scale of ramp to what we did on 16 and on 20.

Brett Simpson – Arete Research – Analyst

Great.

Thank you. And let me just have a follow-up here. There's been a lot of talk in the industry about one of your larger customers [Qualcomm] planning to introduce a new application processor on both Samsung's 14-nanometer process and your 16 FinFET, the same chip, later this year. And we haven't really seen a single chip taped out on two new processes at the same time before in the industry. So my question is: how does this really work between the two foundries? Does it mean that that one customer can adjust dynamically, month to month, how they allocate wafers between you and Samsung? Or how might this work?
Elizabeth Sun – TSMC – Director of Corporate Communications

… So your question is that there is a customer that appears to be working with two different foundries on the 14- and 16-nanometer nodes, and the products are about to arrive. You would like to understand how this customer will be allocating its production or orders between the two foundries, month by month. Is that your question?
Brett Simpson – Arete Research – Analyst
Yes, that’s right. Whether they can move around dynamically how they allocate wafers. That’s right.

C.C. Wei – TSMC – President & Co-CEO

Well, my answer is very typical: our 16 FinFET is really very competitive. And we do not know how that customer is going to allocate; I cannot even make any comment on that.

Gokul Hariharan – JPMorgan – Analyst

First of all, on 16-nanometer: since Dr. Wei mentioned that next year a lot of demand for entry-level to mid-end smartphones is still going to stay at 28-nanometer, could you talk about your visibility into second-wave demand for 16-nanometer?

What is the visibility that you have? Is it going to be really strong? Because you mentioned that a lot of the cost-sensitive customers would still stay on 28, at least for next year.

C.C. Wei – TSMC – President & Co-CEO

For 28-nanometer, I said that mid- to low-end smartphones will stay on 28-nanometer this year, and probably next year, because it is very cost-effective and performance-wise very good. For 16 FinFET, I think people will start to move with their product plans, and some of the mid-end smartphones will move to 16-nanometer. That's for sure.

In addition to that, we are also improving the 16 FinFET ultra-low-power version Mark just mentioned. That will have a lot of applications, and for every product, lower power consumption is an advantage.

And so that would be our second wave of 16 FinFET.

Dan Heyler – BofA Merrill Lynch – Analyst

… So on 16, this FinFET Compact which is being introduced, when would we expect to see that in volume production?

C.C. Wei – TSMC – President & Co-CEO

FFC? That will be ready next year, and we expect high-volume production to start probably two years later, in 2017, reaching high volume in 2018.

Dan Heyler – BofA Merrill Lynch – Analyst

Okay. So the cost-down FinFET version for mid-end phones that you alluded to, plus low power, when is that available?

C.C. Wei – TSMC – President & Co-CEO

Probably in the second half of 2017.

Q4 2014:

Lora Ho – Taiwan Semiconductor Manufacturing Company Ltd – SVP and CFO

During the fourth quarter, the strong 20-nanometer ramp was mainly driven by communication-related applications. As a result, communication grew 18% sequentially and the revenue contribution increased from 59% in the third quarter to 65% in the fourth quarter. As for other applications, computer grew 7%, while consumer and industrial declined 21% and 11% respectively.

On a full-year basis, communication increased 39% and represented 59% of our revenue. The major contributing segments included baseband, application processors, image processors and display drivers. Another fast-growing application in 2014 was industrial and standard, which grew 30% year over year. The growth was mainly driven by increasing usage of power management ICs, near-field communications and audio codecs within mobile devices.

By technology, 20-nanometer revenue contribution started from a very small number in the second quarter, jumped to 9% in the third quarter and reached 21% in the fourth quarter. Such an unprecedented ramp could not have been achieved without seamless teamwork between our customer and the R&D and operations people at TSMC.

On a full-year basis, 20-nanometer accounted for about 9% of our full-year wafer revenue. Looking forward, we are confident that 20-nanometer will continue its momentum and contribute 20% of revenue for the whole of 2015.

Meanwhile, customer demand for our 28-nanometer wafers remained strong. Accordingly, these two advanced technologies, 20 nanometer plus 28 nanometer, represented 51% of our fourth-quarter total wafer revenue, a big increase from 43% in the third quarter.

Mark Liu – Taiwan Semiconductor Manufacturing Company Ltd – President and Co-CEO

Now I'll give you a few words on the 10-nanometer development update. Our 10-nanometer technology development is progressing, and our qualification schedule at the end of this year, 2015, remains the same. We are now working with customers on their product tape-outs. We expect volume production in 2017.

On new technology development at TSMC, beyond the 10-nanometer I just talked about, we are now working on our future-generation platform technology development, with separate dedicated R&D teams. These technologies will be offered in the 2017-to-2019 period. We are committed to pushing our technology envelope forward along the silicon scaling path.

In addition to silicon device scaling, we are also working on system scaling through advanced packaging, to increase system bandwidth and to decrease power consumption and device form factors. Our first-generation InFO technology has been qualified; currently we are qualifying customers' InFO products with 16-nanometer technology, and it will be ready for volume ramp next year, 2016. We are now working on our second-generation InFO technology to supplement the silicon scaling of the 10-nanometer generation.

On the other side, in addition to the recently announced 55ULP, 40ULP and 28ULP technologies for ultra-low-power applications such as wearables and IoT, we are also working on 16ULP technology development. The 16ULP design kit will be available in June this year. It will be suitable for both high-performance and ultra-low-power or ultra-low-voltage (less than 0.6 volt) applications.

C.C. Wei – Taiwan Semiconductor Manufacturing Company Ltd – President and Co-CEO

Good afternoon, ladies and gentlemen. I'll update you on the 28-, 20- and 16-nanometer status and on the InFO business.

First, on 28-nanometer. We started to ramp up 28-nanometer production in 2011, and up to now we have enjoyed big success in terms of good manufacturing results and, most importantly, strong demand from our customers. This year we expect that success to continue.

Let me give a little more detail, first on the demand side. The demand continues to grow, driven by the strong growth of mid- and low-end 4G smartphones as well as by the technology migration of some second-wave segments, such as radio frequency, hard disk drive, flash controller, connectivity and digital consumer products.

Second, on technology improvement: we continue our effort to enhance 28-nanometer technology by improving speed while reducing power consumption. 28HPC and 28 ultra-low-power technology are some examples.

So to conclude on the 28-nanometer status: we believe we can defend our segment share well because of our excellent performance, performance/cost ratio and superior defect-density results.

Let's talk about the 20 SoC business status. After successfully ramping to high volume last year, we expect to more than double our 20-nanometer business this year on high-end mobile device demand, generated by our customers' very competitive products. Our forecast that 20-nanometer will contribute 20% of total wafer revenue, as Lora just pointed out, remains unchanged.

Now on the 16-nanometer ramp-up. We expect more than 50 product tape-outs on 16-nanometer this year. High-volume production will start in the third quarter, with a meaningful revenue contribution starting in the fourth quarter of this year. To stress again what our Chairman already mentioned: combining 20-nanometer and 16-nanometer, we expect to enjoy an overwhelming market segment share.

Last, I will update you on the InFO business. The traction on InFO is strong: we have engaged with many customers, and a few of them are expected to ramp up in the second quarter of next year. Right now we are building a small pilot line at a new site to prepare for high-volume production next year. We also expect InFO technology to contribute sizeable revenue in 2016.

Dan Heyler – BofA Merrill Lynch – Analyst

… I guess, as we look at the pie chart on your slide, where communications dominates and computer is amazingly only 9% of your revenue, say 10 years ago that chart was much, much different, with computer being the biggest. As we look at computer opportunities going forward, I think to some extent there's maybe a sense of a little disappointment in that we don't see ARM in PCs yet, and we haven't really seen that ecosystem come through in the server business. And with big data being such an important trend going forward, with compute growing about 15% per year, I'm wondering what TSMC is doing, or what your view of that opportunity is, as a potential future growth driver.

Morris Chang – Taiwan Semiconductor Manufacturing Company Ltd – Chairman

Server is one of them, Mark. Well, there's actually IoT also, and just don't forget that mobile, we think, has a few more years to run yet. Really, TSMC's silicon content in the average phone is actually increasing, which is something not recognized by a lot of people, because everybody says that the weight, the gravity, is shifting to middle-level, lower-priced phones. But according to our data, and we have kept track of it for quite a long time, TSMC's silicon content in the average phone is actually increasing.

And look, I think the number we have is that by 2019 there will be 2b phones manufactured. Last year it was, what, 1.3b? I think, yes, 1.3b. 1.3b to 2b. The average TSMC silicon content per phone is increasing, and the number of phones is going up. So mobile is by no means finished; it's still a growth engine.

And then IoT, I think we talked about IoT before, and now we are certainly not oblivious to the server possibility. So why don’t I ask Mark to talk about the server and maybe C.C. will talk a little about the IoT.

Mark Liu – Taiwan Semiconductor Manufacturing Company Ltd – President and Co-CEO

Okay, Dan. I'll just respond to you on the server part. The Chairman talked about the area we're mostly focused on today, phones, and that will give us growth momentum in the next several years.

On servers, we work with product innovators around the world, and such a field will definitely not be lost from our radar screen, or theirs. TSMC has, over the years, developed our technology to suit high-performance computing.

From 65 and 40 through 28 to 16-nanometer, we have continuously improved our transistor performance. And today we believe our 16 FinFET Plus transistor performance is probably among the best in the world; it is well capable of handling these computing tasks.

And actually, even before servers, there are several supercomputers around the world, in the US and in Japan, already powered by our technology, doing weather forecasting and geo-exploration applications today. On servers, and on ARM in particular, we have had a very close partnership with ARM in recent years. ARM is a very innovative company; they produce CPU cores and new architectures every year, and they reach our leading-edge technology very early to design their leading-edge CPU cores. That will continue, and several of our customers are taking advantage of it.

Yes, in the past adoption has been slower than expected, because the software ecosystem has been slower to come. But a lot of the server companies and system companies are continuing to invest in this ecosystem, and the Linux-based ecosystem is coming on very strong too. So I think the trend will continue, and we, with our customers, will get into these segments in the near future.

C.C. Wei – Taiwan Semiconductor Manufacturing Company Ltd – President and Co-CEO

For the IoT, that is a big topic right now in the whole industry. All I want to say is that we are happy to share with you that, a long time ago, we already focused on our specialty technologies, such as CMOS image sensors, MEMS and embedded Flash. Today we add another new technology, ultra-low power, to them, and that will be the basis for the IoT technology necessary in the future. We believe that when the time comes and the IoT business becomes big, TSMC will be in a very good position to capture most of that business. That's what I can share with you. Thank you.

Randy Abrams – Credit Suisse – Analyst

… And the follow-up question is on profitability. Could you give a flavor of structural profitability for 2015, and some flavor for 20, how quickly that may get to corporate margins, and for 16, because it's an extension, whether that could be near corporate margins as it comes up. And could you comment on inventory at current levels: will it stay at these higher levels from the WIP you've been building, or might it come back down?

Lora Ho – Taiwan Semiconductor Manufacturing Company Ltd – SVP and CFO

Randy, you have multiple questions. I recall you asked about structural profitability; that's your first question, right? From what we can see now, we are quite confident we can maintain equal or slightly better structural profitability, i.e. standard gross margin, versus 2014.

For the 20-nanometer and 16-nanometer ramps, how will they affect corporate margin? As I said last July, it usually takes seven or eight quarters for any new leading-edge technology to get close to the corporate average. For 20-nanometer it will take eight quarters: 20-nanometer started to sell in the second quarter of 2014, and we expect that by the first quarter of 2016, that is, eight quarters, it will be at the corporate average level.

For 16, which we are about to mass produce, it will follow a similar trend. 16-nanometer is based on the features of 20-nanometer, so its margin will start out higher, but it will likewise take seven quarters to reach the corporate average. So, as we plan to mass produce 16 FinFET in the third quarter of 2015, by the first quarter of 2017 it will get close to the corporate average. Before that there will still be small dilutions: for this year, the dilution will be 2 to 3 percentage points, 3 to 4 percentage points in the second half, and very low in 2016.
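Lora Ho's rule of thumb, seven to eight quarters from ramp start to corporate-average gross margin, can be turned into a simple timeline check. This sketch just counts quarters under her stated assumptions:

```python
def corporate_average_quarter(start_year, start_q, quarters):
    """Quarter in which a node reaches corporate-average margin,
    counting the ramp-start quarter as quarter 1 (per Lora Ho's guidance)."""
    total = start_q + quarters - 1
    return start_year + (total - 1) // 4, (total - 1) % 4 + 1

print(corporate_average_quarter(2014, 2, 8))  # 20nm, ramped Q2'14 -> (2016, 1)
print(corporate_average_quarter(2015, 3, 7))  # 16nm, ramps Q3'15  -> (2017, 1)
```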

Donald Lu – Goldman Sachs – Analyst

… Chairman, about six months ago you gave us your estimate of TSMC's FinFET market share in 2015, 2016 and 2017. Has that changed?

Morris Chang – Taiwan Semiconductor Manufacturing Company Ltd – Chairman

… Donald's question concerns what I said; actually, I looked up my statement at that time, July 16 of last year. On the subject of 16-nanometer and 20-nanometer technology, I actually made three statements.

The first statement was that, because we started 16 a little late, our 16-nanometer market share in 2015 will be smaller than our largest competitor's.

The second statement I made was that we started 16 late because we wanted to do 20. So if you combine 20 and 16, our major competitor, who will be slightly ahead of us this year on 16, has very little 20, almost no 20 at all, and our combined share this year will be much higher than that competitor's.

The third statement I made was that in 2016 we will have a much larger share in 16-nanometer alone than that competitor.

All right. First I want to say that I, at this time, stand by those statements. In fact, I will now add a couple of statements. The fourth statement: when we have a larger share of 16 alone in 2016, the 16 market will also be much larger than this year's, 2015's. So, yes, we're slightly behind; we have a smaller market share in 2015 in a smaller market. Next year we will have a larger share, in fact a much larger share, of a much larger 16 market.

And the other statement I want to make is that I am, at this point, very, very comfortable with all the statements I made on July 16 last year and the statements I have added today. I'm very comfortable. I don't know whether I answered your question or not, Donald.

Donald Lu – Goldman Sachs – Analyst

Yes. How about 2017?

Morris Chang – Taiwan Semiconductor Manufacturing Company Ltd – Chairman

What? Well, in 2017 the share gain is going to continue. We're not going to lose the leadership in 16 market share once we recapture it in 2016; it's going to continue in 2017 and 2018. And both 20 and 16 are going to live longer than you might think now. 28, for that matter, will also live longer than you'd think.

Michael Chou – Deutsche Bank – Analyst

… Can we say your 16-nanometer market share in 2016 will be quite similar to your dominance at 28-nanometer, given that at 20-nanometer you are the only provider? So the apples-to-apples comparison should be 28-nanometer to 16-nanometer.
Elizabeth Sun – Taiwan Semiconductor Manufacturing Company Ltd – Director of Corporate Communications
So, will our market share in 16-nanometer in 2016 be the same as our market share at 28-nanometer back in, say, 2013, 2014?
Michael Chou – Deutsche Bank – Analyst

Yes.

Morris Chang – Taiwan Semiconductor Manufacturing Company Ltd – Chairman

Well, no, I don't think so, because at 28 we were virtually the sole source, and at 16 we already know we're not. There's at least one major competitor, and then there's another one that's just eager to get in. I don't mean that first competitor; I mean another one.

Brett Simpson – Arete Research – Analyst

My question is around 28-nanometer. You're running a large capacity at 28-nanometer at the moment. Can you share with us your capacity plan for 28? As you migrate more business to 20-nanometer and below over the next couple of years, do you intend to convert 28-nanometer capacity to lower nodes, or do you think you can keep the existing 28-nanometer capacity running full going forward?
Elizabeth Sun – Taiwan Semiconductor Manufacturing Company Ltd – Director of Corporate Communications
All right. Let me repeat Brett's question so that people here can hear it better. Brett's question is: TSMC's 28-nanometer capacity is very large. As our technology migrates to more advanced nodes, such as 20 and 16, over the next few years, what will be our plan for that 28-nanometer capacity? Will we still have enough demand to utilize it, or will we need to make some changes?

Morris Chang – Taiwan Semiconductor Manufacturing Company Ltd – Chairman

In every generation we worry a lot about the conversion loss we will suffer when we convert the existing capacity of that generation to the capacity of the next generation. So we do two things.

First, we try to minimize that conversion loss. And since we've been living with the problem for so long now, I think we're getting to be pretty good at it: the conversion loss from one generation to another is normally in the low to middle single digits.

The second thing we try to do, and I think we have actually been doing it even more successfully than the first, is to prolong the life of each generation. I was saying just five minutes ago that I think the life of 28-nanometer may be longer than a lot of people think, and I mean it. Actually, we're still making half-micron stuff. We try to prolong the life of every generation as we continue to migrate to advanced technologies, and 28 is certainly a generation whose life we want to prolong.

Bill Lu – Morgan Stanley – Analyst

My first question is on 28-nanometer. If I look at your capacity this year versus 2014, how much is the increase?

Morris Chang – Taiwan Semiconductor Manufacturing Company Ltd – Chairman

High teens. High teens actually.

Gokul Hariharan – JPMorgan – Analyst

… First, I had a question on the controversy about cost per transistor: whether the economics of Moore's law are slowing down. Your competitor Intel has put out a very emphatic statement saying that through 7-nanometer they see those economics continuing at the same pace as before. But there has been a lot of noise from the fabless community in the last couple of years that at 20-nanometer or 16-nanometer there is a potential slowdown.

Could we have TSMC's view, now that you're pretty much ready to start 10-nanometer and are already thinking about 7? That's my first question.
Elizabeth Sun – Taiwan Semiconductor Manufacturing Company Ltd – Director of Corporate Communications
So, all right, let me repeat. Gokul, your question is mainly about cost per transistor. Some of the other players, I think you're referring to Intel, have commented that they see cost per transistor continuing to fall into 7-nanometer, so they can handle the economics of Moore's law; whereas, on the other hand, fabless companies have begun to complain about not seeing enough economics, starting with 20-nanometer. So what is TSMC's statement regarding this economics issue?

Mark Liu – Taiwan Semiconductor Manufacturing Company Ltd – President and Co-CEO

Let me answer this question. The cost per transistor continues to go down, mostly through scaling; everybody knows that, and I think nobody has disputed it. We have seen the cost per transistor going down at a constant rate, and, going forward, it will probably go down at a slightly slower rate. That's the argument. But it really depends on the company: some companies simply do not have the technological capabilities, and today only a few can keep developing technology further down the Moore's Law path. As for whether those costs can earn enough return, that of course has to do with how much the technology brings value to the product, where it can command the price. Today we see certain segments that will continue to need that type of system performance to get enough return. This is the reason we are committed to pushing system scaling.
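Mark Liu's argument can be written down explicitly. Cost per transistor is wafer cost divided by good transistors per wafer; scaling raises wafer cost, but as long as density (and yield) grow faster, the unit cost falls. The numbers below are made up purely to illustrate the mechanics:

```python
# Cost/transistor = wafer cost / (gross dies * yield * transistors per die)
def cost_per_transistor(wafer_cost, gross_dies, yield_rate, transistors_per_die):
    return wafer_cost / (gross_dies * yield_rate * transistors_per_die)

old_node = cost_per_transistor(5000, 500, 0.90, 1e9)   # illustrative numbers
new_node = cost_per_transistor(8000, 500, 0.85, 2e9)   # costlier wafer, 2x density
print(f"old: ${old_node:.2e}  new: ${new_node:.2e} per transistor")
# The new node still wins (~9.4e-9 vs ~1.1e-8) because density growth outpaces
# the wafer-cost increase; the gap narrows as wafer costs rise faster per node.
```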

Roland Shu – Citigroup – Analyst

Just a 10-nanometer question for C.C. You said we are expecting volume production of 10-nanometer in 2017, but I remember in the past two quarters our goal was to pull in 10-nanometer mass production by the end of 2016. So are we pushing out the 10-nanometer mass production schedule a little bit?

C.C. Wei – Taiwan Semiconductor Manufacturing Company Ltd – President and Co-CEO

Let me explain that. For 10-nanometer, the mask layer count is about 70 to 80, so you have to start in 2016 to have output in 2017. What I'm talking about for 2017 is when we start to have revenue.
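C.C. Wei's point is about cycle time. With roughly 70 to 80 mask layers, and an industry rule of thumb of about 1 to 1.5 days of fab cycle time per layer (my assumption, not a TSMC figure), wafers started late in 2016 only turn into billable output in 2017:

```python
# Fab cycle time estimated from mask-layer count (days/layer is an assumption)
for layers in (70, 80):
    for days_per_layer in (1.0, 1.5):
        print(f"{layers} layers x {days_per_layer} days = "
              f"{layers * days_per_layer:.0f} days in the fab")
# 70 to 120 days of wafer processing, plus the ~1 month of back-end Wei cited
# earlier for 16nm, pushes Q4 2016 wafer starts into 2017 revenue.
```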

Q3 2014:

Lora Ho – TSMC – SVP & CFO

By technology, after two years of meticulous preparation we began volume shipments of 20-nanometer wafers. The revenue contribution went up from 0% to 9% of third-quarter wafer revenue. This is the fastest and most successful ramp of a new technology in TSMC's history.

Mark Liu – TSMC – Co-CEO


Our 10-nanometer development is progressing according to plan. Currently we are working on early customer collaboration for product tape-outs in 4Q 2015. The risk production date remains targeted at the end of 2015.

Our goal is to enable our customers' production in 2016. To meet this goal, we are getting our 10-nanometer design ecosystem ready now. We have completed certification of over 35 EDA tools, using an ARM CPU core as the vehicle. In addition, with our IP partners we have started the IP validation process six months earlier than on previous nodes.

We are working with over 10 customers on their 10-nanometer product designs. The product plans show a wide range of applications, including application processors, baseband, CPUs, servers, graphics, network processors, FPGAs and game consoles. Our 10-nanometer will achieve industry-leading speed, power and gate density.

C.C. Wei – TSMC – Co-CEO


Next, I'll talk about the 16-nanometer ramp and competitive status. In 16-nanometer we have two versions: 16 FinFET and 16 FinFET Plus.

FinFET Plus has better performance and has been adopted by most of our customers. For 16 FinFET, we began risk production in November last year and passed all the reliability quals early this year. For FinFET Plus, we passed the first stage of qualification on October 7 and have since entered risk production. The full qualification, including technology and product qual, is expected to be completed next month.

Right now we have more than 1,000 engineers working on the ramp-up of FinFET Plus. On the yield learning side, progress is much better than our original plan. This is because 16-nanometer uses a similar process to 20 SoC, except for the transistor. And since 20 SoC has been in mass production with good yield, our 16 FinFET can leverage the yield learning from 20 SoC and enjoy good, smooth progress. So we are happy to say that 16-nanometer has achieved the best technology maturity, at the corresponding stage, of all TSMC's nodes so far.

In addition to the process technology, our 16 FinFET design ecosystem is also ready. It supports 43 EDA tools and more than 700 process design kits, with more than 100 IPs, all silicon-validated. We believe this is the biggest ecosystem in the industry today.

On the performance side, compared with 20 SoC, 16 FinFET is more than 40% faster at the same total power, or consumes less than 50% of the power at the same speed. Our data shows that in high-speed applications it can run up to 2.3 gigahertz, while for low-power applications it consumes as little as 75 milliwatts per core.

This kind of performance gives our customers a lot of flexibility to optimize their designs for different market applications. So far we expect close to 60 tape-outs by the end of next year.

In summary, because of the excellent progress in yield learning and readiness in manufacturing maturity, and to meet customers' demand, we plan to pull 16-nanometer volume production in to the end of Q2 or early Q3 of 2015. The yield performance and smooth progress of our 16 FinFET and FinFET Plus further validate our strategy of starting with 20 SoC and quickly following with 16 FinFET and FinFET Plus. We chose this sequence to maximize our market share in the 20-/16-nanometer generation.

Next, I'll talk about the 28-nanometer status. We had strong growth on 28-nanometer in the second quarter, and the business grew for another quarter, accounting for 34% of TSMC's wafer revenue in the third quarter. On the technology side, we continue our effort to improve yield and tighten the process corners, so that our customers can take advantage of these activities, shrink their die sizes and therefore reduce cost.

Let me give you an example. On 28LP, the polysilicon gate version, we now offer a variety of enhanced processes to achieve better performance at a very competitive cost, so that our customers can address the mid- to low-end smartphone market. In addition to 28LP, we also provide a cost-effective high-K metal gate version, 28HPC, for customers to further optimize performance and cost. Recently we added another 28-nanometer offering, called 28 Ultra Low Power, for ultra-low-power applications, obviously. We believe this 28ULP will help TSMC customers expand their business into the IoT area.

In summary, we expect our technology span at the 28-nanometer node to enhance TSMC's competitiveness and ensure a good market share. We also expect the strength of demand for our 28-nanometer to continue for many years to come. In response, we are preparing sufficient capacity to meet our customers' future demand.

Q2 2014:


Morris Chang – TSMC – Chairman

Now a few words on 20-nanometer and 16-nanometer progress. In the last two and a half to three years, 28-nanometer technology has driven our growth. In the next three years, 20- and 16-nanometer technologies are going to drive our growth: 28 in the last two and a half to three, 20 and 16 in the next three.

After two years of meticulous preparation, we began volume shipments of our 20-nanometer wafers in June. The steepness of our 20-nanometer ramp sets a record. We expect 20-nanometer to generate about 10% of our wafer revenue in the third quarter and more than 20% of our wafer revenue in the fourth quarter. And we expect the demand for 20-nanometer will remain strong and will continue to contribute more than 20% of our wafer revenue in 2015. It will reach 20% of our total wafer revenue in the fourth quarter of this year and it will be above 20% of our total wafer revenue next year.

The 16-nanometer development leverages the 20-SoC learning and is moving forward smoothly. Our 16-nanometer is more than competitive, combining performance, density and yield considerations. 16-nanometer applications cover a wide range, including baseband, application processors, consumer SoCs, GPUs, network processors, hard disk drives, FPGAs, servers and CPUs. Volume production of 16-nanometer is expected to begin in late 2015, and there will be a fast ramp-up in 2016. The ecosystem for 16-nanometer designs is complete and ready.

A few years ago, in order to take advantage of special market opportunities, we chose to develop 20-SoC first and then quickly follow with 16-nanometer. We chose this sequence to maximize our market share in the 20/16-nanometer generation. As the 20/16 foundry competition unfolds, we believe our decision has proven correct.

Number one, in 20-SoC, we believe we will enjoy overwhelmingly large share in 2014, 2015 and onwards.

Number two, in 16-nanometer, TSMC will have a smaller market share than a major competitor in 2015. But we’ll regain leading share in 2016, 2017 and onwards.

Number three, if you look at the combined 20 and 16 technologies, TSMC will have an overwhelming leading share every year from 2014 on.

Number four, in total foundry market share, after having jumped 4 percentage points in 2013, TSMC will again gain several percentage points in 2014. This is the total foundry market share covering all technologies. After having increased 4 percentage points last year, TSMC will gain another several percentage points this year.

Now a few words about 10-nanometer. The 10-nanometer development is progressing well. The 10-nanometer speed is 25% faster than the 16-nanometer. The power consumption is 45% less than 16-nanometer and the gate density is 2.2x that of the 16-nanometer. Power is 25% faster. Did I say power? I meant speed. Speed is 25% faster, power is 45% less, gate density 2.2 times more, all compared with 16-nanometer.
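Since the 16-nanometer figures were given relative to 20 SoC earlier in these calls, and the 10-nanometer figures above are relative to 16-nanometer, the quoted gains can be chained for a rough cross-generation view. Here is a sketch (my own arithmetic; vendor numbers rarely multiply this cleanly in real products):

```python
# Rough compounding of the node-to-node gains quoted in these calls.
speed_16_vs_20 = 1.40    # 16 FinFET: >40% faster than 20 SoC at the same power
speed_10_vs_16 = 1.25    # 10-nanometer: 25% faster than 16-nanometer
power_10_vs_16 = 0.55    # 10-nanometer: 45% less power than 16-nanometer
density_10_vs_16 = 2.2   # 10-nanometer: 2.2x the gate density of 16-nanometer

print(f"10nm vs. 20SoC speed at iso-power: ~{speed_16_vs_20 * speed_10_vs_16:.2f}x")
print(f"area for a fixed gate count, 10nm vs. 16nm: ~{1 / density_10_vs_16:.0%}")
```

The chained numbers (roughly 1.75x the 20 SoC speed, and about 45% of the 16-nanometer area for the same gate count) are only indicative, but they show why Chang calls 10-nanometer an aggressive scaling step.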

We work closely with our key customers to co-optimize our 10-nanometer process and design. We expect to have customer tape outs in the second half of 2015.

William Dong – UBS – Analyst

Good afternoon, Mr. Chairman. We keep talking about technology, so the question I want to ask is this: with all this rush to keep pushing the technology roadmap down to 16, 14 and 10 nanometer, what are your thoughts about what is driving this demand? As we move toward, for example, the Internet of Things, is there really sufficient demand to keep driving the technology down?

Morris Chang – TSMC – Chairman

Well, cost is very much a part of the equation. If the cost is low enough, the demand will increase, because we can see a lot of applications that are just waiting there. Of course I'm talking about mobile products, but I'm also talking about the Internet of Things, wearables and so on. The applications are just waiting there for better, faster-speed, lower-power and higher-density ICs. Cost is definitely in the equation.

So, yes, when you ask whether the demand will be there: if we can get the cost down to an acceptable level, demand will be there. And of course that's how things like EUV come into the question. Nobody has asked about that yet; we were actually prepared to answer with the same answer that we gave you last time, by the way: there is still a possibility of using EUV on just one layer in 10-nanometer. One layer in 10-nanometer; and 7-nanometer, I think, is of course an even better candidate.

Dan Heyler – BofA-Merrill Lynch – Analyst

Hopefully this question simplifies rather than complicates things. Just to make sure I understand this share-loss thing: basically, what you're saying is that the share loss at 16 is customers choosing to skip 20? Is that how I should think of it, or are any of these customers currently at 20 going to 16 next year, or is this all people who are choosing to skip 20?

Morris Chang – TSMC – Chairman

Well, first of all, I want to question the words share loss. I don't consider there is share loss, because, just like 32/28, we had zero share in 32 but then we were very successful in 28; the two really belong to the same generation. And 20 and 16 also belong to the same generation. Share loss means you start with something and then you lose it; it becomes less. Well, this year everybody has zero share, okay? I am just saying that on 16 we will start with a lower share than we did with 20 or 28, and then we'll get back to a high share in 2016. I'm just arguing with him, but he did have a question; what was that?

Dan Heyler – BofA-Merrill Lynch – Analyst

Or just simply: are the customers moving to 16 the ones that have currently been on 20, or are they the ones that have skipped it? The debate in the industry is whether to go straight to 16 and skip 20. So are these customers that have basically been at 28 and are skipping 20 and going straight to 14 at your competitor?

Morris Chang – TSMC – Chairman

Mainly because our customers wanted it sooner. We got in a little late, as I said; our customers wanted it sooner. So that's why we're starting with a lower share, and we'll catch up only a little later.

Michael Chou – Deutsche Bank – Analyst

Chairman, regarding the 16/20 nanometer, could we say your total market share in 16 and 20 nanometer will be similar to 28/32 for the corresponding period? Can we say that?


Morris Chang – TSMC – Chairman

The combined 20 — I just ran an analysis just a couple of weeks ago, so I know exactly the answer to your question. The combined 20/16 market share in the first two years of its existence, which is this year and next year — well, I guess I have to add in 2016 — the combined — our combined 20/16 share in 2014, 2015 and 2016 will still be greater than our combined share of 32 and 28 in 2012, 2013 and 2014.

Q1 2014:


Mark Liu – Taiwan Semiconductor Manufacturing Company Ltd – President & Co-CEO

Next I will cover the updates on 16 FinFET, 16 FinFET Plus and our 10 FinFET. First, we have two general offerings for customers: 16 FinFET and 16 FinFET Plus. 16 FinFET Plus offers a 15% speed improvement at the same total power compared to 16 FinFET. More importantly, 16 FinFET Plus offers a 30% total power reduction at the same speed compared to 16 FinFET.

Our 16 FinFET Plus matches the highest performance among all available 16-nanometer and 14-nanometer technologies in the market today. Compared to our own 20 SoC, 16 FinFET Plus offers a 40% speed improvement. The design rules of 16 FinFET and 16 FinFET Plus are the same, and IPs are compatible.

We will receive our first customer product tape-out this month. About 15 products are planned for 2014 and about another 45 in 2015. Volume production is planned for 2015. Since 95% of the tools for 16 and 20 are common, we will ramp them in the same gigafabs at TSMC. The 16 FinFET yield-learning curve is very steep today and has already caught up with 20 SoC. This is a unique advantage of TSMC's 16-nanometer.

As for 10 FinFET: it offers TSMC's third-generation FinFET transistor, designed to meet the power and performance requirements of mobile computing devices. 10 FinFET will offer a greater than 25% speed improvement at the same total power compared to 16 FinFET Plus. More importantly, 10 FinFET offers a greater than 45% total power reduction at the same speed compared to 16 FinFET Plus.

10 FinFET will offer a 2.2x density improvement over its previous generation, 16 FinFET Plus. Currently, 10 FinFET development progress is well on track, and risk production will be in 4Q 2015. Those are the key messages on the three items.

C.C. Wei – Taiwan Semiconductor Manufacturing Company Ltd – President & Co-CEO

…  I would like to take this opportunity to share two topics with you; namely, the 20 SoC ramp and TSMC's advanced assembly solutions for our customers. First, I will brief you on the status of the 20 SoC ramp.

Let me recap what we said at the last meeting here. We started 20 SoC production in January this year, and by the fourth quarter of this year 20 SoC will account for 20% of quarterly wafer revenue. For the whole year of 2014 we expect 20 SoC to be about 10% of our total wafer revenue. All these expectations remain the same today.

Now, there are some major achievements I would like to share with you. First, on ramping speed: 20 SoC is by far the fastest ramp in TSMC's history. Of course, this fast ramp is to meet customers' strong demand, and I believe this 20 SoC production at TSMC represents one of the largest mobilizations in semiconductor history. Let me share some numbers, so you can have a snapshot of this ramp.

In about one year's time we have built a manufacturing team of 4,600 engineers and 2,000 operators in two fabs: Fab 14 in Tainan and Fab 12 in Hsinchu. More impressively, in the same period close to one thousand engineers have been relocated among TSMC's fabs in Hsinchu, Taichung and Tainan. All of this was in preparation for the 20 SoC ramp-up. Mobilization of this magnitude, I believe, is not an easy job; moving people around like this shows our strength in manufacturing. This is not about moving tools, or moving just a handful of people: we are talking about moving engineers and operators among TSMC's fabs. In the meantime, we have installed more than 1,500 major tools for this 20 SoC ramp.

Of course, the fast ramp has been accomplished with very good device reliability and very good wafer defect density; without those, a fast ramp would make no sense. Now, how important is this 20 SoC ramp? We know that 28-nanometer provided the engine of TSMC's profitable growth in 2012 and 2013; similarly, we expect 20 SoC to provide the engine of TSMC's profitable growth in 2014 and 2015.

Now let me switch gears to advanced assembly technologies. The purpose of developing advanced assembly technology is to provide our customers better performance and lower power consumption, at a lower cost compared to previous assembly solutions. For example, we developed CoWoS to connect two or more dies together for very high performance and very low power consumption, and today CoWoS is already in small-volume production. However, the cost structure of CoWoS makes it suitable only for some very-high-performance applications and products. To address the cost-structure issue, and to serve very-high-volume mobile devices, we have developed a derivative technology called InFO, which stands for integrated fan-out.

InFO will have significantly lower cost compared to CoWoS and, at the same time, the same capability to connect multiple dies together, just as CoWoS does. Currently we are working with major customers to incorporate the InFO structure into their future products. We have already delivered many functional dies to our customers, and process optimization is ongoing.

In fact, we are very excited about TSMC's advanced assembly technology development, as we are building an innovative solution for our customers' products, which require high performance, lower power consumption and a very reasonable cost structure.

Michael Chou – Deutsche Bank – Analyst

I don’t know, C.C. Wei, could you give us more color on the advanced packaging you just mentioned. What’s the difference between this one and CoWoS?

C.C. Wei – Taiwan Semiconductor Manufacturing Company Ltd – President & Co-CEO

The difference between InFO and CoWoS is actually the geometry used to connect multiple dies together. In CoWoS we use very small geometry, actually 65-nanometer geometry, to connect the dies. In InFO we use a larger geometry, which is still confidential technical information. But the cost is much, much lower.

Brett Samson – Arete Research – Analyst

Just a quick question. Can you give us a sense, within the 28-nanometer node, of the split between poly-SiON and high-K, and how do you think this might trend through this year?

Elizabeth Sun – Taiwan Semiconductor Manufacturing Company Ltd – Director, Corporate Communications

So Brett's question is: what is the mix between poly-SiON, that is our 28LP, versus our high-K metal gate, and what is going to be the trend in that mix throughout this year?

Mark Liu – Taiwan Semiconductor Manufacturing Company Ltd – President & Co-CEO

Allow me to answer that. Our 28-nanometer high-K metal gate has three options: 28HP, 28HPM and 28HPC. This year these 28-nanometer high-K metal gate technologies will account for about 85% of overall 28-nanometer in terms of wafers.

Dan Heyler – Bank of America Merrill Lynch – Analyst

… I want to follow up on this InFO; this is quite interesting. Could you maybe elaborate a bit more on what exactly you are going to be attaching, i.e. which devices are we talking about? With CoWoS it was pretty much PLD [Programmable Logic Device, like Altera] companies and others, some baseband. So which devices are you attaching in the initial generation? The second part of the question is how many customers you expect to have in this area, because if you start handling lots of devices and lots of customers it gets really complicated, and you start to look more like an OSAT [Outsourced Semiconductor Assembly and Test]. So I wonder if this is going to be a pretty small group of high-volume products? And finally, are you actually doing the chip attach, or will you be doing only the wafer-level activity and working with the OSATs to do the actual chip attach?

C.C. Wei – Taiwan Semiconductor Manufacturing Company Ltd – President & Co-CEO

Dan, to answer your question: with InFO we are right now working on application processors together with memory dies; that's as much as I can say. We are working with mobile-product customers, and we expect very high volume, but not with many, many customers at the current stage. We are working on the wafer-level process, stacking dies, and for a couple of customers we are able to do the complete line all here.

Q4 2013:


Morris Chang – Taiwan Semiconductor Manufacturing Co., Ltd. – Chairman

Good afternoon, ladies and gentlemen. Today, our comments are scheduled as shown on the slide on your left. First, I'm very glad to have the opportunity to introduce our new top management team.

I’d first start with Lora, although I think everyone knows Lora well. Lora has a bachelor’s degree from Chengchi University, a master’s degree from National Taiwan University, both degrees in finance. She worked for Cyanamid, Wyse, Thomas & Betts and TI-Acer before she joined TSMC in 1999. And she has been TSMC’s CFO since 2003.

Next, Dr. C.C. Wei. C.C. has a bachelor's degree from Chiao Tung University and a Ph.D. from Yale University, both in electrical engineering. C.C. worked for TI, SGS and Chartered before joining TSMC in 1998. C.C. has been Senior VP of Operations, Senior VP of Business Development and Co-COO, and in the Co-COO job C.C. was successively responsible for R&D and Operations. Now C.C. is President and Co-CEO.

C.C. is 60 years old and I should add that Lora is 57 years old.

Mark Liu; Mark has a B.S. from National Taiwan University and a Ph.D. from Berkeley, both in electrical engineering and computer science. Mark worked for Intel and Bell Telephone Labs before joining TSMC in 1993. At TSMC he has been VP and Senior VP of Operations, and he was also a Co-COO; all the time he was Co-COO he was responsible for our sales, marketing and planning.

And now Mark and C.C. are Presidents and Co-CEOs of the Company. Mark is 59 years old.

C.C. Wei – Taiwan Semiconductor Manufacturing Co., Ltd. – President & Co-CEO
[about the technology aspects of TSMC’s growth engine]

Good afternoon, everybody. I am C.C. Wei, and I will give you an update on our 28-nanometer high-K metal gate version. Let me recap the history. We started 28-nanometer volume production in 2011, mainly on 28LP, the oxynitride version, and since then the business has continued to grow. Last year we tripled our 28-nanometer business versus 2012, and this year, 2014, the 28-nanometer business will continue to grow by at least another 20%, with all the increase coming from the 28-nanometer high-K metal gate version, which we named 28HPM.

Let me add more color to that. We expect more than 100 tape-outs from about 60 customers this year on 28HPM. Now you may ask: why are so many products being designed on this technology? The main reason I can give you is performance, the superior performance. For example, 28HPM compared with 28LP gains another 30% of speed at the same power consumption; or, at the same speed, it consumes 15% less power. And everybody knows that power consumption in mobile devices is very important. That's why we think we have very good business with 28HPM.

Furthermore, after 28HPM we also offer 28HPC, a low-cost version of 28HPM. 28HPC was developed to meet customers' demand to compete in the mid- to low-end smartphone market. We expect 28HPC to see very strong demand in the next two years. That's what we have.

Okay, let me give you some information on the competition, to explain why we are so confident in this 28-nanometer high-K metal gate business. If you remember, a long time ago we talked about gate-first and gate-last; still remember that terminology? All right. Simply put, the gate-last approach gives you better performance and better process control. As a result, all our customers using the gate-last technology enjoy higher performance than products designed with a different approach.

In addition, because of the better process control and TSMC's manufacturing excellence, we have a much better yield than our competitor, so our customers enjoy a lower die cost. That explains our confidence that 28-nanometer will continue to be a very good business for us.

Now, let me switch gears to 20-SoC; that's more exciting news I want to share with you. 20-SoC is a technology we developed to enable TSMC's customers to lead in the mobile-device market, and we believe it gives us very good business to capture this year and next year. So, what is the status of 20-SoC now? Two fabs, Fab 12 and Fab 14, have completed 20-SoC qualification. And as a matter of fact, we have started production: we are in high-volume production as we speak.

Let me add more information to that. First, more than $10 billion has been committed to build capacity. Second, we have more than 2,500 engineers and 1,500 operators in manufacturing right now doing 20-SoC volume production. The ramp rate will be the fastest in TSMC's history; from the ramp rate, you can get a hint of how big the business is.

Another fact to share with you: by the end of this year we will have dozens of tape-outs from about a dozen customers producing 20-SoC products. You may ask: good business, but what about the competition? With very strong competition, you cannot have too much confidence in the future. So let me talk about the competition.

I’m very confident that our 20SoC is the highest gate density in volume production at 20 nanometers node. And please remember that; highest gate density and a high volume production. I don’t see any company today can claim on this kind of production and with this kind of gate density at this time, nobody. And most of our competitors, to be frank with you, they’re not even into this game yet. So we are confident to have a good business that will contribute to TSMC’s revenue — wafer revenue by probably around 10% this year. And with that I conclude my presentation and thank you for your attendance.

Mark Liu – Taiwan Semiconductor Manufacturing Co., Ltd. – President & Co-CEO
[about TSMC’s competitiveness versus Intel and Samsung]

I will start this topic by updating you on the recent development status of our 16-FinFET technology. 16-FinFET has been a very fast-paced development effort at TSMC: we achieved the 16-FinFET risk-production milestone in November 2013, November last year, and this month we should pass the 1,000-hour, so-called technology qualification. So the technology is ready for customer product tape-outs.

Our 16-FinFET yield improvement has been ahead of plan, because we have been leveraging the yield learning of 20SoC; currently the 16-FinFET SRAM yield is already close to 20SoC's. With this status we are developing an enhanced-transistor version, 16-FinFET Plus, with a 15% performance improvement. It will be the highest-performance technology among all available 16- and 14-nanometer technologies in 2014. The above progress is well ahead of Samsung.

Let me comment on Intel's recent graph shown at their investor meeting, now showing on the screen. We usually do not comment on other companies' technology, but this graph talks about TSMC technology and, as the Chairman said, it has been misleading. To me it is erroneous, based on outdated data. So I would like to make the following rebuttal.

2013: Intel Is Committed to Press Ahead on Density – Enables a "Transistor Like" Lead in Density

January 14, 2014: Density Comparison by TSMC vs. Intel's 2013 statement at its Investor Meeting

On this view graph, the vertical axis is chip area on a log scale; basically, this compares chip-area reduction. The horizontal axis shows four technology generations: 32/28, 22/20, 14/16-FinFET and 10-nanometer. In each pair the first number (32, 22, 14) is the Intel technology and the second (28, 20, 16-FinFET) is the TSMC technology. The grey plots are what was shown at the Intel investor meeting: at 32/28 and 22/20 they show TSMC ahead in area scaling, but at 16 the grey data shows a little uptick for TSMC and then follows the same slope down to 10-nanometer. The correct data is what we show on the red line, our current TSMC data. As C.C. just mentioned, we are in volume production on 20-nanometer, the highest-density technology in production today.

We took the approach of using the FinFET transistor to significantly improve transistor performance on top of a back-end technology similar to our 20-nanometer. We therefore leverage this year's 20-nanometer volume-production experience to be able to go immediately to 16-nanometer volume production next year, within one year. And this transistor performance, together with an innovative layout methodology, can improve the chip size by about 15%: because the drive of the transistor is much stronger, you don't need as large an area to deliver the same drive circuitry.

As for 10-nanometer, we haven't announced it, but we have communicated to many of our customers that it will be an aggressively scaled technology. In summary, our 10-FinFET technology will be qualified by the end of 2015. The 10-FinFET transistor will be our third-generation FinFET transistor, and this technology will come with industry-leading performance and density. So I want to leave this slide with this: 16-FinFET scaling is much better than Intel said, but still a little bit behind Intel.

However, the real competition is between our customers' products and Intel's or Samsung's products. TSMC's Grand Alliance, that is, the alliance of us, our customers, the EDA and IP communities and our suppliers, is the largest and the only open technology platform for the widest range of product innovation in the industry today. As for 16-FinFET tape-outs, more than 20 customer product tape-outs on 16-FinFET technology are already scheduled for this year. They include a wide range of applications: baseband, application processors, application-processor SoCs, graphics, networking, hard disk drives, field-programmable gate arrays, CPUs and servers. Our 16-FinFET technology captures the vast portion of products in the semiconductor industry.

We’ve been actively working with our customer’s designer on this since last year. TSMC’s speed and productization of the customer’s product and our ability to execute for a short time-to-market for a customer are far superior than Intel and Samsung.

Lastly, I will comment on mobile products. With this 16-FinFET technology and the processor-architecture innovations and various IP from our customers, we are confident that the planned 16-FinFET mobile products that are going to tape out with us will be better than Samsung's 14-nanometer and better than Intel's 14-nanometer SoC. Thank you very much.

Roland Shu – Citigroup Global Markets – Analyst

… Is the 16-Plus improvement coming from the design side, or is it just a performance enhancement? And would you consider renaming 16-Plus to be the same as 14-nanometer? …

Mark Liu – Taiwan Semiconductor Manufacturing Co., Ltd. – President & Co-CEO

16 FinFET-Plus is a transistor enhancement. The back-end design rules are similar to 16 FinFET, so designers can design on 16 FinFET and then re-characterize to upgrade their product's performance. As I mentioned, this enhanced-performance transistor can also reduce the standard-cell size; that is the way to reduce chip size. So we do not intend to change the naming. This is engineering; this is the name we chose earlier based on consistent physical numbers, and we do not intend to change it.

Randy Abrams – Credit Suisse – Analyst

My first question on the management structure now with the Co-COOs promoted to Co-CEOs. If you could talk about how the responsibilities would change with their promotion to Co-CEO? And for yourself, Dr. Chairman, how will your activities change versus before this move? So if you could talk about the roles for each of the different Co-CEOs and yourself now.

Morris Chang – Taiwan Semiconductor Manufacturing Co., Ltd. – Chairman

We started with the President and Co-CEO structure in November, and it has now been two months. If you ask me whether my life has changed in the last two months, my answer is no, it has not changed. But I think my effort, my time, has been spent more on the coaching aspects. I do believe that I do more coaching: if I spend 100 hours, I now perhaps spend 20 of the 100 hours on coaching, whereas in the past I'd probably spend only 5 or 10 of the 100 hours on coaching.

Now, actually, this is an overseas call, is that correct? Yes. So let me just explain very briefly what Taiwan law and custom are in relation to a Chairman's authority and responsibility. Basically, by both law and custom, the Chairman of a company has the ultimate authority and responsibility. However, he may delegate his authority and responsibility to the President, and he may also take it back at any time; he can delegate any or all of the responsibilities to the President. Now these two gentlemen's title is President and Co-CEO; President comes first. They are, in the legal sense, Presidents. Co-CEO is basically a Western term, and in the United States a CEO usually bears the final, ultimate responsibility and authority, as a Chairman in Taiwan does. So my role in the future is really to convert these two gentlemen from Taiwan-sense Presidents to US-sense CEOs, and it will be a gradual process.

Donald Lu – Goldman Sachs – Analyst

So Chairman, (spoken in foreign language).

First question is, I want to ask the Chairman, how would you — are you satisfied with the transition so far and also, how the two Presidents would share their work? Are they still rotating or not? And (multiple speakers) but probably not now. And maybe give us some details about how the Company is run. And I have a follow-up question on competition.

Morris Chang – Taiwan Semiconductor Manufacturing Co., Ltd. – Chairman

All right. I am quite satisfied with the transition. And these two gentlemen; Mark is now responsible for sales, marketing, strategic planning, business development, and yes, information technology and materials management, all those. And C.C. is responsible for operations, all the operations, and he is also responsible for specialty technology R&D. Specialty technology incidentally accounts for 25% of our total business. So now, Donald, your other question is whether they’re going to rotate. My plan currently is, I don’t plan it that way, I don’t plan it that way right now. However, I deem it’s a pretty flexible thing. Tomorrow, I may take one part of Mark’s and give it to C.C. or vice versa. But I’m not considering rotation, per se. Yes, does that answer your first question?

Donald Lu – Goldman Sachs – Analyst

… Okay, since we are already doing it, why don't you give us more color? On 16-nanometer, for example, are we saying that in terms of die size and performance our product will be very similar to Intel's 14-nanometer FinFET? And also, Mark commented that the FinFET tape-outs specifically include CPU and server chips; can we say that TSMC-made CPU and server chips will have physical performance similar to Intel's products today?

Morris Chang – Taiwan Semiconductor Manufacturing Co., Ltd. – Chairman

Well, I think, Donald, we have already given everybody enough information on our 16-FinFET. If we keep giving more, we would be helping our competitors who have picked on us. So we stand on what we said: our Grand Alliance will out-compete Intel and Samsung on the 16-FinFET. By that I don't mean that we'll completely exclude them; no, we can't do that, we won't be able to do that. But our Grand Alliance, with us as the foundry supplier, will capture a large share of the 16-nanometer. You agree with that, don't you?

Mark Liu – Taiwan Semiconductor Manufacturing Co., Ltd. – President & Co-CEO

The fabless companies in China are very aggressively approaching leading-edge technologies. To tell you: already this year some of those fabless companies will be taping out on our 16-FinFET. So I think the subsidies those fabless companies receive will propel them further into leading-edge technology.

July 20, 2013: TSMC takes on rivals with Grand Alliance strategy, says Chang [Global Data Point] by TMC News

(Global Data Point Via Acquire Media NewsEdge) Taiwan Semiconductor Manufacturing Company (TSMC) chairman and CEO Morris Chang, at a July 18 investors conference, talked about the importance of the foundry’s close ties with customers and ecosystem partners, and described how TSMC has formed a “Grand Alliance” with EDA, IP, software IP, systems software and design services providers.

TSMC has been competitive against fellow pure-play foundries, said Chang. In the face of rising competition from IDMs, TSMC with its ability to deliver cutting-edge technologies and advanced manufacturing capacity is also able to outshine the rivals, Chang indicated.

With the industry moving towards sub-20nm technologies, Chang believes that TSMC will become more capable of fending off rivals like Samsung Electronics and Intel. “Now in this new era of competition, the competition is not between foundries. It is not between foundries and IDMs. It is between ‘Grand Alliances’ and IDMs,” Chang pointed out.

Chang named ARM, Imagination, Cadence and Mentor as some of TSMC’s IP and EDA partners.

TSMC’s so-called “Grand Alliance” seems like an expansion of its Open Innovation Platform (OIP), which was announced in 2008. TSMC’s OIP is a business strategy aiming to provide integrated services from design to manufacturing to testing and packaging. According to TSMC, the platform is to bring together the thinking of customers and partners under the common goal of shortening design time, minimizing time-to-volume and speeding time-to-market.

In addition, Chang noted that TSMC’s 28nm process technology is on track to triple in wafer sales in 2013. TSMC made 29% of its NT$155.89 billion (US$5.18 billion) revenues from selling 28nm chips in the second quarter of 2013.

Chang also reiterated TSMC’s plans that 20nm technology will begin volume production in early 2014, followed by volume production of 16nm FinFETs within one year.

MediaTek’s next 10 years’ strategy for devices, wearables and IoT

After what happened last year, with MediaTek repositioning itself with the new MT6732 and MT6752 SoCs for the "super-mid market" just being born, plus new wearable technologies for wPANs and IoT added for the new premium MT6595 SoC [this same blog, ], it is time to look at the next 10 years' strategy. The last 10 years' strategy was incredible!

MediaTek – The Next 10 Years: Enablement NOW

Enablement is the crucial differentiator for MediaTek's next 10 years' strategy, just as it was for the last 10 years' one. It will therefore be presented in detail below, as follows:

I. Existing Strategic Initiatives

I/1. MediaTek CorePilot™ to get foothold in the tablet market and to conquer the high-end smartphone market (MediaTek Super Mid Logo and MediaTek Helio Logo are detailed here)
I/2. MediaTek’s exclusive display technology quality enhancements
I/3. CorePilot™ 2.0 especially targeted for the extreme performance tablet and smartphone markets

II. Brand New Strategic Initiatives

II/1. CrossMount: “Whatever DLNA can plus a lot more”
II/2. LinkIt™ One Development Platform for wearables and IoT
II/3. LinkIt™ Connect 7681 development platform for WiFi enabled IoT

MWC 2015: MediaTek LinkIt Dev Platforms for Wearables & IoT – Weather Station & Smart Light Demos

MediaTek Labs technical expert Philip Handschin introduces us to two demonstrations based on LinkIt™ development platforms:
-Weather Station uses a LinkIt ONE development board to gather temperature, humidity and pressure data from several sensors. The data is then uploaded to the MediaTek Cloud Sandbox where it’s displayed in graphical form.
-Smart Light uses a LinkIt Connect 7681 development board. It receives instructions over a Wi-Fi connection, from a smartphone app, to control the color of an LED light bulb.

II/4. MediaTek Labs Logo: The best free resources for Wearables and IoT
II/5. MediaTek Ventures Logo: to enable a new generation of world-class companies

III. Stealth Strategic Initiatives (MWC 2015 timeframe)

III/1. SoCs for Android Wear and Android based wearables

Before those details, however, let's understand the strategic reasoning behind all that!
March 5, 2015: MediaTek CMO Johan Lodenius* at MWC 2015

MediaTek appoints Johan Lodenius as its new Chief Marketing Officer [press release, Dec 20, 2012]

MediaTek's Brand New World – The Big Picture (MWC 2015)

MediaTek's Brand New World – Device Evolution (MWC 2015)

MediaTek's Brand New World – Business Evolution (MWC 2015)

MediaTek's Brand New World (MWC 2015)

MediaTek – Enabling a Brand New World

Next here is also a historical perspective (as per my blog) on MediaTek progress so far:
– First, I would recommend reading the "White-box (Shanzhai) vendors" and "MediaTek as the catalyst of the white-box ecosystem" parts of my Be aware of ZTE et al. and white-box (Shanzhai) vendors: Wake up call now for Nokia, soon for Microsoft, Intel, RIM and even Apple! post of Feb 21, 2011, in order to understand the recipe for its success over the last 10 years ⇒ Johan Lodenius NOW:
MediaTek was the pioneer of manufacturable reference design
– Then it is worth taking a look at the following posts directly related to MediaTek, if you want to understand the further evolution of the company's formula for success:

MediaTek on Experiencing the Cloud

#2 Boosting the MediaTek MT6575 success story with the MT6577 announcement — UPDATED with MT6588/83 coming in Q4 2012 and 8-core MT6599 in 2013 (The MT6588 was later renamed MT6589). On the chart below, "Global market share held by leading smartphone vendors Q4'09-Q4'14" by Statista, the effect of the MT6575 is quite visible (from Q3'12 on), as it enabled a huge number of third-tier and no-name companies, predominantly from China, to enter the smartphone market quickly and extremely competitively (and caused Nokia's new strategy to fail as well)
Global market share held by leading smartphone vendors 4Q09-4Q14 by Statista
#6 MT6577-based JiaYu G3 with IPS Gorilla Glass 2 screen of 4.5” etc. for $154 (factory direct) in China and $183 internationally (via LightTake)
#13 MediaTek’s ‘smart-feature phone’ effort with likely Nokia tie-up
#16 UPDATE Aug’13: Xiaomi $130 Hongmi superphone END MediaTek MT6589 quad-core Cortex-A7 SoC with HSPA+ and TD-SCDMA is available for Android smartphones and tablets of Q1 delivery
#24 Eight-core MT6592 for superphones and big.LITTLE MT8135 for tablets implemented in 28nm HKMG are coming from MediaTek to further disrupt the operations of Qualcomm and Samsung
#42 MediaTek MT6592-based True Octa-core superphones are on the market to beat Qualcomm Snapdragon 800-based ones UPDATE: from $147+ in Q1 and $132+ in Q2
#67 MediaTek is repositioning itself with the new MT6732 and MT6752 SoCs for the “super-mid market” just being born, plus new wearable technologies for wPANs and IoT are added for the new premium MT6595 SoC
#93 Phablet competition in India: $258 Micromax-MediaTek-2013 against $360 Samsung-Broadcom-2012
#128 MediaTek’s 64-bit ARM Cortex-A53 octa-core SoC MT8752 is launched with 4G/LTE tablets in China
#205 Now in China and coming to India: 4G LTE True Octa-core™premium superphones based on 32-bit MediaTek MT6595 SoC with upto 20% more performance, and upto 20% less power consumption via its CorePilot™ technology
#237 Micromax is in a strategic alliance with operator Aircel and SoC vendor MediaTek for delivery of bundled complete solution offers almost equivalent to cost of the device and providing innovative user experience
#281 ARM Cortex-A17, MediaTek MT6595 (devices: H2’CY14), 50 billion ARM powered chips

As an alternative, I can recommend the February 2, 2015 presentation by Grant Kuo, Managing Director, MediaTek India, at the IESA Vision Summit 2015 event in India:

– MediaTek Journey Since 1997
→ Accent on the turnkey handset solution: 200 eng. ⇒ 30-40 eng., time-to-market down to 4 months
→ Resulting in a MediaTek share with local brands in India of 70% by 2014
– The Next Big Business after Mobile
– Partnering for New Business Opportunity
– Innovations & Democratization
→ Everyday Genius and Super-Mid Market

I. Existing Strategic Initiatives

I/1. MediaTek CorePilot™ to get foothold in the tablet market and to conquer the high-end smartphone market
(MediaTek Super Mid Logo and MediaTek Helio Logo are detailed here)

July 15, 2013:
Technology Spotlight: Making the big.LITTLE difference

MediaTek 2014 Market Performance

No matter where the mobile world takes us, MediaTek is always at the forefront, ensuring that the latest technologies from our partners are optimized for every mobile eventuality.

To maximize the performance and energy efficiency benefits of the ARM big.LITTLE™ architecture, MediaTek has delivered the world’s first mobile system-on-a-chip (SoC) – the MT8135 – with Heterogeneous Multi-Processing (HMP), featuring MediaTek’s CorePilot™ technology.

ARM big.LITTLE™ is the pairing of two high-performance CPUs with two power-efficient CPUs on a single SoC. MediaTek's CorePilot™ technology uses HMP to dynamically assign software tasks to the most appropriate CPU, or combination of CPUs, according to the task workload, thereby maximizing the device's performance and power efficiency.
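To illustrate the scheduling idea in the paragraph above, here is a toy sketch. It is not MediaTek's CorePilot implementation (the threshold, core names and task values are all hypothetical); it only shows the HMP principle that every big and LITTLE core is visible to the scheduler at once and each task is placed according to its load:

```python
# Toy HMP-style placement: all big and LITTLE cores are schedulable at once,
# and each task goes to the least-loaded core of the cluster that suits it.
from dataclasses import dataclass

@dataclass
class Core:
    name: str
    big: bool          # True for a high-performance core
    load: float = 0.0  # fraction of the core currently busy

@dataclass
class Task:
    name: str
    demand: float      # estimated CPU demand, 0.0 .. 1.0

HEAVY = 0.5  # hypothetical threshold separating "big-worthy" tasks

def place(task: Task, cores: list[Core]) -> Core:
    """Pick the least-loaded core in the cluster that fits the task."""
    pool = [c for c in cores if c.big == (task.demand >= HEAVY)]
    best = min(pool, key=lambda c: c.load)
    best.load += task.demand
    return best

# An MT8135-like 2+2 big.LITTLE topology (illustrative only).
cores = [Core("A15-0", True), Core("A15-1", True),
         Core("A7-0", False), Core("A7-1", False)]

for t in [Task("game-render", 0.9), Task("email-sync", 0.1),
          Task("video-decode", 0.6), Task("sensor-poll", 0.05)]:
    print(f"{t.name:>12} -> {place(t, cores).name}")
```

The real CorePilot scheduler adds interactive power management, thermal limits and migration costs on top of this basic placement decision, as the whitepaper cited below describes.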

In our recently produced whitepaper, we discussed the advantages of HMP over alternative forms of the big.LITTLE™ architecture, and noted that while HMP overcomes the limitations of other big.LITTLE™ implementation models, MediaTek's CorePilot™ maximizes the performance and power-saving potential of HMP with interactive power management, adaptive thermal management and advanced scheduler algorithms.

To learn more, please download MediaTek’s CorePilot™ whitepaper.

Leader in HMP

As a founding member of the Heterogeneous System Architecture (HSA) Foundation, MediaTek actively shapes the future of heterogeneous computing.

HSA Foundation is a not-for-profit consortium of SoC and software vendors, OEMs and academia.

July 15, 2013:
MediaTek CorePilot™ Heterogeneous Multi-Processing Technology [whitepaper]

Delivering extreme compute performance with maximum power efficiency

In July 2013, MediaTek delivered the industry’s first mobile system on a chip with Heterogeneous Multi-Processing. The MT8135 chipset for Android tablets features CorePilot technology that maximizes performance and power saving with interactive power management, adaptive thermal management and advanced scheduler algorithms.

Table of Contents

  • ARM big.LITTLE Architecture
  • big.LITTLE Implementation Models
  • Cluster Migration
  • CPU Migration
  • Heterogeneous Multi-Processing
  • MediaTek CorePilot Heterogeneous Multi-Processing Technology
  • Interactive Power Management
  • Adaptive Thermal Management
  • Scheduler Algorithms
  • The MediaTek HMP Scheduler
  • The RT Scheduler
  • Task Scheduling & Performance
  • CPU-Intensive Benchmarks
  • Web Browsing
  • Task Scheduling & Power Efficiency
  • SUMMARY

Oct 29, 2013:
CorePilot Task Scheduling & Performance

Mobile SoCs have a limited power consumption budget.

  • With ARM big.LITTLE, SoC platforms are capable of asymmetric computing, whereby tasks can be allocated to CPU cores in line with their processing needs.
  • Of the three available software models for configuring big.LITTLE SoC platforms, Heterogeneous Multi-Processing offers the best performance (the three models are contrasted in the sketch below).
  • MediaTek CorePilot technology is designed to deliver maximum compute performance from big.LITTLE mobile SoC platforms at low power consumption.
  • The MediaTek CorePilot MT8135 chipset for Android is the industry's first Heterogeneous Multi-Processing implementation.
  • MediaTek leads in the heterogeneous computing space and will release further CorePilot innovations in 2014.
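To see why HMP comes out ahead of the two migration models listed above, a small sketch helps (the core names and the 4+4 topology are illustrative, not MediaTek-specific):

```python
# Contrast of the three big.LITTLE software models on a 4+4 part.
BIG = ["big0", "big1", "big2", "big3"]
LITTLE = ["lit0", "lit1", "lit2", "lit3"]

def usable_cores(model, heavy):
    """heavy[i] is True when virtual CPU i currently needs a big core."""
    if model == "cluster-migration":
        # All-or-nothing: the whole SoC runs on one cluster at a time.
        return BIG[:] if any(heavy) else LITTLE[:]
    if model == "cpu-migration":
        # Each big/LITTLE pair switches independently; at most 4 cores run.
        return [BIG[i] if h else LITTLE[i] for i, h in enumerate(heavy)]
    if model == "hmp":
        # Heterogeneous Multi-Processing: all 8 cores schedulable at once.
        return BIG + LITTLE
    raise ValueError(model)

load = [True, False, False, True]  # two heavy virtual CPUs, two light ones
for model in ("cluster-migration", "cpu-migration", "hmp"):
    print(f"{model:>17}: {usable_cores(model, load)}")
```

Only the HMP model exposes all eight cores simultaneously, which is the headroom CorePilot's scheduler exploits.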
(*) The #1 market position in digital TVs is the result of the acquisition of MStar Semiconductor in 2012.

Feb 9, 2015 by Bidness Etc:
Qualcomm Inc. (QCOM) To Lose Market Share: UBS

UBS expects Mediatek to gain at Qualcomm’s expense this year

Eric Chen, Sunny Lin, and Samson Hung, analysts at UBS, suggest that Mediatek will control 46% of the 4G smartphone market in China in 2015. Last year the company had a 30% market share. Low-end customers, as well as high-end clients that are switching to Mediatek MT6795 from Snapdragon 800 will help Mediatek advance.

“We believe clients for MT6795 include Sony, LGE, HTC, Xiaomi, Oppo, Meizu, TCL, and Lenovo, among others. With its design win of over 10 models, we anticipate Mediatek will ship 2m units per month in Q215 and 4m units per month in H215. That indicates revenue will reach 14% in Q215 and 21% in H215, up from 3% in Q115,” reads the report. UBS also said that Mediatek will see a 20% increase in its revenue and report a 48% gross margin in 2015. Earnings are forecast to grow 21%.

“We forecast MT6752/MT6732 shipments (mainly at MT6752 of US$18-20) to reach 3m units per month in Q215, up from 1.5m units per month in Q115,” read the report*.

My insert here: the Sony Xperia E4g [based on MT6732] – Well-priced LTE-Smartphone Hands On at MWC 2015 by Mobilegeeks.de
* From another excerpt of the UBS report: “In the mid-end, UBS believes China’s largest smartphone manufacturers from LenovoGroup (992.HK) to Huawei will switch from Qualcomm’s MSM 8939 [Snapdragon 615] to Mediatek’s MT 6752 because of Qualcomm’s “inferior design.” ” 
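For a sense of scale, the quoted shipment forecast can be turned into implied dollars (my own arithmetic; UBS gives a per-unit price only for the MT6752, at US$18-20):

```python
# Implied MT6752/MT6732 revenue from the UBS Q2'15 shipment forecast.
units_per_month = 3_000_000  # forecast MT6752/MT6732 shipments in Q2'15
asp_usd = 19                 # midpoint of the quoted US$18-20 range

quarterly_revenue = units_per_month * 3 * asp_usd
print(f"implied Q2'15 revenue: ~US${quarterly_revenue / 1e6:.0f}m")  # ~US$171m
```

Roughly US$170m a quarter from one mid-range product line gives a feel for how quickly these 4G lines could move the needle on MediaTek's total revenue.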

Further below, in section I/2, there will be an image-enhancement demonstration with an MT6752-based Lenovo A936 smartphone, actually a typical "lower-end super-mid" device sold in China since December for just ¥998 ($160), a 33% lower price than the mid-point of the super-mid market indicated a year ago by MediaTek (see the very first image in this post).

July 15, 2014 by Mobilegeeks.de:
MediaTek 64-bit LTE Octa-Core Smartphone Reference Design: "In Shenzhen MediaTek showed off their new Smartphone Reference Design for their LTE Octa-Core line-up. The MT6595 and MT6795 are both high-end processors capable of taking on Qualcomm in terms of benchmarks at a budget price."

July 15, 2014 by Mobilegeeks.de:
MediaTek: How They Came To Take on Qualcomm?

Brief content: When I found out about the MediaTek press conference, I immediately picked up a ticket and planned to spend a few days in one of my favorite cities on earth, Shenzhen. MediaTek is one of the fastest-growing companies in mobile. But where exactly did this Taiwanese company come from?
Back in 1997 MediaTek was spun off from United Microelectronics Corporation, which back in 1980 was Taiwan's first semiconductor company, and started out making chipsets for home entertainment centers and optical drives. In 2004 they entered the mobile phone market with a different approach: instead of just selling SoCs they sold complete packages, a chip with an operating system already baked on; they were selling reference designs.
This cut entire heavily manned teams out of the process and, more importantly, reduced barriers to entry: small companies could sell phones under their own brand. This is why most people have never heard of MediaTek; they merely enabled the success of others, particularly in emerging markets like China.
With feature phones peaking in 2012 and smartphones finally taking over the top spot in 2013, they had to move on. It took them longer than it should have; regardless, they are here now. MediaTek is applying the same strategy that won it the feature-phone market to low- and mid-range smartphones. They are already in bed with all the significant emerging-market players like ZTE, Huawei & Alcatel. And getting manufacturers on your side works for gaining market share where carriers don't have much control: unsubsidized handsets make people purchase more affordable devices.
Despite its enormous success in the category where the next billion handsets are going to be sold, they have yet to make a name for themselves in the West. Fair enough: being known for cheap handsets creates challenges for entering the high-end market. But that hasn't stopped them from coming out with the world's first 4G LTE octa-core processor. They even set up shop in Qualcomm's backyard by opening an office in San Diego, which is a pretty big statement, especially when you take into consideration that MediaTek is bigger than Broadcom & Nvidia.
But as they push into the US, Qualcomm seeks to gain a foothold in China. So let's take a closer look at that, because this race has less to do with SoCs than it does with LTE. MediaTek's octa-core processor with LTE put Qualcomm on alert, because Qualcomm has always had to lead when it comes to LTE. I found some stats on Android Authority, from Strategy Analytics: in Q3 2013, 66% of their cellular revenue came from LTE, with MediaTek claiming second place at 12% and Intel third with 7%. Qualcomm even has a relationship with China Mobile to get their LTE devices into the hands of its local market.

Even still, it is a numbers game, and if MediaTek's SoC performance is at the same level as Qualcomm's mid-range SoC offering but at a lower price, it won't take MediaTek long to catch up. But on an even more basic level, let me tell you about a meeting I had in Shenzhen with Gionee, where I asked about developing on Qualcomm vs. MediaTek. They said MediaTek will get back to you within the hour, Qualcomm will get back to you the next day; and when I mentioned Intel, they just laughed.

Consumers might find it frustrating that MediaTek takes a little longer to come out with the latest version of Android, but the reason is that they are doing all the work for their partners. When you're competing with a company that understands customer service better than anyone else right now, it's going to be hard not to see them as a real threat.

MediaTek Introduces Industry Leading Tablet SoC, MT8135

TAIWAN, Hsinchu – July 29, 2013 – MediaTek Inc., (2454: TT), a leading fabless semiconductor company for wireless communications and digital multimedia solutions, today announced its breakthrough MT8135 system-on-chip (SoC) for high-end tablets. The quad-core solution incorporates two high-performance ARM Cortex™-A15 and two ultra-efficient ARM Cortex™-A7 processors, and the latest GPU from Imagination Technologies, the PowerVR™ Series6. Complemented by a highly optimized ARM® big.LITTLE™ processing subsystem that allows for heterogeneous multi-processing, the resulting solution is primed to deliver premium user experiences. This includes the ability to seamlessly engage in a range of processor-intensive applications, including heavy web-downloading, hardcore gaming, high-quality video viewing and rigorous multitasking – all while maintaining the utmost power efficiency.

In line with its reputation for creating innovative, market-leading platform solutions, MediaTek has deployed an advanced scheduler algorithm, combined with adaptive thermal and interactive power management to maximize the performance and energy efficiency benefits of the ARM big.LITTLE™ architecture. This technology enables application software to access all of the processors in the big.LITTLE cluster simultaneously for a true heterogeneous experience. As the first company to enable heterogeneous multi-processing on a mobile SoC, MediaTek has uniquely positioned the MT8135 to support the next generation of tablet and mobile device designs.

“ARM big.LITTLE™ technology reduces processor energy consumption by up to 70 percent on common workloads, which is critical in the drive towards all-day battery life for mobile platforms,” said Noel Hurley, vice president, Strategy and Marketing, Processor Division, ARM. “We are pleased to see MediaTek’s MT8135 seizing on the opportunity offered by the big.LITTLE architecture to enable new services on a heterogeneous processing platform.”

“The move towards multi-tasking devices requires increased performance while creating greater power efficiency that can only be achieved through an optimized multi-core system approach. This means that multi-core processing capability is fast becoming a vital feature of mobile SoC solutions. The MT8135 is the first implementation of ARM’s big.LITTLE architecture to offer simultaneous heterogeneous multi-processing.  As such, MediaTek is taking the lead to improve battery life in next-generation tablet and mobile device designs by providing more flexibility to match tasks with the right-size core for better computational, graphical and multimedia performance,” said Mike Demler, Senior Analyst with The Linley Group.

The MT8135 features a MediaTek-developed four-in-one connectivity combination that includes Wi-Fi, Bluetooth 4.0, GPS and FM, designed to bring highly integrated wireless technologies and expanded functionality to market-leading multimedia tablets. The MT8135 also supports Wi-Fi certified Miracast™ which makes multimedia content sharing between devices remarkably easier.

In addition, the tablet SoC boasts unprecedented graphics performance enabled by its PowerVR™ Series6 GPU from Imagination Technologies. “We are proud to have partnered with MediaTek on their latest generation of tablet SoCs” says Tony King-Smith, EVP of marketing, Imagination. “PowerVR™ Series6 GPUs build on Imagination’s success in mobile and embedded markets to deliver the industry’s highest performance and efficient solutions for graphics-and-compute GPUs. MediaTek is a key lead partner for Imagination and its PowerVR™ Series6 GPU cores, so we expect the MT8135 to set an important benchmark for high-end gaming, smooth UIs and advanced browser-based graphics-rich applications in smartphones, tablets and other mobile devices. Thanks to our PowerVR™ Series6 GPU, we believe the MT8135 will deliver five-times or more the GPU-compute-performance of the previous generation of tablet processors.”

“At MediaTek, our goal is to enable each user to take maximum advantage of his or her mobile device.  The implementation and availability of the MT8135 brings an enjoyable multitasking experience to life without requiring users to sacrifice on quality or energy. As the leader in multi-core processing solutions, we are constantly optimizing these capabilities to bring them into the mainstream, so as to make them accessible to every user around the world,” said Joe Chen, GM of the Home Entertainment Business Unit at MediaTek.

The MT8135 is the latest SoC in MediaTek's highly successful line of quad-core processors, which since its launch last December* has given rise to more than 350 projects and over 150 mobile device models across the world. This latest solution, along with its comprehensive accompanying Reference Design, will, like its predecessors, fast become an industry standard, particularly in the high-end tablet space.

* MediaTek Strengthens Global Position with World’s First Quad-Core Cortex-A7 System on a Chip – MT6589 [press release, Dec 12, 2012]

See also: Imagination Welcomes MediaTek’s Innovation in True Heterogeneous Multi-Processing With New SoC Featuring PowerVR Series6 GPU [press release, Aug 28, 2013]

MediaTek Super Mid Logo

MediaTek Announces MT6595, World’s First 4G LTE Octa-Core Smartphone SOC with ARM Cortex-A17 and Ultra HD H.265 Codec Support

MediaTek CorePilot™ Heterogeneous Multi-Processing Technology enables outstanding performance with leading energy efficiency

TAIWAN, Hsinchu – 11 February, 2014 – MediaTek today announces the MT6595, a premium mobile solution with the world’s first 4G LTE octa-core smartphone SOC powered by the latest Cortex-A17™ CPUs from ARM®.

The MT6595 employs ARM’s big.LITTLE™ architecture with MediaTek’s CorePilot™ technology to deliver a Heterogeneous Multi-Processing (HMP) platform that unlocks the full power of all eight cores. An advanced scheduler algorithm with adaptive thermal and interactive power management delivers superior multi-tasking performance and excellent sustained performance-per-watt for a premium mobile experience.

Excellent Performance-Per-Watt
  • Four ARM Cortex-A17™, each with significant performance improvement over previous-generation processors, plus four Cortex-A7™ CPUs
  • ARM big.LITTLE™ architecture with full-system coherency performs sophisticated tasks efficiently
  • Integrated Imagination Technologies PowerVR™ Series6 GPU for high-performance graphics
  • Integrated 4G LTE Multi-Mode Modem
  • Rel. 9, Category 4 FDD and TDD LTE with data rates up to 150Mbits/s downlink and 50Mbits/s uplink
  • DC-HSPA+ (42Mbits/s), TD-SCDMA and EDGE for legacy 2G/3G networks
  • 30+ 3GPP RF bands support to meet operator needs worldwide

World-Class Multimedia Subsystems

  • World’s first mobile SOC with integrated, low-power hardware support for the new H.265 Ultra HD (4K2K) video record & playback, in addition to Ultra HD video playback support for H.264 & VP9
  • Supports 24-bit 192 kHz Hi-Fi quality audio codec with a high-performance digital-to-analogue converter (DAC) delivering >110dB SNR to the headphone output
  • 20MP camera capability and a high-definition WQXGA (2560 x 1600) display controller
  • MediaTek ClearMotion™ technology eliminates motion jitter and ensures smooth video playback at 60fps on mobile devices
  • MediaTek MiraVision™ technology for DTV-grade picture quality

First MediaTek Mobile Platform Supporting 802.11ac

  • Comprehensive complementary connectivity solution that supports 802.11ac
  • Multi-GNSS positioning systems including GPS, GLONASS, Beidou, Galileo and QZSS
  • Bluetooth LE and ANT+ for ultra-low power connectivity with fitness tracking devices

World’s First Multimode Wireless Charging Receiver IC

  • Multi-standard inductive and resonant wireless charging functionality available
  • Supported by MediaTek’s companion multimode wireless power receiver IC

“MediaTek is focused on delivering a full-range of 4G LTE platforms and the MT6595 will enable our customers to deliver premium products with advanced features to a growing market,” said Jeffrey Ju, General Manager of the MediaTek Smartphone Business Unit.

“Congratulations to MediaTek on being in a leading position to implement the new ARM Cortex-A17 processor in mobile devices,” said Noel Hurley, Vice President and Deputy General Manager, ARM Product Division. “MediaTek has a keen understanding of the smartphone market and continues to identify innovative ways to bring a premium mobile experience to the masses.”

The MT6595 platform will be commercially available by the first half of 2014, with devices expected in the second half of the year.

Sept 21, 2014:
MediaTek CorePilot™ Technology

MediaTek Launches 64-bit True Octa-core™ LTE Smartphone SoC with World’s First 2K Display Support

TAIWAN, Hsinchu – July 15, 2014 – MediaTek today announced MT6795, the 64-bit True Octa-core™ LTE smartphone System on Chip (SoC) with the world’s first 2K display support. This is MediaTek’s flagship smartphone SoC designed to empower high-end device makers to leap into the Android™ 64-bit era.

The MT6795 is currently set to be the first 64-bit, LTE, True Octa-core SoC targeting the premium segment, with speeds of up to 2.2GHz, to hit the market. The SoC features MediaTek’s CorePilot™ technology providing world-class multi-processor performance and thermal control, as well as dual-channel LPDDR3 clocked at 933MHz for top-end memory bandwidth in a smartphone.

The high-performance SoC also satisfies the multimedia requirements of even the most demanding users, featuring multimedia subsystems that support many technologies never before possible or seen in a smartphone, including support for 120Hz displays and the capability to create and play back 480 frames per second (fps) 1080p Full HD Super-Slow Motion videos.

With the launch of MT6795, MediaTek is accelerating the global transition to LTE and creating opportunities for device makers to gain first-mover advantage with top-of-the-line devices in the 64-bit Android device market. Coupled with 4G LTE support, MT6795 completes MediaTek’s 64-bit LTE SoC product portfolio: MT6795 for power users, MT6752 for mainstream users and MT6732 for entry level users. This extensive portfolio allows everyone to embrace the improved speed from 4G LTE and parallel computing capability from CorePilot and 64-bit processors.

Key features of MT6795:

  • 64-bit True Octa-core LTE SoC* with clock speed up to 2.2GHz
  • MediaTek CorePilot unlocks the full power of all eight cores
  • Dual-channel LPDDR3 memory clocked at 933MHz
  • 2K on device display (2560×1600)
  • 120Hz mobile display with Response Time Enhancement Technology and MediaTek ClearMotion™
  • 480fps 1080p Full HD Super-Slow Motion video feature
  • Integrated, low-power hardware support for H.265 Ultra HD (4K2K) video record & playback, Ultra HD video playback support for H.264 & VP9, as well as for graphics-intensive games and apps
  • Support for Rel. 9, Category 4 FDD and TDD LTE (150Mbps/50Mbps), as well as modems for 2G/3G networks
  • Support for Wi-Fi 802.11ac/Bluetooth®/FM/GPS/Glonass/Beidou/ANT+
  • Multi-mode wireless charging supported by MediaTek’s companion multi-mode wireless power receiver IC

“MediaTek has once again demonstrated leading engineering capabilities by delivering breakthrough technology and time-to-market advantage that enable limitless opportunities for our partners and end users, while setting the bar even higher for our competition,” said Jeffrey Ju, General Manager of the MediaTek Smartphone Business Unit.  “With a complete and inclusive 64-bit LTE SoC product portfolio, we are firmly on track to lead the industry in delivering premium mobile user experiences for years to come.”

MT6795-powered devices will be commercially available by the end of 2014.

*  Instead of a heterogeneous multi-processing architecture, the MT6795 features eight identical Cortex-A53 cores. A 4xA53 + 4xA57 big.LITTLE configuration had been rumored unofficially before.
See also: The Cortex-A53 as the Cortex-A7 replacement core is succeeding as a sweet-spot IP for various 64-bit high-volume market SoCs to be delivered from H2 CY14 on [this same blog, Dec 23, 2013]

MediaTek Launches MT6735 Mainstream WorldMode Smartphone Platform

By adding CDMA2000, MediaTek accomplishes WorldMode modem capability in a single platform and meets wireless operator requirements globally; putting advanced yet affordable smartphones into the hands of consumers

TAIWAN, Hsinchu – 15 October, 2014 – MediaTek today announced a new 64-bit mobile system on chip (SoC), MT6735, addressing the modem and RF needs of wireless operators globally. By offering a unified mobile platform, MediaTek is enabling its customers to develop on the MT6735 and sell mobile devices globally, thereby creating significant R&D cost savings and economies of scale in manufacturing. The MT6735 builds upon MediaTek’s existing line-up of mainstream LTE platforms by adding the critical WorldMode modem capability.

The MT6735 incorporates four 64-bit ARM® Cortex®-A53 processors, delivering significantly higher performance than Cortex-A7 for a premium mobile computing experience, driving greater choice of smart devices at affordable prices for consumers. As projected, MediaTek sees a continued consolidation of smartphones into a very large mid-range, termed by MediaTek as the “Super-mid market”.

“The MT6735 is a breakthrough product from MediaTek,” said Jeffrey Ju, SVP and General Manager of Wireless Communication at MediaTek. “With CDMA2000, we offer global reach, driving high performance technology into the hands of users everywhere. We also strongly believe that as LTE becomes mainstream in all markets, the processing power must be consistently high to ensure the best possible user experience. That’s why 64-bit CPUs and CorePilot™ technology are standard features across all of our LTE solutions.”

“MediaTek wants to make the world a more inclusive place, where the best, fully-connected user experiences do not mean expensive,” said Johan Lodenius, Chief Marketing Officer for MediaTek. “We are committed to creating powerful devices that accelerate the transformation of the global market and strive to put high-quality technology in the hands of everyone.”

The MT6735 platform includes:

Next-Generation 64-bit Mobile Computing System

  • Quad-core, up to 1.5GHz ARM Cortex-A53 64-bit processors with MediaTek’s leading CorePilot multi-processor control system, providing a performance boost for mainstream mobile devices
  • Mali-T720 GPU with support for the OpenGL ES 3.0 and OpenCL 1.2 APIs and premium graphics for gaming and UI effects

Advanced Multimedia Features

  • Supports low-power 1080p 30fps video playback of the emerging H.265 codec standard and legacy H.264, plus 1080p 30fps H.264 video recording
  • Integrated 13MP camera image signal processor with support for unique features like PIP (Picture-in-Picture), VIV (Video in Video) and Video Face Beautifier
  • Display support up to HD 1280×720 resolution with MediaTek MiraVision™ technology for DTV-grade picture quality

Integrated 4G LTE WorldMode Modem & RF

  • Rel. 9, Category 4 FDD and TDD LTE (150 Mb/s downlink, 50 Mb/s uplink)
  • 3GPP Rel. 8, DC-HSPA+ (42 Mb/s downlink, 11 Mb/s uplink), TD-SCDMA and EDGE are supported for legacy 2G/3G networks
  • CDMA2000 1x/EVDO Rev. A
  • Comprehensive RF support (B1 to B41) and the ability to mix multiple low-, mid-, and high bands for a global roaming solution

Integrated Connectivity Solutions

  • Supports dual-band Wi-Fi to effortlessly connect to a wide array of wireless routers and enable new applications like video sharing over Miracast
  • Bluetooth 4.0, supporting low-power connection to fitness gadgets, wearables and other accessories, such as Bluetooth headsets

MT6735 is sampling to early customers in Q4, 2014, with the first commercial devices to be available in Q2, 2015.

March 4, 2015:
LTE WorldMode with MediaTek MT6735 and MT6753, now ready for the USA market and worldwide!

ARMdevices.net (Charbax): MediaTek now supports worldwide LTE with the 64-bit MT6735 quad-core ARM Cortex-A53 and the 64-bit MT6753 octa-core ARM Cortex-A53. This is LTE Cat4; they claim to be 10-15% faster in single-SIM mode and 20-30% faster with dual-SIM support compared with a Qualcomm LTE WorldMode part. They also claim to use less power than competitors in standby, 3G and LTE modes. Many LTE telcos around the world have already certified these new MediaTek LTE parts, with support on Vodafone, Orange, T-Mobile, Three, Deutsche Telekom, China Unicom, China Mobile, Telefonica, Verizon and AT&T. This WorldMode processor is a big deal for MediaTek as it opens up the USA market, where MediaTek previously had little support from American telecoms, since its earlier 3G and LTE solutions mainly worked outside of the USA.

MediaTek Releases the MT6753: A WorldMode 64-bit Octa-core Smartphone SoC

Complete with integrated CDMA2000 technology that looks to meet the needs of the high-end smartphone market worldwide 

SPAIN, Barcelona – March 1, 2015 – MediaTek today announced the release of the MT6753, a 64-bit Octa-core mobile system-on-chip (SoC) with support for WorldMode modem capability. Coupled with the previously announced MT6735 quad-core SoC, the new MT6753 is designed with high-performance features for an ever more demanding mid-range market.

Reinforcing MediaTek’s commitment to driving the latest technology to customers across the world, the MT6753 SoC will be offered at a price that creates strong value for customers, especially as it comes with integrated CDMA2000 to ensure compatibility in every market. The eight ARM Cortex-A53 64-bit processors and Mali-T720 GPU help to ensure customers can meet graphics-heavy multimedia requirements while also maintaining battery efficiency for high-end devices.

“The launch of the MT6753 again demonstrates MediaTek’s desire to offer more power and choice to our 4G LTE product line, while also giving customers worldwide greater diversity and flexibility in their product layouts”, said Jeffrey Ju, Senior Vice President at MediaTek.

The MT6753, which is compatible with the previously announced MT6735 for entry smartphones, also enables handset makers to reduce time to market, simplify product development and manage product differentiation in a more cost effective way. MT6753 is sampling to customers now, with the first commercial devices to be available in Q2, 2015.

Key Features of MT6753 include:

Next-Generation 64-bit Mobile Computing System

  • Octa-core, up to 1.5GHz ARM Cortex-A53 64-bit processors with MediaTek’s leading CorePilot multi-processor technology, providing a perfect balance of performance and power for mainstream mobile devices
  • Mali-T720 GPU with support for the OpenGL ES 3.0 and OpenCL 1.2 APIs and premium graphics for gaming and UI effects

Advanced Multimedia Features

  • Supports low-power 1080p 30fps video playback of the emerging H.265 codec standard and legacy H.264, plus 1080p 30fps H.264 video recording
  • Integrated 16MP camera image signal processor with support for unique features like PIP (Picture-in-Picture), VIV (Video in Video) and Video Face Beautifier
  • Display support up to Full HD 1920×1080 60fps resolution with MediaTek MiraVision™ technology for DTV-grade picture quality

Integrated 4G LTE WorldMode Modem & RF

  • Rel. 9, Category 4 FDD and TDD LTE (150 Mb/s downlink, 50 Mb/s uplink)
  • 3GPP Rel. 8, DC-HSPA+ (42 Mb/s downlink, 11 Mb/s uplink), TD-SCDMA and EDGE are supported for legacy 2G/3G networks
  • CDMA2000 1x/EVDO Rev. A
  • Comprehensive RF support (B1 to B41) and the ability to mix multiple low-, mid-, and high bands for a global roaming solution

Integrated Connectivity Solutions

  • Supports dual-band Wi-Fi to effortlessly connect to a wide array of wireless routers and enable new applications like video sharing over Miracast
  • Bluetooth 4.0, supporting low-power connection to fitness gadgets, wearables and other accessories, such as Bluetooth headsets

January 2015: MediaTek leaked smartphone roadmap (note the MT67xx scheduled for 4Q 2015 and using 20nm technology, as well as that the new smartphone SoCs, except the very entry-level 3G ones with Cortex-A7, are based on Cortex-A53 cores while still using 28nm)
See also: The Cortex-A53 as the Cortex-A7 replacement core is succeeding as a sweet-spot IP for various 64-bit high-volume market SoCs to be delivered from H2 CY14 on [this same blog, Dec 23, 2013]

MediaTek leaked smartphone SoC roadmap -- Jan-2015
The availability dates shown above are for the first commercial devices!

MediaTek is rebranding the high-end smartphone SoC family as Helio (starting with the MT6795, now denominated the Helio X10), after the Greek word for sun, “helios”:

March 12, 2015:
MediaTek Helio explained by CMO Johan Lodenius at MWC 2015

MediaTek rebranding the high-end smartphone SoC family into Helio

March 5, 2015:
MediaTek SVP Jeffrey Ju introducing the new Helio branding for premium (P) and extreme (X) performance segments of the smartphone SoCs at MWC 2015


I/2. The video then continues with the presentation of MediaTek’s exclusive display technology quality enhancements (click on the links to watch the related brief videos):
MiraVision picture quality enhancement
SmartScreen as “the best viewing experience across extreme lighting conditions”
120Hz LCD display technology for a whole new experience (vs. the current 60Hz used by everyone)
– Super-SlowMotion, meaning 1/16 speed 480fps video playback (world’s first)
Instant Focus, meaning phase-detection autofocus (PDAF) technology on mobile devices cameras
– preliminary information on the new high-end Helio SoC with the new Cortex-A72, relying on 20nm technology, MiraVision Plus and their 3rd-generation modem in the 2nd half of 2015 (so it is quite likely the MT67xx mentioned in the above roadmap)

Note that 3rd-party companies are providing additional imaging enhancements to the device manufacturers, like the ones demonstrated by ArcSoft for the Lenovo Golden Warrior Note 8 TD-LTE (A936) at MWC 2015. Designed with Chinese (TD) and global market capabilities in mind, and available there since Dec ’14 for ¥998 ($160), it is based on the MT6752 octa-core, which provides the ARM Mali™-T760 GPU. ArcSoft is exploiting the GPU Compute capabilities for the additional imaging features shown in the video below:


I/3. CorePilot™ 2.0 especially targeted for the extreme performance tablet and smartphone markets

MediaTek To Redefine the Android Tablet Industry with world-first ARM® Cortex®-A72-based tablet SoC – MT8173

Revolutionary new 64-bit ARM processor ramps up tablet performance and battery life for heavy content Android users

SPAIN, Barcelona – March 1, 2015 – MediaTek today announced the first tablet system-on-chip (SoC) in a family that features an ARM® Cortex®-A72 processor, the industry’s highest-performing mobile CPU. The quad-core MT8173 is designed to maximize the benefits of the new processor and greatly increase tablet performance, while extending battery life to ensure a premium tablet experience. The MT8173 meets the growing demand for 4K Ultra HD content and graphic-heavy gaming by everyday mobile computing device users.

The MT8173 is designed with a 64-bit multi-core big.LITTLE architecture that combines two Cortex-A72 CPUs and two Cortex-A53 CPUs, extending performance and power efficiency further. The MT8173 boasts a six-fold increase in performance compared to the MT8125 released in 2013. It offers up to 2.4GHz performance, supports OpenCL with the deployment of MediaTek CorePilot™ 2.0, and enables heterogeneous computing between the CPU and GPU. The SoC also ensures the ultimate in display clarity and motion fluency on a 120Hz display, promising smooth scrolling with crystal clarity as compared to a normal 60Hz display.

“MT8173 highlights the significant shift in how mobile devices, such as Android tablets, are used and, with the combination of ARM’s latest technology, we are delivering a platform that answers the growing demand for improved mobile multimedia performance and power usage. By presenting CPU specs that outperform any other device currently on the market, we are bringing PC-like performance to the tablet form factor, reinforcing MediaTek’s continued commitment to deliver premium technology to everyone across the globe,” said Joe Chen, Senior Vice President of MediaTek.

“MediaTek has been a strong adopter of ARM big.LITTLE processing architecture, extending it with CorePilot, to deliver extreme performance, while maintaining power efficiency,” said Noel Hurley, General Manager, CPU group, ARM. “Decisively and quickly incorporating the second-generation of our 64-bit technology into a market-ready product, underscores the partnership between ARM and MediaTek.”

The MT8173 platform features:

True Heterogeneous 64-bit Multi-Core big.LITTLE architecture up to 2.4GHz

  • Features ARM Cortex®-A72 and ARM Cortex®-A53 64-bit CPU
  • Big cores and LITTLE cores can run at full speed at the same time to meet peak performance requirements
  • Performance of up to 2.4GHz

Imagination PowerVR GX6250 GPU

  • Supports OpenGL ES 3.1, OpenCL for future applications
  • Delivers 350Mtri/s and 2.8 Gpix/s performance
  • Provides uncompromised user experience for WQXGA display at 60fps

Comprehensive Multimedia Features

  • 120Hz mobile display
  • Ultra HD 30fps H.264/HEVC(10-bit)/VP9 hardware video playback
  • WQXGA display support with TV-grade picture quality enhancement
  • HDMI and Miracast support for multi-screen applications
  • 20MP camera ISP with video face beautify and LOMO effects

Security hardware accelerator

  • Supports Widevine Level 1, Miracast with HDCP
  • HDCP 2.2 for premium video to 4k TV display

MT8173 is available for customers now, and will be featured in the first commercial tablets in the second half of this year. MT8173 is being demonstrated at 2015 Mobile World Congress in Barcelona, Spain at MediaTek’s booth – Hall 6, Stand 6E21.

MediaTek CorePilot 2.0 technology

March 5, 2015:
MediaTek CTO Kevin Jou on CorePilot 2.0 at MWC 2015


The video then continues with the presentation of the new:
– MT8173 with the world’s first Cortex-A72 in the “big” role
– WorldMode modem technology with LTE Category 6

CrossMount technology as “not yet another DLNA solution”, as it does whatever DLNA can plus a lot more


II. Brand New Strategic Initiatives

Feb 27, 2015:
II/1. CrossMount
Unite your devices: Open up new possibilities

Technology makes it easy to share the things we love, but only when we use it in a certain way.

Making a video call just needs a smartphone with a camera, for instance, but what if you want to talk using the big screen on your HDTV? And what happens when you want to watch video from your set-top IPTV box on your tablet when you’re lying in bed — and use your smartwatch as a remote control?

With technology playing an increasingly important part in our lives, these are the kind of problems we can expect to face every day. And they’re the kind of problems MediaTek solves with CrossMount.

MediaTek CrossMount is a new standard for sharing hardware and software resources between a whole host of consumer electronics.

Based on the UPnP protocol, CrossMount connects compatible devices wirelessly, using either a home Wi-Fi network or a Wi-Fi Direct connection, to allow one to seamlessly access the features of another.

So you can start watching streaming video on the living room TV, for example, then switch it to your tablet when you move to another room, or use your TV’s speakers for a hands-free phone call with your smartphone. The possibilities are endless.

The CrossMount Alliance for MediaTek partners, customers and developers makes developing CrossMount applications as easy as possible; many brands and developers are already on board.

CrossMount will be available in late 2015 for MediaTek-based Android devices.

MediaTek Introduces a New Convergence Standard for Cross-device Sharing with CrossMount

Fast and easy sharing of content, hardware and software resources enable multiple devices to combine and act together as a single, more powerful device

SPAIN, Barcelona – March 1, 2015– MediaTek today announced CrossMount – a new technology that simplifies hardware and software resource sharing between different consumer devices. Designed to be a new standard in cross-device convergence, the CrossMount framework ensures any compatible device can seamlessly use and share hardware or software resources authorized by the user. CrossMount is an open and simple-to-implement technology for the wide ecosystem of MediaTek customers and partners that opens the possibilities for multiple devices effectively working as one or sharing applications and hardware resources.

CrossMount defines its service mounting standard based on the UPnP protocol, and can be implemented primarily on Android and Linux as well as other platforms. CrossMount works through simple discovery, pairing, authorization and use of both hardware and software resources across smartphones, tablets and TVs. Communication is achieved between devices either via home gateways (wireless LAN) or peer-to-peer (Wi-Fi Direct). Discovery and sharing are granted through an easy software implementation that allows all Wi-Fi capable devices to share resources without the need for cloud servers.

“Consumers have adopted a wide array of Internet-connected devices at home, in schools and workplaces; CrossMount sets the new standard for easy cross-device interaction and resource sharing. We are particularly keen to open up this innovation to our wide ecosystem of customers and partners around the world, unleashing their imagination to create new immersive experiences that further enrich peoples’ lives,” said Joe Chen, Senior Vice President, MediaTek.

With CrossMount enabled devices, for example, viewers can simply pair their TV sound to their smartphone earphones or use their smartphone microphone as a voice controller to search content on their smart TV.  This is a breakthrough in user experience as the CrossMount standard means several devices can act as one together rather than simply share content.

“CrossMount is a lot more than mirroring from phone to TV – as has already been developed within the industry”, added Joe Chen. “CrossMount goes the extra mile with hardware and software capability sharing between smart devices, thereby creating many useful and more complex use cases, such as mounting a smartphone camera to a TV and enabling the TV for a video conferencing session.”

To further drive the adoption of CrossMount as an industry standard, MediaTek is establishing the CrossMount Alliance to bring its wide ecosystem of partners and customers together and explore new possibilities to drive the technology forward. CrossMount will be open for developers to further expand the ability for innovative and new applications to be created, potentially changing the way we use and share devices and content. Changhong, Hisense, Lenovo and TCL are the first MediaTek customers to support CrossMount.

CrossMount will be made available to MediaTek customers and partners in the third quarter for Android-based smartphone, tablet and TV products, with devices expected on the market by end of this year.
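
Since CrossMount's service mounting is defined on top of UPnP, a feel for the underlying discovery step can be had from plain SSDP, the UPnP discovery protocol. The C++ sketch below (POSIX sockets, Linux) multicasts a standard M-SEARCH request and prints whatever devices answer; it uses only generic UPnP conventions and is not MediaTek's CrossMount API.

```cpp
// Minimal SSDP (UPnP discovery) probe -- a sketch of the discovery step
// CrossMount builds on. This is plain SSDP as defined by the UPnP
// specification, NOT MediaTek's CrossMount API.
#include <arpa/inet.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <unistd.h>
#include <cstdio>

int main() {
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    if (sock < 0) { perror("socket"); return 1; }

    // SSDP multicast group and port defined by the UPnP specification.
    sockaddr_in dst{};
    dst.sin_family = AF_INET;
    dst.sin_port = htons(1900);
    inet_pton(AF_INET, "239.255.255.250", &dst.sin_addr);

    const char msearch[] =
        "M-SEARCH * HTTP/1.1\r\n"
        "HOST: 239.255.255.250:1900\r\n"
        "MAN: \"ssdp:discover\"\r\n"
        "MX: 2\r\n"
        "ST: upnp:rootdevice\r\n\r\n";
    sendto(sock, msearch, sizeof(msearch) - 1, 0,
           (sockaddr*)&dst, sizeof(dst));

    // Print whatever devices answer within the timeout.
    timeval tv{3, 0};
    setsockopt(sock, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv));
    char buf[2048];
    ssize_t n;
    while ((n = recv(sock, buf, sizeof(buf) - 1, 0)) > 0) {
        buf[n] = '\0';
        printf("--- response ---\n%s\n", buf);
    }
    close(sock);
    return 0;
}
```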


II/2. LinkIt™ One Development Platform for wearables and IoT

March 20, 2015: MediaTek Labs – IoT Lab at 4YFN Barcelona

See how MediaTek Labs supported the winning team, Playdrop, who took advantage of the MediaTek LinkIt™ ONE development board to build an innovative water monitoring and control system prototype. Playdrop was just one of the competitors at IoT Lab, where developers from all over the world formed new teams and had less than two days to ideate, create business cases and working prototypes for the Internet of Things (IoT).
You can also hear from VP of MediaTek Labs, Marc Naddell, about how our development platforms will be supporting more developers on their journey into Wearables and IoT devices.

To find out more about MediaTek Labs & our offerings:
LinkIt ONE development platform: http://labs.mediatek.com/one
MediaTek Cloud Sandbox: http://labs.mediatek.com/sandbox
Get the tools you need to build your own Wearables and IoT devices, register now: http://labs.mediatek.com/register

Feb 18, 2015: What is MediaTek LinkIt™ ONE Development Platform?

MediaTek LinkIt™ ONE development platform enables you to design and prototype Wearables and Internet of Things (IoT) devices, using hardware and an API that are similar to those offered for Arduino boards.

The platform is based around the world’s smallest commercial System-on-Chip (SoC) for Wearables, MediaTek Aster (MT2502). The SoC also works with MediaTek’s energy-efficient Wi-Fi and GNSS companion chipsets. This means you can easily create devices that connect to other smart devices or directly to cloud applications and services.

To make it easy to prototype Wearables and IoT devices and their applications, the platform delivers:

  • The LinkIt ONE Software Development Kit (SDK) for the creation of apps for LinkIt ONE devices. This SDK integrates with the Arduino software to deliver an API and development process that will be instantly familiar.
  • The LinkIt ONE Hardware Development Kit (HDK) for prototyping devices. Based on a MediaTek hardware reference design, the HDK delivers the LinkIt ONE development board from Seeed Studio.

Key features of LinkIt ONE development platform:

  • Optimized performance and power consumption to offer consumers appealing, functional Wearables and IoT devices
  • Based on MediaTek Aster (MT2502) SoC, offering comprehensive communications and media options, with support for GSM, GPRS, Bluetooth 2.1 and 4.0, SD Cards, and MP3/AAC Audio, as well as Wi-Fi and GNSS (hardware dependent)
  • Delivers an API for accessing key features of the Aster SoC that is similar to the Arduino API, enabling existing Arduino apps to be quickly ported and new apps to be created with ease
  • LinkIt ONE developer board from partner Seeed Studio with a similar pin-out to the Arduino UNO, enabling a wide range of peripherals and circuits to be connected to the board
  • LinkIt ONE SDK (for Arduino) offering instant familiarity to Arduino developers and an easy-to-learn toolset for beginners

LinkIt ONE SDK / LinkIt ONE HDK

MediaTek LinkIt™ ONE Development Platform

LinkIt ONE architecture

Running on top of the Aster (MT2502) and, where used, its companion GNSS and Wi-Fi chipsets, the LinkIt ONE development platform is based on an RTOS kernel. On top of this kernel is a set of drivers, middleware and protocol stacks that expose the features of the chipsets to a framework. A run-time environment then provides services to the Arduino porting layer that delivers the LinkIt ONE API for Arduino. The API is used to develop Arduino Sketches with the LinkIt ONE SDK (for Arduino).

MediaTek LinkIt ONE architecture (*MT3332 (GPS) and MT5931 (Wi-Fi) are optional)
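
For a feel of the Arduino-style programming model this porting layer exposes, here is a minimal sketch of the kind the LinkIt ONE SDK compiles. It uses only the standard Arduino core API; the LED pin number is an assumption based on the board's Arduino UNO-like pin-out, and LinkIt-specific libraries (GSM/GPRS, Wi-Fi, GNSS) would be additional includes on top of this.

```cpp
// A minimal Arduino-style sketch of the kind the LinkIt ONE SDK targets.
// Only the standard Arduino core API is used; LinkIt-specific libraries
// for GSM/GPRS, Wi-Fi or GNSS are not shown here.
const int LED_PIN = 13;  // assumption: pin 13, mirroring the Arduino UNO
                         // pin-out the board follows

void setup() {
  pinMode(LED_PIN, OUTPUT);     // configure the GPIO as an output
  Serial.begin(115200);         // serial monitor for debug output
}

void loop() {
  digitalWrite(LED_PIN, HIGH);  // LED on
  delay(500);
  digitalWrite(LED_PIN, LOW);   // LED off
  delay(500);
  Serial.println("heartbeat");  // visible in the Arduino serial monitor
}
```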

Hardware core: Aster (MT2502)

The hardware core for the LinkIt ONE development platform is MediaTek Aster (MT2502). This chipset also works with our Wi-Fi and GNSS chips, offering high performance and low power consumption to Wearables and IoT devices.

MediaTek Aster (MT2502)
Aster’s highly integrated System-on-Chip (SoC) design avoids the need for multiple chips, meaning smaller devices and reduced costs for device creators, as well as eliminating the need for compatibility tests.

With Aster, it’s now easier and cheaper for device manufacturers and the maker community to produce desirable, functional wearable products.

Key features

  • The smallest commercial System-on-Chip (5.4mm × 6.2mm) currently on the market
  • CPU core: ARM7 EJ-S 260MHz
  • Memory: 4MB RAM, 4MB Flash
  • PAN: Dual Bluetooth 2.1 (SPP) and 4.0 (GATT)
  • WAN: GSM and GPRS modem
  • Power: PMU and charger functions, low power mode with sensor hub function
  • Multimedia: Audio (list formats), video (list formats), camera (list formats/resolutions)
  • Interfaces: External ports for LCD, camera, I2C, SPI, UART, GPIO, and more

MediaTek Aster (MT2502) vs the competition

Get started with the LinkIt ONE development platform

Nov 12, 2014: MediaTek Labs – LinkIt workshop presentation


MediaTek Labs technical expert Pablo (Yuhsian) Sun provides an overview of the LinkIt Development Platform, in this presentation recorded at XDA:DevCon. Pablo describes what LinkIt is and discusses why its capabilities — such as support for Wi-Fi, SMS, Grove peripherals, and more — make it the ideal, cost-effective tool for prototyping wearable and IoT devices. He also covers the LinkIt ONE board, offering an in-depth look at its hardware, introduces the APIs, and shows you how software is developed with Arduino and the LinkIt SDK. Links to additional resources are also provided. If you haven’t used the LinkIt development platform, this video provides you with all the basics to get started.

MediaTek Launches LinkIt™ Platform for Wearables and Internet of Things

TAIWAN, Hsinchu – June 3, 2014 – MediaTek today announced LinkIt™, a development platform built to accelerate the wearable and Internet of Things (IoT) markets. LinkIt integrates MediaTek’s Aster System on Chip (SoC), the smallest wearable SoC currently on the market. The MediaTek Aster SoC is designed to enable the developer community to create a broad range of affordable wearable and IoT products and solutions, for the billions of consumers in the rising Super-mid market to realize their potential as Everyday Geniuses.

Key features of MediaTek Aster and LinkIt:

  • MediaTek Aster, the smallest SoC in a package size of 5.4×6.2mm specifically designed for wearable devices.
  • LinkIt integrates MediaTek’s Aster SoC and is a developer platform supported by reference designs that enable the creation of various form factors, functionalities, and internet-connected services.
  • Synergies between the microprocessor unit and communication modules, facilitating development and saving time in new device creation.
  • Modularity in software architecture provides developers with a high degree of flexibility.
  • Supports over-the-air (OTA) updates for apps, algorithms and drivers, which enable a “push and install” software stack (named MediaTek Capsule) from phones or computers to devices built with MediaTek Aster.
  • Plug-in software development kit (SDK) for Arduino and Visual Studio. Support for Eclipse is planned for Q4 this year.
  • Hardware Development Kit (HDK) based on LinkIt board by third party.

“MediaTek is now in a unique position to assume leadership by accelerating development for wearables and IoT, thanks to our LinkIt platform,” said J.C. Hsu, General Manager of New Business Development at MediaTek.  “We are enabling an ecosystem of device makers, application developers and service providers to create innovations and new solutions for the Super-mid market.”

Eric Li, Vice President of China’s Internet giant Baidu, said, “Baidu provides a wealth of services for its users on our Internet portal, and our offerings will enable MediaTek-powered devices to do much more than they already can. The IoT is inter-connecting devices, and we’re connecting people with information via such devices. Our partnership with MediaTek will bring both of us closer to our respective goals.”

Gonzague de Vallois, Senior Vice President of Gameloft, another one of MediaTek’s ecosystem partners, said, “The wearable devices era is a fascinating one for a game developer. Proliferation of devices equipped with all sorts of different sensors and measured information from human body are creating possibilities for us to develop games that are played differently and in ways that were never imagined before. We are pleased to be a partner of MediaTek, who is enabling the wearable devices future for us to continuously bring innovative games to gamers around the world.”

The launch of LinkIt is a part of MediaTek’s wider initiative for the developer community called MediaTek Labs™ which will officially launch later this year. MediaTek Labs will stimulate and support the creation of wearable devices and IoT applications based on the LinkIt platform. Developers and device makers who are interested in joining the MediaTek Labs program are invited to email labs-registration@mediatek.com to receive a notification once the program launches. For more information and ongoing updates, please go to http://labs.mediatek.com.

Oct 30, 2014:
LinkIt ONE Plus version from Seeed Studio (http://www.seeedstudio.com/depot/LinkIt-ONE-p-2017.html)


Jan 3, 2015:
II/3. What is MediaTek LinkIt™ Connect 7681 development platform?

There is an increasing trend towards connecting every imaginable electrical or electronic device found in the home. For many of these applications developers simply want to add the ability to remotely control a device — turn on a table lamp, adjust the temperature setting of an air-conditioner or unlock a door. This is where the MediaTek MT7681 comes in.

MediaTek MT7681

MediaTek MT7681 is a compact Wi-Fi System-on-Chip (SoC) for IoT devices with an embedded TCP/IP stack. By adding the MT7681 to an IoT device, it can connect to other smart devices or to cloud applications and services. Connectivity on the MT7681 is achieved using Wi-Fi, in either Wi-Fi station or access point (AP) mode.

In Wi-Fi station mode, MT7681 connects to a wireless AP and can then communicate with web services or cloud servers. A typical use of this option would be to enable a user to control the heating in their home from a home automation website.

To simplify the connection of an MT7681 chip to a wireless AP in Wi-Fi station mode, the MediaTek Smart Connection APIs are provided. These APIs enable a smart device app to remotely provision an MT7681 chip with AP details (SSID, authentication mode and password).

In AP mode, an MT7681 chip acts as an AP, enabling other wireless devices to connect to it directly. Using this mode, for example, the developer of a smart light bulb could offer users a smartphone application that enables bulbs to be controlled from within the home.

To control the device an MT7681 is incorporated into, the chip provides five GPIO pins and one UART port. In addition, PWM is supported in software for applications such as LED dimming.
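
To illustrate the station-mode control flow described above, the C++ sketch below plays the role of a host application driving an MT7681-based device over a plain TCP socket. The device IP address, port and "LED:ON" command are invented for illustration only; real device firmware, built with the Andes-based SDK, defines its own protocol.

```cpp
// Hypothetical host-side controller for an MT7681-based device in station
// mode. The device address, port 5000 and the "LED:ON" command are invented
// for illustration -- a real product defines its own protocol in firmware
// built with the LinkIt Connect 7681 SDK.
#include <arpa/inet.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>

int main() {
    int sock = socket(AF_INET, SOCK_STREAM, 0);
    if (sock < 0) { perror("socket"); return 1; }

    sockaddr_in dev{};
    dev.sin_family = AF_INET;
    dev.sin_port = htons(5000);                          // assumed port
    inet_pton(AF_INET, "192.168.1.50", &dev.sin_addr);   // assumed device IP

    if (connect(sock, (sockaddr*)&dev, sizeof(dev)) < 0) {
        perror("connect");
        return 1;
    }
    const char cmd[] = "LED:ON\n";                       // invented command
    send(sock, cmd, sizeof(cmd) - 1, 0);

    char reply[128];
    ssize_t n = recv(sock, reply, sizeof(reply) - 1, 0);
    if (n > 0) { reply[n] = '\0'; printf("device replied: %s\n", reply); }
    close(sock);
    return 0;
}
```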

MediaTek LinkIt Connect 7681 development platform

To enable developers and makers to take advantage of the features of the MT7681, MediaTek Labs offers the MediaTek LinkIt Connect 7681 development platform, consisting of an SDK, HDK and related documentation.

For software development MediaTek LinkIt Connect 7681 SDK is provided for Microsoft Windows and Ubuntu Linux. Based on the Andes Development Kit, the SDK enables developers to create firmware to control an IoT device in response to instructions received wirelessly.

For IoT device prototyping, the LinkIt Connect 7681 development board is provided. The development board consists of a LinkIt Connect 7681 module, micro-USB port and pins for each of the I/O interfaces of the MT7681 chip. This enables you to quickly connect external hardware and peripherals to create device prototypes. The LinkIt Connect 7681 module, which measures just 15x18mm, is designed to easily mount on a PCB as part of production versions of an IoT device.

Key Features of MT7681

  • Wi-Fi station and access point (AP) modes
  • 802.11 b/g/n (in station mode) and 802.11 b/g (in AP mode)
  • Smart connection APIs to easily create Android or iOS apps to provision a device with wireless AP settings
  • TCP/IP stack
  • Firmware upgrade over UART, APIs for FOTA implementation
  • Software PWM emulation for LED dimming

Jan 15, 2015:
II/4. Join MediaTek Labs

MediaTek Labs: The best free resources for Wearables and IoT

Get the tools and resources you and your company need to go from idea to prototype to product.

Register now to get access to:

  • MediaTek SDKs
  • MediaTek hardware reference designs
  • Comment posting in our active developer forum
  • Private messaging with other MediaTek Labs members
  • Our solutions catalog, where you can share your project privately with MediaTek to unlock our support and matchmaking services

MediaTek Labs gives you the help you need to develop innovative hardware and software based on MediaTek products. From smart light bulbs, to the next-generation fitness tracker and the exciting world of the smartwatch, you can make your journey with our help.

As a registered Labs member you’ll be able to put your project in front of our business development team, who’ll help you find the partners you need to get you on the road to success. We’re here to help guide you through the exciting possibilities offered by the next wave in developer opportunities: Wearables and IoT.

MediaTek Labs is free to join:

Register today!

About MediaTek Labs

MediaTek is a young and entrepreneurial company that has grown quickly into a market leader. We identify with creative and driven pioneers in the maker and developer communities, and recognize the benefits of building an ecosystem that fosters your talents and your efforts to innovate.

MediaTek Labs is the developer hub for all our products. It builds on our track record for delivering industry-leading reference designs that offer the shortest time-to-market for our extensive customer and partner base.

MediaTek Launches Labs Developer Program to Jumpstart Wearable and IoT Device Creation

Unveils LinkIt™ platform; simplifies the development of hardware and software for developers, designers and makers

TAIWAN, Hsinchu — Sept 22, 2014 — MediaTek today launched MediaTek Labs (http://labs.mediatek.com), a global initiative that allows developers of any background or skill level to create wearable and Internet of Things (IoT) devices. The new program provides developers, makers and service providers with software development kits (SDKs), hardware development kits (HDKs), and technical documentation, as well as technical and business support.

“With the launch of MediaTek Labs we’re opening up a new world of possibilities for everyone — from hobbyists and students through to professional developers and designers — to unleash their creativity and innovation,” says Marc Naddell, vice president of MediaTek Labs. “We believe that the innovation enabled by MediaTek Labs will drive the next wave of consumer gadgets and apps that will connect billions of things and people around the world.”

The Labs developer program also features the LinkIt™ Development Platform, which is based on the MediaTek Aster (MT2502) chipset. The LinkIt Development Platform is one of the best-connected platforms, offering excellent integration for the package size and doing away with the need for additional connectivity hardware. LinkIt makes creating prototype wearable and IoT devices easy and cost-effective by leveraging MediaTek’s proven reference design development model. The LinkIt platform consists of the following components:

  • System-on-Chip (SoC) — MediaTek Aster (MT2502), the world’s smallest commercial SoC for Wearables, and companion Wi-Fi (MT5931) and GPS (MT3332) chipsets offering powerful, battery-efficient technology.
  • LinkIt OS — an advanced yet compact operating system that enables control software and takes full advantage of the features of the Aster SoC, companion chipsets, and a wide range of sensors and peripheral hardware.
  • Hardware Development Kit (HDK) — Launching first with LinkIt ONE, a co-design project with Seeed Studio, the HDK will make it easy to add sensors, peripherals, and Arduino Shields to LinkIt ONE and create fully featured device prototypes.
  • Software Development Kit (SDK) — Makers can easily migrate existing Arduino code to LinkIt ONE using the APIs provided. In addition, they get a range of APIs to make use of the LinkIt communication features: GSM, GPRS, Bluetooth, and Wi-Fi.

To ensure developers can make the most of the LinkIt offering, the MediaTek Labs website includes a range of additional services, including:

  • Comprehensive business and technology overviews
  • A Solutions Catalog where developers can share information on their devices, applications, and services and become accessible for matchmaking to MediaTek’s customers and partners
  • Support services, including a comprehensive FAQ, discussion forums that are monitored by MediaTek technical experts, and — for developers with solutions under development in the Solutions Catalog — free technical support.

“While makers still use their traditional industrial components for new connected IoT devices, with the LinkIt ONE hardware kit as part of the MediaTek LinkIt Development Platform, we’re excited to help makers bring prototypes to market faster and more easily,” says Eric Pan, founder and chief executive officer of Seeed Studio.

Makers, designers and developers can sign up to MediaTek Labs today and download the full range of tools and documentation at http://labs.mediatek.com.

Mar 2, 2015:
MediaTek Labs Partner Connect

Taking any Wearables or IoT project beyond the prototype stage can be a daunting prospect, whether you’re a small startup or an established company making its first foray into new devices.

To make the path to market easier, MediaTek Labs Partner Connect will help you find the partners you need to make your idea a reality. The program includes some of the world’s best EMS, OEM and ODM companies, as well as distributors of MediaTek products and suppliers of device components. But the real benefit comes from our MediaTek Labs experts, who will work with you to match your requirements with the right partner or partners.

Getting started is simple. Once you have registered your company on MediaTek Labs, submit your Wearables or IoT project to our confidential Device Solutions Catalog. And it doesn’t matter where you are in the development process — perhaps you have an early prototype running on a LinkIt development board, or full CAD and BOM for your product — our experts can help. Simply select the “Seeking Partner” option when you submit your device for review by MediaTek Labs and, once approved, one of our partner managers will review your requirements and get to work finding the right partners for you.

Designers and developers

Our design partners can assist with specific expertise in electrical engineering, mechanical engineering and computer aided design (CAD), industrial design, regulatory compliance testing, software and more. Whether you’re looking for specific expertise to assist with a single aspect of your project or want a turnkey solution that delivers the vision of your prototype directly to manufacturing, these partners can help.

Manufacturers

From the late stages of development, where you need batches of production prototypes, through low-volume pilot runs for consumer testing and marketing, to full production, these partners can help. From your designs they’re able to turn your Wearables or IoT idea into a commercial consumer or enterprise product. They’ll do this employing the latest in manufacturing technology, in flexible facilities that can adapt to your needs as you find success in the market.

MediaTek distributors

If you have your own manufacturing facilities or partner and are looking to source MediaTek chipset modules in volumes beyond retail, these partners will be able to help you. In addition to providing for your volume requirements, they’ll also provide additional technical information and support to ensure you make optimal use of MediaTek chipsets in your product and manufacturing process.

Component suppliers

This group of partners will be able to assist you from prototype to production: from selecting the components for your pre-production prototypes through to production run quantities of specific components. Batteries, sensors, screens and much more can be supplied by these partners, from evaluation batches to production quantities delivered to your manufacturing facility or manufacturing partner.

MediaTek Labs Launches New Partner Program to Help Bring Wearables and IoT Devices to Market Faster

MediaTek Labs Partner Connect provides matchmaking for developers and partners in support of MediaTek LinkIt™

SPAIN, Barcelona – March 2, 2015 – As part of its global developer initiative, MediaTek today announced MediaTek Labs Partner Connect at this year’s Mobile World Congress. The new supply chain partner program will help developers of Wearables and IoT devices design and launch their products based on MediaTek LinkIt by matching them with members of MediaTek’s extensive network of partners.

Today’s launch complements existing development platform offerings from MediaTek Labs and aims to reduce time to market for developers of new Wearables and IoT devices:

  • MediaTek LinkIt is a portfolio of development platforms – currently consisting of MediaTek LinkIt ONE and MediaTek LinkIt Connect 7681 – for Wearables and IoT, offering a broad range of connectivity options and the software and hardware development kits (SDKs, HDKs from Seeed Studio and modules from AcSiP) needed for makers to create their own devices powered by MediaTek chipsets.
  • MediaTek Cloud Sandbox (also launched today) is a complimentary cloud-based IoT platform and playground to store, display and remotely access IoT device data during prototyping.
  • MediaTek Labs Partner Connect will assist registered developers of MediaTek Labs in finding appropriate supply chain partners to help with design, development and manufacturing, sourcing of MediaTek chipset based modules and other key components. MediaTek ODM partners have available off-the-shelf reference designs to rapidly serve developers with different design capabilities.

“Taking any Wearables or IoT project beyond the prototype stage can be a daunting prospect, whether you’re a small startup or an established company making its first foray into new devices”, said Marc Naddell, VP of MediaTek Labs. “To make the path to market easier, MediaTek Labs Partner Connect will help developers find the partners they need to make their ideas a reality”.

To gain access to Partner Connect matchmaking services, Labs member companies submit their Wearables or IoT projects to the Labs confidential Device Solutions Catalog and select the “Seeking Partner” option. MediaTek Labs will then help evaluate the business case and technical feasibility of the device, and, once vetted and approved, a partner manager will get to work finding the right partners in MediaTek’s network.

MediaTek Labs Partner Connect is the latest initiative to expand the company’s 17-year legacy of working closely across a wide ecosystem of TV, phone, tablet, navigation, router, gaming and multimedia customers. Over the years, MediaTek has provided efficient, turnkey solutions that give these clients a cost-effective and rapid time to market for their new devices and has taken many start-ups from humble beginnings to established, global enterprises. Now MediaTek is extending its partner network to developers and makers in the Wearables and IoT space through MediaTek Labs Partner Connect.

To hear more about MediaTek Labs and see live demos of its unique offerings for developers, visit MediaTek’s booth at Mobile World Congress – Hall 6, Stand 6E21. To learn more about MediaTek Labs Partner Connect, visit http://labs.mediatek.com/partners.

Mar 2, 2015:
Introduction to MediaTek Cloud Sandbox

When prototyping Wearables and IoT devices, you may want to collect, sort and visualize the data captured by your prototype. You may also want to test how your device could be controlled remotely and make these features available to testers and collaborators.

To save you from having to find and pay for cloud services, MediaTek Cloud Sandbox offers you a free service that you can use to quickly prototype your planned cloud implementation.

Using a RESTful API, you collect data from your devices, which you can view in a powerful web-based dashboard. The dashboard offers a range of display and graphing options. You can then control your Wearables and IoT devices by issuing commands from the dashboard. In addition, a complementary smartphone app lets you review collected data and control your devices from anywhere.
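
As a rough sketch of that RESTful push flow, the C++ program below uploads a single datapoint with a plain HTTP POST. The endpoint path, the DEVICE_ID/DEVICE_KEY credentials and the JSON body shape are assumptions for illustration; the authoritative interface is the MCS API reference on labs.mediatek.com.

```cpp
// Sketch of pushing one datapoint to a cloud REST endpoint over plain HTTP.
// The host, path, deviceKey header and JSON body shape are assumptions for
// illustration -- consult the MCS API reference for the real interface.
#include <netdb.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>
#include <string>

int main() {
    addrinfo hints{}, *res;
    hints.ai_family = AF_INET;
    hints.ai_socktype = SOCK_STREAM;
    if (getaddrinfo("api.mediatek.com", "80", &hints, &res) != 0) return 1;

    int sock = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (connect(sock, res->ai_addr, res->ai_addrlen) < 0) {
        perror("connect");
        return 1;
    }

    // Assumed payload: one "temperature" data channel with a single value.
    std::string body =
        R"({"datapoints":[{"dataChnId":"temperature","values":{"value":"25"}}]})";
    std::string req =
        "POST /mcs/v2/devices/DEVICE_ID/datapoints HTTP/1.1\r\n"  // assumed path
        "Host: api.mediatek.com\r\n"
        "deviceKey: DEVICE_KEY\r\n"                               // assumed header
        "Content-Type: application/json\r\n"
        "Content-Length: " + std::to_string(body.size()) + "\r\n"
        "Connection: close\r\n\r\n" + body;
    send(sock, req.data(), req.size(), 0);

    char resp[1024];
    ssize_t n = recv(sock, resp, sizeof(resp) - 1, 0);
    if (n > 0) { resp[n] = '\0'; printf("%s\n", resp); }

    close(sock);
    freeaddrinfo(res);
    return 0;
}
```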

Key Features

  • Define Wearables and IoT prototype device data and other properties
  • Define data types such as geo-location, temperature, humidity and more
  • Create multiple devices from one profile
  • Push and Pull data between a device and the sandbox using a RESTful API
  • Remotely control devices using states, such as switch-state and more
  • Visualize data graphically
  • Receive notifications when data are collected or changed
  • Manage and control remotely, using the complementary mobile app
  • Create reports about prototypes and collected data
  • Perform FOTA firmware updates
  • Control access to data and devices with granular security control
  • Includes full API reference, FAQ and set of tutorials

Access MediaTek Cloud Sandbox

MediaTek Labs Helps Simplify Wearables and IoT Development with Free Cloud Service

MediaTek Cloud Sandbox accelerates device prototyping by offering developers a complimentary cloud service to host prototype-device data

SPAIN, Barcelona – March 2, 2015 – MediaTek Labs today announced general availability of its new Cloud Sandbox data platform to better help developers bring their ideas to life for the Internet of Things (IoT). The new service, which is free to all registered MediaTek Labs members globally, offers convenient storage of and access to data from wearable and IoT devices during prototyping.

“MediaTek recognizes the importance of a cloud-based IoT platform and playground to service developers and makers who are prototyping wearable or IoT devices”, said Marc Naddell, VP of MediaTek Labs. “With this complimentary offering from MediaTek Labs, IoT developers no longer need to set up and manage their own web server or source third-party cloud platform services. Instead they can focus on their IoT device prototyping and value proposition, accelerating the time from solution ideation to prototype and proof-of-concept”.

A considerable challenge for developers in the early stages of device creation is not only the management of large amounts of data but also finding a convenient and simple way to visualize the data and demo prototypes to collaborators. MediaTek Cloud Sandbox helps solve this with a variety of invaluable features, including:

  • Data storage and visual charting
  • Data monitoring with notifications
  • Device remote control
  • Firmware upgrades over-the-air (FOTA)
  • RESTful API support, TCP socket connection
  • Web or mobile app based access

MediaTek Cloud Sandbox (MCS) will be on display in MediaTek’s booth at Mobile World Congress – Hall 6, Stand 6E21 – with three compelling demonstrations, including:

  1. Wine Brewer – A MediaTek LinkIt™ ONE development board implementation that won first prize in the ITRI Mobilehero 2014 competition in Taiwan.
  2. Weather Station – A LinkIt ONE development board implementation that gathers and pushes real-time temperature, humidity and pressure data to MCS, and is able to have its fan controlled from MCS and the MCS companion mobile app.
  3. MediaTek LinkIt Connect 7681 demo – A LinkIt Connect 7681 development board implementation and its companion mobile app to demonstrate real-time LED color control.

MediaTek Labs was launched in September 2014 and continues to provide developers, makers and service providers with SDKs, HDKs and documentation, as well as technical and business support. To learn more about MediaTek Cloud Sandbox and get access, visit http://labs.mediatek.com/mcs.


MediaTek Ventures Logo
II/5. MediaTek Ventures launches to enable a new generation of world-class companies

MediaTek Allocates US$300m to Invest in New Business Opportunities

SPAIN, Barcelona – March 1, 2015 – MediaTek today announced the launch of MediaTek Ventures – a new strategic investment arm within the company. Headquartered in Hsinchu, Taiwan, MediaTek Ventures will initially invest in startups in Greater China, Europe, Japan and North America, with a US$300m reserve.

MediaTek Ventures will actively invest in innovative startups in semiconductors, systems and devices, Internet infrastructure, services and IoT, with the goal of creating a collaborative ecosystem around MediaTek’s corporate objectives in communication, computing, online media and analytics. Investments will include all stages of funding with a disciplined approach focused on value creation. Through MediaTek Ventures, the company is seeking to extend its 17-year heritage of innovation to the broader electronics value-chain, diversify product solutions, and monetize opportunities with the next generation of entrepreneurs.

“We will not constrain ourselves to any single region in pursuit of innovation and excellence. Through MediaTek Ventures, a new generation of world-class companies will be empowered. We are excited to enable entrepreneurs and start-ups to achieve their dreams, fostering companies that have the potential to create value for end-users around the world and solve the world’s biggest problems,” said David Ku, Chief Financial Officer, MediaTek.

Further details on the company’s investment strategy and roadmaps will be made available in the second half of 2015.

Entities interested in investment by MediaTek should send proposals to: ventures@mediatek.com. For more information, please visit www.mediatekventures.com.

March 5, 2015: MediaTek CFO David Ku on MediaTek Ventures at MWC 2015

MediaTek Ventures - Mission and current investment amount

MediaTek Investment Focus

MediaTek Cross-Platform Synergy and User Experience

MediaTek Geographic Focus

III. Stealth Strategic Initiatives (MWC 2015 timeframe)

III/1. SoC for Android Wear and Android based standalone wearables

Note that this initiative also covers standalone Android-based wearables (i.e. working with no reliance on an Android-based smartphone), as shown by the new production line of MediaTek’s lead and pilot partner Burg Wearables: Android 4.4 based Smart 3G & WiFi WatchPhones of 55g weight (and up), 10mm depth (and up) [this same blog, March 15, 2015]

Charbax: Best of MWC: MediaTek MT2601 Smartwatch and Smart Glass!!

MediaTek launches their MT2601 Android Wear ready (soon) smartwatch platform, and also shows off a Kopin micro-LCD smart glass solution on the MediaTek Aster MT2502. In my opinion, these are the best looking smartwatch and smart glass at Mobile World Congress 2015. The smartwatch that MediaTek is showing is designed by GoerTek and runs Android 4.4 for now, but Android 5.0 with the Android Wear UI is coming soon for the MT2601 platform, according to MediaTek. The MediaTek MT2502 runs an ARM11 core for the MediaTek LinkIt OS, while the MediaTek MT2601 is a dual-core ARM Cortex-A7 able to run full Android Lollipop, with Android Wear support coming soon. This is perhaps the optimal low-cost smartwatch and smart glass solution for the market, finally available from MediaTek.

Specs of the GoerTek MediaTek MT2601 Smartwatch:
– 1.5” circular TFT LCD 320×320
– IPX7 waterproofing
– BT/BLE, Wi-Fi, GPS, 3G cellular supported
– Android 4.4 OS (Lollipop Android Wear soon!)
– PPG heart-rate sensing
– Built-in microphone and speaker

Remark: This GoerTek smartwatch uses the same MediaTek technology as the WatchPhones from Burg Wearables. The software is, however, considerably better on the latter.

MediaTek Introduces MT2601 in Support of Google’s Android Wear Software

Higher energy efficiency and reduced component count of the MT2601 relative to competing chipsets translate into significant cost, size and usage time benefits

Las Vegas – Jan. 6, 2015 – MediaTek, a leading fabless semiconductor company for wireless communications and digital multimedia solutions, today announced its MT2601 System on Chip (SoC) for wearable devices based on Google’s Android Wear software. By enabling Android Wear on MT2601, MediaTek is offering a comprehensive platform solution for device makers to implement their own hardware and software, and introduces a multitude of possibilities in Android Wear devices for the fast-growing consumer class globally.

The MT2601 packs a robust set of features in its small size with 41.5 percent fewer components and lower current consumption when compared with other chipsets in the market. Its design advantages translate into lower bill of materials (BoM) costs, smaller printed circuit board (PCB) size and longer battery life, which in turn yield fashionable wearable devices with long usage times and affordable prices.

The MT2601 includes a 1.2 GHz dual-core ARM Cortex-A7, an ARM Mali-400 MP GPU, and supports qHD display resolution. The MT2601 interfaces with a whole host of external sensors and the wireless connectivity SoC MT6630 for Bluetooth – all in a PCBA footprint of less than 480 mm2. This small PCB size meets the design requirements of the widest variety of wearable devices in sports and fitness, location tracking, and various other categories. MediaTek is a strong supporter of Android Wear and will continue to evolve the MT2601 to align with the Android Wear road map.

“The MT2601 has an incredibly small die size and is highly optimized for cost and power performance. The platform solution, comprised of MT2601 integrated with Android Wear software, will fuel the maker revolution and empower the application developer community worldwide to create a broad range of innovative applications and services,” said J.C. Hsu, General Manager of New Business Development at MediaTek.

The MT2601 is in mass production now and ready for inclusion in Android Wear devices.

MediaTek Announces MT6630, World’s First Five-in-One Combo Wireless Connectivity SOC for Mobile Devices

TAIWAN, Hsinchu – 25 February, 2014 – MediaTek today announced the MT6630, the world’s first five-in-one combo wireless system-on-a-chip (SOC) to support full-featured smartphones, tablets and other premium mobile devices.

The MT6630 dramatically reduces the component count and eBOM while improving ease-of-design for manufacturers by eliminating external low noise amplifiers (LNAs) and integrating the Wi-Fi 2.4 GHz and 5 GHz power amplifiers (PAs), Bluetooth PA, and transmit-receive (T/R) switch into a PCBA footprint less than 65 mm2.

Key features

  • Dual-band single-stream 802.11a/b/g/n/ac with 20/40/80MHz channel bandwidth
  • 802.11v time of flight protocol support and management engines to enable higher accuracy of indoor positioning via Wi-Fi
  • Advanced support for Wi-Fi Direct Services and Miracast™ optimization for easier pairing, increased robustness, advanced use-cases and lower power
  • Bluetooth 4.1 with Classic, High-Speed and Low-Energy support, and ANT+ for compatibility with the latest fitness tracking, health monitoring and point of information devices and applications
  • Concurrent tri-band reception of GPS, GLONASS, Beidou, Galileo and QZSS with industry leading sensitivity, low power, positioning accuracy, and the longest prediction engine
  • FM transceiver with RDS/RBDS
  • Integrated engines and algorithms for full concurrent operation and co-existence, including industry-leading throughput during LTE transmission

MT6630 delivers full concurrent operation of all 5 systems operating at maximum compute intensity with no degradation compared to single-system operation while offloading the mobile device CPU for design ease and extended battery life.

With a focus on low power and digital home convergence, the MT6630 uses a configurable PA architecture to save current at commonly used power levels, including those used for Miracast™ Wi-Fi Direct services. The MT6630 implements advanced co-existence techniques, including for LTE, to deliver industry-leading throughputs. The MT6630 also supports Wi-Fi diversity for premium smartphones and tablets to improve antenna angle sensitivity and handheld scenarios.

“MT6630 makes it simple for manufacturers to bring mobile devices to market with sophisticated wireless features, lower power and uncompromised performance,” said SR Tsai, General Manager of MediaTek’s Connectivity Business Unit. “MT6630 furthers MediaTek’s focus to deliver the best experiences across the digital home and mobile applications by using its unique leadership position in digital TV host processors, smartphone platforms, and connectivity.”

The small-footprint design is available in 5 x 5mm WLCSP (Wafer Level Chip Scale Package) or a 7 x 7mm QFN (Quad Flat No-Leads) and requires only 44 components, which is around half that of other integrated wireless solutions.

The MediaTek MT6630 is sampling now and complements the recently announced MT6595 octa-core SOC with LTE for premium mobile devices. The first commercially available devices to use the MT6630 are expected in the second half of 2014.

Then there is a MediaTek Protea platform for Android based wearables:

For mobile networking the optional parts of the Protea have the following characteristics:
– 2G Quad Band GSM (900/1800/850/1900)
– 3G Mono Band WCDMA (2100 or 1900), GPRS, EDGE, HSDPA, HSDPA+


Potential generic tags when clustered (selected ones are in blue; a small tag-triage sketch follows the clusters):

#1 CLUSTER:
strategy: 671,000,000
"business development": 164,000,000
"time to market": 70,200,000
ecosystem: 53,200,000
"marketing strategy": 34,500,000
"business strategy": 25,800,000
rebranding: 6,750,000
"strategy development": 5,720,000
"market strategy": 1,430,000
"developer program": 751,000
"strategic investment": 609,000
"corporate objectives": 526,000
"partner ecosystem": 397,000
"design partners": 439,000
"supply chain partners": 331,000
"investment focus": 364,000
"reduce time to market": 305,000
"business development strategy": 417,000
"strategy development process": 268,000
"ecosystem partners": 177,000
"innovative startups": 159,000
"enable entrepreneurs": 39,300
"semiconductor market": 244,000
"smartphone strategy": 77,700
"strategic reasoning": 58,200
"semiconductor vendors": 49,000
"SoC market": 28,800
"chip strategy": 17,000
"SoC vendors": 11,300
"semiconductor strategy": 9,280
"SoC strategy": 4,330
"fabless strategy": 3,840
"MediaTek strategy": 616

#2 CLUSTER:
SoC: 238,000,000
"quad-core": 60,700,000
"octa-core": 2,660,000
"smartphone chip": 207,000
"mobile SoC": 63,400
"smartphone SoC": 43,700
"SoC family": 35,500
"tablet SoC": 23,500
"SoC market": 28,700
"smartphone SoC market": 14,200
"SoC vendors": 11,300
"system on a chip": 733,000
"system on chip": 625,000
"fabless semiconductor": 323,000
"fabless semiconductor vendors": 4,890

#3 CLUSTER:
"smartphone market": 964,000
"phone market": 632,000
"high-end": 159,000,000
"high-end market": 425,000
"LTE-Smartphone": 605,000
"high-end phone": 221,000
"high-end smartphone": 444,000
"high-end smartphone market": 67,900
"premium phone": 356,000
"premium smartphone": 348,000
"premium smartphones": 94,400
"premium market": 467,000
superphone: 435,000
superphones: 264,000
"superphone market": 4,640
"super-mid": 177,000
"super-mid market": 5,610
"super mid-range": 5,570
"mid-range": 61,000,000
"mid-range smartphone": 400,000
"mid-range smartphones": 153,000
"mid-range market": 89,500
mid range smartphones: 2,460,000
best mid range android phone: 15,800,000
"android phone": 30,200,000
"android smartphone": 13,800,000

#4 CLUSTER:
tablets: 578,000,000
"Android tablet": 22,200,000
"Android tablets": 15,100,000
"tablet market": 919,000
"premium tablet": 152,000
"high-end tablet": 111,000
"tablet chips": 26,000
"high-end tablet market": 8,450
"premium tablet market": 8,050

#5 CLUSTER:
"64-bit": 135,000,000
"forefront of technology": 8,740,000
"64-bit computing": 411,000
"extreme performance": 630,000
"CPU+GPU": 1,010,000
"heterogeneous computing": 312,000
"Heterogeneous System Architecture": 60,100
"Heterogeneous Multi-Processing": 42,800
"heterogeneous computing with OpenCL": 15,400
"big.LITTLE": 863,000
"ARM big.LITTLE": 95,800
"big.LITTLE Architecture": 22,400
"big.LITTLE computing": 18,400
"big.LITTLE cluster": 16,600
"multi-core processing": 153,000
"GPU compute": 92,300
"display technology": 1,070,000
HDTV: 84,200,000
"power efficiency": 819,000
"premium mobile computing": 762,000
"premium mobile computing experience": 17,800
LTE: 170,000,000
"LTE World Mode": 3,220
"LTE WorldMode": 2,280
"LTE with CDMA": 3,660
"integrated CDMA2000": 13,400
"development platform": 928,000
"reference design": 940,000
"cloud computing": 114,000,000
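
These hit counts are what drive the tag selection above. Purely as an illustration (the thresholds below are hypothetical, not the ones behind the blue selections), here is a minimal Python sketch of that kind of triage: keep the phrases popular enough to be searched for, yet specific enough not to drown among generic results.

```python
# Illustrative tag triage over Google hit counts. The counts are a subset of
# cluster #1 above; the low/high thresholds are made-up demonstration values.
CLUSTER_1 = {
    "strategy": 671_000_000,
    "business development": 164_000_000,
    "strategic investment": 609_000,
    "SoC strategy": 4_330,
    "MediaTek strategy": 616,
}

def select_tags(counts, low=1_000, high=1_000_000):
    """Keep phrases that are neither hopelessly generic nor unsearched."""
    return sorted(
        (phrase for phrase, hits in counts.items() if low <= hits <= high),
        key=lambda phrase: counts[phrase],
        reverse=True,
    )

print(select_tags(CLUSTER_1))
# ['strategic investment', 'SoC strategy']
```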

Satya Nadella on “Digital Work and Life Experiences” supported by “Cloud OS” and “Device OS and Hardware” platforms–all from Microsoft

Update: Gates Says He’s Very Happy With Microsoft’s Nadella [Bloomberg TV, Oct 2, 2014] + Bill Gates is trying to make Microsoft Office ‘dramatically better’ [The Verge, Oct 3, 2014]

This is the essence of the Microsoft Fiscal Year 2014 Fourth Quarter Earnings Conference Call (see also the Press Release and Download Files) for me: the new, extremely encouraging, overall setup of Microsoft in strategic terms (the table below is mine, based on what Satya Nadella said on the conference call):

image

These are extremely encouraging strategic advancements vis-à-vis the ones previously publicized in the following Microsoft-related posts of mine:

I see, however, the continuation of the Lumia story as particularly challenging under the above strategy, since the results of the previous, combined Ballmer/Elop (Nokia) strategy were extremely weak:

image

It is worthwhile to include here the videos Bloomberg published simultaneously with the Microsoft Fourth Quarter Earnings Conference Call:

Inside Microsoft’s Secret Surface Labs [Bloomberg News, July 22, 2014]

July 22 (Bloomberg) — When Microsoft CEO Satya Nadella defined the future of his company in a memo to his 127,100 employees, he singled out the struggling Surface tablet as key to a future built around the cloud and productivity. Microsoft assembled an elite team of designers, engineers, and programmers to spend years holed up in Redmond, Washington to come up with a tablet to take on Apple, Samsung, and Amazon. Bloomberg’s Cory Johnson got an inside look at the Surface labs.

Will Microsoft Kinect Be a Medical Game-Changer? [Bloomberg News, July 22, 2014]

July 23 (Bloomberg) — Microsoft’s motion detecting camera was thought to be a game changer for the video gaming world when it was launched in 2010. While appetite for it has since decreased, Microsoft sees the technology as vital in its broader offering as it explores other sectors like 3D mapping and live surgery. (Source: Bloomberg)

Why Microsoft Puts GPS In Meat For Alligators [Bloomberg News, July 22, 2014]

July 23 (Bloomberg) — At the Microsoft Research Lab in Cambridge, scientists track animals and map climate change all on the off chance they’ll stumble across the next big thing. (Source: Bloomberg)

To this it is important to add: How Pier 1 is using the Microsoft Cloud to build a better relationship with their customers [Microsoft Server and Cloud YouTube channel, July 21, 2014]

In this video, Pier 1 Imports discuss how they are using Microsoft Cloud technologies such as Azure Machine Learning to predict which product a customer might want to purchase next, helping to build a better relationship with their customers. Learn more: http://www.azure.com/ml

as well as:
Microsoft Surface Pro 3 vs. MacBook Air 13″ 2014 [CNET YouTube channel, July 21, 2014]

http://cnet.co/1nOygqh Microsoft made a direct comparison between the Surface Pro 3 and the MacBook Air 13″, so we’re throwing them into the Prizefight Ring to settle the score once and for all. Let’s get it on!

Surface Pro 3 vs. MacBook Air (2014) [CTNtechnologynews YouTube channel, July 1, 2014]

The Surface Pro 3 may not be the perfect laptop. But Apple’s MacBook Air is pretty boring. Let’s see which is the better device!

In addition, here are some explanatory quotes (for the new overall setup of Microsoft) worth including from the Q&A part of Microsoft’s (MSFT) CEO Satya Nadella on Q4 2014 Results – Earnings Call Transcript [Seeking Alpha, Jul. 22, 2014 10:59 PM ET]

Mark Moerdler – Sanford Bernstein

Thank you. And Amy one quick question, we saw a significant acceleration this quarter in cloud revenue, or I guess Amy or Satya. You saw acceleration in cloud revenue year-over-year what’s – is this Office for the iPad, is this Azure, what’s driving the acceleration and how long do you think we can keep this going?

Amy Hood

Mark, I will take it, and if Satya wants to add, obviously, he should do that. In general, I wouldn’t point to one product area. It was across Office 365, Azure and even CRM Online. I think there are some important dynamics that you could point to, particularly in Office 365; I really think over the course of the year we saw an acceleration in moving the product down market, increasingly into what we would call the mid-market and even small business, at pace. That’s a particular place I would tie back to some of the things Satya mentioned in the answer to your first question.

Improvements to analytics, improvements to understanding the use scenarios, improving the product in real time, understanding trial, ease of use, ease of sign-up – all of these things afford us the ability to go to different categories, different geos, different segments. In addition, when we initially moved many of our customers to Office 365, it came on one workload, and what we’ve increasingly seen is our ability to add more workloads and sell the entirety of the suite through that process. I also mentioned, in Azure, our increased ability to sell some of these higher-value services. So while I can speak broadly about all of them, I would generally attribute the strength to the completion of our product suite, the ability to enter new segments, and the ability to sell new workloads.

Satya Nadella

The only thing I would add is that it’s the combination of our SaaS offerings like Dynamics and Office 365, a public cloud offering in Azure, but also our private and hybrid cloud infrastructure, which also benefits because the cloud runs on our servers. So it’s that combination which makes us both unique and reinforcing. And the best example is what we are doing with Azure Active Directory: the fact that somebody gets on-boarded to Office 365 means that tenant information is in Azure AD, and the fact that the tenant information is in Azure AD is what makes EMS, our Enterprise Mobility Suite, more attractive to a customer managing iOS, Android or Windows devices. That network effect is really now helping us a lot across all of our cloud efforts.

Keith Weiss – Morgan Stanley

Excellent, thank you for the question, and a very nice quarter. First, to talk a little bit about the growth strategy for Nokia: you look to cut expenses pretty aggressively there, but this – particularly smartphones – is a very competitive marketplace. Can you tell us a little bit about the strategy for how you actually start to gain share with Lumia on a going-forward basis? And maybe give us an idea of what levels of share, or what kind of unit volumes, you are going to need to hit to get to breakeven in FY16?

Satya Nadella

Let me start, and Amy, you can add. So overall, we are very focused on, I would say, thinking about mobility share across the entire Windows family. I already talked in my remarks about how mobility for us even goes beyond devices, but for this specific question I would even say that we want to think about mobility beyond just one form factor of a mobile device, because I think that’s where the ultimate prize is.

But that said, we have, even on a year-over-year basis, seen increased volume for Lumia; it’s coming at the low end, in the entry smartphone market, and we are pleased with it. In many markets we now have over 10% share – that’s the first thing, I would sort of say, that we need to track country-by-country. And the key place where we are going to differentiate is looking at productivity scenarios, or the digital work and life scenarios, that we can light up on our phone in unique ways.

When I can take my Office Lens app, use the camera on the phone to take a picture of anything, and have it automatically OCR-recognized and put into OneNote in searchable fashion, that’s a unique scenario. What we have done with Surface and PPI shows us that there is a lot more we can do with phones by thinking broadly about productivity. So this is not just about Word or Excel on your phone; it is about thinking about Cortana and Office Lens and those kinds of scenarios in compelling ways. And that’s what, at the end of the day, is going to drive our differentiation and higher-end Lumia phones.

Amy Hood

And Keith, to answer your specific question regarding FY16: I think we’ve made the difficult choices to get the cost base to a place where we can deliver on the exact scenario Satya outlined, and we do assume that we continue to grow our units through the year and into 2016 in order to get to breakeven.

Rick Sherlund – Nomura

Thanks. I’m wondering if you could talk about Office for a moment. I’m curious whether you think we’ve seen the worst for Office here with the consumer fall-off, with Office 365 growing and margins expanding – if you can look through the dynamics and give us a sense: do you think you have actually turned the corner there, and may we be seeing the worst in terms of Office growth and margins?

Satya Nadella

Rick, let me just start qualitatively in terms of how I view Office, the category, and how it relates to productivity broadly, and then I’ll have Amy speak specifically about margins and what we are seeing in terms of – I’m assuming Office renewals is probably the question. First of all, I believe the category that Office is in, which is productivity broadly – for people, for groups, as well as for organizations – is something that we are investing in significantly and seeing significant growth in.

On one end you have new things that we are doing like Cortana. This is for individuals, on new form factors like the phone, where it’s not about any one application, but an intelligent agent that knows everything about my calendar, everything about my life, and tries to help me with my everyday tasks.

On the other end, it’s something like Delve, which is a completely new tool that takes what is enterprise search and makes it more like the Facebook news feed: it has a graph of all my artifacts, all my people, all my groups, and uses that graph to give me relevant information and discovery. Same thing with Power Q&A and Power BI, which are part of Office 365. So we have a pretty expansive view of how we look at Office and what it can do. So that’s the growth strategy, and now specifically on Office renewals.

Amy Hood

And I would say in general, let me make two comments. In terms of Office on the consumer side, between what we sold on-premises as well as Home and Personal, we feel quite good, with attach continuing to grow and the value prop increasing. So I think that addresses the consumer portion.

On the commercial portion, we actually saw Office grow as you said this quarter; I think the broader definition that Satya spoke to the Office value prop and we continued to see Office renewed in our enterprise agreement. So in general, I think I feel like we’re in a growth phase for that franchise.

Walter Pritchard – Citigroup

Hi, thanks. Satya, I wanted to ask you about two statements that you made, one around responsibly making the market for Windows Phone, just following on Keith’s question here. It’s a really competitive market, and it feels like ultimately you need to be a very, very meaningful share player in that market to have value for developers to leverage the universal apps that you’re talking about in the presentations you’ve given, and build in and so forth.

And I’m trying to understand how you can do both of those things at once, in terms of responsibly making the market for Windows Phone; it feels difficult given that your nearest competitors there are doing things that you might argue are irresponsible in terms of making their market, given that they monetize it in different ways.

Satya Nadella

Yes. One of the beauties of universal Windows apps is that, for the first time, they aggregate for us all of our Windows volume. The fact that even an app that runs with a mouse and keyboard on the desktop can be in the store, and you can have the same app run in a touch-first, mobile-first way, gives developers the entire volume of Windows, which is 300-plus million units, as opposed to just our 4% share of mobile in the U.S. or 10% in some countries.

So that’s really the reason why we are actively making sure that universal Windows apps are available and developers are taking advantage of them; we have great tooling. Because that’s the way we are going to be able to create the broadest opportunity, to your very point about developers getting an ROI for building for Windows. That’s how I think we will do it in a responsible way.

Heather Bellini – Goldman Sachs

Great. Thank you so much for your time. I wanted to ask a question about your comments, Satya, about combining the next version of Windows into one for all devices. Just wondering, if you look out – you’ve got different SKU segmentations right now: you’ve got enterprise, you’ve got consumer, the free offering for devices of less than 9 inches that you recently announced. When you come out with this one version for all devices, how do you see this changing the go-to-market, and also the traditional SKU segmentation and pricing that we’ve seen in the past?

Satya Nadella

Yes. My statement, Heather, was more to do with the engineering approach. The reality is that we actually did not have one Windows; we had multiple Windows operating systems inside of Microsoft. We had one for phone, one for tablets and PCs, one for Xbox, one even for embedded. So we had many, many of these efforts. Now we have one team with a layered architecture that enables us to bring developers that collective opportunity with one store, one commerce system, one discoverability mechanism. It also allows us to scale the UI across all screen sizes; it allows us to create this notion of universal Windows apps and be coherent there.

So that’s more what I was referencing, and our SKU strategy will remain by segment: we will have multiple SKUs for enterprises, we will have them for OEMs, we will have them for end-users. And so we will be disclosing and talking about our SKUs as we get further along, but my statement was more to do with how we are bringing teams together to approach Windows as one ecosystem, very differently than we ourselves have done in the past.

Ed Maguire – CLSA

Hi, good afternoon. Satya, you made some comments about harmonizing some of the different products across consumer and enterprise, and I was curious what your approach is to viewing your different hardware offerings, both in phones and with Surface, and how your go-to-market may change around that. And also, since you decided to make the operating system for sub-9-inch devices free, how do you see the value proposition and your ability to monetize that user base evolving over time?

Satya Nadella

Yes. The statement I made about bringing together our productivity applications across work and life is really to reflect the notion of dual use, because when I think about productivity, I don’t separate out what I use as a tool for communication with my family from what I use to collaborate at work. So that’s why having this one team that thinks about outlook.com as well as Exchange helps us think about that dual use. Same thing with files and OneDrive and OneDrive for Business, because we want the software to be smart about separating out the state, caring about IT control and data protection, while I as an end user get to have the experiences that I want. That’s how we are thinking about harmonizing those digital life and work experiences.

On the hardware side, we will continue to build hardware that fits with these experiences. If I understand your question right – how we will differentiate our first-party hardware – we will build first-party hardware that creates categories; a good example is what we have done with Surface Pro 3. And in other places we have really changed the Windows business model to encourage a plethora of OEMs to build great hardware, and we are seeing that in fact this holiday season; I think you will see a lot of value notebooks, you will see clamshells. So we will have the full price range of our hardware offering enabled by this new Windows business model.

And I think the last part was how we will monetize. Of course, we will again have a combination: we will have our OEM monetization, and some of these new business models are about monetizing on the back end with Bing integration as well as our services attached, and that’s the reason, fundamentally, why we have these zero-priced Windows SKUs today.

Microsoft BUILD 2014 Day 2: “rebranding” to Microsoft Azure and moving toward a comprehensive set of fully-integrated backend services

  1. “Rebranding” into Microsoft Azure from the previous Windows Azure
  2. Microsoft Azure Momentum on the Market
  3. The new Azure Management Portal (preview)
  4. New Azure features: IaaS, web, mobile and data announcements

Microsoft Announces New Features for Cloud Computing Service [CCTV America YouTube channel, April 3, 2014]

Day two of the Microsoft Build developer conference in San Francisco wrapped up with the company announcing 44 new services. Most of those are based on Microsoft Azure – its cloud computing platform that manages applications across data centers. CCTV’s Mark Niu reports from San Francisco.

Watch the first 10 minutes of this presentation for a brief summary of the latest state of Microsoft Azure: #ChefConf 2014: Mark Russinovich, “Microsoft Azure Group” [Chef YouTube channel, April 16, 2014]

Mark Russinovich is a Technical Fellow in the Windows Azure Group at Microsoft working on Microsoft’s cloud platform. He is a widely recognized expert in operating systems, distributed systems, and cybersecurity. In this keynote from #ChefConf 2014, he gives an overview of Microsoft Azure and a demonstration of the integration between Azure and Chef

Then here is a fast talk and Q&A on Azure with Scott Guthrie after his keynote presentation at BUILD 2014:
Cloud Cover Live – Ask the Gu! [jlongo62 YouTube channel, published on April 21, 2014]

With Scott Guthrie, Executive Vice President Microsoft Cloud and Enterprise group

The original: Cloud Cover Live – Ask the Gu! [Channel 9, April 3, 2014]

Details:

  1. “Rebranding” into Microsoft Azure from the previous Windows Azure
  2. Microsoft Azure Momentum on the Market
  3. The new Azure Management Portal (preview)
  4. New Azure features: IaaS, web, mobile and data announcements

[2:45:47] long video record of the Microsoft Build Conference 2014 Day 2 Keynote [MSFT Technology News YouTube channel, recorded on April 3, published on April 7, 2014]

Keynote – April 2-4, 2014 San Francisco, CA 8:30AM to 11:30AM

The original video record on Channel 9
Day 2 Keynote transcript by Microsoft


1. “Rebranding” into Microsoft Azure from the previous Windows Azure

Yes, you’ve noticed right: the Windows prefix has gone, and the full name is now only Microsoft Azure! The change happened on April 3 as evidenced by change of the cover photo on the Facebook site, now also called Microsoft Azure:

image

from this cover photo used from July 23, 2013 on:

image

And it happened without any announcement or explanation as even the last, April 1 Microsoft video carried the Windows prefix: Tuesdays with Corey //build Edition

We can’t believe he said that! This week, Corey gets us in trouble by spilling all sorts of //build secrets. Check it out!

as well as the last, March 14 video ad: Get Your Big Bad Wolf On (Extended)

Go get your big bad wolf on, today: http://po.st/01rkCL


2. Microsoft Azure Momentum on the Market

The day began with Scott Guthrie, Executive Vice President, Microsoft Cloud and Enterprise group, touting Microsoft’s progress with Azure over the last 18 months, since:

… we talked about our new strategy with Azure and our new approach, a strategy that enables me to use both infrastructure as a service and platform as a service capabilities together, a strategy that enables developers to use the best of the Windows ecosystem and the best of the Linux ecosystem together, and one that delivers unparalleled developer productivity and enables you to build great applications and services that work with every device

  • Last year … shipped more than 300 significant new features and releases
  • … we’ve also been hard at work expanding the footprint of Azure around the world. The green circles you see on the slide here represent Azure regions, which are clusters of datacenters close together, and where you can go ahead and run your application code. Just last week, we opened two new regions, one in Shanghai and one in Beijing. Today, we’re the only global, major cloud provider that operates in mainland China. And by the end of the year, we’ll have more than 16 public regions available around the world, enabling you to run your applications closer to your customers than ever before.
  • More than 57 percent of the Fortune 500 companies are now deployed on Azure.
  • Customers run more than 250,000 public-facing websites on Azure, and we now host more than 1 million SQL databases on Azure.
  • More than 20 trillion objects are now stored in the Azure storage system. We have more than 300 million users, many of them — most of them, actually, enterprise users, registered with Azure Active Directory, and we process now more than 13 billion authentications per week.
  • We have now more than 1 million developers registered with our Visual Studio Online service, which is a new service we launched just last November.

Let’s go beyond the big numbers, though, and look at some of the great experiences that have recently launched and are using the full power of Azure and the cloud.

“Titanfall” was one of the most eagerly anticipated games of the year, and had a very successful launch a few weeks ago. “Titanfall” delivers an unparalleled multiplayer gaming experience, powered using Azure.

Let’s see a video of it in action, and hear what the developers who built it have to say.

[Titanfall and the Power of the Cloud [xbox YouTube channel, April 3, 2014]]

‘Developers from Respawn Studios and Xbox discuss how cloud computing helps take Titanfall to the next level.

One of the key bets the developers of “Titanfall” made was to run all game sessions in the cloud. In fact, you can’t play the game without the cloud, and that bet really paid off.

As you heard in the video, it enables much, much richer gaming experiences. Much richer AI experiences. And the ability to tune and adapt the game as more users use it.

To give you a taste of the scale, “Titanfall” had more than 100,000 virtual machines deployed and running on Azure on launch day. Which is sort of an unparalleled size in terms of a game launch experience, and the reviews of the game have been absolutely phenomenal.

Another amazing experience that recently launched and was powered using Azure was the Sochi Olympics delivered by NBC Sports.

NBC used Azure to stream all of the games both live and on demand to both Web and mobile devices. This was the first large-scale live event that was delivered entirely in the cloud with all of the streaming and encoding happening using Azure.

Traditionally, with live encoding, you typically run in an on-premises environment because it’s so latency dependent. With the Sochi Olympics, Azure enabled NBC to not only live encode in the cloud, but also do it across multiple Azure regions to deliver high-availability redundancy.

More than 100 million people watched the online experience, and more than 2.1 million viewers alone watched it concurrently during the U.S. versus Canada men’s hockey match, a new world record for online HD streaming.

RICK CORDELLA [Senior Vice President and General Manager of NBC Sports Digital]: The company bets about $1 billion on the Olympics each time it goes off. And we have 17 days to recoup that investment. Needless to say, there is no safety net when it comes to putting this content out there for America to enjoy. We need to make sure that content is out there, that it’s quality, that our advertisers and advertisements are being delivered to it. There really is no going back if something goes wrong.

The decision for that was taken more than a year ago: Windows Azure Teams Up With NBC Sports Group [Microsoft Azure YouTube channel, April 9, 2013]

Rick Cordella, senior vice president and general manager of digital media at NBC Sports Group discusses how they use Windows Azure across their digital platforms


3. The new Azure Management Portal (preview)

In fact, however, a new way of providing a comprehensive set of fully-integrated backend services had a significantly bigger impact on the audience of developers. According to Microsoft announces new cloud experience and tools to deliver the cloud without complexity [The Official Microsoft Blog, April 3, 2014]:

The following post is from Scott Guthrie, Executive Vice President, Cloud and Enterprise Group, Microsoft.

On Thursday at Build in San Francisco, we took an important step by unveiling a first-of-its-kind cloud environment within Microsoft Azure that provides a fully integrated cloud experience – bringing together cross-platform technologies, services and tools that enable developers and businesses to innovate with enterprise-grade scalability at startup speed. Announced today, our new Microsoft Azure Preview [Management] Portal is an important step forward in delivering our promise of the cloud without complexity.

image

When cloud computing was born, it was hailed as the solution that developers and business had been waiting for – the promise of a quick and easy way to get more from your business-critical apps without the hassle and cost of infrastructure. But as the industry transitions toward mobile-first, cloud-first business models and scenarios, the promise of “quick and easy” is now at stake. There’s no question that developing for a world that is both mobile-first and cloud-first is complicated. Developers are managing thousands of virtual machines, cobbling together management and automation solutions, and working in unfamiliar environments just to make their apps work in the cloud – driving down productivity as a result.

Many cloud vendors tout the ease and cost savings of the cloud, but they leave customers without the tools or capabilities to navigate the complex realities of cloud computing. That’s why today we are continuing down a path of rapid innovation. In addition to our groundbreaking new Microsoft Azure Preview [Management] Portal, we announced several enhancements our customers need to fully tap into the power of the cloud. These include:

  • Dozens of enhancements to our Azure services across Web, mobile, data and our infrastructure services
  • Further commitment to building the most open and flexible cloud with Azure support for automation software from Puppet Labs and Chef.
  • We’ve removed the throttle off our Application Insights preview, making it easier for all developers to build, manage and iterate on their apps in the cloud with seamless integration into the IDE

<For details see the separate section 4. New Azure features: IaaS, web, mobile and data announcements>

Here is a brief presentation by a Brazilian specialist: Microsoft Azure [Management] Portal First Touch [Bruno Vieira YouTube channel, April 3, 2014]

From Microsoft evolves the cloud experience for customers [press release, April 3, 2014]

… Thursday at Build 2014, Microsoft Corp. announced a first-of-its-kind cloud experience that brings together cross-platform technologies, services and tools, enabling developers and businesses to innovate at startup speed via a new Microsoft Azure Preview [Management] Portal.

In addition, the company announced several new milestones in Visual Studio Online and .NET that give developers access to the most complete platform and tools for building in the cloud. Thursday’s announcements are part of Microsoft’s broader vision to erase the boundaries of cloud development and operational management for customers.

“Developing for a mobile-first, cloud-first world is complicated, and Microsoft is working to simplify this world without sacrificing speed, choice, cost or quality,” said Scott Guthrie, executive vice president at Microsoft. “Imagine a world where infrastructure and platform services blend together in one seamless experience, so developers and IT professionals no longer have to work in disparate environments in the cloud. Microsoft has been rapidly innovating to solve this problem, and we have taken a big step toward that vision today.”

One simplified cloud experience

The new Microsoft Azure Preview [Management] Portal provides a fully integrated experience that will enable customers to develop and manage an application in one place, using the platform and tools of their choice. The new portal combines all the components of a cloud application into a single development and management experience. New components include the following:

  • Simplified Resource Management. Rather than managing standalone resources such as Microsoft Azure Web Sites, Visual Studio Projects or databases, customers can now create, manage and analyze their entire application as a single resource group in a unified, customized experience, greatly reducing complexity while enabling scale. Today, the new Azure Manager is also being released through the latest Azure SDK for customers to automate their deployment and management from any client or device (see the sketch after this list).

  • Integrated billing. A new integrated billing experience enables developers and IT pros to take control of their costs and optimize their resources for maximum business advantage.

  • Gallery. A rich gallery of applications and services from Microsoft and the open source community, this integrated marketplace of free and paid services enables customers to leverage the ecosystem to be more agile and productive.

  • Visual Studio Online. Microsoft announced key enhancements through the Microsoft Azure Preview [Management] Portal, available Thursday. This includes Team Projects supporting greater agility for application lifecycle management and the lightweight editor code-named “Monaco” for modifying and committing Web project code changes without leaving Azure. Also included is Application Insights, an analytics solution that collects telemetry data such as availability, performance and usage information to track an application’s health. Visual Studio integration enables developers to surface this data from new applications with a single click.
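
The resource-group automation mentioned above can also be reached over the Azure Resource Manager REST API that the new portal sits on. Here is a minimal sketch, assuming the 2014-04-01-preview api-version and an Azure AD bearer token acquired out of band; the subscription ID and group name are placeholders:

```python
# Minimal sketch, not an official sample: creating a resource group through
# the Azure Resource Manager REST API underneath the new portal. The
# api-version string is my assumption (the ARM preview of the time); the
# bearer token must be obtained from Azure AD separately.
import json
import requests

SUBSCRIPTION = "<subscription-id>"        # placeholder
TOKEN = "<azure-ad-bearer-token>"         # placeholder, from Azure AD

url = ("https://management.azure.com/subscriptions/" + SUBSCRIPTION +
       "/resourcegroups/my-app-group?api-version=2014-04-01-preview")

resp = requests.put(
    url,
    headers={"Authorization": "Bearer " + TOKEN,
             "Content-Type": "application/json"},
    data=json.dumps({"location": "West US"}),
)
resp.raise_for_status()
print(resp.json())   # the application group is now one manageable unit
```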

Building an open cloud ecosystem

Showcasing Microsoft’s commitment to choice and flexibility, the company announced new open source partnerships with Chef and Puppet Labs to run configuration management technologies in Azure Virtual Machines. Using these community-driven technologies, customers will now be able to more easily deploy and configure in the cloud. In addition, today Microsoft announced the release of Java Applications to Microsoft Azure Web Sites, giving Microsoft even broader support for Web applications.

From BUILD Day 2: Keynote Summary [by Steve Fox – DPE (MSFT) on MSDN Blogs, April 3, 2014]

….
Bill Staples then came on stage to show off the new Azure [management] portal design and features. Bill walked through a number of the new innovations in the portal, such as improved UX, app insights, “blade” views [the “blade” term is used for the dropdown that allows a drilldown], etc. A screen shot of the new portal is shown below.

image

image

Bill also walked through the comprehensive analytics (such as compute and billing) that are now available on the portal. He also walked through “Application Insights,” which is a great way to instrument your code in both the portal and in your code with easy-to-use, pre-defined code snippets. He completed his demo walkthrough by showing the Azure [management] portal as a “NOC” [Network Operations Center] view on a big-screen TV.

image

The above image is at the [1:44:24] point of the keynote video record on Channel 9, and it gives more information if we provide here the part of the transcript around it:

BILL STAPLES at [1:43:39]: Now, to conclude the operations part of this demo, I wanted to show you an experience for how the new Azure Portal works on a different device. You’ve seen it on the desktop, but it works equally well on a tablet device, that is really touch friendly. Check it out on your Surface or your iPad, it works great on both devices.

But we’re thinking as well if you’ve got a big-screen TV or a projector lying around your team room, you might want to think about putting the Microsoft Azure portal as your own personal NOC.

In this case, I’ve asked the Office developer team if we could have access to their live site log. So they made me promise, do not hit the stop button or the delete button, which I promised to do.

[1:44:24] This is actually the Office developer blog site. And you can see it’s got almost 10 million hits already today running on Azure Websites. So very high traffic.

They’ve customized it to show off the browser usage on their website. Imagine we’re in a team Scrum with the Office developer guys and we check out, you know, how is the website doing? We’ve got some interesting trends here.

In fact, there was a spike of sessions it looks like going on about a week ago. And page views, that’s kind of a small part. It would be nice to know which page it was that spiked a week ago. Let’s go ahead and customize that.

This screen is kind of special because it has touch screen. So I can go ahead and let’s make that automatically expand there. Now we see a bigger view. Wow, that was a really big spike last week. What page was that? We can click into it. We get the full navigation experience, same on the desktop, as well as, oh, look at that. There’s a really popular blog post that happened about a week ago. What was that? Something about announcing Office on the iPad you love. Makes sense, huh? So we can see the Azure Portal in action here as the Office developer team might imagine it. [1:45:44]

The last thing I want to show is the Azure Gallery.

image

We populated the gallery with all of the first-party Microsoft Azure services, as well as the [services from] great partners that we’ve worked with so far in creating this gallery.

image

And what you’re seeing right here is just the beginning. We’ve got the core set of DevOps experiences built out, as well as websites, SQL, and MySQL support. But over the coming months, we’ll be integrating all of the developer and IT services in Microsoft as well as the partner services into this experience.

Let me just conclude by reminding us what we’ve seen. We’ve seen a first-of-its-kind experience from Microsoft that fuses our world-class developer services together with Azure to provide an amazing dev-ops experience where you can enjoy the entire lifecycle from development, deployment, operations, gathering analytics, and iterating right here in one experience.

We’ve seen an application-centric experience that brings together all the dev platform and infrastructure services you know and love into one common shell. And we’ve seen a new application model that you can describe declaratively. And through the command line or programmatically, build out services in the cloud with tremendous ease. [1:47:12]

More information on the new Azure [Management] Portal:

Today, at Build, we unveiled a new Azure [Management] Portal experience we are building. I want to give you some insight into the work that the VS Online team is doing to help with it. I’m not on the Azure team and am no expert on how they’d like to describe it to the world, so please take any comments I make here about the new Azure portal as my perspective on it and not necessarily an official one.

Bill Staples first presented to me, almost a year ago, an idea of creating a new portal experience for Azure designed to be an optimal experience for DevOps. It would provide everything a DevOps team needs to do modern cloud-based development: capabilities to provision dev and test resources, development and collaboration capabilities, build, release and deployment capabilities, application telemetry and management capabilities, and more. Pretty quickly it became clear to me that if we could do it, it would be awesome: an incredibly productive and easy way for devs to do soup-to-nuts app development.

What we demoed today (and made available via http://portal.azure.com) is the first incarnation of that. My team (the VS Online team) has worked very hard over the past many months with the Azure team to build the beginnings of the experience we hope to bring to you. It’s very early and it’s nowhere near done, but it’s definitely something we’d love to start getting some feedback on.

For now, it’s limited to Azure websites, SQL databases and a subset of the VS Online capabilities.  If you are a VS Online/TFS user, think of this as a companion to Visual Studio, Visual Studio Online and all of the tools you are used to.  When you create a team project in the Azure portal, it’s a VS Online Team Project like any other and is accessible from the Azure portal, the VS Online web UI, Visual Studio, Eclipse and all the other ways your Visual Studio Online assets are available.  For now, though, there are a few limitations – which we are working hard to address.  We are in the middle of adding Azure Active Directory support to Visual Studio Online and, for a variety of reasons, chose to limit the new portal to only work with VS Online accounts linked to Azure Active Directory.

The best way to ensure this is just to create a new Team Project and a new VS Online account from within the new Azure portal. You will need to be logged in to the Azure portal with an identity known to your Azure Active Directory tenant, and to add new users, rather than adding them directly in Visual Studio Online, you will add them through Azure Active Directory. One of the ramifications of this, for now, is that you can’t use an existing VS Online account in the new portal – you must create a new one. Clearly that’s a big limitation and one we are working hard to remove. We will enable you to link existing VS Online accounts to Active Directory; we just don’t have it yet – stay tuned.

I’ll do a very simple tour.  You can also watch Brian Keller’s Channel9 video.

Brian Keller talks with Jonah Sterling and Vishal Joshi about the new Microsoft Azure portal preview. This Preview portal is a big step forward in the journey toward integrated DevOps tools, technologies, and cloud services. See how you can deliver and scale business-ready apps for every platform more easily and rapidly—using what you already know and whatever toolset you like most

Further information:


4. New Azure features: IaaS, web, mobile and data announcements

According to Scott Guthrie, Executive Vice President, Microsoft Cloud and Enterprise group:

image

[IaaS] First up, let’s look at some of the improvements we’re making with our infrastructure features and some of the great things we’re enabling with virtual machines.

Azure enables you to run both Windows and Linux virtual machines in the cloud. You can run them as stand-alone servers, or join them together to a virtual network, including one that you can optionally bridge to an on-premises networking environment.

This week, we’re making it even easier for developers to create and manage virtual machines in Visual Studio without having to leave the VS IDE: You can now create, destroy, manage and debug any number of VMs in the cloud. (Applause.)

Prior to today, it was possible to create reusable VM image templates, but you had to write scripts and manually attach things like storage drives to them. Today, we’re releasing support that makes it super-easy to capture images that can contain any number of storage drives. Once you have this image, you can then very easily take it and create any number of VM instances from it, really fast, and really easy. (Applause.)

Starting today, you can also now easily configure VM images using popular frameworks like Puppet, Chef, and our own PowerShell DSC tools. These tools enable you to avoid having to create and manage lots of separate VM images. Instead, you can define common settings and functionality using modules that can cut across every type of VM you use.

You can also create modules that define role-specific behavior, and all these modules can be checked into source control and they can also then be deployed to a Puppet Master or Chef server.

And one of the things we’re doing this week is making it incredibly easy within Azure to basically spin up a server farm and be able to automatically deploy, provision and manage all of these machines using these popular tools.

We’re also excited to announce the general availability of our auto-scale service, as well as a bunch of great virtual networking capabilities including point-to-site VPN support going GA, new dynamic routing, subnet migration, as well as static internal IP address. And we think the combination of this really gives you a very flexible environment, as you saw, a very open environment, and lets you run pretty much any Windows or Linux workload in the cloud.

So we think infrastructure as a service is super-flexible, and it really kind of enables you to manage your environments however you want.
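
To make the capture-an-image-then-stamp-out-VMs workflow above concrete, here is a minimal sketch using the classic (pre-ARM) Azure Service Management SDK for Python. It is an illustration rather than the exact tooling demoed in the keynote; the subscription ID, certificate, service name, image name and storage URL are all placeholders:

```python
# Illustrative sketch with the classic (pre-ARM) Azure Service Management SDK
# for Python. All names -- subscription id, certificate, service, image and
# storage URL -- are hypothetical placeholders.
from azure.servicemanagement import (
    LinuxConfigurationSet,
    OSVirtualHardDisk,
    ServiceManagementService,
)

sms = ServiceManagementService(
    subscription_id="<subscription-id>",
    cert_file="management-cert.pem",   # management certificate for the API
)

# Stamp out a new VM instance from a previously captured image.
linux_config = LinuxConfigurationSet(
    host_name="worker-01",
    user_name="azureuser",
    user_password="<password>",
    disable_ssh_password_authentication=False,
)
os_disk = OSVirtualHardDisk(
    source_image_name="my-captured-image",   # the image captured earlier
    media_link="https://mystore.blob.core.windows.net/vhds/worker-01.vhd",
)

sms.create_virtual_machine_deployment(
    service_name="my-cloud-service",
    deployment_name="workers",
    deployment_slot="production",
    label="workers",
    role_name="worker-01",
    system_config=linux_config,
    os_virtual_hard_disk=os_disk,
    role_size="Small",
)
```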


We also, though, provide prebuilt services and runtime environments that you can use to assemble your applications as well, and we call these platform as a service [PaaS] capabilities.

One of the benefits of these prebuilt services is that they enable you to focus on your application and not have to worry about the infrastructure underneath it.

We handle patching, load balancing, high availability and auto scale for you. And this enables you to work faster and do more.

What I want to do is just spend a little bit of time talking through some of these platform as a service capabilities, so we’re going to start talking about our Web functionality here today.

image

[Web] One of the most popular PaaS services that we now have on Windows Azure is something we call the Azure Web Sites service. This enables you to very easily deploy Web applications written in a variety of different languages and host them in the cloud. We support .NET, Node.js, PHP, Python, and we’re excited this week to also announce that we’re adding Java language support as well.

image

This enables you as a developer to basically push any type of application into Azure into our runtime environment, and basically host it to any number of users in the cloud.

A couple of the great features we have with Azure include the auto-scale capability. What this means is you can start off running your application, for example, in a single VM. As load to it increases, we can then automatically scale up multiple VMs for you without you having to write any script or take any action yourself. And if you get a lot of load, we can scale up even more.

You can basically configure how many VMs you maximally want to use, as well as what the burn-down rate is. This is great because it enables you not only to handle large traffic spikes and make sure that your apps are always responsive; the nice thing about auto-scale is that when the traffic drops off, maybe during the night when it’s a little bit less, we can automatically scale down the number of machines that you need, which means that you end up saving money and not having to pay as much.
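
Auto-scale is, at its core, a control loop over the instance count. Here is a toy model of the scale-up/scale-down decision just described; the thresholds are made up for illustration, and this is not Azure’s implementation:

```python
# Toy model of the auto-scale decision described above -- not Azure's
# implementation. Scale up under load, scale back down when traffic drops,
# never exceeding the configured maximum instance count.
def autoscale(instances, cpu_percent, min_instances=1, max_instances=10,
              scale_up_at=75, scale_down_at=25):
    if cpu_percent > scale_up_at and instances < max_instances:
        return instances + 1          # add a VM to absorb the spike
    if cpu_percent < scale_down_at and instances > min_instances:
        return instances - 1          # shed a VM to save money overnight
    return instances

instances = 1
for load in [80, 90, 85, 60, 20, 10, 15]:   # simulated CPU samples
    instances = autoscale(instances, load)
    print(f"load={load:3d}%  instances={instances}")
```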

One of the really cool features that we’ve recently introduced with websites is something we call our staging support. This solves kind of a pretty common problem with any Web app today, which is there’s always someone hitting it. And how do you stage the deployments of new code that you roll out so that you don’t ever have a site in an intermediate state and that you can actually deploy with confidence at any point in the day?

And what staging support enables inside of Azure is for you to create a new staging version of your Web app with a private URL that you can access and use to test. And this allows you to basically deploy your application to the staging environment, get it ready, test it out before you finally send users to it, and then basically you can push one button or send a single command called swap where we’ll basically rotate the incoming traffic from the old production site to the new staged version.

What’s nice is we still keep your old version around. So if you discover once you go live you still have a bug that you missed, you can always swap back to the previous state. Again, this allows you to deploy with a lot of confidence and make sure that your users are always seeing a consistent experience when they hit your app.
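
Here is a toy model of the staging swap, again illustrative rather than the Azure Websites API: because “swap” just exchanges which slot receives live traffic, rolling back is the same operation run a second time.

```python
# Toy model of staged deployment slots; 'swap' is a pointer exchange,
# which is why rolling back is just swapping again.
class Site:
    def __init__(self, production, staging):
        self.slots = {"production": production, "staging": staging}

    def swap(self):
        s = self.slots
        s["production"], s["staging"] = s["staging"], s["production"]

site = Site(production="v1 (live)", staging="v2 (tested on private URL)")
site.swap()
print(site.slots["production"])   # v2 now takes live traffic
site.swap()                       # bug found? the old version is still there
print(site.slots["production"])   # back to v1
```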

Another cool feature that we’ve recently introduced is something we call Web Jobs. This enables you to run background tasks that are not tied to the HTTP request/response path. So if something takes a while to run, this is a great way to offload that work so that you’re not stalling your actual request-response thread pool.

A common scenario we see for a lot of people who want to process something in the background: when someone submits something to the website, the site can simply drop an item into a queue or into a storage account and respond back to the user; then, with one of these Web Jobs, you can very easily run background code that pulls that queue message and processes it in an offline way.

And what’s nice about Web Jobs is that you can run them in the same virtual machines that host your websites. That means you don’t have to spin up your own separate set of virtual machines, which again enables you to save money and provides a really nice management experience.
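
The queue-based background pattern just described, sketched with the classic azure-storage Python SDK (pre-0.30 API); the account credentials and queue name are placeholders, and a production WebJob would more typically use the .NET WebJobs SDK:

```python
# Sketch of the queue-based background worker described above, using the
# classic azure-storage Python SDK (pre-0.30 API). Account name/key and the
# queue name are placeholders. The worker loops forever, like a WebJob would.
import time
from azure.storage.queue import QueueService

queue = QueueService(account_name="<storage-account>",
                     account_key="<storage-key>")
queue.create_queue("tasks")

def process(body):
    print("processing:", body)     # stand-in for the real long-running work

while True:
    for msg in queue.get_messages("tasks", visibilitytimeout=60):
        process(msg.message_text)
        queue.delete_message("tasks", msg.message_id, msg.pop_receipt)
    time.sleep(5)   # poll again; the web request thread pool is never blocked
```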

The last cool feature that we’ve recently introduced is something we call Traffic Manager support. With Traffic Manager, you can take advantage of the fact that Azure runs around the world: you can spin up multiple instances of your website in multiple different regions around the world with Azure.

What you can then do is use Traffic Manager to have a single DNS entry that you map to the different instances around the world. And what Traffic Manager does is give you a really nice way to automatically, for example, route all your North American users to one of the North American instances of your app, while people in Europe will be routed to the European instance of your app. That gives you better performance, response and latency.

Traffic Manager is also smart enough so that if you ever have an issue with one of the instances of your app, it can automatically remove it from those rotations and send users to one of the other active apps within the system. So this gives you also a nice way you can fail over in the event of an outage.

And the great thing about Traffic Manager, now, is you can use it not just for virtual machines and cloud services, but we’ve also now enabled it to work fully with websites.
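
Here is a toy model of the two behaviors just described – geo-affinity plus health-based failover. It illustrates the routing idea only; it is not the Traffic Manager service itself, and the endpoint URLs are hypothetical:

```python
# Toy model of Traffic Manager behavior: route each user to the endpoint
# for their region, and fail over to a healthy one if it is down.
ENDPOINTS = {
    "north-america": {"url": "http://us.myapp.example", "healthy": True},
    "europe":        {"url": "http://eu.myapp.example", "healthy": True},
}

def resolve(user_region):
    preferred = ENDPOINTS.get(user_region)
    if preferred and preferred["healthy"]:
        return preferred["url"]
    # Failover: pick any healthy endpoint still in rotation.
    for endpoint in ENDPOINTS.values():
        if endpoint["healthy"]:
            return endpoint["url"]
    raise RuntimeError("no healthy endpoints")

print(resolve("europe"))                  # EU users go to the EU instance
ENDPOINTS["europe"]["healthy"] = False    # simulate an outage
print(resolve("europe"))                  # automatically fails over to US
```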

[From BUILD Day 2: Keynote Summary [by Steve Fox [MSFT] on MSDN Blogs, April 3, 2014]]
Scott then invited Mads Kristensen on stage to walk through a few of the features that Scott discussed at a higher level. Specifically, he walked through the new ASP.NET templates, emphasizing the creation of the DB layer, and then showed PowerShell integration to manage your web site. He then showed Angular integration with Azure Web Sites, emphasizing easy and dynamic ways to update your site, and showed deep browser and Visual Studio integration (Browser Link), where updates made in the browser show up in the code in Visual Studio. Very cool!!
image
He also showed how you can manage staging and production sites by using the “swap” functionality built into the Azure Web sites service, demonstrated Web Jobs for running background jobs, and used the Traffic Manager functionality to ensure your customers have the best performing web site in their regions.

So as Mads showed, there are a lot of great features that we’re unveiling this week, and a lot of great announcements that go with them.

These include the general availability release of auto-scale support for websites, as well as the general availability release of our new Traffic Manager support for websites. As you saw there, we also have Web Job support, and one of the things that we didn’t get to demo, which is also very cool, is backup support, so that we can automatically have both your content as well as your databases backed up when you run them in our Websites environment.

Lots of great improvements are also coming from an offer perspective. One thing a lot of people have asked us for with Websites is the ability not only to use SSL, but to use SSL without having to pay for it. So one of the cool things that we’re adding with Websites, and it goes live today, is we’re including one IP address-based SSL certificate and five SNI-based SSL certificates at no additional cost with every Website instance. (Applause.)

Throughout the event here, you’re also going to hear a bunch of great sessions on some of the improvements we’re making to ASP.NET. From a Web framework perspective, we’ve got the general availability release of ASP.NET MVC 5.1, Web API 2.1, Identity 2.0, as well as Web Pages 3.1. So a lot of great, new features to take advantage of.

As you saw Mads demo, there are a lot of great features inside Visual Studio, including the ability, every time you create an ASP.NET project, to automatically create an Azure Website as part of that flow. Remember, every Azure customer gets 10 free Azure Websites that you can use forever. So even if you’re not an MSDN customer, you can take advantage of that feature in order to set up a Web environment literally every time you create a new project. So pretty exciting stuff.

So that was one example of some of the PaaS capabilities that we have inside Azure.


[Mobile] I’m going to move now into the mobile space and talk about some of the great improvements that we’re making there as well.

One of the great things about Azure is the fact that it makes it really easy for you to build back ends for your mobile applications and devices. And one of the cool things you can do now is you can develop those back ends with both .NET as well as Node.js, and you can use Visual Studio or any other text editor on any other operating system to actually deploy those applications into Azure.

And once they’re deployed, we make it really easy for you to go ahead and connect them to any type of device out there in the world.

image

Now, some of the great things you can do with this is take advantage of some of the features that we have, which provide very flexible data handling. So we have built-in support for Azure storage, as well as our SQL database, which is our PaaS database offering for relational databases, as well as take advantage of things like MongoDB and other popular NoSQL solutions.

image

We support the ability not only to reply to messages that come to us, but also to push messages to devices as well. One of the cool features that Mobile Services can take advantage of — and it’s also available as a stand-alone feature — is something we call notification hubs. This basically allows you to send a single message to a notification hub and then broadcast it to the devices that are registered to it, in some cases millions of them.

We also support with Mobile Services a variety of flexible authentication options. So when we first launched mobile services, we added support for things like Facebook login, Google ID, Twitter ID, as well as Microsoft Accounts.

One of the things we’re excited to demo here today is Active Directory support as well. So this enables you to build new applications that you can target, for example, your employees or partners, to enable them to sign in using the same enterprise credentials that they use in an on-premises Active Directory environment.

What’s great is we’re using standard OAuth tokens as part of that. So once you authenticate, you can take that token, you can use it to also provide authorization access to your own custom back-end logic or data stores that you host inside Azure.

We’re also making it really easy to take that same token and use it to access Office 365 APIs, integrating that user’s data and functionality inside your application as well.

The beauty about all of this is it works with any device. So whether it’s a Windows device or an iOS device or an Android device, you can go ahead and take advantage of this capability.
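
In practice this token reuse is standard OAuth bearer authorization. Here is a minimal sketch, with placeholder URLs and a token assumed to be already obtained from the sign-in flow (Azure AD actually issues audience-scoped tokens, so “the same token” means the same sign-in, from which you acquire a token per resource):

```python
import urllib.request

def call_api(url, access_token):
    # Standard OAuth bearer usage: attach the token obtained at sign-in
    # to each request made on the user's behalf.
    request = urllib.request.Request(
        url, headers={"Authorization": f"Bearer {access_token}"})
    with urllib.request.urlopen(request) as response:
        return response.read()

token = "eyJ..."  # placeholder: issued by the Active Directory sign-in

# One sign-in, two audiences: your own Azure-hosted back end ...
# call_api("https://myapp.example.net/api/orders", token)
# ... and, with a token acquired for that resource, Office 365 APIs:
# call_api("https://outlook.office365.com/api/v1.0/me/messages", token)
```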

[From BUILD Day 2: Keynote Summary [by Steve Fox [MSFT] on MSDN Blogs, April 3, 2014]]
Yavor Georgiev then came on stage to walk through a Mobile Services demo. He showed off a new Mobile Services Visual Studio template, test pages with API docs, local and remote debugging capabilities, and a LOB app that enables Facilities departments to manage service requests—this showed off a lot of the core ASP.NET/MVC features along with a quick publish service to your Mobile Services service in Azure. Through this app, he showed how to use Active Directory to build the app—which prompts you to log into the app with your corp/AD credentials to use the app. He then showed how the app integrates with SharePoint/O365 such that the request leverages the SharePoint REST APIs to publish a doc to a Facilities doc repository. He also showed how you can re-use the core code through Xamarin to repurpose the code for iOS.
The app is shown here native in Visual Studio.

image

This app view is the cross-platform build using Xamarin.

image

Kudos to Yavor! This was an awesome demo that showcases how far Mobile Services has come in a short period of time—love the extensibility and the cross-platform capabilities. Very nice!

One of the things that Yavor showed there is just how easy it is now to build enterprise-grade mobile applications using Azure and Visual Studio.

And one of the key linchpins, from a technology standpoint, that really makes this possible is our Azure Active Directory service. This basically provides an Active Directory in the cloud that you can use to authenticate any device. What makes it powerful is the fact that you can synchronize it with your existing on-premises Active Directory. And we support directory sync going back to Windows Server 2003 instances, so it doesn’t even require a relatively new Windows Server; it works with anything you’ve got.

We also support a federated option as well if you want to use ADFS. Once you set that environment up, all your users are available to be authenticated in the cloud, and what’s great is we ship SDKs that work with all different types of devices and enable you to integrate authentication into those applications. And so you don’t even have to have your back end hosted on Azure; you can take advantage of this capability to enable single sign-on with any enterprise credential.

And what’s great is once you get that token, that same token can then be used to program against Office 365 APIs as well as the other services across Microsoft. So this provides a really great opportunity not only for building enterprise line-of-business apps, but also for ISVs that want to be able to build SaaS solutions as well as mobile device apps that integrate and target enterprise customers as well.

[From BUILD Day 2: Keynote Summary [by Steve Fox [MSFT] on MSDN Blogs, April 3, 2014]]
Scott then invited Grant Peterson from DocuSign on stage to discuss how they are using Azure, who demoed AD integration with DocuSign’s iOS app. Nice!

image

image

This is really huge for those of you building apps that are cross-platform but have big investments in AD and also provides you as developers a way to reach enterprise audiences.

So I think one of the things that’s pretty cool about that scenario is the opportunity it offers every developer that wants to reach an enterprise audience. The great thing is all of those 300 million users that are in Azure Active Directory today, and the millions of enterprises that have already federated with it, are now available for you to build both mobile and Web applications against, and to offer them an enterprise-grade solution in all of your ISV-based applications.

That really turns one of the biggest concerns that people end up having with enterprise SaaS apps into a real asset, where you can make it super-easy for them to go ahead and integrate, and be able to do it from any device.

And one of the things you might have noticed there in the code that Grant showed was that it was actually all done on the client using Objective-C, and that’s because we have a new Azure Active Directory iOS SDK as well as an Android SDK in addition to our Windows SDK. And so you can use and integrate with Azure Active Directory from any device, any language, any tool.

Here’s a quick summary of some of the great mobile announcements that we’re making today. As Yavor showed, we now have .NET back-end support and single sign-on with Active Directory.

One of the features we didn’t get a chance to show, but you can learn more about in the breakout talk is offline data sync. So we also now have built into Mobile Services the ability to sync and handle disconnected states with data. And then, obviously, the Visual Studio and remote debugging capabilities as well.

We’ve got not only the Azure SDKs for Azure Active Directory, but we also now have Office 365 API integration. We’re also really excited to announce the general availability of our Azure AD Premium release. This provides enterprises with management capabilities that they can also use and integrate with your applications, and enables IT to feel like they can trust the applications and the SaaS solutions that their users are using.

And then we have a bunch of great improvements with notification hubs including Kindle support as well as Visual Studio integration.

So a lot of great features. You can learn about all of them in the breakout talks this week.

So we’ve talked about Web, and we’ve talked about mobile, when it comes to PaaS.


[Data] I want to switch gears now and talk a little bit about data, which is pretty fundamental and integral to building any type of application.

image

And with Azure, we support a variety of rich ways to handle data, ranging from unstructured and semi-structured to relational. One of the most popular services you heard me talk about at the beginning of the talk is our SQL database story. We’ve got over a million SQL databases now hosted on Azure. And it’s a really easy way for you to spin up a database; better yet, it’s a database that we then manage for you. So we do handle things like high availability and patching.

You don’t have to worry about that. Instead, you can focus on your application and really be productive.

We’ve got a whole bunch of great SQL improvements that we’re excited to announce this week. I’m going to walk through a couple of them real quickly.

One of them is we’re increasing the database size that we support with SQL databases. Previously, we only supported up to 150 gigs. We’re excited to announce that we’re increasing that to support 500 gigabytes going forward. And we’re also delivering a new 99.95 percent SLA as part of that. So this now enables you to run even bigger applications and be able to do it with high confidence in the cloud. (Applause.)

Another cool feature we’re adding is something we call Self-Service Restore. I don’t know if you’ve ever worked on a database application where you’ve written code like this, hit go, and then suddenly had a very bad feeling because you realized you omitted the where clause and just deleted your entire table. (Laughter.)

And hopefully you have backups. This is usually the point when you discover that you don’t have backups.

And one of the things that we built in as part of the Self-Service Restore feature is automatic backups for you. And we actually let you literally roll back the clock, and you can choose what time of the day you want to roll it back to. We save up to I think 31 days of backups. And you can basically rehydrate a new database based on whatever time of the day you wanted to actually restore from. And then, hopefully, your life ends up being a lot better than it started out.

This is just a built-in feature. You don’t have to turn it on. It’s just sort of built in, something you can take advantage of. (Applause.)
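
The mechanics are simple to picture: pick any point inside the retention window and rehydrate a new database from the automatic backups. A hedged Python sketch of that bookkeeping (the function and names are hypothetical, and the transcript itself hedges the exact retention at about 31 days):

```python
import datetime

RETENTION = datetime.timedelta(days=31)   # approximate retention window

def restore_point_valid(requested, now):
    """A point-in-time restore must fall inside the retention window."""
    return now - RETENTION <= requested <= now

# You dropped a table at 14:07; restore a *new* database as of 14:05,
# leaving the damaged one in place for inspection.
now = datetime.datetime(2014, 4, 3, 14, 7)
restore_to = datetime.datetime(2014, 4, 3, 14, 5)
assert restore_point_valid(restore_to, now)
print("restore 'mydb' as of", restore_to.isoformat(),
      "-> new database 'mydb-restored'")
```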

Another great feature that we’re building in is something we call active geo-replication. What this lets you do now is you can actually go ahead and run SQL databases in multiple Azure regions around the world. And you can set it up to automatically replicate your databases for you.

And this is basically an asynchronous replication. You can have your primary in read-write mode, and then you can have one or more secondaries in read-only mode. So you can still actually be accessing the data in read-only mode elsewhere.

In the event that you have a catastrophic issue in, say, one region, say a natural disaster hits, you can go ahead and you can initiate the failover automatically to one of your secondary regions. This basically allows you to continue moving on without having to worry about data loss and gives you kind of a really nice, high-availability solution that you can take advantage of.

One of the things that’s nice about Azure’s regions is we try to make sure we have multiple regions in each geography. So, for example, we have two regions that are at least 500 miles apart in Europe, and in North America, and similarly with Australia, Japan and China. What that means is that if you do need to fail over, your data never leaves the geo-political area that it’s based in. If you’re hosted in Europe, you don’t have to worry about your data ever leaving Europe, and similarly for the other geo-political entities that are out there.

So this gives you a way now with high confidence that you can store your data and know that you can fail over at any point in time.
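
A minimal sketch of the read-write primary / read-only secondary arrangement and a manual failover, in Python (the class, method and region names are illustrative, not the SQL Database API):

```python
class GeoReplicatedDatabase:
    """Illustrative async geo-replication: one read-write primary,
    read-only secondaries in other regions, manual failover."""

    def __init__(self, primary, secondaries):
        self.primary = primary              # e.g. "North Europe"
        self.secondaries = list(secondaries)

    def execute(self, region, sql):
        is_write = sql.lstrip().upper().startswith(
            ("INSERT", "UPDATE", "DELETE"))
        if is_write and region != self.primary:
            raise PermissionError(
                f"{region} is read-only; writes go to {self.primary}")
        print(f"[{region}] {sql}")

    def failover(self, new_primary):
        # Promote a secondary after a regional outage; the old primary
        # rejoins as a secondary once it is reachable again.
        self.secondaries.remove(new_primary)
        self.secondaries.append(self.primary)
        self.primary = new_primary

db = GeoReplicatedDatabase("North Europe", ["West Europe"])
db.execute("West Europe", "SELECT * FROM orders")    # reads OK on secondary
db.failover("West Europe")                           # outage in North Europe
db.execute("West Europe", "INSERT INTO orders ...")  # now accepts writes
```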

In addition to some of these improvements with SQL databases, we also have a host of great improvements coming with HDInsight, which is our big data analytics engine. This runs a standard Hadoop instance as a managed service, so we do all the patching and management for you.

We’re excited to announce the GA of Hadoop 2.2 support. We now also have .NET 4.5 installed and APIs available, so you can write your MapReduce jobs using .NET 4.5.

We’re also adding audit and operation history support, a bunch of great improvements with Hive, and we’re now YARN-enabling the cluster so you can actually run more software on it as well.
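
Because HDInsight runs standard Hadoop, the classic vendor-neutral way to see what a MapReduce job involves (independent of the new .NET 4.5 APIs) is a Hadoop Streaming word count. A sketch in Python, with the streaming-jar invocation shown as an assumed example in the comments:

```python
#!/usr/bin/env python
# Classic Hadoop Streaming word count; since HDInsight runs standard
# Hadoop, this mapper/reducer pair works there as on any Hadoop cluster.
# Assumed invocation (paths vary by cluster):
#   hadoop jar hadoop-streaming.jar -mapper 'wordcount.py map' \
#       -reducer 'wordcount.py reduce' -input ... -output ...
import sys

def mapper():
    for line in sys.stdin:
        for word in line.split():
            print(f"{word}\t1")            # emit (word, 1) pairs

def reducer():
    current, count = None, 0
    for line in sys.stdin:                 # input arrives sorted by key
        word, n = line.rsplit("\t", 1)
        if word != current:
            if current is not None:
                print(f"{current}\t{count}")
            current, count = word, 0
        count += int(n)
    if current is not None:
        print(f"{current}\t{count}")

if __name__ == "__main__":
    (mapper if sys.argv[1:] == ["map"] else reducer)()
```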

And we’re also excited to announce a bunch of improvements in the storage space, including the general availability of our read-access geo-redundant storage option.

So we’ve now done deep dives into a whole bunch of the Azure features.

More information:

It has been a really busy last 10 days for the Azure team. This blog post quickly recaps a few of the significant enhancements we’ve made.  These include:

  • [Web] Web Sites: SSL included, Traffic Manager, Java Support, Basic Tier
  • [IaaS] Virtual Machines: Support for Chef and Puppet extensions, Basic Pricing tier for Compute Instances
  • [IaaS] Virtual Network: General Availability of DynamicRouting VPN Gateways and Point-to-Site VPN
  • [Mobile] Mobile Services: Preview of Visual Studio support for .NET, Azure Active Directory integration and Offline support
  • [Mobile] Notification Hubs: Support for Kindle Fire devices and Visual Studio Server Explorer integration
  • [IaaS] [Web] Autoscale: General Availability release
  • [Data] Storage: General Availability release of Read Access Geo Redundant Storage
  • [Mobile] Active Directory Premium: General Availability release
  • Scheduler service: General Availability release
  • Automation: Preview release of new Azure Automation service

All of these improvements are now available to use immediately (note that some features are still in preview).  Below are more details about them:

… With the April updates to Microsoft Azure, Azure Web Sites offers a new pricing tier called Basic.  The Basic pricing tier is designated for production sites, supporting smaller sites, as well as development and testing scenarios. … Which pricing tier is right for me? … The new pricing tier is a great benefit to many customers, offering some high-end features at a reasonable cost. We hope this new offering will enable a better deployment for all of you.

Microsoft is launching support for Java-based web sites on Azure Web Sites.  This capability is intended to satisfy many common Java scenarios combined with the manageability and easy scaling options from Azure Web Sites.

The addition of Java is available immediately on all tiers for no additional cost.  It offers new possibilities to host your pre-existing Java web applications.  New Java web site development on Azure is easy using the Java Azure SDK which provides integration with Azure services.

With the latest release of Azure Web Sites and the new Azure Portal Preview we are introducing a new concept: Web Hosting Plans. A Web Hosting Plan (WHP) allows you to group and scale sites independently within a subscription.

Microsoft Azure offers load balancing services for [IaaS] virtual machines (IaaS) and [Web] cloud services (PaaS) hosted in the Microsoft Azure cloud. Load balancing allows your application to scale and provides resiliency to application failures, among other benefits.

The load balancing services can be accessed by specifying input endpoints on your services, either via the Microsoft Azure Portal or via the service model of your application. Once a hosted service with one or more input endpoints is deployed in Microsoft Azure, the platform automatically configures the load balancing services it offers. To get the benefit of resiliency / redundancy for your services, you need to have at least two virtual machines serving the same endpoint.
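
A toy Python sketch of what the platform’s load balancer does with such an input endpoint (the class and VM names are invented; the real service operates on your deployed virtual machines with its own health probes):

```python
import itertools

class InputEndpoint:
    """Illustrative load-balanced endpoint: requests to one public port
    are spread round-robin over the healthy VMs behind it."""

    def __init__(self, port, instances):
        assert len(instances) >= 2, "resiliency needs at least two VMs"
        self.port = port
        self.instances = instances
        self._next = itertools.cycle(instances)

    def route(self):
        # Skip instances whose health probe has failed.
        for _ in range(len(self.instances)):
            vm = next(self._next)
            if vm["healthy"]:
                return vm["name"]
        raise RuntimeError("no healthy instances behind endpoint")

endpoint = InputEndpoint(80, [{"name": "web-vm-0", "healthy": True},
                              {"name": "web-vm-1", "healthy": True}])
print([endpoint.route() for _ in range(4)])  # alternates between the VMs
endpoint.instances[0]["healthy"] = False     # probe failure on web-vm-0
print([endpoint.route() for _ in range(2)])  # only web-vm-1 now serves
```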

The web marches on, and so does Visual Studio and ASP.NET, with a renewed commitment to making a great IDE for web developers of all kinds. Join Scott & Scott for this dive into VS2013 Update 2 and beyond. We’ll see new features in ASP.NET, new ideas in front end web development, as well as a peek into ASP.NET’s future.

When creating an Azure Mobile Service, a Notification Hub is automatically created as well, enabling large-scale push notifications to devices across any mobile platform (Android, iOS, Windows Store apps, and Windows Phone). For a background on Notification Hubs, see this overview as well as these tutorials and guides, and Scott Guthrie’s blog Broadcast push notifications to millions of mobile devices using Windows Azure Notification Hubs.

Let’s look at how devices register for notification and how to send notifications to registered devices using the .NET backend.
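
Conceptually the register/send flow is tiny; here is a hedged, language-agnostic sketch in Python (the class and method names are invented stand-ins for the Notification Hubs registration and send APIs, which the linked tutorials cover for the .NET backend):

```python
class NotificationHub:
    """Illustrative hub: devices register their platform push handle,
    optionally with tags; one send() fans out to every matching device."""

    def __init__(self):
        self.registrations = []   # (platform, handle, tags)

    def register(self, platform, handle, tags=()):
        # A device posts its push handle (APNs token, GCM id, WNS URI, ...)
        self.registrations.append((platform, handle, set(tags)))

    def send(self, message, tag=None):
        for platform, handle, tags in self.registrations:
            if tag is None or tag in tags:
                # A real hub would now call the platform push service.
                print(f"push via {platform} to {handle}: {message}")

hub = NotificationHub()
hub.register("wns", "https://wns.example/ch1", tags=["news"])
hub.register("apns", "devicetoken-abc", tags=["news", "sports"])
hub.send("Hello from the backend!", tag="news")  # one call, all devices
```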

New tiers improve customer experience and provide more business continuity options

To better serve your needs for more flexibility, Microsoft Azure SQL Database is adding new service tiers, Basic and Standard, to work alongside its Premium tier, which is currently in preview. Together these service tiers will help you more easily support the needs of database workloads and application patterns built on Microsoft Azure. … Previews for all three tiers are available today.

The Basic, Standard, and Premium tiers are designed to deliver more predictable performance for light-weight to heavy-weight transactional application demands. Additionally, the new tiers offer a spectrum of business continuity features, a [Data] stronger uptime SLA at 99.95%, and larger database sizes up to 500 GB for less cost. The new tiers will also help remove costly workarounds and offer an improved billing experience for you.

… [Data] Active Geo-Replication: …

… [Data] Self-service Restore: …

Stay tuned to the Azure blog for more details on SQL Database later this month!

Also, if you haven’t tried Azure SQL Database yet, it’s a great time to start and try the Premium tier! Learn more today!

Azure HDInsight now supports [Data] Hadoop 2.2 with HDInsight cluster version 3.0 and takes full advantage of this platform to provide a range of significant benefits to customers. These include, most notably:

  • Microsoft Avro Library: …
  • [Data] YARN: A new, general-purpose, distributed, application management framework that has replaced the classic Apache Hadoop MapReduce framework for processing data in Hadoop clusters. It effectively serves as the Hadoop operating system, and takes Hadoop from a single-use data platform for batch processing to a multi-use platform that enables batch, interactive, online and stream processing. This new management framework improves scalability and cluster utilization according to criteria such as capacity guarantees, fairness, and service-level agreements.

  • High Availability: …

  • [Data] Hive performance: Order of magnitude improvements to Hive query response times (up to 40x) and to data compression (up to 80%) using the Optimized Row Columnar (ORC) format.

  • Pig, Sqoop, Oozie, Ambari: …

The first “post-Ballmer” offering launched: with Power BI for Office 365 everyone can analyze, visualize and share data in the cloud

… and everything you could know about Satya Nadella’s solution strategy so far (from Microsoft’s Cloud & Enterprise organization):

  1. Power BI as the lead business solution and Microsoft’s visionary Data Platform solution built for it
  2. Microsoft’s vision of the unified platform for modern businesses

Keep in mind as well: Susan Hauser [CVP, EPG Group of Microsoft] interviews Microsoft CEO Satya Nadella [Microsoft, Feb 4, 2014; published on Microsoft Youtube channel, Feb 5, 2014]: [Microsoft, Feb 4, 2014: “Satya Nadella is a strong advocate for customers and partners, and a proven leader with strong technical and engineering expertise. Nadella addressed customers and partners for the first time as CEO during a Customer and Partner Webcast event.”]

[Contributor Profile: Susan Hauser, Corporate Vice President,
Enterprise and Partner Group, Microsoft]

As a teaser Q: [6:43] How do you think about consumer and business, and how do you see them benefiting each other?

A: You know, one of the things is that when we think about our product innovation, we don’t necessarily compartmentalize by consumer and business; we think about the user. In many of these cases, what needs to happen is experiences that, for sure, have a strong notion of identity and security, so I.T. control, where it’s needed, still matters a lot, and that’s something that, again, we will uniquely bring to market. But it starts with the user. The user obviously is going to have a life at home and a life at work. So how do we bridge that as more and more of what they do is digitally mediated? I want to be able to connect with my friends and family. I also want to be able to participate in the social network at work, and I don’t want the two things to be confused, but I don’t want to pick three different tools for doing the one thing I want to do seamlessly across my work and life. That’s what we are centered on. When we think about what we are doing in communications, what we are doing in productivity or social communications, those are all the places where we really want to bridge the consumer and business market, because that’s how we believe end-users actually work. [8:01]

More information:
Satya Nadella’s (?the next Microsoft CEO?) next ten years’ vision of “digitizing everything”, Microsoft opportunities and challenges seen by him with that, and the case of Big Data [‘Experiencing the Cloud’, Dec 13, 2013] … as one of the crucial issues for that (in addition to the cloud, mobility and Internet-of-Things), via the current tipping point as per Microsoft, and the upcoming revolution in that as per Intel … IMHO exactly in Big Data Microsoft’s innovations came to a point at which its technology has the best chances to become dominant and subsequently define the standard for the IT industry—resulting in “winner-take-all” economies of scale and scope. Whatever Intel is going to add to that in terms of “technologies for the next Big Data revolution” is going only to help Microsoft with its currently achieved innovative position even more. But for this reason I will include here the upcoming Intel innovations for Big Data as well.
Microsoft reorg for delivering/supporting high-value experiences/activities [‘Experiencing the Cloud’, July 11, 2013]
Microsoft partners empowered with ‘cloud first’, high-value and next-gen experiences for big data, enterprise social, and mobility on wide variety of Windows devices and Windows Server + Windows Azure + Visual Studio as the platform [‘Experiencing the Cloud’, July 11, 2013]
Will, with disappearing old guard, Satya Nadella break up the Microsoft behemoth soon enough, if any? [‘Experiencing the Cloud’, Feb 5, 2014]
John W. Thompson, Chairman of the Board of Microsoft: the least recognized person in the radical two-men shakeup of the uppermost leadership [‘Experiencing the Cloud’, Feb 6, 2014]
Modern Applications: The People Story for Business [MSCloudOS YouTube channel, Feb 11, 2014]

We’ve positioned the animation to tell a story that will appeal to non-technical customers (i.e., the business decision makers) and that will augment the product and technical stories we have developed. Think of it as an opening gambit to the kind of conversation we want to have with them. This is a “people story” about modern apps for business. This animation is aimed at taking the business angle and reinforcing our strong business app story. Learn More: http://www.microsoft.com/en-us/server-cloud/cloud-os/modern-business-apps.aspx

– THE BIG PICTURE: Microsoft Cloud OS Overview [MSCloudOS YouTube channel, Jan 21, 2014]

Hello!

image

My name is Gavriella Schuster and I’m the general manager of the US Server and Cloud business. Today I’d like to talk to you about Microsoft’s vision of the unified platform for modern businesses and how what we call the Cloud OS can help you transform your business as you shift into a world demanding continuous, always-on services at broad scale, accessed by a multitude of devices.

image

You are in the center of one of the largest IT transformations this industry has ever seen. There’s no question that big shifts are happening in IT today, driven by mobility and devices, applications, Big Data and cloud.

The proliferation of devices and the integration of technology have changed the way people live and work, and have opened the door for a multitude of new applications designed to meet every need. These applications are social, they’re mobile, and they need to be scalable, which means many will have a cloud back-end.

These devices and applications produce a huge amount of data. In fact, the world of data is doubling every two to three years. More than ninety percent of the world’s data was created just in the last couple of years. These trends are forcing IT to answer new and different questions.

image

How can you enable a mobile workforce to work from anywhere on any device? How can you evolve your applications to meet these new demands? How can you help businesses make faster and better decisions? And how do you ensure your infrastructure can and will scale to meet the demand?

Microsoft’s answer is the Cloud OS. The Cloud OS is Microsoft’s hybrid cloud solution, comprised of Windows Server, Windows Azure, System Center, Windows Intune, and SQL Server. With shared planning, development, engineering and support across these technologies, we’re bringing a comprehensive solution to support your business across a number of fronts—from infrastructure to data, to applications and devices.

image

When it comes to mobility and devices, we empower people-centric IT. Our solutions enable you to deliver a consistent and great user experience from anywhere, no matter the device, with a way to manage and protect it all.

Nearly every customer echoes the importance of enabling a bring-your-own-device environment as a direct driver of productivity.

Aston Martin, for instance, the luxury car manufacturer, was challenged managing over 700 remote devices—laptops, desktops, smartphones—across 145 dealerships in 41 countries. With Windows Intune and System Center Configuration Manager, Aston Martin can now proactively manage these devices via a single cloud-based console, before employee productivity is affected. And if an employee’s device is stolen, IT can remotely wipe that device to protect corporate data.

At the application level we enable modern business applications, so that you can quickly extend your applications with new capabilities, deploy to multiple devices, and decide where your applications live and move them wherever you want.

In regards to data, it’s all about Big Data, small data, and all data. The Cloud OS will help you unlock insights on any data and make it easier for everyone to access and perform analytics with tools they already use, like SharePoint and Excel, on any data, any size, from anywhere.

We have democratized access to this data so that the many, not the few, can uncover insights to power your business.

And lastly, at the core of the Cloud OS, powering mobility, applications and data, is your infrastructure. Our goal is to help you transform your datacenter, to enable you to go from managing each server individually to running a single, well-managed, elastic and scalable environment that powers all your application, compute, networking and storage needs.

We call this concept a datacenter without boundaries, where you get a consistent experience that takes you from the data center to the cloud and back if you wish, so that you have access to resources on demand and the ability to move workloads around with maximum flexibility. This provides you with easy on, easy off, with no cloud lock-in.

image

What makes our Cloud OS vision different is this hybrid design at the core. You benefit from a common and consistent approach to development, management, identity, security, virtualization and data, spanning from on-premises to the cloud: your private cloud, a service provider cloud, and Windows Azure—Microsoft’s enterprise public cloud.

This is powerful for a number of reasons.

  • One, we deliver a flexible development environment in which developers can code and deploy anywhere, across Ruby, Java, PHP, Python or .NET. And you get complete workload mobility to move these applications across clouds.
  • With System Center you get a single unified management solution to manage all your physical and virtual infrastructure resources across clouds in a single pane of glass.
  • Common identity is a third element of our consistent platform. With a federated Active Directory and multi-factor authentication you get a common identity across clouds, so your employees can enjoy a seamless, single sign-on experience.
  • Integrated virtualization is the fourth area. We go beyond traditional server virtualization, where only compute is virtualized, and extend it to other areas like storage and networking that are costly in your environment today.
  • Lastly, being able to have a complete data platform, where your data can reside anywhere across these three clouds, is a huge value proposition as well. You can tap into all that data wherever you need it, anytime.

Well, I’ve shared the core benefits Microsoft can deliver with this hybrid cloud approach.


One question I hear frequently from customers is: Oh, this is great. Can you tell me the best use case to get started with Azure?

Well, Azure can support a number of your infrastructure as a service [IaaS] and platform as a service [PaaS] needs. There are a few simple areas I encourage you to look at first.

image

Let’s start with storage.

With today’s enormous growth in data, everyone is looking for smarter, more cost-effective ways to manage and store their data. Windows Azure provides scalable cloud storage and backup for any data, big and small. Azure is very cost-effective because you only pay for what you use, at a cost that is lower than many on-premises solutions, SAN or NAS. Additionally, we offer a hybrid cloud storage option with our StorSimple appliance through Azure, allowing you to access frequently used data locally and tier less-used data to the cloud. Your data is deduplicated, compressed and encrypted, which means the data is smaller and therefore more cost-effective to store and protect.

One customer example is Steelcase Corporation. They’re an office furniture supplier. They’ve backed up their SharePoint data with StorSimple on Azure, reducing their storage costs by 46 percent and their restore times by 87 percent.

Another area to consider for Azure is your development and testing environment. You can easily and quickly self-provision as many virtual machines as you need for your application development and testing in the cloud, without waiting for hardware procurement or internal processes. We offer complete virtual machine mobility, so you can decide whether to deploy that application in production on Windows Azure, on-premises in your data center, or with a hosting provider. The choice is yours to deploy easily in whichever location with a few keystrokes.

And, if you’re looking to upgrade to the latest version of SharePoint or SQL [Server], Azure is a perfect option for testing in the cloud, with no impact to your production environment. You can roll out on-premises or in the cloud when you are ready.

On the topic of SQL [Server], backing up your on-premises SQL [Server] or Oracle databases is a must-have to help reduce your downtime and minimize data loss. With Azure you can create a low-cost SQL Server 2012 or 2014 database replica without having to manage a separate data center or use expensive co-location facilities, giving you geo-redundancy and encryption.

Backing up your database using Windows Azure Storage can save you up to 60 percent compared to on-premises SAN or tape solutions, thanks to our compression technology.

And our last scenario here for you to consider is identity. Managing identity across both public cloud and on-premises applications provides you with the security you want and a great user experience. With Windows Azure Active Directory you can create new identities in the cloud, or connect to an existing on-premises Active Directory to federate and manage access to your cloud applications. More importantly, you can synchronize on-premises identities with Windows Azure Active Directory and enable single sign-on for your users to access your cloud applications.


I hope I have provided you with a good overview of Microsoft’s hybrid cloud approach with the Cloud OS.

In delivering global services at scale—like Bing, Skype and Xbox from our data centers—you can trust that our solutions are battle tested to meet the needs of your business.

And it’s not just battle tested by us, but also by our customers. You heard a number of examples today of enterprises and organizations already benefiting from the Cloud OS vision. There are many, many more. This is a look at a small sampling.

image

We’re excited to see how each of you will transform IT and your businesses by taking advantage of our investments and solutions that are bringing the Cloud OS to life. So whether you’re testing the cloud for the first time or are well along with it, we have the platform and tools to help you every step of the way. Windows Azure and Windows Server support hybrid IT scenarios, so you can flex to the cloud when you want while still using your existing IT assets.

image

To get started today visit our Microsoft Cloud OS home page [Jan 20, 2014] to learn more and try out our solution.

Thank you for joining me.

Descriptors/tags:

Power BI as the lead business solution, Microsoft’s visionary Data Platform solution, unified platform for modern businesses, Microsoft Cloud OS, Cloud OS, mobility, apps, Big Data, cloud, Microsoft hybrid cloud solution, Windows Server, Windows Azure, System Center, Windows Intune, SQL Server, datacenter without boundaries, hybrid design, Microsoft Cloud OS vision, flexible development, unified management, common identity, integrated virtualization, complete data platform, storage, SharePoint, SQL, identity, self-service business intelligence solution, self-service analytics, self-service BI, analysis, visualization, collaboration, business intelligence models, Power BI for Office 365, Office 365, insights from data, data insights, Data Management Gateway, Power BI Sites, Power BI Mobile App, Mobile BI, natural language query, Q&A of Power BI, Microsoft Cloud & Enterprise Group, Microsoft’s Data Platform Vision, Power BI Jumpstart, autonomous marketing, Aston Martin, Microsoft’s Cloud OS home on YouTube, mobile device management, cloud computing, innovation, hybrid cloud, midmarket, datacenter modernization, consumerization of IT, hybrid cloud strategy, Business Intelligence, innovations, Microsoft Excel, Q&A, high-value activities, high-value experiences, high-value focus, Microsoft strategy, value focus, Active Directory, application development, Azure AD, Cloud first, cloud infrastructure, cloud solutions, enterprise opportunities, PaaS, IaaS, Windows devices, Windows Phone


1. Power BI as the lead business solution and Microsoft’s visionary Data Platform solution built for it

image

Self-service business intelligence solution enables all kinds of business users to find relevant information, pull data from Windows Azure and other sources, and prepare business intelligence models for analysis, visualization and collaboration.
image
February 10: the top message on the Microsoft News Center 

Although it is just linking to this blog entry (no press release or anything like a big splash):
Power BI for Office 365 empowers everyone to analyze, visualize and share data in the cloud [The Official Microsoft Blog, Feb 10, 2014]

The following post is from Quentin Clark, Corporate Vice President, Data Platform Group.


On Monday we announced that Power BI for Office 365 – our self-service business intelligence solution designed for everyone – is generally available. Power BI empowers all kinds of business users to find relevant information, pull data from Windows Azure and other sources, and prepare compelling business intelligence models for analysis, visualization, and collaboration. 

Modernizing business intelligence

Today business intelligence is only used by a fraction of the people that could derive value from it. What we all need is modernized business intelligence which will help everyone get the information they need to understand their job or personal life better. Not just the type of information gained from an Internet search, but also information from expert sources. Now imagine you could bring together these different information sources, discover relationships between facets of information, create new insights and understand your world better. And that you could get others to see what you see, and enable them to collaborate and build on one another’s ideas. And imagine that available on any scale of data and any kinds of computation you might need. Now imagine it’s not just you – but that anyone can access this kind of data-driven discovery and learning. 

Power BI brings together many key aspects of the modernization of business intelligence: a public and corporate catalog of data sets and BI models, a way to search for data, a modern app and a Web-first experience, rich interactive visualizations, collaboration capabilities, tools for IT to govern data and models, and a groundbreaking natural language experience for exploring insights. Together, these capabilities will not just change the kinds of insights we can gain from data, but change the reach of those insights as well.

Bringing big data to a billion users

With Power BI, we have the opportunity to bring these types of data insights to a billion people. Office 365 is broadly adopted and growing – one in four of our enterprise customers now has Office 365. By making our business intelligence features part of Office, we ensure the tools are accessible, and through Office 365, we make the tools easy to adopt – not just the ease of using Web applications, but making things like collaboration, security, data discovery and exploration integrated and turnkey. 

I talked earlier about the importance of reach, and one of the ultimate forms of reach we discovered over the course of developing Power BI has been a feature we named Q&A, which allows anyone to type in search terms – just as they would in Bing – and  get instantaneous, visual results in the form of interactive charts or graphs.

Power BI for Office 365 Overview [MSCloudOS YouTube channel, Jan 22, 2014]

Power BI for Office 365: Self-service analytics for all your data. Learn how Power BI can help you discover, analyze and visualize your data while it empowers you to share your insights and collaborate with your colleagues. Ask questions with Q&A, schedule refreshes from on-prem or cloud data sources and access your reports anytime, anywhere. Try Power BI: http://www.microsoft.com/en-us/powerbi/default.aspx#fbid=lVtiyE9CkuC

Realizing value from data

I personally know how significant this all is – as you can imagine, at Microsoft we run our business on our own data platform and on Power BI. In my role as head of our data platform group, I don’t create a lot of models, but I consume a lot of them – everything from the business financials of the SQL Server business and team management to our engineering and services datasets. My mobile business intelligence application for Windows 8 allows me to interact with our daily engineering data. The ability to visualize and interact with data on my large PPI screen allows me and my finance and marketing partners to meet in my office and have a deep conversation about the business. Collaboration through Office 365 and SharePoint Online allows me to share perspective with my peers around the company.

Power BI for Office 365 has empowered me to realize deeper value from data. I’m excited to share this power with everyone.

Get Insights from Data [MSCloudOS YouTube channel, Jan 24, 2014]

One-minute video clip explaining the value of Power BI along with Office 365, focusing on how it addresses business’ pain points (once you have your data, how you get insights from it).

Big insights from big data at the World Economic Forum 2014 [Next at Microsoft Blog, Jan 22, 2014]

I’m at the World Economic Forum in Davos this week – where the world’s leaders, thought leaders and innovators gather to discuss the political, social and economic forces that are transforming the world and our lives. The other force that the World Economic Forum calls out in its program (above all else) is the technological one.

WEF 2014 education data with Power BI for Office 365 [Microsoft YouTube channel, Jan 21, 2014]

Education data from the World Economic Forum Global Competitive Index — visualized using Power BI for Office 365

Microsoft’s Vision Center sits directly across from the congress hall where all of these forces are being discussed, and inside the center we’re showing how our technologies are helping turn data into insight. As part of their work, the World Economic Forum produces a large volume of data and indices covering 148 countries. When I saw this data set in an Excel spreadsheet, I knew it was ripe for transformation using Power BI for Office 365. As you can see in the video above, we’ve taken all of that data and are helping to deliver insight from it using Power View, Power Map and our Q&A technology. When you see the health data below, mapped country by country over a time period, it really brings the data alive. When you can compare educational data across regions, countries and by type of education, once again the data comes alive. The real treat for me has been using Q&A to ask questions of the data much as you would ask questions of a data scientist.

WEF 2014 healthcare data with Power BI for Office 365 [Microsoft YouTube channel, Jan 21, 2014]

Healthcare data from the World Economic Forum Global Competitive Index — visualized using Power BI for Office 365

If you’ve not had a chance to see Power BI in action I’d encourage you to take up a trial of Office 365 and download the Power BI tools from PowerBI.com – it puts the decision making from data in the hands of anyone and I believe will help to deliver insights that answer some of the big questions at Davos this week and in the future. 

Source: World Economic Forum, Global Competitiveness Report series (various editions)

Find and Combine Data [MSCloudOS YouTube channel, Jan 24, 2014]

One-minute video clip explaining the value of Power BI along with Office 365, focusing on how it addresses business’ pain points (finding and combining data within the SMB).

Microsoft Releases Power BI for Office 365 [C&E News Bytes Blog*, Feb 10, 2014]

Today, Microsoft announced the general availability of Power BI for Office 365, a cloud-based business intelligence service that gives people a powerful new way to work with data in the tools they use every day, Excel and Office 365. Power BI for Office 365 brings together Microsoft’s strengths in cloud, productivity and business intelligence to enable people to easily analyze and visualize data in Excel, discover valuable insights, and share and collaborate on those insights from anywhere with Office 365.

Power BI for Office 365 with Excel allows business users to easily create reports and discover insights in Excel, and to share and collaborate on those insights in Office 365. Excel includes powerful data modeling and visualization capabilities which enable customers to easily discover, access, and combine their data. Customers also have the ability to create rich 3D geospatial visualizations in Excel.

With Office 365, customers have access to cloud-based capabilities to share visualizations and reports with their colleagues in real time and on mobile devices, interact with their data in new ways to gain faster insights and manage their work more effectively. These key cloud-based capabilities include:

  • A Data Management Gateway which enables IT to build connections to on-premises data sources and schedule refreshes. Business users always have the most up-to-date reports, whether on their desktop or on their device.
    [From the preview in Oct’13 here:] Through the Data Management Gateway, IT can enable on-premises data access for all reports published into Power BI so that users have the latest data. IT can also enable enterprise data search across their organization, making it easier for users to discover the data they need. The system also monitors data usage across the organization, providing IT with the information they need to understand and manage the system overall.
  • [Power] BI Sites, dedicated workspaces optimized for BI projects, which allow business users to quickly find and share data and reports with colleagues and collaborate over BI results.
    [From the preview in Oct’13 here:] Power BI for Office 365 enables users to quickly create Power BI Sites, BI workspaces for users to share and view larger workbooks of up to 250MB, refresh report data, maintain data views for others and track who is accessing them, and easily find the answers they need with natural language query. Users can also stay connected to their reports in Office 365 from any device with HTML5 support for Power View reports and through a new Power BI mobile app for Windows.
  • Real-time access to BI Sites and data no matter where a user is located via mobile devices. Customers can access their data through the browser in HTML5 or through a touch-optimized mobile application, available in the Windows Store.
    [From the preview in Oct’13 here:] The Power BI Mobile App, available in the Windows Store, is a new visualization app for Office that helps visualize graphs and data residing in an Excel workbook. The user is able to navigate through the data with multiple views and the ability to zoom in and out at different levels. This app was first available for Windows 8, Windows RT, and Surface devices through the Windows Store, specifically for those customers using the Power BI for Office 365 Preview. It provides touch-optimized access to BI reports and models stored in Office 365.
    Power BI App for Windows 8 and Windows RT now available in Store [“Welcome to the US SMB&D TS2 Team Blog”, Aug 21, 2013]
    Microsoft mobile app helps citizens report crimes more quickly to police in Delhi, India [The Fire Hose Blog, Jan 29, 2014]
  • A natural language query experience called Q&A which allows users to ask questions of their data and receive immediate answers in the form of an interactive table, chart or graph.

Power BI for Office 365 provides an easy on-ramp for organizations who have bet on Office 365 to begin doing self-service BI today. Several customers have already started realizing the benefits of the service, including Revlon, MediaCom, Carnegie Mellon University and Trek.

For more information, read the post by Quentin Clark, Corporate Vice President of the Data Platform Group [here you’ve already seen/read above], on the Official Microsoft Blog. Customers can find out more about how to purchase Power BI for Office 365 at powerbi.com.

[*About C&E News Bytes Blog: Here you will find a quick synopsis of all news from Microsoft’s Cloud & Enterprise organization as it is released with links to additional information.]

Share Data Insights [MSCloudOS YouTube channel, Jan 24, 2014]

One-minute video clip explaining the value of Power BI along with Office 365, focusing on how it addresses business’ pain points (once you get your data insights, how you can share it within your SMB and use the data to its fullest potential).

Broncos Road to the Big Game [MSCloudOS YouTube channel, Jan 31, 2014]

Power Map tour of the 2013 Broncos season and their road to the Super Bowl XLVIII.

Seahawks Road to the Big Game [MSCloudOS YouTube channel, Jan 31, 2014]

Power Map tour of the 2013 Seahawks season and their road to the Super Bowl XLVIII

What Drives Microsoft’s Data Platform Vision? [SQL Server Blog, Jan 29, 2014]

FEATURED POST BY:   Quentin Clark, Corporate Vice President, The Data Platform Group, Microsoft Corporation

image

If you follow Microsoft’s data platform work, you have probably observed some changes over the last year or so in our product approach and in how we talk about our products.  After the delivery of Microsoft SQL Server 2012 and Office 2013, we ramped up our energy and sharpened our focus on the opportunities of cloud computing.  These opportunities stem from technical innovation, the nature of cloud computing, and from an understanding of our customers.

In my role at Microsoft, I lead the team that is responsible for the engineering direction of our data platform technologies.  These technologies help our customers derive important insights from their data and make critical business decisions.  I meet with customers regularly to talk about their businesses and about what’s possible with modern data-intensive applications.  Here and in later posts, I will share some key points from those discussions to provide you with insight into our data platform approach, roadmap, and key technology releases.

Microsoft has made significant investments in the opportunities of cloud computing.  In today’s IT landscape, it’s clear that the enterprise platform business is shifting to embrace the benefits of cloud computing—accessibility to scale, increased agility, diversity of data, lowered TCO and more. This shift will be as significant as the move from the mainframe/mini era to the microprocessor era.  And, due to this shift, the shape and role of data in the enterprise will change as applications evolve to new environments.

Today’s economy is built on the data platform that emerged with the microprocessor era—effectively, transactional SQL databases, relational data warehousing and operational BI.  An entire cycle of business growth was led by the emergence of patterns around Systems of Record, everything from ERP applications to Point of Sale systems.  The shift to cloud computing is bringing with it a new set of application patterns, which I sometimes refer to as Systems of Observation (SoO).  There are several forms of these new application patterns: the Internet of Things (IoT), generally; solutions being built around application and customer analytics; and, consumer personalization scenarios.  And, we are just beginning this journey! 

These new application patterns stem from the power of cloud computing—nearly infinite scale, more powerful data analytics and machine learning, new techniques on more kinds of data, a whole host of new information that impacts modern business, and ubiquitous infrastructure that allows the flow of information like never before.  What is being done today by a small number of large-scale Internet companies to harness the power of available information will become possible to apply to any business problem. 

To provide a framework for how we think applications and the information they generate or manage will change—and how that might affect those of us who develop and use those applications—consider these characteristics:

Data types are diverse.  Applications will generate, consume and manipulate data in many forms: transactional records, structured streamed data, truly unstructured data, etc.  Examples include the rise of JSON, the embracing of Hadoop by enterprises, and the new kinds of information generated by a wide variety of newly connected devices (IoT).

Relevant data is not just from inside the enterprise.  Cross-enterprise data, data from other industries and institutions, and information from the Web are all starting to factor into how businesses and the economy function in a big way.  Consider the small business loan extension that accounts for package shipping information as a criteria; or, companies that now embrace the use of social media signals.

Analytics usage is broadening.  Customer behavior, application telemetry, and business trends are just a few examples of the kinds of data that are being analyzed differently than before.  Deep analytics and automated techniques, like machine learning, are being used more often. And, modern architectures (cloud-scale, in-memory) are enabling new value in real-time, highly-interactive data analysis.

Data by-products are being turned into value.  Data that were once considered as by-products of a core business are now valuable across (and outside of) the industries that generate this data; for example, consider the expanding uses of search term data.  Perhaps uniquely, Microsoft has very promising data sets that could impact many different businesses.  

With these characteristics in mind, our vision is to provide a great platform and solutions for our customers to realize the new value of information and to empower new experiences with data.  This platform needs to span across the cloud and the enterprise – where so much key information and business processes exist.  We want to deliver Big Data solutions to the masses through the power of SQL Server and related products, Windows Azure data services, and the BI capabilities of Microsoft Office. To do this, we are taking steps to ensure our data platform meets the demands of today’s modern business.

Modern Transaction Processing—The data services that modern applications need are broader now than traditional RDBMS.  Yes, this too needs to become a cloud asset, and our investments in Windows Azure SQL Database reflect that effort.  We recognize that other forms of data storage are essential, including Windows Azure Storage and Tables, and we need to think about new capabilities as we develop applications in cloud-first patterns.  These cloud platform services need to be low friction, easy to incorporate, and operate seamlessly at scale—and have built-in fundamental features like high availability and regulatory compliance.  We also need to incorporate technical shifts like large memory and high-speed low latency networking—in our on-premises and cloud products. 

Modern Data Warehousing—Hadoop brought flexibility to what is typically done with data warehousing: storing and performing operational and ad-hoc analysis across large datasets.  Traditional data warehousing products are scaling up, and the worlds of Hadoop and relational data models are coming together.  Importantly, enterprise data needs broad availability so that business can find and leverage information from everywhere and for every purpose—and this data will live both in the cloud and in the enterprise datacenter.  We are hearing about customers who now compose meaningful insights from data across Windows Azure SQL Database and Windows Azure Storage processed with Windows Azure HDInsight, our Hadoop-based big data solution. Customers are leveraging the same pattern of relational + Hadoop in our Parallel Data Warehouse appliance product in the enterprise. 
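
The "relational + Hadoop" composition described above can be pictured with a small sketch: join transactional rows pulled from SQL with an aggregate produced by a Hadoop job. Here both sides are faked as in-memory tables with pandas, and the column names are hypothetical.

```python
import pandas as pd

# Rows as they might come back from Windows Azure SQL Database.
orders = pd.DataFrame({
    "customer_id": [1, 2, 3],
    "total_spend": [250.0, 90.0, 410.0],
})

# An aggregate as it might be emitted by a Hadoop/HDInsight job over web logs.
clickstream = pd.DataFrame({
    "customer_id": [1, 2, 3],
    "visits_last_30d": [14, 2, 33],
})

# Compose the two worlds into one view for analysis.
combined = orders.merge(clickstream, on="customer_id")
print(combined)
```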

Modern Business Intelligence—Making sense of data signals to gain strategic insight for business will become commonplace.  Information will be more discoverable; not just raw datasets, but those facets of the data that can be most relevant—and the kinds of analytics, including machine learning, that can be applied—will be more readily available.  Power BI for Office 365, our new BI solution, enables balance between self-service BI and IT operations—which is a key accelerant for adoption. With Power BI for Office 365, data from Windows Azure, Office, and on-premises data sources comes together in modern, accessible BI experiences. 

Over the coming months, we are going to publish regular posts to encourage discussions about data and insights and the world of modernized data. We will talk more about the trends, the patterns, the technology, and our products, and we’ll explore together how the new world of data is taking shape. I hope you will engage in this conversation with us; tell us what you think; tell us whether you agree with the trends we think we see—and with the implications of those trends for the modern data platform.

If you’d like more information about our data platform technologies, visit www.microsoft.com/bigdata and follow @SQLServer on Twitter for the latest updates.

Getting Trained on Microsoft’s Expanding Data Platform [SQL Server Blog, Feb 6, 2014] 

With data volumes exploding, having the right technology to find insights from your data is critical to long-term success.  Leading organizations are adjusting their strategies to focus on data management and analytics, and we are seeing a consistent increase in organizations adopting the Microsoft data platform to address their growing needs around data.  The trend is clear: CIOs named business intelligence (BI) and analytics their top technology priority in 2012, and again in 2013. Gartner expects this focus to continue during 2014.2

At Microsoft, we have great momentum in the data platform space, and we are proud to be recognized by analysts: IDC reports that Microsoft SQL Server continues to be the unit leader and became the #2 database vendor by revenue.1 Microsoft was named a leader in both the Enterprise Data Warehouse and Business Intelligence Waves by Forrester,3,4 and is named a leader in the Operational Database Management Systems (OPDMS) Magic Quadrant.5

The market is growing and Microsoft has great momentum in this space, so this is a great time to dig in and learn more about the technology that makes up our data platform through these great new courses in the Microsoft Virtual Academy.

Microsoft’s data platform products

Quentin Clark recently outlined our data platform vision [which you’ve already read above]. This calendar year we will be delivering an unprecedented lineup of new and updated products and services:

  • SQL Server 2014 delivers mission-critical analytics and performance by bringing to market new in-memory capabilities built into the core database, speeding up OLTP (typically 10X, and up to 30X) and data warehousing (up to 100X). SQL Server 2014 provides the best platform for hybrid cloud scenarios, like cloud backup and cloud disaster recovery, and significantly simplifies the on-ramp to the cloud for our customers with new point-and-click experiences for deploying cloud scenarios in the tools that are already familiar to database administrators (DBAs).
  • Power BI for Office 365 is a new self-service BI solution delivered through Excel and Office 365 which provides users with data analysis and visualization capabilities to identify deeper business insights from their on-premises and cloud data.
  • Windows Azure SQL Database is a fully managed relational database service that offers massive scale-out with global reach, built-in high availability, options for predictable performance, and flexible manageability. Offered in different service tiers to meet basic and high-end needs, SQL Database enables you to rapidly build, extend, and scale relational cloud applications with familiar tools.
  • Windows Azure HDInsight makes Apache Hadoop available as a service in the cloud, and also makes the MapReduce software framework available in a simpler, more scalable, and cost-efficient Windows Azure environment.
  • Parallel Data Warehouse (PDW) is a massively parallel processing data warehousing appliance built for any volume of relational data (with up to 100x performance gains) and provides the simplest way to integrate with Hadoop. With PolyBase, PDW can also seamlessly query relational and non-relational data.

In-depth learning through live online technical events

To support the availability of these products, we’re offering live online events that will enable in-depth learning of our data platform offerings. These sessions are available now through the Microsoft Virtual Academy (MVA) and are geared towards IT professionals, developers, database administrators and technical decision makers. In each of these events, you’ll hear the latest information from our engineering and product specialists to help you grow your skills and better understand what differentiates Microsoft’s data offerings.

Here is a brief overview of the sessions that you can register for right now:

Business Intelligence

Faster Insights with Power BI Jumpstart | Register for the live virtual event on February 11

Session Overview: Are you a power Excel user? If you’re trying to make sense of ever-growing piles of data, and you’re into data discovery, visualization, and collaboration, get ready for Power BI. Excel, always great for analyzing data, is now even more powerful with Power BI for Office 365. Join this Jump Start, and learn about the tools you need to provide faster data insights to your organization, including Power Query, Power Map, and natural language querying. This live, demo-rich session provides a full-day drilldown into Power BI features and capabilities, led by the team of Microsoft experts who own these features.

Data Management for Modern Business Applications

SQL Server in Windows Azure VM Role Jumpstart | Register for the live virtual event on February 18

Session Overview: If you’re wondering how to use Windows Azure as a hosting environment for your SQL Server virtual machines, join the experts as they walk you through it, with practical, real-world demos. SQL Server in Windows Azure VM is an easy and full-featured way to be up and running in 10 minutes with a database server in the cloud. You use it on demand and pay as you go, and you get the full functionality of your own data center. For short-term test environments, it is a popular choice. SQL Server in Azure VM also includes pre-built data warehouse images and business intelligence features. Don’t miss this chance to learn more about it.

Here’s a snapshot of the great content available to you now, with more to come later on the MVA data platform page:

Data Management for Modern Business Applications

Modern Data Warehouse

For more courses and training, keep tabs on the MVA data platform page and the TechNet virtual labs as well.

Thanks for digging in.

Eron Kelly
General Manager
Data Platform Marketing

———– 

1 IDC, Market Analysis: Worldwide Relational Database Management Systems 2013–2017 Forecast and 2012 Vendor Shares, report #241292, by Carl W. Olofson, May 2013.
2 Gartner, Business Intelligence and Analytics Will Remain CIOs’ Top Technology Priority, G00258063, by W. Roy Schulte, Neil Chandler, Gareth Herschel, Douglas Laney, Rita L. Sallam, Joao Tapadinhas, and Dan Sommer, November 25, 2013.
3 Forrester Research, Inc., The Forrester Wave™: Enterprise Data Warehouse, Q4 2013, December 9, 2013.
4 Forrester Research, Inc., The Forrester Wave™: Enterprise Business Intelligence Platforms, Q4 2013, December 18, 2013.
5 Gartner, Magic Quadrant for Operational Database Management Systems, by Donald Feinberg, Merv Adrian, and Nick Heudecker, October 21, 2013.
Disclaimer:
Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

Free Power BI Training – Microsoft Virtual Academy Jump Start [“A Story of BI, BIG Data and SQL Server in Canada” Blog, Feb 5, 2014]

Whether you’re a power Excel user or you’re just trying to make sense of ever-growing piles of data, we have a great day long, free online training session for you on Power BI for Office 365.

This live, demo-rich training will provide sessions covering key Power BI features and capabilities and help you learn about the tools you need to provide faster data insights to your organization.

Course Outline:

  • Introduction to Power BI
  • Drilldown on Data Discovery Using Power Query
  • The Data Stewardship Experience
  • Building Stellar Data Visualizations Using Power View
  • Building 3D Visualizations Using Power Map
  • Understand Power BI Sites and Mobile BI
  • Working with Natural Language Querying Using Q&A
  • Handling Data Management Gateway
  • Get Your Hands on Power BI

Sign up for this Microsoft Virtual Academy Jump Start, led by the team of Microsoft experts who own these features.

Live Event Details

  • February 11, 2014
  • 9:00am-5:00pm PST
  • What time is this in my time zone?
  • What: Fast-paced live virtual session
  • Cost: Free
  • Audience: IT Pro
  • Prerequisites: For data analysts, Excel power users, or anyone looking to turn their data into useful business information.
  • Register Now>>

Interview with Marc Reguera, Director of Finance at Microsoft [MSCloudOS YouTube channel, Feb 10, 2014]

Hear directly from Marc Reguera, Director of Finance at Microsoft and BI champion, about how Power BI is changing the way finance works inside Microsoft.

Power BI Webinar Series [MSFT for Work Blog, Jan 22, 2014]

Big data scientists and the finance department haven’t always seen eye to eye in most companies. Now is your chance to embrace big data and free your finance department to focus on the work that adds the most value.

You are invited to join Microsoft Finance Director Marc Reguera and members of the Microsoft finance leadership team to find out what they did to become a more empowered and influential finance organization. The powerful new business intelligence tools they will demonstrate have been under wraps for almost two years and have so far only been used within Microsoft.


Now the tools have been road-tested and are ready for you to try. Grab your chance to learn how Microsoft’s new BI tools will help your business not only adapt to the world of big data, but actually thrive in it.

Register for any and all of the webinars you are interested in:

1/23/14: Visualization: See how these powerful new tools have improved Microsoft’s ability to consume big data and develop insights by simplifying the data and using visualization tools. Register here.

1/30/14: Definitions: Get the best practices for creating and aligning behind a common set of data definitions and taxonomies. Learn how to get everyone on the same page. Register here.

2/13/14: Outsourcing: Learn how Microsoft worked with partners to optimize and outsource non-strategic finance tasks so the organization could focus on high-value activities. Register here.

2/20/14: Cloud collaboration: Learn how your organization can focus more time on delivering business insights by using Power BI and Microsoft Office 365. Register here.

3/6/14: Making things easy to comprehend without making them simplistic: See how Microsoft finance teams consume and analyze millions of rows of data and present their analysis in a narrative that’s easy to understand for multiple audiences. Register here.

Taken together, this series of webinars will help your company’s finance department adapt to a world of rapidly shifting paradigms and to an era of big data that, without the right tools, can be overwhelming.

Business Intelligence: “The Eyes and Ears of Your Business” [Microsoft for Work Blog, Jan 30, 2014]

Businesses are collecting more data than ever before, and technology is making that process increasingly easier and more affordable. The challenge for business owners is 1) how to quickly turn that raw data into actionable business insights, and 2) how to give more people within an organization access to those insights on a self-serve basis.

Organizations must have insight into how their operations are performing in order to stay competitive. Companies that successfully manage their big data assets are more profitable than companies not making this investment, says Jason Baick, Senior Product Marketing Manager at Microsoft. Simply put, “[business intelligence] is the eyes and ears of your business,” Baick says.

Release data from the IT department

Data analysis started off as a highly specialized process. “It was always a barrier to self-service information … the treasure trove of the data was locked up in the IT department,” Baick points out. Today there are easy-to-use data visualization tools that offer anyone within an organization access to real-time business insights.

Take the Microsoft Power BI suite, for example, which gives both businesses and individuals an easy-to-use platform to visualize their data. Given that many businesses already have the infrastructure that Power BI is built on (e.g. Microsoft SharePoint) and a familiarity with its feature set, integration and adoption are simplified. Your users don’t face an intimidation factor because they already know how to use Excel, explains Baick. By equipping your employees with these types of tools, you can enable team members to unearth real-time insights, ranging from targeting a prospect at exactly the right time to make the sale, to determining where the company can cut costs, to revealing where it should invest more.

Here’s a rundown of specific Power BI tools and what they can offer your business:

  • Discover and Combine
    • Search and access all your company’s data and public data from one place using Power Query. Give your team the ability to be more efficient while cutting down on the cost of investing in multiple, disparate data tools. (For readers who think in code, see the sketch after this list.)
  • Model and Analyze
    • Empower your employees to create analytical models using Power Pivot. Since this is built on familiar software like Excel, you won’t have to worry about the cost of training or having to hire new staff for implementation.
  • Visualize
    • Power View and Power Map enable your team members to quickly translate big data sets and create easy-to-understand visuals without a huge time investment.
  • Share and Collaborate
    • Seamlessly share and edit workbooks from any device, allowing your employees quick and easy access to important information in real time.
  • Get Answers and Insights
    • The new Q&A feature gives your employees the ability to ask any question of their data without requiring specialized skills to draw out these insights.
  • Access Anywhere
    • Give your staff access to the Power BI tool set from any device, any location. This empowers your employees to access data in real time, which could mean the difference between making and not making a sale.
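
The "discover and combine" step is analogous to the following minimal sketch, with pandas standing in for Power Query; the datasets and column names are hypothetical.

```python
import pandas as pd

# Internal sales figures, e.g. exported from the company's own systems.
sales = pd.DataFrame({
    "state": ["WA", "OR", "CA"],
    "revenue": [1.2e6, 0.4e6, 3.1e6],
})

# A public dataset discovered elsewhere (hypothetical values; Power Query can
# pull such data from the web or a corporate catalog directly).
population = pd.DataFrame({
    "state": ["WA", "OR", "CA"],
    "population": [7.0e6, 4.0e6, 38.0e6],
})

# Combine the two sources and derive a new measure, ready to visualize.
merged = sales.merge(population, on="state")
merged["revenue_per_capita"] = merged["revenue"] / merged["population"]
print(merged)
```
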
How are people using Power BI?

Companies like MCH Strategic Data are already employing the Power BI suite to get more out of their data. MCH collects an enormous amount of education and healthcare marketing data for their clients. After 85 years in the business, they’re now able to deliver new and unique insights to clients like never before. One application has been to create videos with Power Map data visualizations showing the geographic range of socioeconomic status across various school districts. They’ve also made subsets of their data available and searchable by Power BI users, including datasets on hospitals, school systems, and emergency preparedness services throughout the US.

Building a data-driven organization

Everyone at your company can contribute to uncovering business insights, and it’s important to give them the tools to do so. Using your data in a smart and strategic way enables you to turn it into actionable business insights and to stay ahead of the competition.

The Autonomy of Marketing with Big Data [Microsoft for Work Blog, Jan 15, 2014]

We spoke to Jeff Marcoux, Senior Product Marketing Manager for Dynamics CRM, about how big data and data insights have changed marketing. He outlined three ways that companies can use big data to reimagine their marketing efforts.

He also outlined an all-encompassing rule for using data insights in marketing: it’s not about how much data you have, it’s what you do with it. “Large data makes graphs, but significant data tells a story,” said Marcoux. Learning how to turn significant data into actionable insights is the key to unlocking its potential as an asset to your business. Here are Jeff’s three key ways companies can do smart things with their data:

  1. Embrace the idea that autonomous marketing, or marketing that is auto-optimized and auto-customized according to customer insights and machine-generated learning, can reinvigorate marketing campaigns. The key is that it yields a more responsive marketing campaign that continuously strengthens and adjusts itself.
  2. Use customer insights to create stronger sales-marketing partnerships by increasing positive brand awareness and generating more accurate information on qualified leads and revenue attribution. In other words, more insight contributing to less finger-pointing and, ultimately, greater partnerships. 
  3. Translate data into business impact by building custom sales kits appropriate for every opportunity and every customer, monitoring the end-to-end customer life cycle, and keeping customers hooked. After all, according to Marcoux, “existing customers are the best sellers.”

Data insights will help drive marketing at the deepest strategic levels, providing actionable insights that can constantly be measured against and refined. Remember, it’s not how much data you’ve got, it’s what you do with it. If your organization has started to use data insights in your marketing efforts, do you have any tips on how to better use data? Sound off in the comments!

Autonomous Marketing: Using data to perfectly personalize marketing efforts [Microsoft for Work Blog, Jan 30, 2014]

Personalization is the gold standard for marketing efforts. If you can connect with a customer on a personal level and demonstrate that you understand your audience, the customer is far more likely to respond to your marketing campaigns. It may seem like a daunting task to crunch that much customer information and automatically adapt it to your marketing efforts, but it doesn’t have to be. Technologies exist that allow you to update campaigns with new data (auto-optimize) and use that updated data to better target your efforts (auto-customize), removing the guesswork from your campaigns. Marketing that is auto-optimized and auto-customized based on customer insights and machine-generated learning—called “autonomous marketing”—is now a tangible reality for many businesses.
Autonomous marketing and big data will be critical in re-imagining a more personalized approach to marketing—and learning to harness this approach will keep your business ahead of the curve as a marketing innovator.

Data, data everywhere…

The amount of data available today is overwhelming. Take, for example, a single business—just between the company’s website, Facebook page, and Twitter, there’s a lot to keep track of. All this information needs to be consolidated and combed to figure out which data is significant and what happens next. For many businesses the question becomes: what do I do with my data?

According to Jeff Marcoux, Senior Product Marketing Manager for Dynamics CRM, that data should be fed to an engine that automatically optimizes itself. In practice, this “auto-optimizing” capability translates into the ability to make campaign improvements in real time. The result is a more responsive marketing campaign that continuously strengthens and adjusts itself to help dial in on more precise market segments and figure out what’s working.
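
The article does not say how such an engine works internally, but one classic technique that fits the description is a multi-armed bandit. Here is a minimal epsilon-greedy sketch (entirely illustrative, not Dynamics CRM code) that shifts traffic toward the better-performing campaign variant as results come in.

```python
import random

variants = ["subject_line_a", "subject_line_b"]
shown = {v: 0 for v in variants}   # how often each variant was sent
clicks = {v: 0 for v in variants}  # how often it was clicked

def pick_variant(epsilon=0.1):
    # Try anything never shown, then mostly exploit the best, sometimes explore.
    unseen = [v for v in variants if shown[v] == 0]
    if unseen:
        return random.choice(unseen)
    if random.random() < epsilon:
        return random.choice(variants)
    return max(variants, key=lambda v: clicks[v] / shown[v])

def record_result(variant, clicked):
    shown[variant] += 1
    clicks[variant] += clicked

# Simulated campaign: variant B truly performs better (8% vs 3% click rate).
true_rate = {"subject_line_a": 0.03, "subject_line_b": 0.08}
for _ in range(10000):
    v = pick_variant()
    record_result(v, random.random() < true_rate[v])

print(shown)  # traffic has drifted toward the stronger variant
```
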
Getting personal

The clincher, once you’ve homed in on those market segments, is auto-customizing marketing campaigns down to the individual level. “Customers are already so far down the buying cycle when they get to you (nearly 57%), and getting personal is the only way to land your message and have it resonate with consumers,” said Marcoux. Once that same engine is automatically tailoring marketing efforts based on data insights, you’ll know you’ve crossed over into today’s gold standard for marketing—personalization perfection.

“Autonomous marketing is a beast,” said Marcoux. “Once it gets going you just have to pay attention and keep feeding it.” The autonomous marketing beast metabolizes content and, so long as it’s fed plenty of “healthy” (significant) data, it will do its job to improve marketing. In turn, you will gain valuable insight into revenue performance and ROI, so you can pinpoint which marketing maneuvers converted into real business impact.

A healthy beast, a happy business

The success of autonomous marketing relies on two things: the quality of the data it’s fed and whether you take advantage of the insights it offers. A responsive, personalized approach to marketing is where we’re headed—are you doing everything to make sure your business is headed there too?

The Human Side of Autonomous Marketing [Microsoft for Work Blog, Jan 30, 2014]

How do you retain the creative side of marketing when big data and autonomous marketing inevitably change the way marketers work? Data insights enhance the efficacy of your marketing efforts; however, human input is always necessary to decipher big data. Autonomous marketing, used to enable marketers and nail down effective marketing campaigns, is the secret to realizing business impact.

Metrics for the Mind

The application of autonomous marketing is a necessary next step in meeting a new demand, but it doesn’t supplant the need for marketers in the flesh. According to Jeff Marcoux, Senior Product Marketing Manager for Microsoft Dynamics CRM, marketers will never be forced to relinquish their instincts and creativity—their marketing guts—because analytics, data, and insight help fuel creativity.

“The main reason I say that,” said Marcoux, “is because there are always going to be new channels, and marketers have to come up with new ways to use them.” Take, for example, the exodus of college-age students from Facebook (which Marcoux attributes to the fact that their parents are on it) to something more like Snapchat. Although data may shed some insight on the shift, it’s up to marketers to take advantage of it in a creative way (e.g., showing loyal fans a secret menu or product announcement before the rest of the world gets to see it).

Take the University of Colorado’s online program at its Anschutz Medical Campus, which faced the challenge of remaining competitive and reaching potential students on their own terms. CU used Microsoft Dynamics CRM to identify what their potential students liked, the media they consumed, and the social networks they used—processes that would normally take marketers months of research—and automated it so their marketing team could focus on killer campaigns that would engage the potential students they did find. The result? Increased student retention and recruitment.

Coming up with the emotional content that drives a campaign is where the creativity and experience come in. Marcoux sees autonomous marketing as a way to free up marketers to do what they love—create and innovate—and, today, there’s plenty of opportunity to innovate as campaigns become increasingly personalized.

A Mind-Body Approach to Marketing

Customers don’t want to be just a number; they want to be known. “With social media, everything is personal and everything is online,” said Marcoux. “Hooking” modern consumers is a matter of building those personal, emotional relationships—identifying who they are and what their need is, educating them on a solution, and then ultimately providing that solution.

“We’ve seen that personalization come across in emails and social posts, but that’s all been enabled by big data,” said Marcoux. Customers are already so far down the buying cycle when they get to you (nearly 57%), and getting personal is the only way to land your message and have it resonate with consumers these days.

Autonomous marketing powered by data insights helps marketers gather and combine information from many different sources in order to figure out what content is working. This way, marketers can focus on what is actually selling their product rather than getting petrified by what Marcoux calls “analysis paralysis,” or the misinterpretation and incorrect analysis of data.

Ultimately, autonomous marketing is a way to deal with the deluge of social data and other information to help marketers do their job better. Reimagining marketing, according to Marcoux, is a matter of using big data to narrow in on those granular market segmentations and continuing to fine-tune an effective, personalized marketing approach that will hook customers and keep them hooked.


2. Microsoft’s vision of the unified platform for modern businesses

THE BIG PICTURE: Microsoft Cloud OS Overview [MSCloudOS YouTube channel, Jan 21, 2014]

Tune into this bite-size video where you will hear Microsoft General Manager Gavriella Schuster provide an overview of the Microsoft Cloud OS and how the underlying technologies – Windows Server, System Center, Windows Azure, Microsoft SQL Server, and Windows Intune – can help you cloud-optimize your business today. Interested in learning more? Visit our Microsoft Cloud OS homepage: http://www.microsoft.com/en-us/server-cloud/cloud-os/ Ready to try these solutions and experience the benefits first hand? Contact your Microsoft account manager or partner to schedule an Immersion experience today.

Business Insights Newsletter Article | October 2013

MICROSOFT DEFINES THE CLOUD WITH ONE WORD – VALUE

When conversation turns to cloud computing, there is a lot of noise. Press, vendors, analysts, bloggers and others deliver opinions on what a successful cloud strategy entails.
Converging technologies such as Big Data, Mobility, BYOD and Social are transforming how businesses operate and compete, and they rely on the cloud as a critical enabler. Cloud itself is considered an emerging megatrend representing a real opportunity for IT to introduce more efficiency across every operational line of business.

“The modern workforce isn’t just better connected and more mobile than ever before, it’s also more discerning (and demanding) about the hardware and software used on the job. While company leaders around the world are celebrating the increased productivity and accessibility of their workforce, the exponential increase in devices and platforms that the workforce wants to use can stretch a company’s infrastructure (and IT department!) to its limit.”
Brad Anderson, Corporate Vice President, Microsoft
Microsoft believes that cloud is quite simply about a single concept – value. In this article we will share how Microsoft helps you realize the value of cloud, explain why Windows Server is best suited to take you on the journey, and let you hear how luxury car-maker Aston Martin transformed its business and its IT department by using a Windows Server hybrid strategy.
Your Journey to the Cloud
The true value of cloud is the opportunity for IT to get all the benefits of scale, speed, and agility while still protecting existing investments.
Cloud better enables the introduction of the megatrends of Big Data, Social and Mobile by providing answers that help IT manage risk while delivering quality services and applications quickly, efficiently, and securely. As organizations start their journey to the cloud, they typically are grappling with a combination of traditional on-premises and cloud-based solutions; however, these hybrid scenarios have the potential to introduce new complications. Working with multiple versions of conflicting operating systems, management tools and applications is usually counter-productive and results in staff frustration, departmental inefficiencies and poor productivity. To be successful, teams need a way to consistently manage, support and automate the datacenter.

Microsoft Cloud OS Vision Begins with Windows Server 2012 R2
“There are multiple ways for customers to think about how they provision their infrastructure, and we aim to enable an ‘and’ philosophy for our customers so they don’t have to think that it’s an either/or decision. We allow them to take servers and other technology they are running on premises and think about how they might want to move some of it into cloud services, while still having a consistent level of management, identity and security.”
Gavriella Schuster, Microsoft GM US Server Tools
Organizations can begin to realize tremendous value with cloud when they leverage the ability to operate and manage a converged infrastructure that shares a common operating system and set of tools across hybrid environments supporting an assortment of devices, applications and users.
Figure 1: Windows Server Delivers Value with a Unified Hybrid Environment
At the heart of the Microsoft Cloud OS vision is Windows Server 2012 R2. With Windows Server 2012 R2 Microsoft’s experience delivering global-scale cloud services enables organizations of all sizes to take advantage of new features and enhancements across virtualization, storage, networking, virtual desktop infrastructure, access and information protection, and more.
The value of standardizing on Windows Server 2012 R2 as your Cloud OS strategy includes:
Experience Enterprise-class Performance and Scale

    • Take advantage of even better performance and more efficient capacity utilization in your datacenter.
    • Increase the agility of your business with a consistent experience across every environment.
    • Leverage the proven, enterprise-class virtualization and cloud platform that scales to continuously run your largest workloads while enabling robust recovery options to protect against service outages.

Drive Bottom Line Efficiencies with Cost Savings and Automation

    • Enjoy resilient, multi-tenant-aware storage and networking capabilities for a wide range of workloads.
    • Re-deploy your budget to other critical projects with the cost savings delivered through a Windows Server 2012 R2 Cloud OS.
    • Automate a broad set of built-in management tasks.
    • Simplify the deployment of major workloads and increase operational efficiencies.

Unlock Competitive Advantage with Faster Application Deployment

    • Build, deploy and scale applications and web sites quickly, and with more flexibility than ever before.
    • Unlock improved application portability between on-premises environments and public and service provider clouds, in concert with Windows Azure VMs and System Center 2012 R2, making it simple to rapidly shift your critical applications virtually anywhere, anytime.
    • Increase the flexibility and elasticity of IT services with the Windows Server 2012 R2 platform for mission-critical applications while protecting existing investments with enhanced support for open standards, open source applications and various development languages.

Empower Users with Better Access Anywhere

    • Windows Server 2012 R2 makes it easier to deploy a virtual desktop infrastructure, making it possible for users to access IT from virtually anywhere, providing them a rich Windows experience while ensuring enhanced data security and compliance.
    • Lower storage costs significantly by supporting a broad range of storage options and VHD de-duplication.
    • Easily manage your users’ identities across the datacenter and into the cloud to help deliver secure access to corporate resources.

In Summary

The datacenter is the hub for everything IT offers to the business: storage, networking and computing capacity. The right Cloud OS strategy enables IT to transform those resources into a datacenter that is capable of handling changing needs and unexpected opportunities. With Windows Server 2012 R2, Microsoft offers a consistent operating system and set of management tools that acts and behaves in exactly the same manner across every setting. Windows Server 2012 R2 delivers the same experience and requires the same skill-sets and knowledge to manage and operate in any environment, and it delivers a “future-proof” road-map with a fully seamless and scalable platform, making organizations agile, nimble and ready.

Highly scalable, Windows Server 2012 is already powering many of the world’s largest datacenters – including Microsoft’s – proving out capabilities at cloud scale and then delivering them for the enterprise. With the latest release of Windows Server 2012 R2, Microsoft is redefining the server category, delivering hundreds of new features and enhancements spanning virtualization, networking, storage, user experience, cloud computing, automation, and more. The goal of Windows Server 2012 R2 is to help organizations transform their IT operations to reduce costs and deliver a whole new level of business value.
Aston Martin Uses Windows Server 2012 to Drive IT Transformation

Behind every luxury sports car produced by Aston Martin is a sophisticated IT infrastructure. The goal of the Aston Martin IT team is to optimize that infrastructure so that it performs as efficiently as the production line it supports. To meet that goal, Aston Martin has standardized on Microsoft technology. The IT team chose the Windows Server 2012 operating system, including Hyper-V technology, to virtualize its data center and build four private clouds to dynamically allocate IT resources to the business as needed. For cloud and data center management, Aston Martin uses Microsoft System Center 2012.

“The IT team’s purpose is to enable Aston Martin to build the most beautiful sports cars in the world. So, from servers, to desktops, to production line PCs, Microsoft technology is behind everything we do.”
Daniel Roach-Rooke, IT Infrastructure Manager, Aston Martin

Watch this short video to learn how the team at Aston Martin envisioned and executed on their strategy.

Watch the Aston Martin video now

Call to Action

With Windows Server 2012 R2 Datacenter set to release in November, now is the time to see your Microsoft licensed solution provider for information about software savings.

MSCloudOS YouTube supersite:

Microsoft’s Cloud OS home on YouTube to find the latest products & solutions news, demos as well as training videos for Windows Server, SQL Server, System Center, Windows Intune, Microsoft BI, and Windows Azure—the technologies that bring Microsoft’s vision of Cloud OS to life.

Evolving IT in the Era of the Cloud OS [June 3, 2013] Today’s massive technology shifts are creating new demands on IT. Learn how Microsoft hybrid cloud solutions deliver new innovations that can help you solve the challenges you face now.

The Enterprise Cloud Era [June 3, 2013] See Microsoft President Satya Nadella talk about Microsoft’s cloud-first approach.

TechEd North America 2013 Keynote [June 24, 2013] Despite sea changes in cloud computing, device proliferation, and the explosion of data, IT pros and developers still live for one simple thing: to deliver amazing experiences for their customers and end-users. In this keynote, Brad Anderson will unveil a broad set of new capabilities across the full suite of Microsoft Cloud OS products and technologies designed with that simple end goal in mind. Together with enterprise-optimized enhancements to the Windows 8 client, the advances that Brad will showcase in this keynote significantly advance Microsoft’s long-term effort to give you the most advanced and comprehensive set of services, products, and technologies in the industry. Learn how Windows 8 is ready for business, how Windows Azure is changing hybrid and private cloud computing, and how the world of modern application development is evolving. It’s time to embrace the challenges of a world full of risks and opportunities. See what Microsoft is delivering next, including new enterprise enhancements in the upcoming Windows 8.1 update, and learn what it means for your business as well as your career.

TechEd Europe 2013 Keynote [June 26, 2013] In an era of global technological change, IT pros and developers still live for one basic thing: to deliver amazing experiences. In this keynote from TechEd Europe 2013, Corporate Vice President Brad Anderson will detail Microsoft’s strategy to help customers achieve that simple goal by leveraging new innovations in cloud services, device management, application development, data insights, and datacenter evolution. Mr. Anderson will review a broad set of newly announced updates across the full suite of Microsoft Cloud OS products and technologies, including Windows Server, Microsoft System Center, Windows Azure, SQL Server, Visual Studio, and more. It’s time to embrace the challenges of a world full of new opportunities. See what Microsoft is delivering next and learn what it means for your business as well as your career.

Microsoft Keynote Highlights from Oracle OpenWorld 2013 Watch highlights from Microsoft Corporate Vice President Brad Anderson’s keynote address from Oracle OpenWorld 2013 as Brad discusses the Cloud OS vision and how Microsoft and Oracle are working together to bring the power of Oracle’s software to private/public cloud and service providers. This new partnership allows customers using Java, Oracle WebLogic Server and Oracle Database to run this software on Windows Azure and Windows Hyper-V.

Subsites:
SQL Server (YouTube)
Windows Server (YouTube)
System Center & Windows Intune (YouTube)
BI (YouTube)
Case Studies (YouTube)

A People-centric Approach to Mobile Device Management [In The Cloud Blog, Jan 29, 2014]

The following post is from Brad Anderson, Corporate Vice President, Windows Server & System Center.


It’s been a little while since I wrote about the work we are doing around the BYO and Consumerization trends – but this is an area I will be discussing much more often over the next several months.

Consumerization is an area that is changing and moving quickly, and I believe the industry is also at an important time where we really need to step back and define what our ultimate destination looks like.

I think there is a great deal of agreement across the industry on what we are all trying to accomplish – and this is aligned with Microsoft’s vision. Microsoft’s vision is to enable people to be productive on all the devices they love while helping IT ensure that corporate assets are secure and protected.

One particular principle that I am especially passionate about is the idea that the modern, mobile devices which are built to consume cloud services should get their policy and apps delivered from the cloud. Put another way: modern mobile devices should be managed from a cloud service.

One of the reasons I am such a big believer in this is the rapid pace at which new devices and updates to the devices are released. Enabling people across all the devices they love brings with it the need to stay abreast of the changes and updates happening across Windows, iOS, and the myriad of Android devices. By delivering this as a service offering, we can stay on top of this for you. Thus, as changes are needed, we simply update the service and the new capabilities are available for you. This means no longer needing to update your on-premises infrastructure – we take care of all of it for you.

System Center Configuration Manager is the undisputed market leader in managing desktops around the world, and now we are delivering many of our MDM/MAM capabilities from the cloud. We have deeply integrated our Intune cloud service with ConfigMgr so organizations can take advantage of managing all of their devices in one familiar control plane using their existing IT skills. Put simply: we are giving organizations the choice of using their current ConfigMgr console extended with the Intune service, or doing everything from the cloud using only Intune if they wish to do management without an on-premises infrastructure.

On a fairly regular basis I encounter the question of whether or not cloud-based management is robust enough for enterprise organizations. My response to this has surprised our partners and customers with just how powerful a cloud-based solution can be. The answer is a resounding, “Heck yes, it is robust and secure enough!”

Windows Intune and Windows Azure Active Directory put IT leadership in the driver’s seat by allowing an organization to define and manage user identities and access, operate a single administrative console to manage devices, deliver apps, and help protect data.

The result is employee satisfaction, a streamlined infrastructure, and a more efficient IT team – all with existing, familiar, on-prem investments extended to the cloud.

This holistic approach is central to Microsoft’s strategy to help organizations solve one of the most complex and difficult tasks facing IT teams today: mobile device management (MDM).

As I discussed on the GigaOM Mobilize panel back in October (on the topic of “The Future of Mobile and the Enterprise,” recapped here), it wasn’t that long ago that an IT department worked in a pretty homogenous hardware and software environment – essentially everything was a PC. Today, IT teams are responsible for dozens of form factors and multiple platforms that require specific processes, skills, and maintenance.

Helping organizations proactively manage this new generation of IT is what makes me so excited about the advancements and innovation we are delivering as a part of next week’s update to the Windows Intune service. These updates include:

  • Support for e-mail profiles that can configure a device with the correct e-mail server information and related policies – and can also remove that profile and related e-mail via a remote wipe.
  • In addition to our unified deployment mode and integration with System Center Configuration Manager, Windows Intune can now stand alone as a cloud-only MDM solution. This is a big win for organizations that want a cloud-only management solution for both their mobile devices and PCs.
  • There is also support for new data protection settings in iOS 7 – including the “managed open in” capability that protects corporate data by controlling the apps and accounts that can open documents and attachments.
  • This update also enables broader protection capabilities like remotely locking a lost device, or resetting a device’s PIN if forgotten.

Windows Intune offers simple and comprehensive device management, regardless of the platform, for the devices enterprises are already using, with the IT infrastructure they already own.

Looking ahead to later this year, we will continue to launch additional updates to the service, including the ability to allow/deny apps from running (or accessing certain sites), conditional access to e-mail depending upon the status of the device, app-specific restrictions on how apps interact and use data, and bulk enrollment of devices.

This functionality is delivered as part of the rapid, easy-to-consume, and ongoing updates that are possible with a cloud-based service.

Today’s announcements are just a small example of the broader set of innovations Microsoft has been developing. Our focus on a people-centric approach to solving consumerization challenges has led to a number of product improvements and updates.

The number of factors at work within this Consumerization of IT trend make it clear that to effectively address it we have to think beyond devices and focus on a broader set of challenges and opportunities.

Microsoft is in a unique position to address the holistic needs behind this industry shift with things like public cloud management, private cloud management, identity management, access management, security, and more.

For organizations who haven’t already evaluated Microsoft’s device management solutions – now is the time. With the rapid release and innovation cycle offered by a cloud-based service like Intune, keeping your infrastructure optimized, efficient, and secure has never been easier.

The Virtuous Cycle of Cloud Computing [In The Cloud Blog, Jan 29, 2014]

The following post is from Brad Anderson, Corporate Vice President, Windows Server & System Center.

In the Day 1 keynote at the recent re:Invent conference, there was an interesting point made about the virtuous cycle that can occur for the cloud vendor and for customers. As I listened to the keynote, I kept thinking: “They are missing the biggest benefit for the entire industry; if the public cloud vendor has the right strategy and is thinking about how to benefit the largest population possible, then they are completely missing how this virtuous cycle can grow to benefit every organization in the world – even if they are not using the public cloud.”

Let me explain a bit more about what I mean. (And, before I get too much farther along, I want to note that this post ties into the cool news yesterday about our work with the Open Compute Project.)

The virtuous cycle of a public cloud looks a lot like the image below. As the usage of the public cloud grows, you need more hardware to meet demand – and for sustained growth you will need a lot of hardware. This need for hardware increases your purchasing power, and you can then negotiate lower prices as you purchase in bulk. As your purchasing power grows and your costs drop, you pass those savings on to your customers by dropping your prices. The lower prices increase demand, and the virtuous cycle continues.
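
The feedback loop can be made concrete with a toy simulation, with all numbers invented purely for illustration: bulk purchasing lowers unit cost, the saving is passed through as a lower price, and the lower price stimulates more demand.

```python
# Toy model of the virtuous cycle; every constant here is invented.
demand = 100_000  # units of capacity customers consume
price = 1.00      # price per unit charged to customers

for year in range(1, 6):
    unit_cost = 0.80 * (demand / 100_000) ** -0.1  # bulk buying lowers cost
    price = unit_cost * 1.15                       # savings passed on, thin margin
    demand *= 1 + 0.5 * (1.00 - price)             # lower price -> more demand
    print(f"year {year}: price={price:.3f}, demand={demand:,.0f}")
```
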
For customers using the public cloud, they can see the benefits of this virtuous cycle (the lower prices) – but what about organizations that are also using private and hosted clouds? How can they gain benefits from what is happening?

Organizations with multiple clouds can benefit if (and only if!) that public cloud vendor has, at the core of its strategy, an intention to take everything that it is learning from operating that public cloud and deliver it back for use in datacenters around the world – not just in its own.

This is where Microsoft is so unique! Microsoft is the only organization in the world operating a globally available, at-scale public cloud that delivers back everything it is learning for use in the datacenters of every customer (and, honestly, every competitor). Our view is that what we learn from the public cloud should be delivered for all the world to benefit from.

This innovation can be seen in the application of these public cloud learnings in products like Windows Server, System Center, and the Windows Azure Pack – and these products are the only cloud offerings that are consistent across public, hosted and private clouds – ensuring customers avoid cloud lock-in, maximize workload mobility, and have the flexibility to choose the cloud that best meets their needs.

With this in mind, I want to show you how I think the virtuous cycle can and should look – and how it can benefit any organization in the world.

First, at the center of this virtuous cycle is incredible innovation. This means innovation in software, innovation in hardware, and innovation in processes. When you are ordering and deploying 100,000s of new servers and xx bytes of storage every year, you have to innovate everywhere or you will literally buckle under the demands and costs of procuring, deploying, operating, and retiring hardware at this scale.

Microsoft is addressing this challenge in the most direct and complete way possible: over the last three years, Microsoft has spent more than $15B building datacenters around the world and filling them with the hardware and capacity demanded by customers of Windows Azure and other Microsoft cloud services.

We keep our public cloud costs low by managing our supply chain for this kind of capacity, and, per the cycle, we pass these savings to you. We also carefully track things like the number of days from when we place an order for hardware to the time the order appears on our docks (“order-to-dock”), and then we track the number of hours/days from “dock-to-live” where we literally have customers’ workloads being hosted on that hardware. Throughout this process we set aggressive quarterly targets and we work constantly to drive those numbers down. If we didn’t have a best-in-class product and performance, it would be impossible to remain profitable at this kind of scale.
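
These two measures are simple to compute; a minimal sketch with hypothetical dates shows the arithmetic behind "order-to-dock" and "dock-to-live".

```python
from datetime import date

# Hypothetical milestones for one hardware order.
ordered = date(2014, 1, 6)   # purchase order placed
docked = date(2014, 1, 24)   # hardware arrives at the datacenter dock
live = date(2014, 1, 29)     # customer workloads running on it

order_to_dock = (docked - ordered).days
dock_to_live = (live - docked).days

print(f"order-to-dock: {order_to_dock} days")  # 18
print(f"dock-to-live:  {dock_to_live} days")   # 5
```
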
As you can imagine, after spending billions of dollars on hardware every year, we are highly incented (to put it lightly) to find ways to drive our hardware costs down. The single best way we have found to do this is to use software to do things traditionally handled by hardware. For example, in Windows Azure we are able to deliver highly available, globally available storage at incredibly low prices through software innovations like SDN – all of which is based on low-cost, direct-attached storage. This brings storage economics never before seen in the industry.

One example of this is the most common workload hosted in Azure: the “Web” workload. Whether it is Azure acting as the web tier for a hybrid application, or the entire workload being hosted in Azure, the web workload is a part of just about every application. This makes it a great place for innovation. In Azure we pioneered high-density web site hosting where we can literally host 5,000+ web sites on a single Windows Server OS instance. This dramatically reduces our costs, which in turn reduces your costs.

At Microsoft, we think the public cloud’s virtuous cycle can actually get a lot bigger, a lot more functional, and a lot more powerful by integrating service providers and hosted clouds.


Not only is this expanded virtuous cycle more practical, I’m sure it also looks familiar to what is already up and running in your organization.

There are some pretty solid examples of innovation that was pioneered in Azure and then brought to the whole industry for use everywhere through Windows Server and System Center:

  • For highly available, low-cost, direct-attached storage, in Windows Server 2012 we shipped a set of capabilities we call Storage Spaces. Storage Spaces delivers the value of a SAN on low-cost, direct-attached storage, and it has been widely recognized as one of the most innovative new capabilities in Windows Server – and it was significantly updated in Windows Server 2012 R2.
  • Service Bus provides a messaging queue solution in the public cloud that can be used by developers for things like a queuing system across clouds and building loosely coupled applications. Check this post for an in-depth review of Service Bus. Service Bus also ships as a component of the Windows Azure Pack – providing value pioneered in the public cloud for use in private and hosted clouds. (A minimal code sketch follows this list.)
  • Earlier I referenced the ability to host 5,000+ web sites on a single Windows Server OS instance. This has had an obvious economic impact on the costs of Windows Azure, where we host millions of web sites. We proved that capability in Windows Azure, battle-hardened it, and now it ships for customers around the world to use in their datacenters as a part of what we call the Windows Azure Pack (WAP).
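
To show what using Service Bus looks like from application code, here is a minimal queue send/receive sketch with the azure-servicebus Python SDK (a much later SDK than existed when this post was written; the connection string and queue name are placeholders).

```python
from azure.servicebus import ServiceBusClient, ServiceBusMessage

CONN_STR = "Endpoint=sb://...;SharedAccessKeyName=...;SharedAccessKey=..."
QUEUE = "orders"  # placeholder queue name

client = ServiceBusClient.from_connection_string(CONN_STR)

# Producer: enqueue a message for a loosely coupled downstream component.
with client.get_queue_sender(QUEUE) as sender:
    sender.send_messages(ServiceBusMessage("order 1042 shipped"))

# Consumer: pull and settle messages, possibly in another process or cloud.
with client.get_queue_receiver(QUEUE, max_wait_time=5) as receiver:
    for msg in receiver:
        print(str(msg))
        receiver.complete_message(msg)
```
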
This is what it looks like when the complete virtuous cycle is in effect.

Our efforts haven’t been limited to software, however. Our innovative work with hardware in our datacenters has driven down costs while at the same time increasing the capacity each core and processor can support.

Our work with hardware was highlighted yesterday when we announced that we are joining the Open Compute Project and contributing the full design of the server hardware we use in Azure. We refer to this design as the “Microsoft cloud server specification.” The Microsoft cloud server specification provides the blueprints for the datacenter servers we have designed to deliver the world’s most diverse portfolio of cloud services at global scale. These servers are optimized for Windows Server software and can efficiently manage the enormous availability, scalability and efficiency requirements of Windows Azure, our global cloud platform.

This design spec offers dramatic improvements over traditional enterprise server designs: we have seen up to 40% server cost savings, 15% power efficiency gains, and a 50% reduction in deployment and service times. We also expect this server design to contribute to our environmental sustainability efforts by reducing network cabling by 1,100 miles and metal by 10,000 tons.

This level of contribution is unprecedented in the industry, and it hasn’t gone unnoticed by the media:

  • Wired: Microsoft Open Sources Its Internet Servers, Steps Into the Future
  • Forbes: The Worm Has Turned – Microsoft Joins The Open Compute Project

These are just a couple examples of the innovation that is happening here at Microsoft – innovations in process, hardware and software.

At Microsoft, we recognize that the majority of organizations are going to use multiple clouds and will want to take advantage of Hybrid Cloud scenarios. Every organization is going to have its own unique journey to the cloud – and organizations should make decisions about cloud partners that truly enable them with the flexibility to use multiple clouds, constant innovation, and consistency across clouds.

This is an area that we focus on every day, and you can read more about it as a part of our ongoing, in-depth series, Success with Hybrid Cloud.

                  Vendor Spotlight: A Microsoft GM On New Midmarket IT Tools [Exchange Events, Vendor Spotlight, April 23, 2013]

                  Mr. MidmarketCIO had the opportunity to sit down with Gavriella Schuster, Microsoft’s general manager of the company’s U.S. server and tools business unit. In this interview, Schuster shares her views on the challenges midmarket businesses face today and Microsoft’s vision to address those challenges with the Cloud OS.

                  MES: Can you share with me a little about Microsoft’s vision of the cloud today and how it can address today’s IT challenges for midmarket customers?
Schuster: Customers face many challenges today with the new levels of mobility in their workforce and the new devices that enable mobility. This new level of consumerization has enabled avid use of technology with always-on connectivity. There are also many more applications available and an explosion of data to manage. All of these things really challenge customers to reconsider how they provision, secure and enable technology within their organization.
                  There are multiple ways for customers to think about how they provision their infrastructure, and we aim to enable an ‘and’ philosophy for our customers so they don’t have to think that it’s an either/or decision. We allow them to take servers and other technology they are running on premises and think about how they might want to move some of it into cloud services, while still having a consistent level of management, identity and security.
                  Our vision for the ‘Cloud OS’ is to really have the best of both worlds. It’s an easy-on/easy-off usage of the cloud that meets the needs of midmarket organizations and can be an extension of current server environments.
                  MES: Is Microsoft’s ‘Cloud OS’ synonymous with Windows Server 2012? Or does it include other Microsoft technologies?
                  Schuster: Windows Server 2012 is certainly the basis of the Cloud OS because it provides the primary framework for identity, access, security and manageability, and also provides that core virtualization layer. Windows Server 2012 is also the basis of Windows Azure, our public cloud platform, so it gives midmarket CIOs the ability to easily extend their on-premises datacenter to the public cloud using a common set of tools between the two. The other core technology in the Cloud OS is Microsoft System Center 2012 because it gives customers that common level of additional management where they can set policies, provision their workloads, get deep application insights, etc. regardless of where the workload is actually running—on-premises or in the cloud.
                  MES: Where do you recommend customers start with their data-center modernization initiative? Why?
Schuster: Most customers should start with server virtualization. There is potential for them to get a tremendous amount of efficiency by consolidating their applications onto virtualized servers. Particularly in the midmarket, they can virtualize upwards of 80 percent of all of the apps they are running in their environment. They may even be able to consolidate down to one to four servers and really take care of all of their workloads. Using Hyper-V as the virtualization framework and then using System Center Virtual Machine Manager to deploy new virtual machines into their environment should be the first step in this approach.
                  MES: What are some of the new capabilities of Windows Server 2012 that go beyond virtualization to solve some common challenges?
                  Schuster: Windows Server 2012 not only helps midmarket organizations virtualize the compute—the virtualized machine itself—but it also helps them to virtualize their network and storage layers, which can be very costly capex investments for customers. It eliminates a lot of the common conflicts involved in managing an on-premise environment like IP and networking address conflicts. It also gives them additional storage so they don’t have to buy expensive SANs.
                  MES: A key trend challenging CIOs is mobility and the consumerization of IT. How does the Microsoft Cloud OS vision help address the security and management challenges around new devices and the need for increased mobility?
Schuster: I think it goes back to what I said before—we’ve enabled the ‘and’ so they can think about their governance role. There are a number of ways to address the consumerization of IT, and our primary message is that we think customers should embrace it. We enable them through Active Directory, which enables them to have a single sign-on experience and manage the identity of the user regardless of the environment the user is in (Office 365, Windows Azure, their on-premises environment, etc.). This eliminates the multiple pop-ups where the user has to continually sign in to the service.
                  We also have native functionality in Windows Server 2012 that eliminates the need for a VPN. With Direct Access, they can now easily deliver access to corporate resources based on the user’s identity.
                  Lastly, they can set policies for the user experience based on the device that they are using—phone, home machine, work machine, etc.—and can manage those mobile devices from the cloud with Windows Intune, without having to do additional on-premises setup.
                  MES: You briefly talked about Windows Azure as part of the Microsoft Cloud OS. What workloads do you recommend customers think about moving to the public cloud first?
                  Schuster: I think the easiest thing for most customers to think about moving to the public cloud first is cloud storage—they can use it for backup, archiving and disaster recovery. Especially as a midmarket customer, the last thing they probably have is a separate site with another set of servers that are replicated and ready to do a transfer if something disastrous were to occur. That’s absolutely something that the cloud is available and ready for. And customers only have to pay for what they use— it’s consumption based. The other areas that they would probably want to use it for are application development and test environments and for business and data analytics.
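To illustrate the backup/archiving scenario Schuster describes, here is a minimal sketch (my own, not from the interview) using the Windows Azure Storage client library for .NET (the WindowsAzure.Storage NuGet package); the account credentials, container and file names are placeholders:

```csharp
using System.IO;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

class BlobBackupSketch
{
    static void Main()
    {
        // Placeholder credentials from the Azure portal.
        var account = CloudStorageAccount.Parse(
            "DefaultEndpointsProtocol=https;AccountName=<account>;AccountKey=<key>");
        CloudBlobClient blobClient = account.CreateCloudBlobClient();

        // One container per backup set; created on first use.
        CloudBlobContainer container = blobClient.GetContainerReference("nightly-backups");
        container.CreateIfNotExists();

        // Upload a local backup file as a block blob; billing is
        // consumption based, so you pay only for what is stored.
        CloudBlockBlob blob = container.GetBlockBlobReference("db-2013-06-27.bak");
        using (FileStream file = File.OpenRead(@"C:\backups\db-2013-06-27.bak"))
        {
            blob.UploadFromStream(file);
        }
    }
}
```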
                  MES: Microsoft has laid out a hybrid cloud strategy, with the same basic underpinnings for both private and public cloud. What’s the benefit to mid-market customers of adopting Microsoft’s hybrid approach and technologies?
                  Schuster: When we talk about a hybrid environment, there are two ways to think about it: One is that it’s a hybrid enterprise, meaning they have some workloads that are sitting on servers inside their organization while others are using some server capacity within a public cloud like Windows Azure; Second is having hybrid applications. One of the advantages of the cloud today is that it enables even the smallest companies to act and look like very large companies. Unlike in the past with on-premise servers, the cloud gives CIOs the capacity and capability to introduce a new service to the market where they don’t have to have a great forecast of what the demand might be. This has really opened up new doors for midmarket IT organizations.
                  MES: How can your ecosystem of partners help midmarket customers today?
                  Schuster: The midmarket IT customer will typically only have a handful of IT pros within their organization, so enabling them to focus on the business and building applications to help power the business vs. managing servers and infrastructure is a real business value to our midmarket customers—and our partner ecosystem is well set up to help them do that.
                  We have done a lot of work to train our partners on how to deliver both our on-premise Windows Server 2012 virtualization environment as well as our Windows Azure cloud environments, and we have services available that help our customers build new applications.

Windows Azure becoming an unbeatable offering in the cloud computing market

Almost a year ago, when – among others – the Windows Azure Mobile Services Preview came out, it became evident that Microsoft has quite an old heritage in cloud computing, as described in The cloud experience vision of .NET by Microsoft 12 years ago and its delivery now with Windows Azure, Windows 8/RT, Windows Phone, iOS and Android among others [‘Experiencing the Cloud’, Sept 16-20, 2012]. Next, with Windows Azure Media Services, an interesting question came up: Windows Azure Media Services OR Intel & Microsoft going together in the consumer space (again)? [‘Experiencing the Cloud’, Feb 13, 2013]. Then, at the beginning of this month, it was possible to conclude that “Cloud first” from Microsoft is ready to change enterprise computing in all of its facets [‘Experiencing the Cloud’, June 4, 2013]. The understanding of the importance of the cloud for the company was further enhanced by finding a few days later that Windows Embedded is an enterprise business now, like the whole Windows business, with Handheld and Compact versions to lead in the overall Internet of Things market as well [‘Experiencing the Cloud’, June 8, 2013]. Finally, we had a quite vivid example of the fact that Windows Azure is a huge ecosystem effort as well with: Proper Oracle Java, Database and WebLogic support in Windows Azure including pay-per-use licensing via Microsoft + the same Oracle software supported on Microsoft Hyper-V as well [‘Experiencing the Cloud’, June 20, 2013].

Now we have general availability of Windows Azure Mobile Services and Windows Azure Web Sites, as well as previews of improved auto-scaling, alerting and notifications, and tooling support for Windows Azure through Visual Studio. This made me conclude that Windows Azure is becoming an unbeatable offering in the cloud computing market.

Let’s now look at the details, which I will base not only on the Microsoft materials but also on the first media reactions (partly to stay consistent with yesterday’s post, Windows 8.1: Mind boggling opportunities, finally some appreciation by the media [‘Experiencing the Cloud’, June 27, 2013]):

                  Media reactions in the first 15 hours:

                  Specific reactions:

                  Windows Azure Mobile Services, Windows Azure Web Sites – general availability:

                  Using Azure Mobile Services and Web Sites for a Mobile Contest pt. 1 [windowsazure YouTube channel, June 27, 2013]

This 2-part video is a walk-through of a Mobile Contest project. It demonstrates how Azure Mobile Services and Web Sites can be used to create a consistent set of services used as a back-end for an iOS mobile app and a .NET web admin portal. Part 1 covers: using multiple authentication providers, reading/writing data with tables, and interacting with Azure storage for BLOBs.

                  Using Azure Mobile Services and Web Sites for a Mobile Contest pt. 2 [windowsazure YouTube channel, June 27, 2013]

Part 2 covers: using Azure Web Sites for the admin portal, integrating a Custom API with cross-platform push notifications, and using the Scheduler with 3rd-party add-ons for scripting admin tasks.

                  Partner support:

                  Xamarin with Craig Dunn [windowsazure YouTube channel, June 27, 2013]

Xamarin provides a framework that lets developers build iOS and Android applications in C#. With Windows Azure Mobile Services, developers can connect those mobile apps by hosting the backend in Windows Azure. Mobile Services provides a turnkey way to store data in the cloud, authenticate users and send push notifications. Get started at http://www.windowsazure.com/mobile
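As a concrete illustration of that turnkey backend, the shared C# a Xamarin (or Windows) app would use looks roughly like this minimal sketch of mine; the service URL, application key and table shape are placeholders:

```csharp
using System.Threading.Tasks;
using Microsoft.WindowsAzure.MobileServices;

// Placeholder table shape; Mobile Services maps the class name to a
// table and the properties to columns.
public class TodoItem
{
    public int Id { get; set; }
    public string Text { get; set; }
}

public static class MobileServicesSketch
{
    // Service URL and application key come from the Azure portal.
    static readonly MobileServiceClient Client = new MobileServiceClient(
        "https://<your-service>.azure-mobile.net/",
        "<application key>");

    public static async Task SaveAsync(string text)
    {
        // Turnkey cloud data: one call inserts the row into the service.
        await Client.GetTable<TodoItem>().InsertAsync(new TodoItem { Text = text });
    }
}
```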

                  Building a Comprehensive Enterprise Cloud Ecosystem [Windows Azure blog, June 20, 2013]

                  Over the past two decades, Microsoft has worked with OEMs, Systems Integrators, ISVs, CSVs, Distributors and VARs to build one of the largest enterprise partner ecosystems in the world.  We’ve done this because customers – and the industry – need solutions that just work together.  With our partners we built the most comprehensive enterprise technology ecosystem – and, now, we’re focused on the enterprise cloud.
                  That’s why you’ve seen us work with Amazon, to bring Windows Server, SQL Server and the entire Microsoft stack to Amazon Web Services, and with EMC who owns VMware and Pivotal – key competitors in their respective areas.  We also work with innovative companies like Emotive, with Systems Integrators like Accenture and Capgemini and a host of other partners – large, small and non-commercial – around the world and across the industry.
                  The need for diverse technologies and companies to work together is clear – and that means competitors are often partners.  To many in the industry that is a given – and it really should be.  The need for technologies to work together is particularly clear in cloud computing – where platforms and services are so incredibly connected they must work together to deliver cloud computing benefits when and how customers want it.
                  So, it should not be a surprise when we partner with technology leaders who are also competitors.  We partner with these companies (and plan to partner with more) to bring our products & services to as many customers as possible.  We will continue to work across the industry to ensure our products & services work with the many platforms, business apps, services and clouds our customers use.
                  As you may have heard me say, it’s been an exciting year for Windows Azure – and we are just 6 months in.  Stay tuned – there’s more to come!
                  Steven Martin
                  General Manager
                  Windows Azure

                  All other:

                  Overall reactions:

                  Windows Azure Now Stores 8.5 Trillion Data Objects, Manages 900K Transactions Per Second [TechCrunch, June 27, 2013]

                  Microsoft announced at the Build conference today that Windows Azure now has 8.5 trillion objects stored on its infrastructure.

                  The company also announced the following:

                  • Customers do 900,000 storage transactions per second.
                  • The service is doubling its compute and storage every six months.
                  • 3.2 million organizations have Active Directory accounts with 68 million users.
                  • More than 50 percent of the world’s Fortune 500 companies are using Windows Azure.

                  In comparison, Amazon Web Services said at its AWS Summit in New York earlier this year that its S3 storage service now holds more than 2 trillion objects. According to a post by Frederic Lardinois, that’s up from 1 trillion last June and 1.3 trillion in November, when the company last updated these numbers at its re:Invent conference.

So what accounts for the difference between Azure and AWS? It all has to do with how each company counts the objects it stores. With that in mind, it’s likely Azure’s numbers would look far different if the same metrics as AWS’s were used.

Nevertheless, the news highlights the importance of Windows Azure for Microsoft, especially as enterprises move their infrastructure, shedding data centers to consolidate and reduce costs.

                  Build 2013 Keynote Day 2 Highlights [InfoQ, June 27, 2013]

Server & Tools Business President Satya Nadella opened the keynote this morning with some statistics about Windows Azure and the major Microsoft cloud services.

Windows Azure:
  – 50% of Fortune 500 companies are using Windows Azure
  – 3.2 million organizations with Active Directory accounts
  – 2x compute + storage every 6 months
  – 100+ major service releases to Windows Azure since Build 2012

Major Microsoft cloud services:
  – Xbox Live: 48 million subscribers
  – Skype: 299 million connected users
  – Outlook.com: 1 million users gained in 24 hours
  – Office 365: nearly 50 million Office Web Apps users
  – SkyDrive: 250 million accounts
  – Bing: 1 billion mobile notifications a month
  – Xbox Live: 1.5 billion games of Halo played

Nadella noted the wide variety of first-party cloud services that Microsoft supports, and said it is important that they run them well because doing so provides a good learning experience. In his words, “We build for the first party and make available for the third party.”
Scott Hanselman arrived on stage to discuss the latest for ASP.NET in VS2013. A big change is the simplification of starting an ASP.NET application in VS2013. The project types have been reduced to one, “ASP.NET”, and from there the new project wizard lets developers customize their project based on what they would like to create: Web Forms, MVC, etc.

VS2013 will ship with Twitter’s open source project Bootstrap, and it will be Microsoft supported just like jQuery is now.

An important debugging achievement was demonstrated where browsers can be associated with Visual Studio, allowing for real-time debugging and developing. Edit code in VS2013, and the browser(s) will reflect the updates. In this case the demo showed Hanselman editing cshtml, and via SignalR the updates were shown in his selected web browsers, IE and Chrome.

In another example, Hanselman went to www.bootswatch.com to obtain a new CSS template, which he used to overwrite his current file. Pressing CTRL+ENTER, the browsers reflected this update.

Then Hanselman opened a CSS file to show some new editor tricks. Hovering over CSS statements brings up a window indicating which browsers a particular statement applies to. Another ability allows VS to trace and view live streaming trace logs from Azure.

Then Hanselman demonstrated his sample website producing a QR code of a deep link. He then scanned this on his phone, which allowed him to jump into his existing authenticated session, moving from his desktop session to the same screen on his phone.

Satya returned to the stage to announce the general availability of Windows Azure Web Sites, which have been in preview since Build 2012. They are now available with a full SLA and enterprise support.

Josh Twist from Microsoft’s Mobile Services came on stage to demonstrate using a Mac to add Azure support to an iOS app. Twist noted that developers looking to explore Azure can now create a free 20 MB SQL database, in addition to the 10 free services already allowed.

In Twist’s demo, Azure was used to create a custom Xcode project that was preloaded with the appropriate Azure URLs for the project being worked on. This simplifies getting up to speed with Azure development on the Mac. Related to this convenience, Windows Azure Mobile Services now enables Git source control so that you do not need to edit code in the web portal. If you would rather develop locally (VS, Sublime, etc.), you can do so by pulling the files down from Azure and then pushing them back when edits are complete. Twist demonstrated this functionality by using Sublime to edit a JavaScript file and then pushing it back into Azure with Git.

VS2013 has a new Server Explorer, which is used to browse all of the Mobile Services on Windows Azure for your site/installation. A new wizard has been added which simplifies adding push notifications for Windows Store applications.

Satya returned to introduce Scott Guthrie.

The big news is the new auto-scaling on Windows Azure. Developers can manage the instance count and target CPU for VMs, and there is no billing when a machine is stopped (you only pay when the machine is running).

Per-minute billing has been added for greater granularity. A preview of Windows Azure AutoScale is now live.
Windows Azure Active Directory:
  – Active Directory for the cloud
  – Integrates with on-premises Active Directory
  – Enables single sign-on within your cloud apps
  – Supports SAML, WS-Fed, and OAuth 2.0

The Applications tab shows all apps registered with the current Active Directory; Manage Application integrates an (external) app with Active Directory. For example, developers can use Windows Azure AD to enable user access to Amazon Web Services.
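For a flavor of what that single sign-on looks like from code, here is a minimal sketch of mine using the v1-era API of the Azure AD Authentication Library (ADAL) for .NET, which was then newly emerging; the tenant, client ID, resource and redirect URI are placeholders:

```csharp
using System;
using Microsoft.IdentityModel.Clients.ActiveDirectory;

class AzureAdSignOnSketch
{
    static void Main()
    {
        // The directory tenant acts as the authority.
        var authContext = new AuthenticationContext(
            "https://login.windows.net/<tenant>.onmicrosoft.com");

        // Shows the organizational sign-in page once; apps trusting the
        // same directory then get single sign-on.
        AuthenticationResult result = authContext.AcquireToken(
            "https://<your-api>/",                     // resource being accessed
            "<client id from the Azure portal>",       // this app's client ID
            new Uri("http://<your-app>/oauth-reply")); // redirect URI

        Console.WriteLine(result.AccessToken);
    }
}
```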
Satya described Office 365 as “…a programmable surface area.”

Jay Schmelzer demonstrated the changes being made to allow and promote Office 365 as a platform:
  – Rich Office object model
  – Access through Web APIs
  – Extend with Azure
  – First-class tools support in VS2013
  – Office 365 apps + Windows Azure

To increase adoption of Windows Azure, MSDN subscribers receive greater discounts and incentives to use the Azure platform:
  1. Use your MSDN Dev/Test licenses on Windows Azure
  2. Reduced rates for Dev/Test licenses, with discounts of up to 97%
  3. No credit card required for MSDN members

                                    Microsoft showcases developer opportunity on Windows Azure, Windows devices [press release, June 27, 2013]

                                    Increasing importance of cloud services
                                    Developers today are building multidevice, multiscreen, cloud-connected experiences. Windows Azure spans infrastructure and platform capabilities to provide them with a comprehensive set of services to easily and quickly build modern applications, using the tools and languages familiar to them.
                                    “Developers are increasingly demanding a flexible, comprehensive platform that helps them build and manage apps in a cloud- and mobile-driven world,” [Satya] Nadella [, president, Server and Tools Business] said. “To meet these demands, Microsoft has been doubling down on Windows Azure. Nearly 1,000 new businesses are betting on Windows Azure daily, and as momentum for Azure grows, so too does the developer opportunity to build applications that power modern businesses.”
                                    Delivering on its commitment to provide developers with the most comprehensive cloud platform, Microsoft announced the general availability of Windows Azure Mobile Services. Mobile Services enables developers building Windows, Windows Phone, iOS and Android apps to store data in the cloud, authenticate users and send push notifications. TalkTalk Business, a leading business telecommunications provider in the United Kingdom, chose Windows Azure Mobile Services to create new ways to engage with its customers and serve demand for mobile access.
                                    Microsoft also announced the general availability of Windows Azure Web Sites, which allows developers to create websites on a flexible, secure and scalable platform to reach new customers. With the investments Microsoft has made in ASP.NET and Web tools, Web developers can now create scalable experiences easier than ever. Dutch brewer Heineken is using Windows Azure to power a social pinball game for the UEFA Champions League Road to the Final campaign, with the expectations of millions of interactions scaled on Windows Azure. Heineken exceeded its usage metrics by a wide margin yet experienced no scalability issues with Windows Azure.
                                    [Scott] Guthrie[, Corporate Vice President, Windows Azure] also highlighted Microsoft’s continued enterprise cloud momentum by demonstrating several platform advancements, including previews of improved auto-scaling, alerting and notifications, and tooling support for Windows Azure through Visual Studio. In addition, he previewed how Windows Azure Active Directory provides organizations and ISVs, such as Box, with a single sign-on experience to access cloud-based applications.
Developers can go to the Windows Azure site today for a free trial: http://www.windowsazure.com/en-us/pricing/free-trial/?WT.mc_id=AE37323DE.

                                    Windows Azure: General Availability of Web Sites + Mobile Services, New AutoScale + Alerts Support, No Credit Card Needed for MSDN [ScottGu’s Blog, June 27, 2013 at 10:41 AM]

                                    This morning we released a major set of updates to Windows Azure.  These updates included:

                                    • Web Sites: General Availability Release of Windows Azure Web Sites with SLA
                                    • Mobile Services: General Availability Release of Windows Azure Mobile Services with SLA
                                    • Auto-Scale: New automatic scaling support for Web Sites, Cloud Services and Virtual Machines
                                    • Alerts/Notifications: New email alerting support for all Compute Services (Web Sites, Mobile Services, Cloud Services, and Virtual Machines)
                                    • MSDN: No more credit card requirement for sign-up

                                    All of these improvements are now available to use immediately (note: some are still in preview).  Below are more details about them.

                                    Windows Azure: Major Updates for Mobile Backend Development [ScottGu’s Blog, June 14, 2013]

                                    This week we released some great updates to Windows Azure that make it significantly easier to develop mobile applications that use the cloud. These new capabilities include:
• Mobile Services: Custom API support
• Mobile Services: Git Source Control support
• Mobile Services: Node.js NPM Module support
• Mobile Services: A .NET API via NuGet
• Mobile Services and Web Sites: Free 20MB SQL Database Option for Mobile Services and Web Sites
• Mobile Notification Hubs: Android Broadcast Push Notification Support
                                      All of these improvements are now available to use immediately (note: some are still in preview).  Below are more details about them.
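As an illustration of the new Custom API support together with the .NET client NuGet package, a call from C# looks roughly like this sketch of mine; the service URL, application key and API name are placeholders:

```csharp
using System.Threading.Tasks;
using Microsoft.WindowsAzure.MobileServices;
using Newtonsoft.Json.Linq;

public static class CustomApiSketch
{
    // Placeholder service URL and application key from the Azure portal.
    static readonly MobileServiceClient Client = new MobileServiceClient(
        "https://<your-service>.azure-mobile.net/",
        "<application key>");

    public static async Task<JToken> GetLeaderboardAsync()
    {
        // Calls the server-side script registered under /api/leaderboard
        // and returns its JSON response.
        return await Client.InvokeApiAsync("leaderboard");
    }
}
```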

                                      Windows Azure: Announcing New Dev/Test Offering, BizTalk Services, SSL Support with Web Sites, AD Improvements, Per Minute Billing [ScottGu’s Blog, June 3, 2013]

                                      This morning we released some fantastic enhancements to Windows Azure:

                                      • Dev/Test in the Cloud: MSDN Use Rights, Unbeatable MSDN Discount Rates, MSDN Monetary Credits
                                      • BizTalk Services: Great new service for Windows Azure that enables EDI and EAI integration in the cloud
                                      • Per-Minute Billing and No Charge for Stopped VMs: Now only get charged for the exact minutes of compute you use, no compute charges for stopped VMs
                                      • SSL Support with Web Sites: Support for both IP Address and SNI based SSL bindings on custom web-site domains
                                      • Active Directory: Updated directory sync utility, ability to manage Office 365 directory tenants from Windows Azure Management Portal
                                      • Free Trial: More flexible Free Trial offer

                                      There are so many improvements that I’m going to have to write multiple blog posts to cover all of them!  Below is a quick summary of today’s updates at a high-level:

                                      From Announcing LightSwitch in Visual Studio 2013 Preview [Visual Studio LightSwitch Team Blog, June 27, 2013]

                                      Sneak Peek into the Future

                                      At this point, I’d like to shift focus and provide a glimpse of a key part of our future roadmap. During this morning’s Build 2013 Day 2 keynote in San Francisco, an early preview was provided into how Visual Studio will enable the next generation of line-of-business applications in the cloud (you can check out the recording via Channel 9). A sample app was built during the keynote that highlighted some of the capabilities of what it means to be a modern business application; applications that run in the cloud, that are available to a myriad of devices, that aggregate data and services from in and out of an enterprise, that integrate user identities and social graphs, that are powered by a breadth of collaboration capabilities, and that continuously integrate with operations.

                                      Folks familiar with LightSwitch will quickly notice that the demo was deeply anchored in LightSwitch’s unique RAD experience and took advantage of the rich platform capabilities exposed by Windows Azure and Office 365. We believe this platform+tools combination will take productivity to a whole new level and will best help developers meet the rising challenges and expectations for building and managing modern business applications. If you’re using LightSwitch today, you will be well positioned to take advantage of these future enhancements and leverage your existing skills to quickly create the next generation of business applications across Office 365 and Windows Azure. You can read more about this on Soma’s blog.

                                      Additional information:
                                      Announcing the General Availability of Windows Azure Mobile Services, Web Sites and continued Service innovation [Windows Azure blog, June 27, 2013]
                                      50 Percent of Fortune 500 Using Windows Azure [Windows Azure blog, June 14, 2013]
                                      Azure WebSites is now Generally Available [Enabling Digital Society blog of Microsoft, June 27, 2013]
                                      New features for Windows Azure Mobile Services [Enabling Digital Society blog of Microsoft, June 14, 2013]
                                      Lots of Azure Goodness Revealed [Enabling Digital Society blog of Microsoft, June 3, 2013]
                                      BizTalk Services is LIVE! [To BizTalk and Beyond! blog of Microsoft, June 3, 2013]
                                      Hello Windows Azure BizTalk Services! [BizTalk Server Team Blog, June 4, 2013]
                                      Windows Azure BizTalk Services – Preview [The Enterprise Integration Space blog of Microsoft, June 4, 2013]
                                      Business Apps, Cloud Apps, and More at Build 2013 [Somasegar’s blog, June 27, 2013]

Day 2 Keynote [Channel 9 video, June 27, 2013] Windows Azure related part up to [01:31:12]; click on the link to watch the video


                                      Speech transcript: Satya Nadella and Scott Guthrie: Build 2013 Keynote

                                      Remarks by Satya Nadella, President, Server & Tools Business; and Scott Guthrie, Corporate Vice President, Windows Azure; San Francisco, Calif., June 27, 2013

                                      ANNOUNCER: Ladies and gentlemen, please welcome President, Server and Tools Business, Satya Nadella. (Applause.)

                                      SATYA NADELLA: Good morning. Good morning, and welcome back to day two of Build. Hope all of you had a fantastic time yesterday. From what I gather, there were half a trillion megabytes of downloads as far as the show goes in terms of show net, so we really saturated the show net with all the downloads of Windows 8.1. So that’s just tremendous to see that all of you took Steve’s guidance and said, “Let’s just download it now and play with it.” Hopefully you had fun with it, also had a chance to get Visual Studio and maybe hack some of those Bing controls last night after the party.

                                      But welcome back today, and we have some fantastic stuff to show. There’s going to be a lot more code onscreen as part of this keynote.

                                      Yesterday, we talked about our devices, and we’re going to switch gears this morning to talk about the backend.

                                      The context for the backend is the apps, the technology, as well as the devices, experiences that all of us collectively are building. We’re for sure well and truly into the world of devices and services. There is not an embedded system, not a sensor, not a device experience that’s not connected back to our cloud service. And that’s what we’re going to talk about.

                                      And we see this momentum today in how we are seeing the backend evolve. If you look at Windows Azure, we have over 50 percent of the Fortune 500 companies already using Windows Azure. We have over 250,000 customers. We’re adding 1,000 customers a day.

                                      We have 3.2 million distinct organizations inside of Azure AD representing something like 65 million users active. That’s a fantastic opportunity, and we’ll come back to that a couple of different times during this keynote.

                                      Our storage and compute resources are doubling every six months. Our storage, in fact, is 8.5 trillion storage objects today, doing around 900K transactions per second. Something like 2 trillion transactions a month.

The last point, around the hypervisor growth where we’re seeing tremendous hypervisor share growth, is interesting, because we are unique in that we not only are building an at-scale public cloud service, but we’re also taking all of the software technology that is underneath our public cloud service and making it available as part of our server products for service providers and enterprises to stand up their own cloud. That’s something pretty unique to us.

                                      Given that, we’re seeing tremendous growth for the high-end servers that people are buying and the high-end server software people are buying from us to deploy their own cloud infrastructure in support of the applications that you all are building.

                                      Now, of course at the end of the day, all that momentum has to be backed up by some product. And in that case, Steve talked a lot about our cadence and increased cadence across our devices. But when it comes to Windows Azure and our public cloud service, that cadence takes on a different hyper drive, if you will, because we are every day, every week, every month doing major updates. We’ve done over 100-plus major updates to our services from the last Build to now.

                                      In fact, this is even translating into a much faster cadence for our server. We now have the R2 updates to our 2012 that were made available yesterday. So all around, when it comes to server technology and cloud technology, we have some of the fastest cadences, but very targeted on the new scenarios and applications and technologies that you’re building to run these cloud services.

                                      Now, one of the other things that drives us and is at play for us on a daily basis is the feedback cycle of our first-party workloads. We have perhaps the most diverse set of first-party workloads at Microsoft. You know, these are SaaS applications that we run ourselves.


                                      Now, these applications keep us honest, especially if you’re in the infrastructure business, you’ve got to live this live site availability day in and day out. And the diversity also keeps us honest because you build out your storage compute network, the application containers, to meet the needs of the diversity these applications represent.

                                      Take Xbox. When they started Xbox Live in 2002, they had around 500 servers. Now, they use something like 300,000 servers, which are all part of our public cloud to be able to really drive their experiences. Halo itself has had over a billion games played, and something like 270 million hours of gameplay. And Halo uses the cloud in very interesting ways for pre-production, rendering support, gameplay, post-production analytics, the amount of real-time analytics that’s driving the continuous programming of Halo is pretty stunning.

                                      Take SkyDrive. We have over 250 million accounts. You combine SkyDrive with the usage of Office Web Apps, where we have more than 50 million users of Office Web Apps, you can see a very different set of things that are happening with storage, collaboration, productivity.

                                      Skype is re-architecting their core architecture to take advantage of the cloud for their 190-plus million users.

                                      Bing apps that you saw many of them yesterday as part of Windows 8.1 are using the Azure backend to do a lot of things like notifications, which is one of the core scenarios for any mobile apps. And it’s going to send something like a billion notifications a month.

                                      So all of these diverse needs that we have been building infrastructure for, we have this one simple mantra where “first party equals third party.” That means we build for our first party and make all of that available for our third party. And that feedback cycle is a fantastic cycle for us.

                                      Now, when you put it all together, you put what we’re building, what you’re building, we see the activity on Azure, we listen to our customers, and you sort of distill it and say, “What are the key patterns of the modern business for cloud? What are the applications people are building?”

                                      Three things emerge: People are building Web-centric applications. People are building mobile-centric applications. And what we call cloud-scale and enterprise-grade applications. So the rest of the presentation is all about getting into the depth of each of these patterns.

                                      Now, in support of these applications, we’re building a very robust Windows Azure app model. Now, of course, at the bottom of the app model is our infrastructure. We run 18-plus datacenters on our own, 100-plus co-locations. We have an edge network. And so that is the physical plant. But the key thing is it’s the fabric, the operating system that we build to manage all of those resources.

                                      At the compute-storage-network level, at the datacenter scale and multi-datacenter scale. And that really is the operating system that is Windows at the backend, at this point, which in fact shipped even in Windows Server for a different scale unit.

                                      But that infrastructure management or resource management is one part of the operating system.

Then, above that, you have all the application containers.

                                      Now, beyond that, we also believe that we can completely change the economics of what complex applications have needed in the past. We can take both productivity around development and continuous deployment and cycling through your code of any complex application and reduce it by orders of magnitude.


                                      Take identity. We are going to change the nature of how people set up your applications to be able to accept multiple identities, have strong authentication and authorization, how to have a directory with rich people schema underneath it that you can use for authorization.

                                      Integration, take all of the complex business-to-business or EI type of project that you have to write a lot of setup before you even write the core logic; we want to change the very nature of how you go about that with our integration services.

                                      And when it comes to data, there is not a single application now that doesn’t have a diverse set of needs when it comes to the data from everything from SQL to NoSQL, all types of processing from transactional to streaming to interactive BI to MapReduce. And we have a full portfolio of storage technologies all provided as platform services so that your application development can be that much richer and that much easier.

                                      Now, obviously, the story will not be complete without great tooling and great programming model. What we are doing with Visual Studio, we will see a lot of it throughout the demos. .NET, as well as our support for some of the cloud services around continuous development — everything from source code control, project management, build, monitoring — all of that technology pulled together, really take everything underneath it to a next level from an application development perspective.

                                      But also supporting all the other frameworks. In fact, just this week we announced with Oracle that we will have even more first-class support for Java on Windows Azure. And so we have support for node, we have support for PHP and so on. So we have a fantastic set of language bindings to all of our platform support and a first-class support for Visual Studio .NET, as well as TFS with Git when it comes to application development.

                                      So that’s really the app model. And the rest of the presentation is really for us to see a lot of this in action.

                                      Let me just start with our IaaS and PaaS and virtual machines. We launched our IaaS service just in April. In fact, we have tremendous momentum. Something like 20 percent of all of Azure compute already is IaaS capacity. So that’s tremendous growth.

                                      The gallery of images is constantly improving and increasing in size, in depth, breadth, and variety. In fact, if you want to spin up Windows Server 2012 R2, I would encourage you to go off to the Azure gallery and spin it up because it’s available as of yesterday there, and so that will be a fantastic use of the Azure IaaS, and test that out.

So what I want to talk about is websites. We’ve made a lot of investments in websites. And when we say “websites” we mean enterprise-grade Web infrastructure for your most mission-critical applications. Because if you think about it, your website is your front door to your business. It could be a SaaS business, it could be an enterprise business, but it’s the front door to your business. And you want the most robust enterprise-scale infrastructure for it. And we’ve invested to build the best Web stack with the best performance, load balancing built in, elasticity built in, and from a development perspective, integrated all the way into Visual Studio.

                                      So we think that what we have in our website technology is the best-in-class Web for the enterprise-grade applications you want to build.

                                      Now, you can also start up for free, and you can scale up. So maybe even the starting process with our Web, very, very easy.

Now, of course having Web technology is one thing, but it’s also very important for us to have a lot of framework support. And we have a lot of frameworks. But the one framework that we hold close and dear to our heart is ASP.NET. This is something that we have continued to innovate in significant ways. One of the things that we’ve done with the new version of ASP.NET, which is in preview as part of .NET 4.5.1, is One ASP.NET, which means that you can have one project where you can bring all of the technologies, from Web Forms to MVC to Web API to SignalR, together.

                                      We also improved our tooling from a scaffolding perspective across all of these frameworks.

You’re all building ever richer Web applications, these single-page Web applications, and for that you need new frameworks. We have Bootstrap. You also want to be able to call into the server side; we made that easy with OData support, and we made it easy with Web APIs. So this makes it much easier for you now to build these rich Web apps.

                                      And Entity Framework. We’ve now plumbed async all the way back into the server. So now, you can imagine if you’re building one of those social media applications with lots of operations on the client, as well as needing the same async capabilities on the backend, you now have async end to end.

                                      So a lot of this innovation is, I think, in combination with our Web is going to completely change how you could go about building your Web applications and your Web technologies.

                                      To show you some of this in action, I wanted to invite up onstage Scott Hanselman from our Web team. Scott? (Applause.)

                                      SCOTT HANSELMAN: Hello, friends. I’m going to show you some of the great, new stuff that we’ve got in ASP.NET and Visual Studio 2013.
                                      I’m going to go here and hit file, new, project. And you’ll notice right off the bat that we’ve got just one ASP.NET Web application choice. This is delivering on that promise of one ASP.NET. (Applause.)
                                      Awesome, I’m glad you dig that. And this is not the final dialog, but there is no MVC project or Web forms project anymore. I can go and say I want MVC with Web API or I want Web forms plus MVC. But there is, at its core, just one ASP.NET.
                                      We’ve got an all-new authentication system. I can go in here and pick organizational accounts, use Active Directory or Azure Active Directory, do Windows auth.
                                      For this application, I’m going to use an individual user account. I’m going to make a geek trivia app. So I’ll hit create project.
                                      Now, of course when you’re targeting for the Web, it’s not realistic to target just one browser. We’re not going to use just Internet Explorer; we’re going to use every browser and try to make this have as much reach as possible.
                                      So up here, I’m going to click “browse with” and then pick both Internet Explorer and Google Chrome and set them both as the default browser. (Applause.)
                                      Now, we’ll go ahead and run our application. And I’ll snap Visual Studio off to the side here. You notice Visual Studio just launched IE and Chrome.
                                      You can see that we’re using Twitter Bootstrap. We’re shipping Bootstrap with ASP.NET; you get a nice, responsive template. We’ve got the great icons, grid system, works on mobile. And that’s going to ship just like we shipped jQuery, as a fully supported item within ASP.NET, even though it’s open source.
I’m going to open up my index.cshtml over here. You can see we’ve got ASP.NET as my H1. Notice next to multiple browsers, we’ve got a new present for you. You see this button right here? We’re running SignalR in process inside of Visual Studio, and there’s now a real-time connection between Visual Studio and any number of browsers that are running.
                                      So now I can type in the new geek quiz application and hit this button. And using Web standards and Web sockets, we’ve just talked to any number of browsers. (Applause.)
                                      Now, this is just scratching the surface of what we’re going to be able to do. What’s important isn’t the live reload example I’ve just shown you, but rather the idea that there’s a fundamental two-directional link now between any browser, including mobile browsers or browser simulators and Visual Studio.
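[For readers who have not used SignalR: a minimal hub of the kind this tooling builds on looks like the sketch below. This is my illustration of the pattern, not the actual Browser Link plumbing; the hub and method names are made up, and in the SignalR 1.x of the time hubs were wired up with RouteTable.Routes.MapHubs() at startup.]

```csharp
using Microsoft.AspNet.SignalR;

// Made-up hub for illustration: a server-side call pushed in real time
// to every connected browser, over WebSockets where available.
public class RefreshHub : Hub
{
    public void RequestRefresh()
    {
        // Invokes the JavaScript "refresh" handler registered by each
        // connected client.
        Clients.All.refresh();
    }
}
```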
                                      Now, this is using the Bootstrap default template, which is kind of default. So I’m going to go up to Bootswatch, which is a great website that saves us from the tyranny of the default template.
                                      And I’m going to pick — this looks appropriately garish. I’m going to pick this one here. And I’m going to just right click and say “save target as” and then download a different CSS, and I’m going to save that right over the top of the one that came with ASP.NET.
                                      And then I’ll come back over here and use the hotkey control/alt/enter and update the linked browsers. And you’ll see that right there, the hotdog theme is back today, and this is the kind of high-quality design and attention to — I can’t do that with a straight face — attention to detail and design that you’ve come to expect from us at Microsoft. That’s beautiful, isn’t it? You’ve got to feel good about that, everybody.
                                      I’m going to head over into Azure. And I’m going to say “new website.” You know, creating websites is really, really easy from within the portal. I’ll say geek quiz. Blah, blah, blah, and I’m going to make a new website.
                                      And this is going to fire up in the cloud right now. You can see it’s going and creating that. And that’s going to be ready and waiting to go when it’s time for me to publish from Visual Studio.
                                      Now, I’m going to fast forward in time here and close down this application and then do a little Julia Child action and switch into an application that’s a little bit farther along.
So we’re going to write a geek quiz or a geek trivia app. And it’s going to have Model View Controller and Web API on the server. And it’s going to send JSON across the wire over to the client side. This trivia controller, which is ASP.NET Web API, is going to be feeding that.
                                      This is code that I’m not really familiar with. I can spend a lot of time scrolling around, or I could right click on the scroll bar, hit scroll bar options, and some of you fans may remember this guy. It’s back. And now you’ve got map mode inside of the scroll bar. I can move around, find my code really, really easily. Here is the GET method. Notice that this GET method is going to return the trivia questions into my application here. And it’s marked as async. We’ve got async and await all the way through. So this asynchronous Web API method is then going to call this service call, next question async.
                                      Now, I could right click and say “go to definition.” But I could also say “peek definition.” And without actually opening the source code, see what’s going on in that file. (Applause.)
I could promote that if I wanted to. You notice, of course, I’m using Entity Framework 6; I’ve got async and await from clients to servers to services, all the way down into the database with non-blocking I/O: async and await all the way down. I just hit escape to drop out of there. So it makes it really, really easy to move around my code.
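[A hedged reconstruction of the kind of controller being shown: ASP.NET Web API with async/await flowing down into the asynchronous query operators of Entity Framework 6, then in preview. The type and member names are my guesses, not the demo’s actual code.]

```csharp
using System.Data.Entity;   // EF6: brings in the *Async query operators
using System.Threading.Tasks;
using System.Web.Http;

// Guessed model shape for the trivia questions.
public class TriviaQuestion
{
    public int Id { get; set; }
    public string Title { get; set; }
}

public class TriviaContext : DbContext
{
    public DbSet<TriviaQuestion> Questions { get; set; }
}

public class TriviaController : ApiController
{
    // Async all the way down: the request thread is released while the
    // database I/O is in flight, then the question is returned as JSON.
    public async Task<TriviaQuestion> Get()
    {
        using (var db = new TriviaContext())
        {
            return await db.Questions.FirstOrDefaultAsync();
        }
    }
}
```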
So this is going to serve the trivia questions. I’m just going to hit control comma, go get my index.cshtml.
                                      Now, in this HTML editor that’s been completely rewritten in Visual Studio 2013, you notice that I’ve got a couple of things you may not have seen before in an ASP.NET app. I’ve got Handlebars, which is a templating engine, and I’ve got Ember. So we’ve got model view controller on the server and model view controller on the client. So we can start making those rich, single-page applications.
                                      Now, this Ember application here has some JavaScript. And on the client, we’ve got a next question method. This is going to go and get that next question, and I’ve got that Web API call. So this is how the trivia app is going to get its information. And then when I answer the question, I’m going to go and send that and post that same RESTful service. So you’ve got really nice experience for front-end Web developers. That’s the Ember stuff.
                                      Here, I’ve got the Handlebars. This is a client-side template. You can see right off the bat that I’ve got syntax highlighting for my Handlebars or my Moustache templating. And I’m going to go ahead and fire this up, and I’ll put IE off to the side there, and I’ll put VS over here.
                                      And I’m going to log into my geek quiz app. See if I can type my own name a few times here, friends. There we go. And this is going to go and fetch a trivia question. See, it said, “loading question.” And then it says, “How many Scotts work on the Azure team?” Which is a lot, believe me.
                                      You’ll see that that’s coming from this bound question tile. So we’ve got client-side data binding right there.
                                      Now, I need to figure out what the buttons are going to look like. I’ve got the question, but I don’t have the buttons. I could start typing the HTML; that’s kind of boring. But I could use Visual Studio Web Essentials, which takes the extensibility points in Visual Studio and extends them even further.
                                      And I could say something like hash fu dot bar and hit tab. And now I’ve got Zen Coding, also known as Emmet, built in with Web Essentials.
                                      So that means I could go and say, you know, I need a button. And button has a button trivia class, but I need four of those buttons.
                                      And then, again, I hit — you like that, kids? (Applause.) Then I hit refresh, and you’ll notice that my browser is updating as I’m going.
                                      But that’s not really good. I need more information. I really want the text there that says “answer,” and I want to have answer one, answer two, answer three. So I’ll go like that. And then hit refresh, and then we’re seeing it automatically update.
                                      So that looks like what I want it to look like. But I want to do that client-side data binding. So I’m going to take this here, and I’m going to spin through that JSON that came across the wire. So I’m going to go open Moustache, and I’m going to say for each, and again, syntax highlighting, great experience for the client-side developer.
                                      I’m going to say for each option, and then we’ll close up each here. And answer one, just like question title is going to be bound. So I’m going to open that up, and I’m going to say option.title. And then when a user clicks on that button, we’re going to have an Ember action. I’m going to say the action is call that send answer passing in the question and then passing in the option that the user chose.
I just did an update with the hotkey. How many Scotts work on Azure? 42. How old is Guthrie? He is 0xFF, because he's quite old. What color is his favorite polo? Goldenrod, in fact, is my — no? I'm sorry, Goldenrod is the next version of Windows, Windows Goldenrod. So my mistake there.
That's a pretty nice flip animation. Let's take a look at that. I'm going to go ahead and hit Ctrl+comma again and type in "flip." Go right into the flip CSS. You'll see that that animation actually used no JavaScript at all. That, in fact, was done entirely in CSS, which can sometimes be hard to figure out, but with Web Essentials, I can actually hover over a rule, and it'll tell me which versions of which browsers support each vendor prefix. (Applause.)
                                      So that’s pretty hot. I’m going to go ahead and right click and hit publish. And because I’ve got the Azure SDK installed, I can do my publish directly from Visual Studio. We’re going to go and load our Azure website. Hit OK. It brings the publish settings right down into Visual Studio. And I can go and publish directly from here.
                                      So now I’m doing a live publish out to Azure directly from Visual Studio. It goes and launches the browser for me.
                                      And I can click over here on the Server Explorer, and Windows Azure actually appears on the side now. I can start and stop virtual machines, start and stop websites; they’re all integrated inside of the Server Explorer.
That's my website. I can double-click on it, and while I could go to the management portal, I can change my settings, my .NET version and my application logging, without ever having to enter the portal.
                                      So back over into my app, when I sign in, I know that people are going to be pushing buttons and answering questions backstage. I want to see that. I put in some tracing. So what I’m going to do is right click and say view streaming logs in the output window.
This is the Visual Studio output window. And I'm just going to pin that off to the side. And then as I'm answering questions (and it looks like someone backstage is answering questions as well), I'm getting live streaming trace logs from Azure fed directly into Visual Studio. (Applause.)
Now, you know that we've also rewritten the entire authentication infrastructure and made it based on OWIN, which is the Open Web Interface for .NET. It's an open source framework that lets you have pluggable middleware. So identity and authorization have been rewritten in a really, really clean way. And it allows us to do stuff that we really couldn't do before and extend it in some pretty fun ways.
                                      And I think that every good sample involves a QR code, right? Don’t you think? This will bring the number of times that you’ve seen a QR code scanned in public to three. (Laughter.)
                                      So what I want to do is I want to install this QR sample because I know people are going and checking out these trivia stats. And I’ve got SVG and SignalR giving me real-time updates as people are answering trivia questions.
                                      I’m logged in right now as CHanselman. I want to take this session and I want to deep link into an authenticated session on a phone and then view these samples and take them with me.
                                      So I’ve gone and used NuGet to bring in the QR sample. And now I’m going to go and publish that again to the same site. This is an incremental publish now. So this is going to go and send that new stuff up to Azure.
                                      And then I’ll bring up my phone here. I’ve got my phone. And my camera guy, he follows me around. And I’m going to click on trivia stats. And here are the real-time trivia stats.
And then I'm going to click on transfer to mobile up here in the auth area. And what we're going to do is generate a QR code. I'm going to then scan that code, and we get a deep link, generated by ASP.NET, that pops up and brings me into IE. And now I've got SignalR, SVG, and Flot all running inside of my browser, and I've jumped into my authenticated session using OWIN, ASP.NET, and HTML5. It's pretty fabulous stuff. (Applause.)
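[For a feel of the SignalR side of those live trivia stats, here is a hedged sketch of what the client code could look like; the hub and method names are assumptions, not the sample's actual code.]

```javascript
// Assumes the generated proxy script is referenced: <script src="/signalr/hubs">
$(function () {
  var statsHub = $.connection.triviaStatsHub; // hypothetical hub name

  // Called by the server each time someone answers a question; redraw the
  // chart (the demo used Flot) with the new series of [question, count] pairs.
  statsHub.client.updateStats = function (stats) {
    $.plot($('#stats'), [stats.points]);
  };

  // Register client handlers first, then open the persistent connection.
  $.connection.hub.start();
});
```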
So we've got the promise of one ASP.NET; we've got Browser Link bringing all of those browsers together with Web standards using SignalR. You saw Web Essentials as our playground where we're adding new features to Visual Studio 2013. We can make Azure websites easily in the portal, publish directly from VS, stream logs, SignalR everywhere. Thanks very much, I hope you guys have fun. (Applause.)

SATYA NADELLA: So I hope you got a great feel for how we're going to completely change, or revolutionize, Web development through innovation in the tools, in the framework, and in the Web server and Windows Azure, round-tripping across all three such that you can really do unimaginable things in a much more productive way.

                                      We have over 130,000 active websites or Web applications today using Azure websites. Some big-name brands — Heineken, 3M, Toyota, Trek Bicycle — doing some very, very cool stuff using some of this technology.

I'm very, very pleased that we're using all of that feedback to announce the general availability of Windows Azure Websites. This has been in preview now since the last Build, and we've had a tremendous amount of feedback from all of the customers who have been using it, many of them, obviously, in production. But now you can start using it with a full SLA and enterprise support from us. So we're really, really pleased to reach this milestone. I hope you get a chance to start using it as well. (Applause.)

I'm also pleased to announce the preview of Visual Studio 2013. You got to see it yesterday and today, and you'll see a lot more of it. There are some pretty stunning improvements in the tool itself. And combined with the .NET 4.5.1 framework update, you now have previews of both the framework and the tools, and we really encourage you to use them in your app development and give us feedback like you did the last time. We'll be watching for that.

So now I want to switch to mobile. Now, when you think about mobile-centric application development, the key consideration, perhaps more than anything else, is how do you build these mobile apps fast? And since there's not a single mobile experience or application you're building that doesn't have a cloud backend, the natural question is: What can we do to really speed up the building of these cloud backends?

And that's exactly what Azure Mobile Services does: it provides a very easy way for you to build out a backend for your mobile experiences and applications. We provide a rich set of services, from identity to data to push notifications, as well as background scripting.

And then, of course, we support all of the platforms: Windows, Windows Phone, Android, iOS, as well as HTML5.

                                      To show you this in action, I wanted to invite up onstage Josh Twist from our Windows Azure Mobile Services team. Josh? (Applause, music.)

JOSH TWIST: Thanks. We launched Windows Azure Mobile Services into preview in August last year. And in case you weren't familiar, Mobile Services makes it incredibly easy to add the power of Windows Azure to your Windows Store, Windows Phone, iOS, Android, and even Web and HTML applications.
To prove this to you, I'm going to give you a demo now of how easy it is to add the cloud services you need to an iOS application using this Mac.
                                      Here we are in the gorgeous Azure portal, and creating a new mobile service couldn’t be easier. I click, new, compute, mobile service, create. I enter the name of my mobile service, and then I choose a database option.
And I want to point out, look at this new option we have here: you can now create a free 20-megabyte SQL database. That means it's now completely free for developers to work against Mobile Services, with the ten free services and that free 20-megabyte SQL database.
                                      Now, I’ve already created a service we have here today that we’re going to use called My Lists. If I click on the name, I’m greeted by our quick start, which is a short tutorial that shows me how to build a to-do list application.
Now, I selected iOS, but this same mobile service could simultaneously power all of these platforms.
We're going to create a new iOS application. And since it's a to-do list app, I need a table to hold my to-do list items.
                                      And then I’m going to download a personalized starter project. So here it comes. That’s a little zip file. And inside that zip file I’m downloading from the portal is an Xcode project. So if I double click this, it’ll open up in Xcode, and then we’re going to take a look at the source. Because what we’ve done is we’ve pre-bootstrapped the application to be ready to talk to Mobile Services. You’ll see it already contains the URL for my new mobile service.
                                      So what I’m going to do is launch this in the simulator. And what we’ll see here is a little to-do list application that inserts, updates, and reads data from Windows Azure with each operation being a single line of code, even in Objective-C.
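[The demo itself is Objective-C, but the same one-line-per-operation shape is easiest to show with the Mobile Services JavaScript SDK; the URL and key below are placeholders.]

```javascript
// Placeholder URL/key; each data operation is a single call on the client.
var client = new WindowsAzure.MobileServiceClient(
  'https://mylists.azure-mobile.net/', 'YOUR-APPLICATION-KEY');
var table = client.getTable('todoitem');

table.insert({ text: 'Finish the Build demo' });  // insert a row
table.update({ id: 1, complete: true });          // update a row
table.read().done(function (items) {              // read the table
  console.log(items);
});
```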
                                      So I’m going to create a little to-do list item here to add to my tasks. Let’s just save that. So now that’s saved in Windows Azure. To prove that to you, I’m going to switch over to the portal. We take a look at the data tabs, and you’ll see I can drill into the table, view all of my data right here, and there’s the item I just added saved safely into a SQL database in Windows Azure.
                                      Now, we have so many cool features in Mobile Services. Here’s another one. I can actually add a script that executes securely on the server and intercepts those CRUD operations.
So what I'm going to do here, just to give you a quick example, is add a time stamp to items that are being inserted. So I simply say item.created = new Date(). I'm going to save that. And right here from the portal, that's going to go live into Windows Azure and be updated in just a few seconds. So it's done.
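[Written out, that server script is just the standard Mobile Services insert signature with one added line.]

```javascript
// Runs in Node.js on the server, intercepting every insert on the table.
function insert(item, user, request) {
  item.created = new Date(); // stamp the item before it's written
  request.execute();         // continue with the insert
}
```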
                                      Switch back to the app. Let’s insert a new item. That’s now saved. So if I switch back to browse, we’ll see that data again, but notice how we’ve automatically created a new column, and we’ve got that extra piece of data in there that executed on the server.
                                      Now, we have this amazing script editing experience here in the browser, but not everybody wants to edit code in the portal. And so we’ve added a new feature to Windows Azure Mobile Services that allows you to manage all of your source assets using Git Source Control.
So I'm going to show you how to enable that. We go to the dashboard, and just down here under quick glance, we get an option to set up source control. So I'm going to click on that and kick it off.
                                      Now, this can take a minute or two. So while that’s running, I’m going to give you a tour of some of the other new features we’ve added to Mobile Services recently.
One of our most-requested features was the ability to have scripts that execute on the server but aren't tied to the HTTP CRUD operations, so that I can create an arbitrary REST API.
                                      We’ve added that feature, and it’s called Custom API. So I can now create a completely arbitrary REST API in a matter of minutes with Mobile Services.
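[For reference, a Custom API script follows the documented exports-per-HTTP-verb shape; the "send alert" body below is hypothetical.]

```javascript
// POST handler for a custom REST endpoint, e.g. /api/sendalert.
exports.post = function (request, response) {
  var alert = request.body.alert; // hypothetical payload field
  // ... do the real work here: send pushes, queue a message, etc. ...
  response.send(200, { status: 'alert accepted', alert: alert });
};
```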
                                      We also have a scheduler that allows me to execute scripts on a scheduled basis. So I can execute these every 15 minutes, every night at 2 a.m., whatever I prefer. And we also make it incredibly easy for you to authenticate your users with Microsoft Accounts, Facebook, Twitter, and Google. It’s just a single line of code in your applications.
Now, our source control setup is still running here. So what I'm going to do is switch to another service rather than make you guys wait.
So we have one here where I pre-configured Git. If we go to the configure tab, you'll see what we have here is a Git URL. So I'm going to copy this to the clipboard and then switch to the terminal. And we're now going to pull all of the source files down from the server repo onto my local machine.
                                      That’s going to take just a few seconds. It’s going to pull those files down so I can now work on them locally with my favorite tools.
So I'm going to just dive into this directory here and show you what the tree looks like. You can see all of the API files, the scheduler files, and my table files, including that insert script that we just edited in the portal.
                                      Let’s take a look at that in Sublime. And you can see there’s that change. Now, we can make more changes here. I’m just going to comment this out and save it. And then I’m going to do a Git push to push that back up. So let’s commit it to the tree. And then Git push, and in a matter of seconds, that change will go live into Windows Azure.
So enough with the Mac. Let's talk about what's happened since preview. We're now supporting tens of thousands of services in production on Mobile Services, powering all kinds of scenarios from games to business applications and consumer engagement applications.
                                      I want to talk to you today about one of my favorite applications that we have in the store. And it’s from a company called TalkTalk Business. TalkTalk Business are one of the U.K.’s leading telephony providers for businesses. And these guys have a serious focus on customer service. So they’ve created a Windows Phone app and a Windows Store app.
                                      Let me show you the phone application now. So here’s the app on my Start screen. If we launch it, you’ll see we get an instant at-a-glance view of my billing activity, my account balance. I can see all of the services I can use with TalkTalk Business, and I get real-time delivery of up-to-the-minute service alerts.
                                      Now, it should come as no surprise that best-in-class applications like this need best-in-class services. And this is actually built using Mobile Services and is live in the U.K. stores today.
                                      Now, they also have a Windows Store application. And I actually have a replica of that project here on my Windows machine.
And you can see the project's open in the next version of Visual Studio, Visual Studio 2013. One of the capabilities this app has is it lets me manage my user profile.
                                      Now, let me show you some of the code that does that. So over here in this file, you can see where we upload the user profile when we make a save. Notice how that’s just a single line of code to write that data all the way through to my database.
                                      And here we load a user profile into the UI, again, with a single line of code.
                                      Now, these guys also have tables and scripts. And I want to show you those, but instead of switching out to the portal, let’s do it using the new Server Explorer in Visual Studio 2013.
                                      So I can open up the Server Explorer here, dive into Windows Azure, notice the new Mobile Services tab, expand that, and we’ll see enumerated all of our Mobile Services.
                                      There’s my TalkTalk service. And if we open this, we’ll see all of the tables that are backing that service, including my user profiles table down here.
                                      If we look in that, we’ll be able to see all of my scripts. The best thing is I can now edit them here in Visual Studio.
So I launch the script editor. I can make a change. And then when I hit save, this is going to deploy live to Windows Azure directly from Visual Studio in a matter of seconds. It's done. (Applause.)
So the next thing I want to do is add push notifications to this application.
Now, setting up push traditionally takes quite a few steps. I have to register my application with the Windows Store. I have to configure Mobile Services with my credentials to call Windows Notification Services. I have to acquire a channel URI on my client and upload that to Mobile Services so it's ready to send the push.
                                      Let me show you just how easy we’ve made this in the next version of Visual Studio.
                                      I simply right click, add push notification, and this wizard is going to guide me through all of the steps necessary. So I’m just entering my credentials there for the Windows Store. And then it’s going to ask me to choose which application I want to associate. So I’m going to choose this one.
                                      The next step, I’ll be asked to choose which mobile service I want to configure. I’m going to choose TalkTalk, and we’re done.
                                      What’s going to happen now is this is going to make some changes to my mobile service and to my client application. In fact, it’s going to prewire a test notification so I can be superbly confident that everything is wired correctly and going to work. And to try that out, all I have to do is launch the application.
                                      Let’s try that now. It’s going to take a second to deploy. And then what we should see is a push notification arrive in the top-right corner. And there we go. So that’s how easy we’ve made it now to add a push notification to your application with Mobile Services and Visual Studio 2013. (Applause.)
                                      The next thing I want to do is create an ability for the administrators at TalkTalk Business to actually send these service alerts. And these guys use a Web portal. So let’s switch over to their Web project.
                                      So here it is in Visual Studio. And you’ll see we have an index HTML file. Let’s open that up.
Now, notice how we pre-configured this with the Mobile Services JavaScript SDK that we added recently. It makes it super easy to add Mobile Services to your Web and HTML hybrid applications.
We've already added the client. So all I need to do now is add the code to invoke the custom API that sends those messages. So let's try that. I start with client.invokeApi. I need the name of the API I'm calling, which is sendalert in this case. And then, since I'm doing a POST, I need to specify the body, and the body is the service alert. And we're done.
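[That dictated call, written out, assuming the API was registered as "sendalert" and takes the alert text in the body:]

```javascript
client.invokeApi('sendalert', {
  method: 'post',
  body: { alert: 'SMTP upgrade complete' } // hypothetical payload shape
}).done(function () {
  console.log('Service alert sent.');
});
```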
                                      So I’m going to save that and launch it in the local browser. Now, since we’ve already pre-configured the client to receive push notifications, we can actually test this whole scenario end to end right here on this machine.
So what I'm going to do is send out a service alert for email in the Midlands and Western region that says "SMTP upgrade complete." And when I hit send notification, I should get a push notification in the top-right corner that was initiated from a website. And there we go. (Applause.) Thank you.
                                      You can see just how easy it is to add some incredible capabilities to your apps using Windows Azure Mobile Services. I really can’t wait to see what you guys do with this. I’ll see you at 2:00. (Applause.)

                                      SATYA NADELLA: Thanks, Josh.

As Josh was saying, we've been in preview, and we've gotten some tremendous feedback. We've had over 20,000 active apps on Azure Mobile Services to date, and TalkTalk Business is something that Josh showed. There's also a cool app written by Aviva that collects telematics data from a mobile app and gives you a real-time car insurance quote based on your driving habits, which is a fascinating application. And there are many, many applications like that being written on top of Azure Mobile Services.

                                      So I’m really, really pleased to announce the general availability of Azure Mobile Services today. We think that this is going to really help in your mobile development efforts across all devices, and we look forward to seeing what kind of applications you go build.

                                      So now to take you to the next section, which is all around cloud scale and enterprise grade, let me invite up onstage Scott Guthrie. Scott? (Applause.)

                                      SCOTT GUTHRIE: Well, this morning we looked at how you can use Windows Azure to build Web and mobile applications and host them in the cloud.

                                      I’m now going to walk through how we’re making it even easier to scale these apps, as well as integrate them within enterprise environments.

                                      Let’s start by talking about scale. Specifically, I’m going to use a real-world example, which is Skype.

                                      Now, Skype is one of the largest Internet services in the world. And over the last year, they’ve been working to migrate that service to run on top of Windows Azure.

                                      One of the benefits they get from moving to Windows Azure is that they can avoid having to buy and provision their own servers, and instead leverage a dynamic cloud environment.

Like most apps, Skype sees fluctuations in terms of load throughout the day, the week, even different parts of the year. And in a traditional datacenter environment, they need to deploy a fixed set of servers in order to handle their peak load.


                                      The downside with this, though, is that you end up having a lot of expensive, unused compute capacity during non-peak times.

                                      Moving to a cloud environment like Windows Azure allows them to, instead, dynamically scale their compute capacity based on just what their service needs at any given point in time. And this can yield enormous cost savings to both small and especially to very large services.

Now, with Windows Azure, you've always been able to dynamically scale up and scale down your apps, but you typically had to write custom scripts or use other tools in order to enable that. What we're excited to announce today is that we're going to make this a lot easier by baking auto-scale capability directly into Windows Azure. And this is going to make it easy for anyone to start taking advantage of these kinds of dynamic scale environments and yield the same cost savings.

I'd like to invite Charles Lamanna onstage to show it off in action. (Applause.)

CHARLES LAMANNA: I'll be giving a quick demo of the brand-new autoscale feature that supports Windows Azure compute services.
First, I'll cover website autoscale, then cloud services, and then virtual machines.
                                      So if I navigate to the website you saw earlier from Scott Hanselman’s demo, the geek quiz website, we see all the normal metric information that Windows Azure is collecting for his deployment. In this case, CPU time, response time, and network traffic.
But now there's a new prompt to configure autoscale for this particular website. In the past, when the website got lots of traffic because people were coming in and taking the quiz, Scott would have to go in and manually drag the slider to increase his capacity so his response time wasn't impacted.
However, with autoscale, I'm now able to configure a basic set of rules that will manage the capacity for my website automatically.
                                      I can configure an instance count range with a minimum value that we’ll always honor, as well as a maximum value. In this case, we’ll never go above six instances, so you can be sure you won’t get a giant bill.
Next, you can also configure a target CPU range. In this case, I'll choose 40 to 54 percent, and what that means is that the Azure autoscale engine, in the background, will be turning website instances off and on so your CPU always stays in that range. In other words, if you go below 40 percent, we'll turn off a machine to save you money, and if you go above 54 percent, we'll turn on a new machine so none of your users are impacted.
                                      And just like that, I click save, and Windows Azure will manage my website, scale, and capacity entirely on its own. (Applause.)
Next, I'll hop over to the cloud service autoscale. I just have a simple deployment here with a Web front end where my customers can come and, say, place orders for T-shirts or other memorabilia. This front end puts items into a queue, and I have a background worker role that pulls items from this queue and processes them for billing or shipping.
For the Web role, I've already configured autoscale based on CPU, just like you saw for websites, with an instance range and a CPU range. But I can also configure how far to scale up each time, which impacts the velocity with which I increase my capacity. I've chosen to scale up by two instances with only a five-minute cooldown, because I want to respond immediately and quickly to spikes in customer demand.
                                      For my background worker role, it’s a little bit different. I don’t care as much about CPU; I care about how many items are waiting in the queue to be processed, how many orders I have to go through.
In this case, I've already configured autoscale based on queue depth by selecting a storage account and queue name, as well as the target number of items in that queue per machine.
                                      In this case, as the queue gets bigger, we’ll add more machines. Imagine it’s the holidays and a bunch of new orders come in; we’ll make sure you have enough capacity to process it in real time.
                                      And imagine it’s a Sunday night and not as many people are coming to your website and placing orders. We’ll go down to your minimum to save you even more money on your monthly Azure bill.
                                      Lastly, I’ll hop over to virtual machines. Virtual machines are just like cloud services in that you configure autoscale for a set of virtual machines based on either CPU or queue.
                                      For the virtual machines, you can choose minimum-maximum instances, and we’ll move you up and down within that range by turning on and turning off those machines. And with the recent announcement of no billing while the machine’s stopped, you don’t have to worry about being charged in this case.
As you can see, it just took a few minutes to configure autoscale across all these different compute resources. And that's the power autoscale brings to Windows Azure: in just a few minutes, you can make sure your cloud application stays up and running at the lowest possible cost. Thank you. (Applause.)

SCOTT GUTHRIE: So as Charles showed you, it's super easy to configure autoscale and set it up so you can really take advantage of some great savings. He also mentioned two of the improvements that we made earlier this month: the ability to stop VMs without incurring any compute billing charges, and the ability to bill per minute. This means that if you run your site or your VM for only 20 minutes, we're only going to bill you for the 20 minutes that you actually run it instead of the full hour.

                                      And when you combine all these features together, it really yields a massive cost savings over what you can do today in the cloud, but in particular, also over what you can do in an on-premises environment.

                                      We’re really excited to announce that the preview of Windows Azure Autoscale is now live. And you can actually all try it out for free and start taking advantage of it today. (Applause.)

So let's switch gears now and talk a little bit about enterprise integration and some of the things that we're doing to make it even easier for you to build cloud apps and integrate them within your corporate or enterprise environment. Whether you're an enterprise building your own apps, or an ISV building SaaS-based solutions to sell into enterprise environments, you'll also hear a little bit about how we're helping you monetize even more effectively.

There are a whole bunch of services that we have built into Windows Azure in the identity space that make it really easy to do this kind of enterprise identity integration, so that you can define an Active Directory in the cloud using a service we call Windows Azure Active Directory.

                                      You can basically have a cloud-only directory, meaning you only have one directory, and it’s in the cloud, and you put all your users in it.

What's nice about Windows Azure Active Directory, though, is that it also supports synchronizing with an on-premises Active Directory that you're running on Windows Server. And this is great for enterprises or corporates that already have Active Directory installed. It allows them to very easily synchronize all their users into the cloud and allows cloud-based applications to start using that directory to authenticate and enable single sign-on for all their users.

And what's nice about Windows Azure Active Directory is that it's built using open standards: we support SAML, OAuth, as well as WS-Federation. That makes it really easy for you as developers to start authenticating and enabling single sign-on within all your apps using the libraries and protocols that you already use.

So what I thought I'd do is actually walk through a simple example of how, this week, we're making it even easier to take advantage of that.
                                      So what I’m going to show here is just a simple example where we have a company called Contoso that has an Active Directory on premises. And they’re going to basically spin up an Azure Active Directory running inside Windows Azure. And they can synchronize their directory up into the cloud. That means all their users are now available there.
                                      And what they can then do is they can start to build apps, whether they’re mobile apps, Web apps, or any other type of app, deploy them in the cloud, and now any of their employees when they go ahead and access that application can enable single sign-on using their existing enterprise credentials and be able to securely login and start using that app. Let’s go ahead and walk through some code on how we do that.
So what I'm standing in front of here is the Windows Azure Management Portal, which you've already seen Scott, Josh, and Charles walk through earlier today.
                                      What I’m going to do is click on this Active Directory tab that’s within the portal, which allows me to control and configure my Windows Azure Active Directory.
And what you can see here is the Contoso directory has already been created. Creating directories inside Windows Azure is actually free; it doesn't cost anything. So every developer can create their own directory, and companies can very easily go ahead and populate their directory with their information.
You can see this directory already has a number of users stored within it. If I wanted to, I could create new users directly inside the admin tool and manage them through the admin console.
                                      I could also click that directory integration tab and then set up a sync relationship with my on-premises Active Directory. That means every time a user is added or updated inside my on-premises Active Directory, it’ll be automatically reflected inside Windows Azure as well.
                                      So once I have this, I basically have a directory that I can use within my applications to authenticate users.
                                      So let’s build a simple app using the new Visual Studio 2013 and the new ASP.NET release coming out this week and show how I could basically integrate that within a Web app.
                                      So I’m going to use the same Web application template that Scott showed earlier. Call this Simple App.
                                      I can choose whatever frameworks I want within it. I can also click this change authentication dialog box that Scott touched on briefly in his talk.
And what I'm going to do is click this organizational accounts tab. And I can go ahead now and enter the name of my company's domain. You'll notice inside this dropdown we've added support both for internal apps within an enterprise that want to target a single company, and for the case where you want to develop a SaaS application and target multiple enterprise customers; you can go ahead and select that as well. (Applause.)
I can then go ahead and just enter the password here. What I'm doing here is just registering this application with Windows Azure. And I just hit create project, and what this is literally going to do now is create an ASP.NET project for me using whatever frameworks I specified, and register that application with Windows Azure. So it's basically set up to do secure sign-on.
And now if I go ahead and run this application in the browser, it's going to launch, and one of the first things you'll see, because I've enabled Active Directory single sign-on, is that it automatically shows me a single sign-on screen. Right now, I'm on the Internet, so that's why it's prompting me with this HTML sign-in page. I could also set it up so that in an intranet environment I wouldn't have to explicitly sign in.
But right now, I can sign in. And I'm just going to sign in with my Contoso Build.com account. If I do this now, I'm logged into this ASP.NET app, logged in using the Active Directory account that the employee has. And I've literally, in a matter of moments, set this up so that I'm now using the cloud as a single sign-on provider.
What this means is that not only can I run this thing locally, but I can now just right-click and hit publish, and I can publish this as a website, as a virtual machine, or in a cloud service. And now any of the employees within my organization who access it are integrated with their existing enterprise security credentials and can do single sign-on within the application. (Applause.)

                                      So this makes it really, really easy for you now to build your own custom applications, host them in the cloud, and enable enterprise security throughout.

                                      What we’re also doing with Windows Azure Active Directory is making sure that not only can you host your own applications, but we also want to make it really easy for enterprises to be able to consume and integrate existing SaaS-based solutions and have the same type of single sign-on support with Active Directory as well.

This is great for enterprises because it suddenly means that they can go ahead and take advantage of all the great SaaS solutions that are out there, and they can start to integrate more and more apps with less friction into their enterprise environment. And it's really great from an ISV and developer perspective because it now means that you can go ahead and build SaaS solutions and sell them to enterprises with a fraction of the friction that's required today. That makes it much easier to show your value quickly, makes it much easier to onboard your enterprise customers, and, at the end of the day, enables you to make a lot more money.

                                      So what I’m going to do is walk through an example of how this works. So we’re going back to the Windows Azure portal. And we’ve got our users, like we had before here. I’m now going to click this applications tab as well. And what the applications tab does is it’s going to show me all of the apps that have been registered with this directory. So any of the custom apps that I would build would show up here.
                                      You’ll notice also inside this list, we have a bunch of popular SaaS-based solutions that have already been registered with Contoso as well. So we’ve got Box, Basecamp, and many others.
What I can do now inside the Windows Azure portal, if I'm an administrator of the directory, is go ahead and just click add, then click this manage access to an application link. And what we're integrating here is a directory of existing SaaS-based solutions that this organization can now seamlessly plug into their Windows Azure Active Directory system.
                                      So, for example, I could do popular ones like DocuSign or Dropbox or Evernote.
                                      We’ve got ones you might not expect at a Microsoft conference. We’ve got Google Apps. We’ve got Salesforce.com. We even just for giggles enabled Amazon Web Services. (Laughter.) Some of these we’d like you to use more than others. (Laughter.) But regardless, you can add any of these, and basically once you just click add, they’ll show up in this list. And then all you need to do in order to integrate your single sign-on with one of these apps is drill into it.
So in this case here, I'm going to drill into Box. Basically, I can just hit configure. I can say I want to enable my users to authenticate to Box using my Windows Azure Active Directory. I just paste in my Box tenant URL, which is the URL I get from Box. And I download and upload a cert in order to make sure that we have a secure connection.
                                      And once I do that, I then basically have integrated my Active Directory with Box. I can then go ahead and hit configure user access. This will bring up my list of all the users within my Windows Azure Active Directory. I can then go ahead and click on any of them, click enable access.
You'll notice we've even integrated roles: if the SaaS provider has roles defined within their application, I can not only give this user access to Box, but actually map which roles within the Box application they should have access to. Then I hit OK, and literally in a matter of seconds, that user is provisioned on Box, and they can now use their Active Directory credentials in order to do single sign-on to that SaaS application. (Applause.)
So I'm going to switch gears now and go to another machine. I was showing you the administrator experience, how an administrator would log in and enable all of that. I'm now going to show you the end-user experience that this translates into. Once we set up that relationship for a particular employee, that employee can go ahead and just go to Box directly and use their Active Directory credentials to sign in.
Or, one of the other things that we've done, which we think is kind of cool, is integrate the ability for the company to expose a single dashboard of all the SaaS applications that they've configured, which employees can just go ahead and bookmark.
So in this case here, I'm going ahead and logging into this. This is the end-user experience. All of the apps, SaaS solutions, or custom apps that the Active Directory administrator has said you have access to will show up in this list. So you can see the Box app that we just provisioned shows up here now. And as more get added, they'll just dynamically show up.
And then what the user can do is just go ahead and click on any of them in order to initiate a single sign-on relationship. And that's how easy it is: our Contoso employee is now logged into Box and can do all the standard Box operations using their Active Directory credentials. (Applause.)

The beauty of this model is that not only is it super easy to set up, as you saw on both the administrator side and the developer side, it's also really, really easy to integrate. And it means that, from an enterprise perspective, they feel a lot more secure. If the employee ever leaves the organization or their account is ever suspended, they basically lose all access to the SaaS applications that they've been using on the company's behalf. So the company doesn't have to worry about data leaving, or about the employee still being able to log in and make changes to their data. So it enables a very nice model there.

And I think from a developer perspective, you know, one of the things to think about in terms of what we're enabling here is not only is it easy, but it's going to enable you to reach a lot of customers. We have more than 3.2 million businesses that have already synced their on-premises Active Directory to the cloud and more than 68 million active users that log in regularly using that system.

That basically means that as a developer, as a company that wants to sell to enterprises, you've got an awesome market that you're now able to go ahead and sell to, and it's really easy for you to monetize.

                                      And what I thought I’d do is actually invite Aaron Levie, who is the co-founder and CEO of Box to actually come onstage and talk a little bit about what this means to Box and some of the kind of possibilities this opens up for them.

AARON LEVIE: Hey, how you doing? (Applause.) How's it going? So I'm really excited to be here. At Box, we help businesses store, share, manage, and access information from anywhere. And we're big supporters of Microsoft. We build for the Windows desktop, we build on Windows 8, we build on Windows Phone 8. We love to integrate our work with SharePoint. Unfortunately, they haven't returned our email yet; maybe it's the spam filter, we don't know what's going on there.

                                      But it’s really exciting to see sort of an all-new Microsoft. I think the amount of support for openness and heterogeneity is incredibly amazing. I think you normally wouldn’t have seen a development preview on top of a Mac or whatever. I was actually afraid that Bill Gates was going to drop down from the ceiling and rip it off. So that was really exciting to see.

                                      So we’re really excited to be supporting Windows Azure Active Directory. It helps reduce the friction for customers to be able to deploy cloud solutions, and we think it’s going to be great for developers. We think that’s going to be great for startups and the ecosystem broadly.

SCOTT GUTHRIE: Yeah, we were talking a little bit earlier about some of the friction that it reduces. Maybe you could talk, as an enterprise SaaS provider, about what that friction is like, and how something like this helps?

AARON LEVIE: Yeah, I mean, think about how the enterprise software industry worked for decades: basically, if you wanted to deploy software or technology in your enterprise, you had to build this sort of massive competency in managing infrastructure, managing services, and managing whatever new software you wanted to deploy. And there was so much friction in implementing new solutions in your business; for any new problem that you wanted to solve, you had to stand up the same amount of technology per solution.

                                      Even harder was getting things like the identity to integrate and getting the technology to actually talk to each other. The power of the cloud is that any business anywhere in the world — and we’re talking millions of businesses that now have access to these solutions — can instantly on-demand light up new tools.

And so what that means is when you have lower friction, when you have more openness, we're going to see way more innovation. And that creates an environment where startups can be much more competitive, where we can build much better solutions, and I think the ecosystem broadly can actually expand. And the $290 billion that is spent every year on on-premises enterprise software today can massively move to the cloud, and we can actually expand the amount of market potential there is across the ecosystem.

SCOTT GUTHRIE: That's awesome. You know, we're excited on our side about the opportunity to enable that kind of shift: how we can use Windows Azure, how we can use the cloud, to provide this great opportunity for developers to build solutions that really can reach everyone.

                                      You know, I think one of the other things that’s just nice is sort of how we can actually interoperate and integrate with systems all over the place. And that’s across protocols, that’s across operating systems, that’s devices, that’s even across languages. And I think as Aaron mentioned, it’s going to open up a ton of possibilities. And at the end of the day, I think really provide a lot of economic opportunity out there, hopefully for everyone in the audience.

                                      AARON LEVIE: Cool.

                                      SCOTT GUTHRIE: So thanks so much, Aaron.

                                      AARON LEVIE: Thanks a lot, appreciate it. See you. (Applause.)

                                      SCOTT GUTHRIE: I’m really excited to say that everything that we just showed here from a developer API perspective, you can start plugging into and taking advantage of this week. We’ve got a lot of great sessions on Windows Azure Active Directory where you can learn more, and you can start taking advantage of all the tools that we are providing in ASP.NET and with the new version of .NET and VS to get started and make it really easy to do it.

We're also soon going to have a preview of the SaaS app management gallery that you can start loading your applications into, and that enterprises can start taking advantage of. So we're pretty excited about that, and we think, again, it's going to offer a ton of opportunity.

So let's switch gears now. We've talked a little bit about identity and how we're trying to make it really easy for you to integrate that within an enterprise environment. I'm going to talk a little bit about the integration space more broadly, and in particular about how we're also making it really easy to integrate data, as well as operations, in a secure way into your enterprise environment as well.

                                      And we’ve got a number of great services with Windows Azure that make it really easy to do so.

One of them is something that we first launched this month called Windows Azure BizTalk Services. And I'm pretty excited about this one in that it really allows you to dramatically simplify the integration process. For anyone who has ever tried to integrate, say, an SAP system with one of their existing apps, or an SAP system with an existing SaaS-based solution, there's an awful lot of work involved, both in terms of code and in terms of monitoring and making sure everything is secure. And these types of integration efforts can often go on for months or years as you integrate complex line-of-business systems across your enterprise.

What we're trying to do with Windows Azure BizTalk Services is dramatically lower that cost in a really quantum way. Basically, with Windows Azure BizTalk Services, you can stand up an integration hub in a matter of minutes inside the cloud. You can do full B2B EDI processing in the cloud so you can process orders and manage supply chains across your organization.

                                      We’re also enabling enterprise application integration support so that you can very easily integrate lots of different disparate apps within your environment, as well as integrate them with cloud-based apps, both your own custom solutions, as well as SaaS-based apps that your enterprise wants to go ahead and take advantage of.

                                      You know, we think the end result really is going to be a game-changer in the integration space and opens up a bunch of possibilities.

                                      So what I thought I’d like to do is walk through just sort of a simple example of how you can use it. So I’m going to go back to our little Contoso company.

                                      And they want to be able to consume and use a SaaS-based app that does travel management. We’ll call it Tailspin Travel. And they want to be able to do single sign-on with their employees so that their employees can login using their Active Directory credentials.

                                      But to really make it useful, they also want to be able to tie in their travel information and policies with their existing ERP system on premises, and that poses a challenge, which is how do you securely open up your ERP system and enable a third party to have access to it? How do you monitor it? How do you make sure it’s really secure?

And so that's where BizTalk Services comes into play. With BizTalk Services, you can go to Windows Azure and very easily and very quickly stand up a Windows Azure BizTalk service. And then we have a number of adapters that you can go ahead and download and run on-premises to connect it up.

                                      In particular, we have an SAP adapter. We also have Oracle adapters, Siebel adapters, JD Edwards adapters, and a whole bunch more. So, basically, without you having to write any code, you can actually just define what we call bridges, which make it really easy and secure for you to go ahead and expose just the functionality you want.

                                      That SaaS app or your own custom app can then go ahead and call endpoints within Windows Azure BizTalk Services using just standard JSON or REST APIs, and then basically securely go through that bridge and execute and retrieve the appropriate data.

                                      Again, it’s really simple to set this up. What I’d like to do is just walk through a simple example of how to do it in action.
                                      So what I have here is kind of the end-user app that our Contoso employees will use. It’s a Web-based application. Again, our Tailspin Travel. You’ll notice that the users are already logged in using the Windows Azure Active Directory already within the app. So this app could be hosted anywhere on the Internet.
                                      I could then create new trips as an employee, or I could go ahead and look at existing ones that I’ve already booked. So here’s one, this is the return trip from Build. Right now, I’m flying in economy. I don’t know, maybe it would be nice to get upgraded. So I can go ahead and try to enter that.
                                      But you’ll notice here at the top when I do it, a few seconds later, I’ve got a policy violation that was surfaced directly inside the Tailspin Travel app. And basically it just was saying I can’t just do this myself; my manager actually has to go ahead and approve it. And it’s coming directly out of the SAP system of Contoso.
So how did this happen? Well, on the Tailspin Travel side (this is the SaaS app, and they're building it in .NET), this is basically a simple piece of code that they have which allows them, on the SaaS side, to check whether or not this trip is in policy.
                                      Basically, the way they’ve implemented it is they’re just making a standard REST call to some endpoint that’s configured for the Contoso tenant. And this doesn’t have to be implemented with Azure, doesn’t have to be implemented with .NET, it can be implemented anywhere. And it’s just making a standard REST call. And depending on that action, the SaaS app then goes ahead and does something.
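[As a sketch of that call from the SaaS side: any stack works, since the bridge is just a REST/JSON endpoint. The URL, payload, and handler below are illustrative only, and authentication is omitted for brevity.]

```javascript
var tripDetails = { from: 'SFO', to: 'LHR', cabin: 'business' }; // sample data

$.ajax({
  url: 'https://contoso.biztalk.windows.net/bridges/travelpolicy', // hypothetical bridge URL
  type: 'POST',
  contentType: 'application/json',
  data: JSON.stringify({ employee: 'jdoe', trip: tripDetails })
}).done(function (result) {
  if (!result.inPolicy) {
    // Surface the policy violation in the Tailspin Travel UI.
    console.log('Policy violation: ' + result.message);
  }
});
```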
So how do we implement this REST call? Well, we could implement it in a variety of different ways on Windows Azure. We could write our own custom REST endpoint and process and handle the call that way; we have lots of great ways to do that. The tricky part, though, is not so much implementing the REST API; it's implementing all the logic to flow that call to an on-premises SAP system, get the information validated, and return it.
                                      Again, that would typically require an awful lot of code if you needed to do that from scratch.
                                      What I’m going to do here is switch here to the other machine. And walk through how we can use BizTalk services to dramatically simplify it.
So you can create a new BizTalk service. Go ahead and just say new, app services, BizTalk service, custom create. I'll call it Contoso endpoint. And literally, just by walking through a couple of wizards here and hitting OK, I can stand up my own BizTalk service inside the cloud, hosted in a high-availability environment, in a matter of minutes.
                                      And for anyone who’s ever installed BizTalk Server or an integration hub themselves, they’ll know that typically that does not take a couple minutes. And the nice thing about the cloud is we can really kind of make this almost instantaneous.
Once the service is created, you get the same kind of nice dashboard view and quick-start view that you saw Josh use with Mobile Services. From there you can download the SDK, and you can also monitor the service and scale it up and down dynamically.
                                      And then as a developer, I can just launch Visual Studio. I can say new project. I can say I want to create a new BizTalk service, which will define all the mapping rules and the bridge logic that I want to use.
This is one I’ve created earlier. You’ll notice here on the left in the Server Explorer that we have a number of LOB adapters automatically loaded, so I can connect to my SAP system directly.
I can add it to the design surface and then create bridges, which I can either define declaratively or customize with my own .NET code. Basically, I can just double-click a bridge, and this little WYSIWYG designer lets me take the REST calls I’m getting from the Tailspin Travel SaaS app, transform them, and map them to my SAP system.
And you can see here in our schema designer that we basically allow you to do fairly complex mapping rules between any two formats. Here on the right-hand side, I have my SAP schema that’s stored in my on-premises environment; on the left-hand side, there’s that REST endpoint. This is a very simple example; with a lot of these integration workflows, you might have literally thousands of fields that you’re mapping back and forth.
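To give a feel for what such a mapping does, here is a toy, dictionary-driven transform of one inbound REST message into SAP-style field names. Every field name below is invented; a real bridge map is generated graphically and can cover thousands of fields.

```python
# Invented declarative mapping: dotted REST paths -> SAP-style field names.
REST_TO_SAP = {
    "employeeId": "PERNR",
    "trip.from":  "DEPART_AIRPORT",
    "trip.to":    "ARRIVE_AIRPORT",
    "trip.cabin": "BOOKING_CLASS",
}

def flatten(doc, prefix=""):
    """Flatten nested JSON objects into dotted key paths."""
    out = {}
    for key, value in doc.items():
        path = prefix + "." + key if prefix else key
        if isinstance(value, dict):
            out.update(flatten(value, path))
        else:
            out[path] = value
    return out

def map_rest_to_sap(rest_doc):
    """Apply the declarative mapping to one inbound message."""
    flat = flatten(rest_doc)
    return {sap: flat[rest] for rest, sap in REST_TO_SAP.items() if rest in flat}

print(map_rest_to_sap({"employeeId": "emp-42",
                       "trip": {"from": "SFO", "to": "SEA", "cabin": "economy"}}))
# {'PERNR': 'emp-42', 'DEPART_AIRPORT': 'SFO', 'ARRIVE_AIRPORT': 'SEA',
#  'BOOKING_CLASS': 'economy'}
```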
Once I do the mapping, though, all I need to do is go ahead and hit deploy, and this will immediately upload it into my BizTalk service on Windows Azure; at that point, it’s live on the Web. I can then choose who I want to give access to this bridge, and I can now securely start transferring just the information I want into and out of my enterprise.
                                      For an IT professional, they can then go ahead and open up our admin tool. They can see all the bridges that have been defined. And then one of the things that we also build directly into Windows Azure BizTalk Services is automatic tracking support. And what this means is now the IT professional can actually see all of the calls that are going in and out of the enterprise. It’s all logged; it’s all audited so it’s fully compliant, and they can basically now keep track of exactly all the communication that’s going on to make sure that it’s in policy.
What you saw here was just a simple example, but this really starts to open up tons of possibilities: you can integrate with other SaaS apps out there that your organization wants to use, or, as you start building your own custom business applications hosted within Windows Azure, you can now securely get access to your on-premises line-of-business capabilities and manage it all very securely. (Applause.)

                                      And I’m excited to announce that everything we just showed here, as well as everything I showed when I created that Active Directory app, is now available for you to start using. You can go to WindowsAzure.com, and you can start taking advantage of Windows Azure BizTalk Services today. (Applause.)

image

So I talked a little bit about how we’re making it easy to integrate enterprise systems with the cloud, both on the identity side and on the integration side. The other set of enterprise-grade services that we’re delivering falls into the data space. Here we’re really trying to make it easy for you to store any data you want in the cloud, in any amount, and to perform really rich analysis on top of it. With Windows Azure storage, we have a really powerful storage system that lets you store hundreds of terabytes, or even petabytes, of data in any format you want; we provide NoSQL capabilities as part of that, as well as raw blob capability. And with our SQL database support, we now have a relational engine in the cloud that you can use: you can spin up relational databases literally in a matter of seconds and start using the same ADO.NET and SQL syntax features that you are familiar with today.
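The transcript’s point is .NET-centric (ADO.NET), but the same familiar SQL works from any client stack. As a minimal sketch, here is the equivalent from Python over ODBC; the server, database, credentials and table are placeholders, and the ODBC driver name depends on what is installed locally.

```python
import pyodbc

# Placeholder connection details; the driver name varies by environment
# (e.g. "ODBC Driver 17 for SQL Server" on a current machine).
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=tcp:yourserver.database.windows.net,1433;"
    "DATABASE=tailspin;UID=appuser;PWD=<password>;Encrypt=yes;"
)

cursor = conn.cursor()
# The same SQL syntax you would use against an on-premises SQL Server.
cursor.execute(
    "SELECT TOP 5 TripId, Origin, Destination FROM Trips WHERE EmployeeId = ?",
    "emp-42",
)
for row in cursor.fetchall():
    print(row.TripId, row.Origin, row.Destination)
conn.close()
```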

A few months ago we also launched a new service that we call HDInsight. This makes it really easy for you to spin up your own Hadoop cluster in the cloud; you can then go ahead and access any of this data that’s being stored and perform MapReduce jobs on it. And what’s nice about how we’re doing HDInsight, like a lot of the openness things we’ve talked about throughout the day, is that it’s built using the same open source Hadoop framework that you can download and use elsewhere. We’re actually contributors to the project now.

And with Windows Azure, it’s now trivially easy for you to spin up your own Hadoop cluster, point it at your data, immediately start getting insights from it, and start integrating it with your environment. I think in the next keynote later today you’re actually going to see a demo of that in action, so I’ll save some of that for them.
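For readers who have never written a MapReduce job, the classic word count below is the kind of job you could run on such a cluster via Hadoop Streaming; it is a generic Hadoop example, not HDInsight-specific code.

```python
#!/usr/bin/env python
# wordcount.py -- classic Hadoop Streaming word count; run the same file
# as the mapper ("python wordcount.py map") and as the reducer
# ("python wordcount.py reduce").
import sys

def mapper():
    # Emit "<word>\t1" for every word read from stdin.
    for line in sys.stdin:
        for word in line.strip().split():
            print(word + "\t1")

def reducer():
    # Hadoop sorts mapper output by key, so counts can be summed in one pass.
    current, total = None, 0
    for line in sys.stdin:
        word, count = line.rstrip("\n").split("\t", 1)
        if word != current:
            if current is not None:
                print(current + "\t" + str(total))
            current, total = word, 0
        total += int(count)
    if current is not None:
        print(current + "\t" + str(total))

if __name__ == "__main__":
    mapper() if sys.argv[1:] == ["map"] else reducer()
```

A typical submission looks roughly like `hadoop jar hadoop-streaming.jar -input <in> -output <out> -mapper "python wordcount.py map" -reducer "python wordcount.py reduce" -file wordcount.py`; the exact jar path and storage URI scheme vary by cluster and distribution.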

But the key takeaway here is that the combination of all these capabilities across identity, integration and data, we think, is a game-changer for the enterprise, and really enables you to build modern business applications in the cloud. I think they’re going to be a lot of fun to use. So we look forward to seeing what you build.

                                      Thank you very much.

                                      (Applause.)

                                      SATYA NADELLA: Thanks, Scott.

So one last thing I want to talk about is Office and Office 365 as a programmable surface area. We talked a lot about building SaaS applications using services; Scott talked about it. But what if you were a large developer, a line-of-business application developer, or a SaaS application developer, and could use all of the power of Office as part of your application? That’s what we’re enabling with the programming surface area of Office.

What that means is that the rich object model of Office (everything from the social graph, identity, presence information, document workflows and document libraries) is available for you to use through modern Web APIs within your application. You can, in fact, have the chrome either in the Office client or in SharePoint, and have the full power of the backend in Azure. And, of course, the idea here is to be able to do all of that with first-class tool support.
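As a hedged sketch of what “modern Web APIs” can look like against that surface area, here is a plain REST request to SharePoint Online’s `/_api/web/lists` endpoint. The tenant URL is invented, and acquiring the OAuth access token is assumed to have happened already.

```python
import requests

SITE = "https://contoso.sharepoint.com"   # invented tenant URL
TOKEN = "<access-token>"                  # assumed to be acquired elsewhere

resp = requests.get(
    SITE + "/_api/web/lists",
    headers={
        "Authorization": "Bearer " + TOKEN,
        "Accept": "application/json;odata=verbose",
    },
    timeout=10,
)
resp.raise_for_status()
# With odata=verbose, results come back wrapped in a "d" envelope.
for sp_list in resp.json()["d"]["results"]:
    print(sp_list["Title"])
```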

                                      To show you some of this in action, I wanted to invite up onstage Jay Schmelzer from our Visual Studio team to show you some of the rapid application development in Office.

                                      Jay, come on in.

JAY SCHMELZER: Thank you. The requirements, expectations and importance of business applications have never been greater than they are today. Modern business applications need to access data available inside and outside the organization. They need to enable individuals across the organization to connect and easily collaborate with each other in rich and interesting ways. And the applications themselves need to be available on multiple types of devices and form factors.
                                      As developers, we need a platform that provides a set of services that meet the core requirements of these applications. And we need a toolset that allows us to productively build those applications while also integrating in with our existing dev ops processes across the organization.
                                      What I want to show you this morning is a quick look at some things we’re still working on inside of Visual Studio to enable developers to build these modern business applications that extend the Office 365 experience leveraging those services available both from Office 365 and the Windows Azure platform.
And, of course, doing it all inside of a Visual Studio experience that allows the developer to focus on the unique aspects of their business and their application, rather than spending time on boilerplate code.
To do that, we’re going to focus on the human resources department at Contoso, which has been using Office 365 to manage the active job positions across the organization. We want to create a new application that allows individuals in the company to submit potential candidates for open positions from within their Office 365 site, using whichever device they happen to have available at the time.
                                      To do that, we’ll switch over to Visual Studio, and we’ll see that we have a new Office 365 Cloud Business app project template available to us. This project goes and builds on the existing apps for Office and apps for SharePoint capabilities that are surfaced as part of that new cloud app model Satya was talking about. And it provides us a prescriptive solution structure for building a modern business application.
                                      I mentioned data is a core part of this, and you see we’ve already started creating the definition for a new table that we’ll use to store our potential candidates. What Office 365 Cloud Business apps does for us is surface additional data types that provide access to these core capabilities of the Office 365 and Windows Azure platform.
Some examples we see here: the “referred by” field is typed as a person, giving us access to all the capabilities in Office 365 associated with that Office 365 or Azure Active Directory user. The document, their resume, is stored as a typed document, so we can store it in a document library and leverage the rich content management and workflow capabilities associated with Office documents.
                                      We also need to be able to go and pull in data from elsewhere. In our case, we want to go and grab data from that existing SharePoint list the human resources team is using to manage active positions, so that our users can choose a potential position they think those candidates are appropriate for. You see, I’ve already added that, so it’s in my project.
                                      We’ll just go and connect it up between the candidate and our job postings, specify the relationship, and say OK. And now we have this virtual relationship between our Office 365 list and our SQL Azure Database.
OK, the next thing we want to do, though, is really enable that people interaction. If you notice, when I look over here at the candidate and select it, you’ll see that right from here I have the ability to have the application interact with my corporate social network on my behalf as I do interesting things in the application.
So we have the data model defined. The next thing we need to do is create the UI model. Users of business applications today expect a modern look and feel and a modern experience, but they also want it to be consistent. Visual Studio gives you great ways of doing this by providing a set of patterns that are consistent across your applications. We’ll select a browse pattern (or just take the default), choose the table we care about, and now let Visual Studio create for us a set of experiences for browsing, viewing, editing and updating that candidate information.
So we have our data model. We have our UI model. The last thing we want to do is go in and actually write some business logic. Back on the entity designer, we’ll leverage the data pipeline, where we can interact with data moving in and out of the application; in this case, we’ll use our validate method. What we’ll do is make sure that the only folks who can set or modify the interview date are members of the HR department. And here’s another example where we see the power of surfacing those underlying platform capabilities: I’m able to reach into the current user’s Azure Active Directory settings, grab the current department and validate it against the checks we want to make.
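In the demo this validation hook is .NET code in the Cloud Business App’s data pipeline; the rule itself is simple, though, and the sketch below restates it in Python. The class shapes are illustrative, and the `department` field stands in for the value the real app reads from the user’s Azure Active Directory profile.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class User:
    name: str
    department: str  # would come from Azure Active Directory in the real app

@dataclass
class Candidate:
    name: str
    interview_date: Optional[date] = None

def validate_interview_change(user, old, new):
    """Only members of the HR department may set or modify the interview date."""
    if new.interview_date != old.interview_date and user.department != "HR":
        raise PermissionError("Only HR may set or modify the interview date.")

# An HR user changing the date passes validation; anyone else would raise.
validate_interview_change(
    User("Jay", "HR"),
    Candidate("Scott"),
    Candidate("Scott", interview_date=date(2013, 9, 15)),
)
print("Change accepted.")
```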
                                      Let’s go ahead and set a breakpoint here. I think we’re probably in good shape. Anyway, so we’re going to launch the application, and Visual Studio is going to go package this up, send the manifest off to our remote Office 365 developer site, and then launch our application. We have no candidates yet, so we’ll create a new one. Last night when we were talking about this stuff, Scott seemed pretty excited about what we’re doing. So maybe he would be an interesting person for us to work with.
                                      When I go in and actually start specifying who it is that’s going to refer this person, you see I’m by default getting the list of the users available on this Office 365 site because I typed that it’s a person. So we’ll select Jim there, one of our team members, go ahead and upload a document that is Scott’s resume. And we’ll specify an interview date, maybe we’ll go out here into September.
                                      The last thing we want to do is go choose which of the positions we think is appropriate to Scott. He’s going to be new to the team, so we’ll maybe choose a little more junior role for him so that he can be successful. We hit save. If we’d actually set that breakpoint, we would see our business logic would have been executed, and we would be able to get that rich debugging experience you’ve come to know and expect from Visual Studio.
                                      We now see we have our candidate. When I drill in and look at it, you see that we’re getting that consistency of experience. I’m getting presence information for the person. When I hover over it, we see the contact card. A little misplaced, but if I want to have a conversation with Jim right now, I can go ahead and do that right from within the application just because we’ve leveraged those underlying capabilities. Of course, in the document we can see the properties of the document. We can view it in the Web application right from the site, or we can follow it if we want to do that as well.
                                      I noticed one thing here; I’ve got this extra ID showing up. So let me go flip over to Visual Studio, and we’ll look at the View Candidate page. And just like we can with any other Web development, we can just go in here and while the application is running we’ll just remove that. We’ll save those changes, flip back over here, just kind of do a little quick refresh, and now when I go in you’ll see that, hey, that extraneous value is no longer there.
                                      The other thing you’ll notice is that in addition to the values we specified for our SQL data, we also have built in the ability to do the basic tracking of, hey, who was the last person who created or modified this record, just core requirements of a business application.
                                      The last thing we’ll look at is on the newsfeed we’re going to click over to that, and you’ll see that the application has gone and interacted on my behalf, right, and entered things into our internal social network, letting people know that, hey, I just submitted somebody as a potential new candidate. So if you folks want to follow them, and so forth.
                                      OK. Our application is looking good. It’s time to go get it integrated with our existing dev ops processes. To do that, we’ll just go over here to the solution explorer, we’ll right click on the solution, and we’ll start by adding this to source code control. In this case, we’ll add it to our Team Foundation Service instance. We’ll go right click; we’ll go check in all these changes that we just made, and while that’s happening I’m going to switch over and take a look at some of the build environments we have established in our Team Foundation Service.
In this case, we’ll see that we have an existing build definition for HR jobs. If I look at that definition, we’ll see that one of the things I can do is switch it to be continuous, so that a build kicks off as we check in code. The other interesting thing is that here we’ve got a custom process template that understands how to take the output of the build and deploy it into our Office 365 test site. So this is all built on the underlying technologies and capabilities inside of Visual Studio. That also means we can extend this beyond the SharePoint experience into the Office client experiences as well.
                                      So here I’ve also built a mail app that allows me to go and prepopulate information in the application from the content of the mail and shove it right into creating a new user, without having to go directly into the application. Hopefully with that, you got a really quick look at some things we’re still working on in Visual Studio, to enable developers to build modern business applications, extending the Office 365 experience, building on the capabilities of Office 365 and the Windows Azure platform.
                                      Thank you very much.

                                      SATYA NADELLA: Thanks, Jay. Thank you.

So hopefully you got a feel for how you can rapidly build these Office applications, but more importantly, how you can compose the applications you build with your full line-of-business application on Azure and enrich your SaaS app or your line-of-business enterprise app. I’m very, very pleased to announce that a 12-month subscription to Office 365 Home Premium is going to come to you via email later this afternoon. We hope you enjoy that subscription. (Applause.)

And I know everyone in the room is also perhaps an MSDN subscriber, so we are continuing to improve MSDN benefits. One of the things that we are doing with Windows Azure is making it very, very easy for you to do dev/test. So now you can use your dev/test licenses on Windows Azure. In fact, the cost and the pricing for that is such that you can probably shave something like 97 percent off your dev/test expenses. We’re also going to give you credits based on your MSDN level; if you’re a premium subscriber, you get $100, which you can use across your VMs and databases, as well as for things like load testing. Fantastic benefits, and I would encourage everyone to go take advantage of them. And to reduce the friction even further, we have now made it possible for any MSDN subscriber to sign up for Azure without any credit card. I know this is something that many of you have asked for. We’re really pleased to do that. (Applause.)

We had a whirlwind tour of the backend technologies. With Windows Azure, we think we now have a robust platform for you to do your modern application development for a modern business, whether Web or mobile, at cloud scale and enterprise grade. So I hope you get a chance to play with it. We welcome all the feedback, and have a great rest of the Build.

                                      Thank you very, very much.

                                      END

                                      “Cloud first” from Microsoft is ready to change enterprise computing in all of its facets

                                      … represented by these alternative/partial titles explained later on in this composite post:
                                      OR Choosing the Cloud Roadmap That’s Right for Your Business [MSCloudOS YouTube channel, June 3, 2013]
                                      OR Microsoft transformation to a “cloud-first” (as a design principle to) business as described by Satya Nadella’s (*) Leading the Enterprise Cloud Era [The Official Microsoft Blog, June 3, 2013] post
OR Faster development, global scale, unmatched economics… Windows Azure delivers [Windows Azure MSDN blog, June 3, 2013] which is best summarized by Scott Guthrie (*) as the following enhancements to Windows Azure
                                      OR as described by Brian Harry (*) in Visual Studio 2013 [Brian Harry’s MSDN blog, June 3, 2013]
                                      OR as described by Brad Anderson (*) in TechEd 2013: After Today, Cloud Computing is No Longer a Spectator Sport [TechNet Blogs, June 3, 2013]
                                      OR as described by Quentin Clark (*) in SQL Server 2014: Unlocking Real-Time Insights [TechNet Blogs, June 3, 2013]
                                      OR as described by Antoine Leblond (*) in Continuing the Windows 8 vision with Windows 8.1 [Blogging Windows, May 30, 2013], and continued by Modern Business in Mind: Windows 8.1 at TechEd 2013 [June 3, 2013] from Erwin Visser (*) describing some of the features that businesses can look forward to in Windows 8.1
                                      OR putting all this together: Microsoft unveils what’s next for enterprise IT [press release, June 3, 2013]

                                      First watch how this whole story was presented in the keynote to TechEd North America 2013 on June 3, 2013:

Brad Anderson was the keynote speaker, so besides the overall topic and his own two particular topics he also handles all the introductions to, and recaps of, the detailed parts delivered by other executives from the Microsoft Server & Tools Business. His keynote starts at [3:18] in the video:
• [6:36] He first invites Iain McDonald to deliver the Windows 8.1 Enterprise presentation.
• [28:57] Brad talks about “Empower people-centric IT” based on “Personalized experience”, “Any device, anywhere” and “Secure and protected”, leading to System Center Configuration Manager 2012 R2 + Windows Intune.
• [38:00]-[46:50] Molly Brown, Principal Development Lead for those products, demonstrates them, bringing a consistent experience across PCs, iOS devices and Android devices and supporting the BYOD trend for client device managers.
• He then starts talking about “Enable modern business applications” based on “Time to market”, “Revolutionary technology” and “Organizational readiness”, focusing on “Rapid lifecycle”, “Multi-device”, “Any data, any size” and “Secure and available”.
• [50:00] A state-of-the-art overview of the Windows Azure business, followed at [52:10] by a customer testimonial from the budget airline easyJet about moving to allocated seating, for which they indeed required the peak-load capability of Windows Azure to handle the sudden rush of customer reservations for events like putting everything on sale at slashed prices.
• [56:45] He invites Scott Guthrie to talk about the Windows Azure application platform, leading to announcements like “Windows Azure per-minute pricing” and the “Windows Azure MSDN offer”.
• [1:05:02] Brian Harry replaces Guthrie on stage to continue the same topic with the upcoming Visual Studio 2013, which offers a number of new additions for team development.
• [1:14:40] Brad is back to talk about “Unlock insight from any data” based on “Data explosion”, “New types and sources of data” and “Increasing user expectations”, focusing on “Easy access to data”, “Powerful analytics for all” and “Complete data platform”.
• [1:16:44] To shed light on the specifics he invites Quentin Clark, who talks about the upcoming SQL Server 2014 and is joined by his marketing partner Eron Kelly to demonstrate the new things coming with that product.
• [1:36:15] Brad Anderson is back to talk about how to “Transform the datacenter” based on “Cloud options on demand”, “Reduced cost and complexity” and “Rapid response to the business”. He first talks about the cloud platform itself (as an infrastructure), drawing on a customer testimonial from Trek Corporation at [1:39:24].
• [1:41:24] He announces Windows Server 2012 R2 and System Center 2012 R2, followed by the Windows Azure Pack announcement encompassing a number of things, which are demonstrated at [1:44:44] by Clare Henry, Director of Product Marketing.
• [1:49:25] Brad is back to talk about the fabric of this infrastructure, for which he also invites Jeff Woolsey, Principal Program Manager, at [1:51:30] to look into storage, live migration, Hyper-V Replica and so on.
• [2:01:27] Brad delivers the final recap.

                                      The final recap by Brad Anderson well represented the story shown in the keynote:

                                      1. [2:03:20] Microsoft’s cloud vision is the Cloud OS in which they have 4 promises:
                                        image
                                        which was fully covered in the keynote (actually in that order) and
                                      2. [2:03:50] with the new announcements demonstrating execution on those promises:
                                        image

                                      Then here is the alternative/partial information which became also available:

                                      OR Choosing the Cloud Roadmap That’s Right for Your Business [MSCloudOS YouTube channel, June 3, 2013]

                                      Introductory information: Built From the Cloud Up [MSFTws2012 YouTube channel, Nov 20, 2012]

                                      Experience Microsoft’s vision for the Cloud OS with Satya Nadella (*) and see how it is made real today with Windows Server 2012 and Windows Azure. Learn more at http://microsoft.com/ws2012

                                      OR Microsoft transformation to a “cloud-first” (as a design principle to) business as described by Satya Nadella’s (*) Leading the Enterprise Cloud Era [The Official Microsoft Blog, June 3, 2013] post:

                                      Two years ago we bet our future on the cloud and quietly refocused our 19 billion-dollar software business by completely transforming our products, culture and practices to be cloud-first. We knew the journey would be long and challenging with plenty of doubters. But we forged ahead knowing that the cloud transition would change the face of enterprise computing. […]

                                      To enable this transformation we had to make deep changes to our organizational culture, overhauling how we build and deliver products. Every one of our division’s nearly 10,000 people now think and build for the cloud – first. […]

                                      We are already seeing this bet deliver substantial returns. Windows Azure is going through hyper-growth. Half the Fortune 500 companies are using Windows Azure. We have over 1,000 new customers signing up every day and over 30,000 organizations have started using our IaaS offering since it became available in April. We are the first multinational company to bring public cloud services to China. Ultimately we support enormous scale, powering some of the largest SaaS offerings on the planet.

                                      (*) Satya Nadella is President, Server & Tools Business, a US$19 billion division that builds and runs the company’s computing platforms, developer tools and cloud services. The whole above mentioned post contains the email he sent to employees about the progress they’ve made completely transforming Microsoft products, culture and practices to be cloud-first.

                                      Introductory information: Enable Modern Apps [MSFTws2012 YouTube channel, Nov 20, 2012]

                                      Scott Guthrie (*) demonstrates how Windows Server 2012 and Windows Azure provide the world’s best platform for modern apps. Learn more at http://microsoft.com/ws2012

OR Faster development, global scale, unmatched economics… Windows Azure delivers [Windows Azure MSDN blog, June 3, 2013] which is best summarized by Scott Guthrie (*) as the following enhancements to Windows Azure:

                                      Windows Azure: Announcing New Dev/Test Offering, BizTalk Services, SSL Support with Web Sites, AD Improvements, Per Minute Billing [ScottGu’s blog, June 3, 2013]

                                      • Dev/Test in the Cloud: MSDN Use Rights, Unbeatable MSDN Discount Rates, MSDN Monetary Credits
                                      • BizTalk Services: Great new service for Windows Azure that enables EDI and EAI integration in the cloud
• Per-Minute Billing and No Charge for Stopped VMs: Now you only get charged for the exact minutes of compute you use, with no compute charges for stopped VMs (see the quick arithmetic sketch after this list)
                                      • SSL Support with Web Sites: Support for both IP Address and SNI based SSL bindings on custom web-site domains
• Active Directory: Updated directory sync utility, ability to manage Office 365 directory tenants from the Windows Azure Management Portal [regarding this, read also Making it simple to connect Windows Server AD to Windows Azure AD with password hash sync [Active Directory Team Blog, June 3, 2013]]
                                      • Free Trial: More flexible Free Trial offer
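To make the per-minute billing bullet concrete, here is a quick arithmetic sketch; the hourly rate is a made-up placeholder, not an actual Windows Azure price.

```python
HOURLY_RATE_USD = 0.09  # hypothetical rate for illustration only

def vm_compute_cost(minutes_running):
    """Per-minute billing: pay for exactly the minutes the VM runs.
    Stopped VMs accrue no compute charges, so stopped time never
    enters the calculation at all."""
    return round(minutes_running * HOURLY_RATE_USD / 60, 4)

# Under hourly billing, 61 minutes would round up to 2 full hours
# (2 * 0.09 = 0.18 USD); per-minute billing charges 61/60 of one hour.
print(vm_compute_cost(61))  # 0.0915
print(vm_compute_cost(60))  # 0.09
```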
                                      (*) Scott Guthrie, Corporate Vice President (CVP) of Program Management leading the Windows Azure Application Platform Team in the Server & Tools Business

                                      OR as described by Brian Harry (*) in Visual Studio 2013 [Brian Harry’s MSDN blog, June 3, 2013]

                                      Today at TechEd, I announced Visual Studio 2013 and Team Foundation Server 2013 and many of the Application Lifecycle Management features that they include. … I will not, in this post, be talking about many of the new VS 2013 features that are unrelated to the Application Lifecycle workflows. Stay tuned for more about the rest of the VS 2013 capabilities at the Build conference. […]

                                      We are continuing to build on the Agile project management features (backlog and sprint management) we introduced in TFS 2012 and the Kanban support we added in the TFS 2012 Updates. With TFS 2013, we are tackling the problem of how to enable larger organizations to manage their projects with teams using a variety of different approaches. … The first problem we are tackling is work breakdown. … We are also enabling multiple Scrum teams to each manage their own backlog of user stories/tasks that then contributes to the same higher-level backlog. […]

                                      We’ve been hard at work improving our version control solution. … We’ve added a “Connect” page to Team Explorer that makes it easier than ever to manage the different Team Projects/repos you connect to – local, enterprise or cloud. …We’ve also built a new Team Explorer home page. …The #1 TFS request on User Voice. … So, we have introduced “Pop-out Team Explorer pages”. …  Another new feature that I announced today is “lightweight code commenting”. […]

                                      As always, we’ve also done a bunch of stuff to help people slogging code every day. The biggest thing is a new “heads up display” feature in Visual Studio that provides you key insights into your code as you are working. We’ve got a bunch of “indicators” now and we’ll be adding more over time. It’s a novel way for you to learn more about your code as you read/edit. … Another big new capability is memory diagnostics – particularly with a focus on enabling you to find memory leaks in production. […]

                                      In addition to the next round of improvements to our web based test case management solution, today I announced a preview of a brand new service – cloud load testing. […]

                                      At TechEd today, perhaps my biggest announcement was our agreement to acquire the InRelease release management product from InCycle Software. I’m incredibly excited about adding this to our overall lifecycle solution. It fills an important gap that can really slow down teams. InRelease is a great solution that’s been natively built to work well with TFS. […]

With TFS 2013 we are trying a new tack to facilitate that called “Team Rooms”. A Team Room is a durable collaboration space that records everything happening in your team. You can configure notifications – checkins, builds, code reviews, etc. – to go into the Team Room, and it becomes a living record of the activity in the project. You can also have conversations with the rest of your team in the room. It’s always “on” and “permanently” recorded, allowing people to catch up on what’s happened while they were out, go back and find previous conversations, etc. […]

                                      (*) Brian Harry, Microsoft Technical Fellow working as the Product Unit Manager for Team Foundation Server (TFS).

                                      Introductory information: Empower People Centric IT [MSFTws2012 YouTube channel, Nov 20, 2012]

Brad Anderson (*) shows how Windows Server 2012 helps enable personalized experiences across devices. Learn more at http://microsoft.com/ws2012

                                      OR as described by Brad Anderson (*) in TechEd 2013: After Today, Cloud Computing is No Longer a Spectator Sport [TechNet Blogs, June 3, 2013]

                                      We are now delivering on our vision with a wave of enterprise products built with this cloud-first approach: Windows Server & System Center 2012 R2 and the update to Windows Intune bring cloud-inspired innovation to the enterprise, and enable hybrid scenarios that cannot be duplicated anywhere in the industry.

                                      With this new wave, our partners and customers can do four key things:

                                      • Build a world-class datacenter without barriers, boundaries, or limitations.
                                      • Use a Cloud OS to innovate faster and better than ever before.
                                      • Embrace and control the countless ways users circumvent IT, but still enable productivity.
                                      • Get serious about the cloud with a partner who takes the cloud seriously.

                                      These developments shatter the obstacles which once stood in the way of turning traditional datacenters into modern datacenters, and which inhibited the natural progression to hybrid clouds. These hybrid scenarios are especially exciting – and Microsoft’s comprehensive support for them sets us apart from each and every other competitor in the tech industry.

                                      (*) Brad Anderson, Corporate Vice President (CVP) of Program Management leading the Windows Server and System Center Group (WSSC) in the Server & Tools Business. The rest of his above post will shed more light on the Microsoft achievements delivered in his sphere of activity. See also his In the Cloud blog for more details.

                                      Follow-up information: Transform the Datacenter [MSFTws2012 YouTube channel, Nov 20, 2012]

Bill Laing (*) shows how Windows Server 2012 helps increase agility and efficiency in the datacenter. Learn more at http://microsoft.com/ws2012
                                      (*) Bill Laing, Corporate Vice President (CVP) for Server and Cloud [Development]. Read also his Announcing New Windows Azure Services to Deliver “Hybrid Cloud” [Windows Azure blog, June 6, 2012] post.

                                      Introductory information: Webcast: From Data to Insights [sqlserver YouTube channel, April 2, 2013]

To better understand the impact of big data on the future of global business, Microsoft hosted an exclusive webcast briefing, “From data to insights”, produced in association with the Economist. In the webcast, you’ll hear from Tom Standage, digital editor of the Economist, on the social and economic benefits of mining data, followed by a moderated discussion featuring two Microsoft data experts, VP/Technical Fellow for Microsoft SQL Server Product Suite, Dave Campbell, and Technical Fellow, Server and Tools, Raghu Ramakrishnan, for an insider’s view of the trends and technologies driving the business of big data, as well as Microsoft’s big data strategy. To learn more about Microsoft big data solutions, visit http://www.microsoft.com/bigdata

                                      OR as described by Quentin Clark (*) in SQL Server 2014: Unlocking Real-Time Insights [TechNet Blogs, June 3, 2013]

                                      The next version of our data platform – SQL Server 2014 – is a key part of the day’s news. Designed and developed with our cloud-first principles in mind, it delivers built-in in-memory capabilities, new hybrid cloud scenarios and enables even faster data insights. […]

                                      Today, we’re delivering Hekaton’s in-memory OLTP in the box with SQL Server 2014. For our customers, “in the box” means they don’t need to buy specialized hardware or software and can migrate existing applications to benefit from performance gains. … SQL Server 2014 is helping businesses manage their data in nearly real-time. The ability to interact with your data and the system supporting business activities is truly transformative. […]
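As a rough illustration of what “in the box” means in practice: once a database has a memory-optimized filegroup, a Hekaton table is created with ordinary T-SQL, with no specialized hardware or separate product involved. The sketch below drives that DDL from Python; the server, database and table names are placeholders.

```python
import pyodbc

# Assumes a SQL Server 2014 instance whose database already contains a
# MEMORY_OPTIMIZED_DATA filegroup; all names here are placeholders.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=yourserver;DATABASE=tailspin;Trusted_Connection=yes;"
)
conn.autocommit = True  # let the DDL below commit immediately

conn.execute("""
CREATE TABLE dbo.SessionState (
    SessionId INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    Payload   VARBINARY(8000),
    ExpiresAt DATETIME2 NOT NULL
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA)
""")
```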

                                      Insert: Edgenet Gain Real-Time Access to Retail Product Data with In-Memory Technology [MSCloudOS YouTube channel, June 3, 2013]

                                      To ensure that its customers received timely, accurate product data, Edgenet decided to enhance its online selling guide with In-Memory OLTP in Microsoft SQL Server 2014.

                                      End of Insert

Delivering mission critical capabilities through new hybrid scenarios

SQL Server 2014 includes comprehensive, high-availability technologies that now extend seamlessly into Windows Azure to make the highest level of service level agreements possible for every application, while also reducing CAPEX and OPEX for mission-critical applications. Simplified cloud backup, cloud disaster recovery and easy migration to Windows Azure Virtual Machines are empowering new, easy to use, out-of-the-box hybrid capabilities.

                                      We’ve also improved the AlwaysOn features of the RDBMS with support for new scenarios, scale of deployment and ease of adoption. We continue to make major investments in our in-memory columnstore for performance and now compression, and this is deeply married to our business intelligence servers and Excel tools for faster business insights.

Unlocking real-time insights

Our big data strategy to unlock real-time insights continues with SQL Server 2014. We are embracing the role of data: it dramatically changes how business happens. Real-time data integration, new and large data sets, data signals from outside LOB systems, evolving analytics techniques and more fluid visualization and collaboration experiences are significant components of that change. Another foundational component is embracing cloud computing: nearly infinite scale, dramatically lowered cost for compute and storage, and data exchange between businesses. Data changes everything, and across the data platform we continue to democratize technology to bring new business value to our customers.

                                      (*) Quentin Clark, Corporate Vice President of Program Management leading the Data Platform Group. The rest of his above post emphasizes the great progress of the Microsoft SQL Server for which he also includes the below diagram:
                                      image

                                      Introductory information: Selling Windows 8 | Windows 8 business apps as big bet [msPartner YouTube channel, March 1, 2013]

                                      We recently sat down to talk Windows 8 with partners Scott Gosling from Data#3, Danny Burlage from Wortell and Carl Mazzanti from eMazzanti Technologies. In a conversation led by Erwin Visser (*), Windows Commercial, and our own Jon Roskill and Kat Tillman we discussed the business potential of Windows 8 and why apps are key. In this segment, learn why Windows 8 business apps are a big bet.

                                      TechEd North America 2013 – Windows 8.1 Enterprise Build 9415 [lyraull [Microsoft Spain] YouTube channel, June 4, 2013]

                                      During the keynote address, Iain McDonald, partner director of program management for Windows, [starting at [6:36]] detailed key business features in the recently announced Windows 8.1 update — including advances in security, management, mobility and networking — to offer the best business tablets with the most powerful operating system for today’s modern business needs.

                                      OR as described by Antoine Leblond (*) in Continuing the Windows 8 vision with Windows 8.1 [Blogging Windows, May 30, 2013]

                                      Windows 8.1 will advance the bold vision set forward with Windows 8 to deliver the next generation of PCs, tablets, and a range of industry devices, and the experiences customers — both consumers and businesses alike — need and will just expect moving forward. It’s Windows 8 even better. Not only will Windows 8.1 respond to customer feedback, but it will add new features and functionality that advance the touch experience and mobile computing’s potential.

Windows 8.1 will deliver improvements and enhancements in key areas like personalization, search, the built-in apps, Windows Store experience, and cloud connectivity. Windows 8.1 will also include big bets for business in areas such as management and security; we’ll have more to say on these next week at TechEd North America. Today, I am happy to share a “first look” at Windows 8.1 and outline some of the improvements, enhancements and changes customers will see. […]

                                      (*) Antoine Leblond, Corporate Vice President (CVP) of Windows Program Management. His above post from last Thursday was continued by Modern Business in Mind: Windows 8.1 at TechEd 2013 [June 3, 2013] from Erwin Visser (*) describing some of the features that businesses can look forward to in Windows 8.1 such as

                                      Networking features optimized for mobile productivity. Windows 8.1 improves mobile productivity for today’s workforce with new networking capabilities that take advantage of NFC-tagged and Wi-Fi [Miracast etc.] connected devices […]

Security enhancements for device proliferation and mobility. Security continues to be a top priority for companies across the world, so we’re making sure we continue to invest resources to help you protect your corporate data, applications and devices […]

                                      Improved management solutions to make BYOD a reality. As BYOD scenarios continue to grow in popularity among businesses, Windows 8.1 will make managing mobile devices even easier for IT Pros […]

More control over business devices. Businesses can more effectively deliver an intended experience to their end users – whether that be employees or customers. … Windows Embedded 8.1 Industry: our offering for industry devices like POS systems, ATMs, and digital signage that provides a broader set of device lockdown capabilities. […]

On June 26, at the Build developer conference in San Francisco, Microsoft will release a public preview of Windows 8.1 for Windows 8, Windows RT and Windows Embedded 8.1 Industry. Upgrading to Windows 8.1 is simple, as the update does not introduce any new hardware requirements and all existing Windows Store apps are compatible. […]

                                      (*) Erwin Visser, Senior Director of Windows Commercial Business Group

                                      OR putting all this together: Microsoft unveils what’s next for enterprise IT [press release, June 3, 2013]

                                      New wave of 2013 products brings it all together for hybrid cloud, mobile employees and modern application development.

                                      NEW ORLEANS — June 3, 2013 — At TechEd North America 2013, Microsoft Corp. introduced a portfolio of new solutions to help businesses thrive in the era of cloud computing and connected devices. In today’s keynote address, Server & Tools Corporate Vice President Brad Anderson and fellow executives showcased how new offerings across client, datacenter infrastructure, public cloud and application development help deliver the most comprehensive, connected enterprise platform.

“The products and services introduced today illustrate how Microsoft is the company that businesses can bet on as they embrace cloud computing, deliver critical applications, and empower employee productivity in new and exciting ways,” Anderson said. “Only Microsoft connects the dots for the enterprise from ‘client to cloud.’”

                                      Today’s keynote featured several customers, including luxury car manufacturer Aston Martin. The company is an example of the many enterprises that use the full range of Microsoft products and cloud platforms for IT success.

                                      Driving Strategy and Innovation with the Power of the Microsoft Cloud OS Vision [MSCloudOS YouTube channel, June 3, 2013]

                                      Behind every luxury sports car produced by Aston Martin is a sophisticated IT Infrastructure. The goal of the Aston Martin IT team is to optimize that infrastructure so it performs as efficiently as the production line it supports. This video describes how Aston Martin has used cloud and hybrid-based solutions to deliver innovation and strategy to the business.

                                      “Our staff’s sole purpose is to provide advanced technology that enables Aston Martin to build the most beautiful, iconic sports cars in the world,” said Daniel Roach-Rooke, IT infrastructure manager, Aston Martin. “From corporate desktops and software development to private and public cloud, Microsoft is our IT vendor of choice.”

                                      Fueling hybrid cloud

                                      At TechEd, Microsoft introduced upcoming releases of its key enterprise IT solutions for hybrid cloud: Windows Server 2012 R2, System Center 2012 R2 and SQL Server 2014. Available in preview later this month, the products break down boundaries between customer datacenters, service provider datacenters and Windows Azure. Using them, enterprises can make IT services and applications available across clouds and scale them up or down according to business needs. Windows Server 2012 R2 and System Center 2012 R2 are slated to release by the end of calendar year 2013, with SQL Server 2014 slated for release shortly thereafter.

                                      With advances in virtualization, software-defined networking, data storage and recovery, in-memory transaction processing, and more, these solutions were engineered with Microsoft’s “cloud-first” focus, including a faster pace of development and release to market. They incorporate Microsoft’s experience running large-scale cloud services, connect to Windows Azure and work together to provide a consistent platform for powerful hybrid cloud scenarios. More information can be found at blog posts by Anderson about Windows Server and System Center and by Quentin Clark about SQL Server.

                                      Further showcasing Microsoft’s hybrid cloud advantage, today the company also announced the public preview of Windows Azure BizTalk Services for enterprise integration solutions, both on-premises and in the cloud. In addition, Windows Azure now offers industry-leading, per-minute billing for virtual machines, Web roles and worker roles that improves cloud economics for customers. More information is available at the Windows Azure blog.

                                      Windows 8.1: Empowering modern business

                                      During the keynote address, Iain McDonald, partner director of program management for Windows, detailed key business features in the recently announced Windows 8.1 update — including advances in security, management, mobility and networking — to offer the best business tablets with the most powerful operating system for today’s modern business needs.

                                      New networking features in Windows 8.1 aim to improve mobile productivity for today’s workforce, with system-on-a-chip (SoC)-integrated mobile broadband, native Miracast wireless display and near field communication (NFC)-based pairing with enterprise printers. Security is also enhanced in the new update to address device proliferation and to protect corporate data and applications with fingerprint-based biometrics, multifactor authentication on tablets and remote business data removal to securely wipe company data from a device. And improved management capabilities in Windows 8.1 give customers more flexibility with supported options such as System Center Configuration Manager 2012 R2 and new mobile device management (MDM) solutions with third-party MDM partners, in addition to updated Windows Intune support.

                                      On June 26, at the Build 2013 developer conference in San Francisco, Microsoft will release a public preview of the Windows 8.1 update for Windows 8 and Windows RT customers. More information on new features found in Windows 8.1 for businesses, including updated Windows deployment guidance for businesses, is available on the Windows for your Business blog.

                                      Fostering modern application development

                                      Microsoft today also introduced Visual Studio 2013 and demonstrated new capabilities for improving the application lifecycle, both on-premises and in the cloud. A preview of Visual Studio 2013, with its new enhancements for agile portfolio planning, developer productivity, team collaboration, quality enablement and DevOps, is slated for release in the coming weeks, timed with the Build conference.

                                      Furthermore, Microsoft today announced an agreement to acquire InCycle Software Inc.’s InRelease Business Unit. InRelease is a leading release management solution for Microsoft .NET and Windows Server applications. This acquisition will extend Microsoft’s offerings in the application lifecycle management and DevOps market. More information is available on S. Somasegar’s blog.

                                      In addition, the company today announced new benefits that enable Microsoft Developer Network (MSDN) subscribers to more easily develop and test more applications with Windows Azure. New enhancements include up to $150 worth of Windows Azure platform services per month at no additional cost for Visual Studio Professional, Premium or Ultimate MSDN subscribers and new use rights to run select MSDN software in the cloud.

                                      Founded in 1975, Microsoft (Nasdaq “MSFT”) is the worldwide leader in software, services and solutions that help people and businesses realize their full potential.


                                      Software defined server without Microsoft: HP Moonshot

                                      Updates as of Dec 6, 2013 (8 months after the original post):

                                      image

                                      Martin Fink, CTO and Director of HP Labs, Hewlett-Packard Company [Oct 29, 2013]:

Cloud, Social, Big Data and Mobile are what we are referring to as this “New Style of IT” [when talking about the slide shown above]

                                      Through the Telescope: 3 Minutes on HP Moonshot [HewlettPackardVideos YouTube channel, July 24, 2013]

                                      Steven Hagler (Senior Director, HP Americas Moonshot) provides insight on Moonshot, why it’s right for the market, and what it means for your business. http://hp.com/go/moonshot

                                      HIGHLY RECOMMENDED READING:
                                      HP Offers Exclusive Peek Inside Impending Moonshot Servers [Enterprise Tech, Nov 26, 2013]: “The company is getting ready to launch a bunch of new server nodes for Moonshot in a few weeks”.
– So far, the simplest and most understandable information is provided in the Visual Configuration Moonshot diagram set: http://www.goldeneggs.fi/documents/GE-HP-MOONSHOT-A.pdf  The site also includes full visualisations of all x86 rack, desktop and blade servers.

                                      From HP Launches Investment Solutions to Ease Organizations’ Transitions to “New Style of IT” [press release, Dec 6, 2013]

                                      The HP accelerated migration program for cloud—helps …

                                      The HP Pre-Provisioning Solution—lets …

                                      New investment solutions for HP Moonshot servers and HP Converged Systems—provide customers and channel partners with quick access to the latest HP products through a simple, scalable and predictable monthly payment that aligns technology and financial requirements to business needs.   

                                      Access the world’s first software defined server [HP offering, Nov 27, 2013]
                                      With predictable and scalable monthly payments

                                      HP Moonshot Financing
Cloud, Mobility, Security and Big Data require a different level of technology efficiency and scalability. Traditional systems may no longer be able to handle increasing Internet workloads with optimal performance. Having an investment strategy that gives you access to newer technology such as HP Moonshot allows you to meet the requirements of the New Style of IT.
                                      A simple and flexible payment structure can help you access the latest technology on your terms.
                                      Why leverage a predictable monthly payment?
                                      • Provides financial flexibility to scale up your business
                                      • May help mitigate the financial risk of your IT transformation
• Enables IT refresh cycles to keep up with the latest technology
                                      • May help improve your cash flow
                                      • Offers predictable monthly payments which can help you stay within budget
                                      How does it work?
                                      • Talk to your HP Sales Rep about acquiring HP Moonshot using a predictable monthly payment
• Expand your capacity easily with a simple add-on payment
                                      • Add spare capacity needed for even greater agility
                                      • Set your payment terms based on your business needs
                                      • After an agreed term, you’ll be able to refresh your technology

                                      From The HP Moonshot team provides answers to your questions about the datacenter of the future [The HP Blog Hub, as of Aug 29, 2013]

                                      Q: WHAT IS THE FUNDAMENTAL IDEA BEHIND THE HP MOONSHOT SYSTEM?

A: The idea is simple: use energy-efficient CPUs attuned to a particular application to achieve radical power, space and cost savings. Stated another way: creating software-defined servers for specific applications that run at scale.

                                      Q: WHAT IS INNOVATIVE ABOUT THE HP MOONSHOT ARCHITECTURE?

                                      A: The most innovative characteristic of HP Moonshot is the architecture. Everything that is a common resource in a traditional server has been converged into the chassis. The power, cooling, management, fabric, switches and uplinks are all shared across 45 hot-pluggable cartridges in a 4.3U chassis.
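
To put the shared-chassis numbers in perspective, here is a quick back-of-the-envelope density estimate (a sketch: the 45-cartridge and 4.3U figures are from the answer above; the 42U rack and one server node per cartridge are my assumptions, and the quad-node cartridges mentioned later in this post would quadruple the node count):

```python
# Back-of-the-envelope Moonshot rack density estimate.
# From the Q&A above: 45 hot-pluggable cartridges share one 4.3U chassis.
# Assumptions (mine, not HP's): a standard 42U rack, one server node per cartridge.
CARTRIDGES_PER_CHASSIS = 45
CHASSIS_HEIGHT_U = 4.3
RACK_HEIGHT_U = 42

chassis_per_rack = int(RACK_HEIGHT_U // CHASSIS_HEIGHT_U)    # 9 chassis
nodes_per_rack = chassis_per_rack * CARTRIDGES_PER_CHASSIS   # 405 nodes

print(f"{chassis_per_rack} chassis per 42U rack -> {nodes_per_rack} server nodes")
```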

                                      Q: EXPLAIN WHAT IS MEANT BY “SOFTWARE DEFINED” SERVER

A: Software-defined servers achieve optimal useful work per watt by specializing for a given workload: matching a software application with available technology that can provide optimal performance. For example, the first Moonshot server is tuned for the web front-end LAMP (Linux/Apache/MySQL/PHP) stack. In the most extreme case of a future FPGA (Field Programmable Gate Array) cartridge, the hardware truly reflects the exact algorithm required.

                                      Q: DESCRIBE THE FABRIC THAT HAS BEEN INTEGRATED INTO THE CHASSIS

                                      A: The HP Moonshot 1500 Chassis has been built for future SOC designs that will require a range of network capabilities including cartridge to cartridge interconnect. Additionally, different workloads will have a range of storage needs. 

There are four separate and independent fabrics that support a range of current and future capabilities: 8 lanes of Ethernet; a storage fabric (6Gb SATA) that enables shared storage amongst cartridges or storage expansion to a single cartridge; a dedicated iLO management network to manage all the servers as one; and a cluster fabric with point-to-point connectivity and low-latency interconnect between servers.
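
Restated as data, the four independent fabrics just described look like this (a trivial sketch; the dictionary merely reorganizes the Q&A text above for clarity):

```python
# The four independent fabrics of the HP Moonshot 1500 chassis, per the Q&A above.
FABRICS = {
    "Ethernet":   "8 lanes of Ethernet to each cartridge",
    "storage":    "6Gb SATA; shared storage amongst cartridges or expansion to one",
    "management": "dedicated iLO network to manage all the servers as one",
    "cluster":    "point-to-point, low-latency interconnect between servers",
}

for name, role in FABRICS.items():
    print(f"{name:>10}: {role}")
```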

[slide image: the three announced ARM-based Moonshot cartridges]

                                      Martin Fink, CTO and Director of HP Labs, Hewlett-Packard Company [Oct 29, 2013]:

                                      We’ve actually announced three ARM-based cartridges. These are available in our Discovery Labs now, and they’ll be shipping next year with new processor technology. [When talking about the slide shown above.]

                                      Calxeda Midway in HP Moonshot [Janet Bartleson YouTube channel, Oct 28, 2013]

                                      HP’s Paul Santeler encourages you to test Calxeda’s Midway-based Moonshot server cartridges in the HP Discovery Labs. http://www.hp.com/go/moonshot http://www.calxeda.com

See details about the latest and future Calxeda SoCs in the closing part of this Dec 6 update

                                      @SC13: HP Moonshot ProLiant m800 Server Cartridge with Texas Instruments [Janet Bartleson YouTube channel, Nov 26, 2013]

@SC13, Texas Instruments’ Arnon Friedmann shows the HP ProLiant m800 Server Cartridge with 4 66AK2H12 KeyStone II SoCs, each with 4 ARM Cortex-A15 cores and 8 C66x DSP cores, altogether providing 500 gigaflops of DSP performance and 8 gigabytes of memory on the server cartridge. It’s lower power, lower cost than traditional servers.

See details about the latest Texas Instruments DSP+ARM SoCs after the Calxeda section in the closing part of this Dec 6 update

                                      The New Style of IT & HP Moonshot: Keynote by HP’s Martin Fink at ARM TechCon ’13 [ARMflix YouTube channel, recorded on Oct 29, published on Nov 11, 2013]

Keynote Presentation: The New Style of IT. Speaker: Martin Fink, CTO and Director of HP Labs, Hewlett-Packard Company. It’s an exciting time to be in technology. The IT industry is at a major inflection point driven by four generation-defining trends: the cloud, social, Big Data, and mobile. These trends are forever changing how consumers and businesses communicate, collaborate, and access information. And to accommodate these changes, enterprises, governments and fast-growing companies desperately need a “New Style of IT.” Shaping the future of IT starts with a radically different approach to how we think about compute — for example, in servers, HP has a game-changing new category that requires 80% less space, uses 89% less energy, costs 77% less, and is 97% less complex. There’s never been a better time to be part of the ecosystem and usher in the next generation of innovation.

                                      From Big Data and the future of computing – A conversation with John Sontag [HP Enterprise 20/20 Blog, October 28, 2013]

                                      20/20 Team: Where is HP today in terms of helping everyone become a data scientist?
                                      John Sontag: For that to happen we need a set of tools that allow us to be data scientists in more than the ad hoc way I just described. These tools should let us operate productively and repeatably, using vocabulary that we can share – so that each of us doesn’t have to learn the same lessons over and over again. Currently at HP, we’re building a software tool set that is helping people find value in the data they’re already surrounded by. We have HAVEn for data management, which includes the Vertica data store, and Autonomy for analysis. For enterprise security we have ArcSight and ThreatCentral. We have our work around StoreOnce to compress things, and Express Query to allow us to consume data in huge volumes. Then we have hardware initiatives like Moonshot, which is bringing different kinds of accelerators to bear so we can actually change how fast – and how effectively – we can chew on data.
                                      20/20 Team: And how is HP Labs helping shape where we are going?
                                      John Sontag: One thing we’re doing on the software front is creating new ways to interrogate data in real time through an interface that doesn’t require you to be a computer scientist.  We’re also looking at how we present the answers you get in a way that brings attention to the things you most need to be aware of. And then we’re thinking about how to let people who don’t have massive compute resources at their disposal also become data scientists.
                                      20/20 Team: What’s the answer to that?
                                      John Sontag: For that, we need to rethink the nature of the computer itself. If Moonshot is helping us make computers smaller and less energy-hungry, then our work on memristors will allow us to collapse the old processor/memory/storage hierarchy, and put processing right next to the data. Next, our work on photonics will help collapse the communication fabric and bring these very large scales into closer proximity. That lets us combine systems in new and interesting ways. And then we’re thinking about how to package these re-imagined computers into boxes of different sizes that match the needs of everyone from the individual to the massive, multinational entity. On top of all that, we need to reduce costs – if we tried to process all the data that we’re predicting we’ll want to at today’s prices, we’d collapse the world economy – and we need to think about how we secure and manage that data, and how we deliver algorithms that let us transform it fast enough so that you, your colleagues, and partners across the world can conduct experiments on this data literally as fast as we can think them up.
                                      About John Sontag:
                                      John Sontag is vice president and director of systems research at HP Labs. The systems research organization is responsible for research in memristor, photonics, physical and system architectures, storing data at high volume, velocity and variety, and operating systems. Together with HP business units and partners, the team reaches from basic research to advanced development of key technologies.
                                      With more than 30 years of experience at HP in systems and operating system design and research, Sontag has had a variety of leadership roles in the development of HP-UX on PA-RISC and IPF, including 64-bit systems, support for multiple input/output systems, multi-system availability and Symmetric Multi-Processing scaling for OLTP and web servers.
                                      Sontag received a bachelor of science degree in electrical engineering from Carnegie Mellon University.

                                      Meet the Innovators [HewlettPackardVideos YouTube channel, May 23, 2013]

                                      Meet those behind the innovative technology that is HP Project Moonshot http://www.hp.com/go/moonshot

                                      From Meet the innovators behind the design and development of Project Moonshot [The HP Blog Hub, June 6, 2013]

This video introduces you to key members of the HP team behind the innovative technology that fundamentally changes how hyperscale servers are built and operated, such as:
                                      • Chandrakant Patel – HP Senior Fellow and HP Labs Chief Engineer
                                      • Paul Santeler  – Senior Vice President and General Manager of the HyperScale Business Unit
                                      • Kelly Pracht – Moonshot Hardware Platform Manager, HyperScale Business Unit
                                      • Dwight Barron – HP Fellow, Chief Technologist, HyperScale Business Unit

                                      From Six IT technologies to watch [HP Enterprise 20/20 Blog, Sept 5, 2013]

                                      1. Software-defined everything
                                      Over the last couple of years we have heard a lot about software defined networks (SDN) and more recently, software defined data center (SDDC). There are fundamentally two ways to implement a cloud. Either you take the approach of the major public cloud providers, combining low-cost skinless servers with commodity storage, linked through cheap networking. You establish racks and racks of them. It’s probably the cheapest solution, but you have to implement all the management and optimization yourself. You can use software tools to do so, but you will have to develop the policies, the workflows and the automation.
Alternatively you can use what is becoming known as “converged infrastructure,” a term originally coined by HP, but now used by all our competitors. Servers, storage and networking are integrated in a single rack, or a series of interconnected ones, and the management and orchestration software included in the offering provides optimal use of the environment. You get increased flexibility and are able to respond faster to requests and opportunities.
                                      We all know that different workloads require different characteristics. Infrastructures are typically implemented using general purpose configurations that have been optimized to address a very large variety of workloads. So, they do an average job for each. What if we could change the configuration automatically whenever the workload changes to ensure optimal usage of the infrastructure for each workload? This is precisely the concept of software defined environments. Configurations are no longer stored in the hardware, but adapted as and when required. Obviously this requires more advanced software that is capable of reconfiguring the resources.
A software-defined data center is described as a data center where the infrastructure is virtualized and also delivered as a service. Control of the data center is automated by software – meaning hardware configuration is maintained through intelligent software systems. Three core components comprise the SDDC: server virtualization, network virtualization and storage virtualization. It remains to be said that some workloads still require physical systems (often referred to as bare metal), hence the importance of projects such as OpenStack’s Ironic, which could be defined as a hypervisor for physical environments.
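
The core idea of the last two paragraphs, configuration held in software and re-applied when the workload changes, can be sketched in a few lines of Python. Everything below (the workload names, the resource profiles, the reconfigure function) is invented for illustration and is not an HP or OpenStack API:

```python
from dataclasses import dataclass

# Illustrative resource profiles keyed by workload type. In a software-defined
# data center the configuration lives in software like this and is re-applied
# on demand, rather than being fixed in the hardware.
@dataclass
class Profile:
    cpu_cores: int
    memory_gb: int
    network_gbps: int

PROFILES = {
    "web-frontend": Profile(cpu_cores=4, memory_gb=8, network_gbps=10),
    "analytics":    Profile(cpu_cores=32, memory_gb=256, network_gbps=40),
    "archive":      Profile(cpu_cores=2, memory_gb=16, network_gbps=1),
}

def reconfigure(node_id: str, workload: str) -> Profile:
    """Re-apply the matching resource profile when a node's workload changes."""
    profile = PROFILES[workload]
    # A real SDDC controller would push this to hypervisors, switches and arrays.
    print(f"{node_id}: applying '{workload}' profile -> {profile}")
    return profile

reconfigure("node-17", "web-frontend")
reconfigure("node-17", "analytics")  # same hardware, new software-held configuration
```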

                                      2. Specialized servers

As I mentioned, not all workloads are equal, yet they run on the same, general-purpose servers (typically x86). What if we create servers that are optimized for specific workloads? In particular, when developing cloud environments delivering multi-tenant SaaS services, one could well envisage the use of servers specialized for a specific task, for example video manipulation or dynamic web service management. Developing efficient, low-energy specialized servers that can be configured through software is what HP’s Project Moonshot is all about. The technology is still in its infancy, and there is much more to come. Imagine about 45 server/storage cartridges linked through three fabrics (for networking, storage and high-speed cartridge-to-cartridge interconnections), sharing common elements such as network controllers, management functions and power management. If you then build the cartridges using low-energy servers, you reduce energy consumption by nearly 90%. If you build SaaS-type environments, using multi-tenant application modules, do you still need virtualization? This simplifies the environment, reduces the cost of running it and optimizes the use of server technology for every workload.

                                      Particularly for environments that constantly run certain types of workloads, such as analyzing social or sensor data, the use of specialized servers can make the difference. This is definitely an evolution to watch.
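
A worked example of what “nearly 90%” means in practice (a sketch: the 10 kW baseline is an arbitrary illustration, while the savings percentages are HP’s own Moonshot claims, quoted in the April 8, 2013 press kit later in this post):

```python
# HP's claims for Moonshot vs. traditional servers (April 8, 2013 press kit,
# quoted later in this post): 89% less energy, 80% less space, 77% less cost.
SAVINGS = {"energy": 0.89, "space": 0.80, "cost": 0.77}

baseline_power_kw = 10.0  # arbitrary illustrative baseline for a row of x86 servers
moonshot_power_kw = baseline_power_kw * (1 - SAVINGS["energy"])

print(f"{baseline_power_kw:.1f} kW baseline -> {moonshot_power_kw:.1f} kW on Moonshot")
# -> 10.0 kW baseline -> 1.1 kW on Moonshot
```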

                                      3. Photonics

                                      Let’s now complement those specialized servers with photonic based connections enabling flat, hyper-efficient networks boosting bandwidth, and we have an environment that is optimized to deliver the complex tasks of analyzing and acting upon signals provided by the environment in its largest sense.

But technology is going even further. I talked about the three fabrics; over time, why not use photonics to improve the speed of the fabrics themselves, increasing the overall compute speed? We are not there yet, but early experiments with photonic backplanes for blade systems have shown overall compute speed increased by up to a factor of seven. That should be the second step.

                                      The third step takes things further. The specialized servers I talked about are typically system on a chip (SoC) servers, in other words, complete computers on a single chip. Why not use photonics to link those chips with their outside world? On-chip lasers have been developed in prototypes, so we are not that far out. We could even bring things one step further and use photonics within the chip itself, but that is still a little further out. I can’t tell you the increase in compute power that such evolutions will provide you, but I would expect it to be huge.

                                      4. Storage
Storage is at a crossroads. On the one hand, hard disk drives (HDDs) have improved drastically over the last 20 years, both in read speed and in density. I still remember the 20MB hard disk drive of the early ’80s, weighing 125 kg. When I compare that with the 3TB drive I bought a couple of months ago for my home PC, I can easily picture this evolution. But then the SSD (solid state disk) appeared. Where an HDD read will take you 4 ms, an SSD read is down at 0.05 ms.

Using nanotechnologies, HP Labs has developed prototypes of the memristor, a new approach to data storage that is faster than Flash memory and consumes far less energy. Such a device could store up to 1 petabit of information per square centimeter and could replace both memory and storage, speeding up access to data and allowing an order-of-magnitude increase in the amount of data stored. Since then, HP has been busy preparing production of these devices. First production units should be available towards the end of 2013 or early in 2014. It will transform our storage approaches completely.
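
Spelling out the latency gap quoted above, using only the 4 ms and 0.05 ms figures from the previous paragraph:

```python
# Read-latency figures quoted above for a single random read.
hdd_read_s = 4e-3     # 4 ms per HDD read
ssd_read_s = 0.05e-3  # 0.05 ms per SSD read

print(f"SSD is {hdd_read_s / ssd_read_s:.0f}x faster per read")
print(f"Serialized reads per second: HDD {1 / hdd_read_s:.0f}, SSD {1 / ssd_read_s:.0f}")
# -> SSD is 80x faster; roughly 250 vs 20,000 serialized reads per second
```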


                                      Details about the latest and future Calxeda SoCs:

                                      Calxeda EnergyCore ECX-2000 family – ARM TechCon ’13 [ARMflix YouTube channel, recorded on Oct 30, 2013]

                                      Calxeda tells us about their new EnergyCore ECX-2000 product line based on ARM Cortex-A15. http://www.calxeda.com/ecx-2000-family/

                                      From ECX-2000 Product Brief [October, 2013]

                                      The Calxeda EnergyCore ECX-2000 Series is a family of SoC (Server-on-Chip) products that delivers the power efficiency of ARM® processors, and the OpenStack, Linux, and virtualization software needed for modern cloud infrastructures. Using the ARM Cortex A15 quad-core processor, the ECX-2000 delivers roughly twice the performance, three times the memory bandwidth, and four times the memory capacity of the ground-breaking ECX-1000. It is extremely scalable due to the integrated Fleet Fabric Switch, while the embedded Fleet Engine simultaneously provides out-of-band control and intelligence for autonomic operation.

In addition to enhanced performance, the ECX-2000 provides hardware virtualization support via the KVM and Xen hypervisors. Coupled with certified support for Ubuntu 13.10 and the Havana OpenStack release, this marks the first time an ARM SoC is ready for cloud computing. The Fleet Fabric enables the highest network and interconnect bandwidth in the MicroServer space, making this an ideal platform for streaming media and network-intensive applications.

                                      The net result of the EnergyCore SoC architecture is a dramatic reduction in power and space requirements, allowing rapidly growing data centers to quickly realize operating and capital cost savings.

[image]

                                      Scalability you can grow into. An integrated EnergyCore Fabric Switch within every SoC provides up to five 10 Gigabit lanes for connecting thousands of ECX-2000 server nodes into clusters capable of handling distributed applications at extreme scale. Completely topology agnostic, each SoC can be deployed to work in a variety of mesh, grid, or tree network structures, providing opportunities to find the right balance of network throughput and fault resiliency for any given workload.

                                      Fleet Fabric Switch
                                      • Integrated 80Gb (8×8) crossbar switch with through-traffic support
                                      • Five (5) 10Gb external channels, three (3) 10Gb internal channels
                                      • Configurable topology capable of connecting up to 4096 nodes
• Dynamic Link Speed Control from 1Gb to 10Gb to minimize power and maximize performance (see the sketch after this list)
                                      • Network Proxy Support maintains network presence even with node powered off
                                      • In-order flow delivery
                                      • MAC learning provider support for virtualization
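
As an illustration of the “Dynamic Link Speed Control” bullet above, here is a toy model of stepping one fabric link between 1Gb and 10Gb according to offered load. The selectable speed ladder, the headroom factor and the policy itself are my assumptions; Calxeda has not published its algorithm:

```python
# Toy model of the "Dynamic Link Speed Control" feature on one fabric link
# (illustrative only: the speed ladder, headroom factor and policy are
# assumptions, not Calxeda's published design).
SPEEDS_GBPS = [1, 2.5, 5, 10]  # assumed selectable rates between 1Gb and 10Gb

def pick_speed(offered_load_gbps: float, headroom: float = 1.25) -> float:
    """Choose the slowest rate that still leaves headroom, minimizing power
    while preserving performance."""
    for speed in SPEEDS_GBPS:
        if offered_load_gbps * headroom <= speed:
            return speed
    return SPEEDS_GBPS[-1]  # saturate at the 10Gb maximum

for load in (0.3, 1.8, 4.2, 9.0):
    print(f"{load:>4} Gb/s offered -> run link at {pick_speed(load)} Gb/s")
```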

                                      ARM Servers and Xen — Hypervisor Support at Hyperscale – Larry Wikelius, [Co-Founder of] Calxeda [TheLinuxFoundation YouTube channel, Oct 1, 2013]

[Xen User Summit 2013] The emergence of power-optimized hyperscale servers is leading to a revolution in Data Center design. The intersection of this revolution with the growth of Cloud Computing, Big Data and Scale Out Storage solutions is resulting in innovation at a rate and pace in the Server Industry that has not been seen for years. One particular example of this innovation is the deployment of ARM based servers in the Data Center and the impact these servers have on Power, Density and Scale. In this presentation we will look at the role that Xen is playing in the Revolution of ARM based server design and deployment and the impact on applications, systems management and provisioning.

                                      Calxeda Launches Midway ARM Server Chips, Extends Roadmap [EnterpriseTech, Oct 28, 2013]

                                      ARM server chip supplier Calxeda is just about to ship its second generation of EnergyCore processors for hyperscale systems and most of its competitors are still working on their first products. Calxeda is also tweaking its roadmap to add a new chip to its lineup, which will bridge between the current 32-bit ARM chips and its future 64-bit processors.
                                      There is going to be a lot of talk about server-class ARM processors this week, particularly with ARM Holdings hosting its TechCon conference in Santa Clara.
A month ago, EnterpriseTech told you about the “Midway” chip that Calxeda had in the works, as well as its roadmap to get beefier 64-bit cores and extend its Fleet Services fabric to allow for more than 100,000 nodes to be linked together.
The details were a little thin on the Midway chip, but we now know that it will be commercialized as the ECX-2000, and that Calxeda is sending out samples to server makers right now. The plan is to have the ECX-2000 generally available by the end of the year, and that is why the company is ready to talk about some feeds and speeds. Karl Freund, vice president of marketing at Calxeda, walked EnterpriseTech through the details.

[image]

The Midway chip is fabricated in the same 40 nanometer process as the existing “High Bank” ECX-1000 chip that Calxeda first put into the field in November 2011 in the experimental “Redstone” hyperscale servers from Hewlett-Packard. That 32-bit chip, based on the ARM Cortex-A9 core, was subsequently adopted in systems from Penguin Computing, Boston, and a number of other hyperscale datacenter operators who did proofs of concept with the chips. The ECX-1000 has four cores and was somewhat limited in performance and definitely limited in main memory, which topped out at 4 GB across the four-core processor. But the ECX-2000 addresses these issues.
The ECX-2000 is based on ARM Holdings’ Cortex-A15 core and has the 40-bit physical memory extensions, which allow for up to 16 GB of memory to be physically attached to each socket. With the 40-bit physical addressing added with the Cortex-A15, the memory controller can, in theory, address up to 1 TB of main memory; this is called Large Physical Address Extension (LPAE) in the ARM lingo, and it maps the core’s 32-bit virtual addressing onto a 40-bit physical address space. Each core on the ECX-2000 has 32 KB of L1 instruction cache and 32 KB of L1 data cache, and ARM licensees are allowed to scale the L2 cache as they see fit. The ECX-2000 has 4 MB of L2 cache shared across the four cores on the die. These are exactly the same L1 and L2 cache sizes as used in the prior ECX-1000 chips.
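
The 1 TB ceiling follows directly from the 40-bit address width; a quick check (the arithmetic only, nothing vendor-specific):

```python
# 40-bit physical addressing: the theoretical ceiling is 2**40 bytes.
ceiling_bytes = 2**40
print(f"2**40 bytes = {ceiling_bytes // 2**30} GiB = 1 TiB")

# The 16 GB actually attachable per ECX-2000 socket is a small slice of that:
per_socket_bytes = 16 * 2**30
print(f"16 GiB is {per_socket_bytes / ceiling_bytes:.1%} of the 40-bit space")
```
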
The Cortex-A15 design was created to scale to 2.5 GHz, but on any chip the amount of energy consumed and heat radiated grows progressively larger as clock speeds go up. At a certain point, it just doesn’t make sense to push clock speeds. Moreover, every drop in clock speed gives a proportionately larger increase in thermal efficiency, and this is why, says Freund, Calxeda is making its implementation of the Cortex-A15 top out at 1.8 GHz. The company will offer lower-speed parts running at 1.1 GHz and 1.4 GHz for customers that need an even better thermal profile or a cheaper part where low cost is more important than raw performance or thermals.
                                      What Calxeda and its server and storage array customers are focused on is the fact that the Midway chip running at 1.8 GHz has twice the integer, floating point, and Java performance of a 1.1 GHz High Bank chip. That is possible, in part, because the new chip has four times the main memory and three times the memory bandwidth as the old chip in addition to a 64 percent boost in clock speed. Calxeda is not yet done benchmarking systems using the chips to get a measure of their thermal efficiency, but is saying that there is as much as a 33 percent boost in performance per watt comparing old to new ECX chips.
                                      The new ECX-2000 chip has a dual-core Cortex-A7 chip on the die that is used as a controller for the system BIOS as well as a baseboard management controller and a power management controller for the servers that use them. These Fleet Engines, as Calxeda calls them, eliminate yet another set of components, and therefore their cost, in the system. These engines also control the topology of the Fleet Services fabric, which can be set up in 2D torus, mesh, butterfly tree, and fat tree network configurations.
                                      The Fleet Services fabric has 80 Gb/sec of aggregate bandwidth and offers multiple 10 Gb/sec Ethernet links coming off the die to interconnect server nodes on a single card, multiple cards in an enclosure, multiple enclosures in a rack, and multiple racks in a data center. The Ethernet links are also used to allow users to get to applications running on the machines.
                                      Freund says that the ECX-2000 chip is aimed at distributed, stateless server workloads, such as web server front ends, caching servers, and content distribution. It is also suitable for analytics workloads like Hadoop and distributed NoSQL data stores like Cassandra, all of which tend to run on Linux. Both Red Hat and Canonical are cooking up commercial-grade Linuxes for the Calxeda chips, and SUSE Linux is probably not going to be far behind. The new chips are also expected to see action in scale-out storage systems such as OpenStack Swift object storage or the more elaborate Gluster and Ceph clustered file systems. The OpenStack cloud controller embedded in the just-announced Ubuntu Server 13.10 is also certified to run on the Midway chip.
                                      Hewlett-Packard has confirmed that it is creating a quad-node server cartridge for its “Moonshot” hyperscale servers, which should ship to customers sometime in the first or second quarter of 2014. (It all depends on how long HP takes to certify the system board.) Penguin Computing, Foxconn, Aaeon, and Boston are expected to get beta systems out the door this year using the Midway chip and will have them in production in the first half of next year. Yes, that’s pretty vague, but that is the server business, and vagueness is to be expected in such a young market as the ARM server market is.
                                      Looking ahead, Calxeda is adding a new processor to its roadmap, code-named “Sarita.” Here’s what the latest system-on-chip roadmap looks like now:

[image: Calxeda system-on-chip roadmap]

The future “Lago” chip is the first 64-bit chip that will come out of Calxeda, and it is based on the Cortex-A57 design from ARM Holdings – one of several ARMv8 designs, in fact. (The existing Calxeda chips are based on the ARMv7 architecture.)
                                      Both Sarita and Lago will be implemented in TSMC’s 28 nanometer processes, and that shrink from the current 40 nanometer to 28 nanometer processes is going to allow for a lot more cores and other features to be added to the die and also likely a decent jump in clock speed, too. Freund is not saying at the moment which way it will go.
                                      But what Freund will confirm is that Sarita will be pin-compatible with the existing Midway chip, meaning that server makers who adopt Midway will have a processor bump they can offer in a relatively easy fashion. It will also be based on the Cortex-A57 cores from ARM Holdings, and will sport four cores on a die that deliver about a 50 percent performance increase compared to the Midway chips.
                                      The Lago chips, we now know, will scale to eight cores on a die and deliver about twice the performance of the Midway chips. Both Lago and Sarita are on the same schedule, in fact, and they are expected to tape out this quarter. Calxeda expects to start sampling them to customers in the second quarter of 2014, with production quantities being available at the end of 2014.
                                      Not Just Compute, But Networking, Too
                                      As important as the processing is to a system, the Fleet Services fabric interconnect is perhaps the key differentiator in its design. The current iteration of that interconnect, which is a distributed Layer 2 switch fabric that is spread across each chip in a cluster, can scale across 4,096 nodes without requiring top-of-rack and aggregation switches.

[image]

Both of the Lago and Sarita chips will be using the Fleet Services 2.0 interconnect that is now being launched with Midway. This iteration of the interconnect has all kinds of tweaks and nips and tucks but no scalability enhancements beyond the 4,096 nodes in the original fabric.
Freund says that the Fleet Services 3.0 fabric, which allows the distributed switch architecture to scale above 100,000 nodes in a flat network, will probably now come with the “Ratamosa” chips in 2015. It was originally – and loosely – scheduled for Lago next year. The circuits that make up the fabric interconnect are not substantially different, says Freund, but the scalability is enabled through software. It could be that customers are not going to need such scalability as rapidly as Calxeda originally thought.
                                      The “Navarro” kicker to the Ratamosa chip is presumably based on the ARMv9 architecture, and Calxeda is not saying anything about when we might see that and what properties it might have. All that it has said thus far is that it is aimed at the “enterprise server era.”


                                      Details about the latest Texas Instruments DSP+ARM SoCs:

                                      A Better Way to Cloud [MultiVuOnlineVideo YouTube channel, Nov 13, 2012]

                                      To most technologists, cloud computing is about applications, servers, storage and connectivity. To Texas Instruments Incorporated (TI) (NASDAQ: TXN) it means much more. Today, TI is unveiling a BETTER way to cloud with six new multicore System-on-Chips (SoCs). Based on its award winning KeyStone architecture, TI’s SoCs are designed to revitalize cloud computing, inject new verve and excitement into pivotal infrastructure systems and, despite their feature rich specifications and superior performance, actually reduce energy consumption. To view Multimedia News Release, go to http://www.multivu.com/mnr/54044-texas-instruments-keystone-multicore-socs-revitalize-cloud-applications

                                      Infinite Scalability in Multicore Processors [Texas Instruments YouTube channel, Aug 27, 2012]

Over the years, our industry has preached how different types of end equipment and applications are best served by distinctive multicore architectures tailored to each. There are even those applications, such as high performance computing, which can be addressed by more than one type of multicore architecture. Yet most multicore devices today tend to be suited for a specific approach or a particular set of markets. This keynote address, from the 2012 Multicore Developer’s Conference, touches upon why the market needs an “infinitely scalable” multicore architecture which is both scalable and flexible enough to support disparate markets and the varied ways in which certain applications are addressed. The speaker presents examples of how a single multicore architecture can be scalable enough to address the needs of various high performance markets, including cloud RAN, networking, imaging and high performance computing. Ramesh Kumar manages the worldwide business for TI’s multicore growth markets organization. The organization develops multicore processors and software targeted for the communication infrastructure space, including multimedia and networking infrastructure equipment, as well as end equipment that requires multicore processors, like public safety, medical imaging, high performance computing and test and measurement. Ramesh is a graduate of Northeastern University, where he obtained an executive MBA, and Purdue University, where he received a master of science in electrical engineering.

                                      From Imagine the impact…TI’s KeyStone SoC + HP Moonshot [TI’s Multicore Mix Blog, April 19, 2013]

TI’s participation in HP’s Pathfinder Innovation Ecosystem is the first step towards arming HP’s customers with optimized server systems that are ideally suited for workloads such as oil and gas exploration, Cloud Radio Access Networks (C-RAN), voice over LTE and video transcoding. This collaboration between TI and HP is a bold step forward, enabling flexible, optimized servers to bring differentiated technologies, such as TI’s DSPs, to a broader set of application providers. TI’s KeyStone II-based SoCs, which integrate fixed- and floating-point DSP cores with multiple ARM® Cortex™-A15 MPCore processors, packet and security processing, and high-speed interconnect, give customers the performance, scalability and programmability needed to build software-defined servers. HP’s Moonshot system integrates storage, networking and compute cards with a flexible interconnect, allowing customers to choose the optimized ratio, enabling the industry’s first software-defined server platform. Bringing TI’s KeyStone II-based SoCs into HP’s Moonshot system opens up several tantalizing possibilities for the future. Let’s look at a few examples:
                                      Think about the number of voice conversations happening over mobile devices every day. These conversations are independent of each other, and each will need transcoding from one voice format to another as voice travels from one mobile device, through the network infrastructure and to the other mobile device. The sheer number of such conversations demand that the servers used for voice transcoding be optimized for this function. Voice is just one example. Now think about video and music, and you can imagine the vast amount of processing required. Using TI’s KeyStone II-based SoCs with DSP technology provides optimized server architecture for these applications because our SoCs are specifically tuned for signal processing workloads.
Another example can be with C-RAN. We have seen a huge push for mobile operators to move most of the mobile radio processing to the data center. There are several approaches to achieve this goal, and each has its pros and cons. But one thing is certain – each approach has to do wireless symbol processing to achieve optimum 3G or 4G communications with smart mobile devices. TI’s KeyStone II-based SoCs are leading the wireless communication infrastructure market and combine key accelerators such as the BCP (Bit Rate Co-Processor), VCP (Viterbi Co-Processor) and others to enable 3G/4G standards-compliant wireless processing. These key accelerators offload standards-based wireless processing from the ARM and/or DSP cores, freeing the cores for value-added processing. The combination of ARM/DSP with these accelerators provides an optimum SoC for 3G/4G wireless processing. By combining TI’s KeyStone II-based SoC with HP’s Moonshot system, operators and network equipment providers can now build customized servers for C-RAN to achieve higher-performance systems at lower cost and ultimately provide better experiences to their customers.

A better way to cloud: TI’s new KeyStone multicore SoCs [embeddednewstv YouTube channel, published on Jan 12, 2013 (YouTube: Oct 21, 2013)]

Brian Glinsman, vice president of multicore processors at Texas Instruments, discusses TI’s new KeyStone multicore SoCs for cloud infrastructure applications. TI announced six new SoCs, based on their 28-nm KeyStone architecture, featuring the industry’s first implementation of quad ARM Cortex-A15 MPCore processors and TMS320C66x DSPs for purpose-built servers, networking, high performance computing, gaming and media processing applications.

                                      Texas Instruments Offers System on a Chip for HPC Applications [RichReport YouTube channel, Nov 20, 2012]

                                      In this video from SC12, Arnon Friedmann from Texas Instruments describes the company’s new multicore System-on-Chips (SoCs). Based on its award winning KeyStone architecture, TI’s SoCs are designed to revitalize cloud computing, inject new verve and excitement into pivotal infrastructure systems and, despite their feature rich specifications and superior performance, actually reduce energy consumption. “Using multicore DSPs in a cloud environment enables significant performance and operational advantages with accelerated compute intensive cloud applications,” said Rob Sherrard, VP of Service Delivery, Nimbix. “When selecting DSP technology for our accelerated cloud compute environment, TI’s KeyStone multicore SoCs were the obvious choice. TI’s multicore software enables easy integration for a variety of high performance cloud workloads like video, imaging, analytics and computing and we look forward to working with TI to help bring significant OPEX savings to high performance compute users.”

                                      A better way to cloud: TI’s new KeyStone multicore SoCs revitalize cloud applications, enabling new capabilities and a quantum leap in performance at significantly reduced power consumption

                                        • Industry’s first implementation of quad ARM® Cortex™-A15 MPCore™ processors in infrastructure-class embedded SoC offers developers exceptional capacity & performance at significantly reduced power for networking, high performance computing and more
                                        • Unmatched combination of Cortex-A15 processors, C66x DSPs, packet processing, security processing and Ethernet switching, transforms the real-time cloud into an optimized high performance, power efficient processing platform
                                        • Scalable KeyStone architecture now features 20+ software compatible devices, enabling customers to more easily design integrated, power and cost-efficient products for high-performance markets from a range of devices

                                      ELECTRONICA – MUNICH (Nov.13, 2012) /PRNewswire/ — To most technologists, cloud computing is about applications, servers, storage and connectivity. To Texas Instruments Incorporated (TI) (NASDAQ: TXN) it means much more. Today, TI is unveiling a BETTER way to cloud with six new multicore System-on-Chips (SoCs). Based on its award winning KeyStone architecture, TI’s SoCs are designed to revitalize cloud computing, inject new verve and excitement into pivotal infrastructure systems and, despite their feature rich specifications and superior performance, actually reduce energy consumption.

                                      To TI, a BETTER way to cloud means:

                                        • Safer communities thanks to enhanced weather modeling;
                                        • Higher returns from time sensitive financial analysis;
                                        • Improved productivity and safety in energy exploration;
                                        • Faster commuting on safer highways in safer cars;
                                        • Exceptional video on any screen, anywhere, any time;
                                        • More productive and environmentally friendly factories; and
                                        • An overall reduction in energy consumption for a greener planet.
TI’s new KeyStone multicore SoCs are enabling this – and much more. These 28-nm devices integrate TI’s fixed- and floating-point TMS320C66x digital signal processor (DSP) generation cores – yielding the best performance-per-watt ratio in the DSP industry – with multiple ARM® Cortex™-A15 MPCore™ processors – delivering unprecedented processing capability combined with low power consumption – facilitating the development of a wide range of infrastructure applications that can enable more efficient cloud experiences. The unique combination of Cortex-A15 processors and C66x DSP cores, with built-in packet processing and Ethernet switching, is designed to efficiently offload and enhance the cloud’s first-generation general-purpose servers; servers that struggle with big data applications like high performance computing and video processing.
                                        “Using multicore DSPs in a cloud environment enables significant performance and operational advantages with accelerated compute intensive cloud applications,” said Rob Sherrard, VP of Service Delivery, Nimbix. “When selecting DSP technology for our accelerated cloud compute environment, TI’s KeyStone multicore SoCs were the obvious choice. TI’s multicore software enables easy integration for a variety of high performance cloud workloads like video, imaging, analytics and computing and we look forward to working with TI to help bring significant OPEX savings to high performance compute users.”
                                        TI’s six new high-performance SoCs include the 66AK2E02, 66AK2E05, 66AK2H06, 66AK2H12, AM5K2E02 and AM5K2E04, all based on the KeyStone multicore architecture. With KeyStone’s low latency high bandwidth multicore shared memory controller (MSMC), these new SoCs yield 50 percent higher memory throughput when compared to other RISC-based SoCs. Together, these processing elements, with the integration of security processing, networking and switching, reduce system cost and power consumption, allowing developers to support the development of more cost-efficient, green applications and workloads, including high performance computing, video delivery and media and image processing. With the matchless combination TI has integrated into its newest multicore SoCs, developers of media and image processing applications will also create highly dense media solutions.

[image]

                                        “Visionary and innovative are two words that come to mind when working with TI’s KeyStone devices,” said Joe Ye, CEO, CyWee. “Our goal is to offer solutions that merge the digital and physical worlds, and with TI’s new SoCs we are one step closer to making this a reality by pushing state-of-the-art video to virtualized server environments. Our collaboration with TI should enable developers to deliver richer multimedia experiences in a variety of cloud-based markets, including cloud gaming, virtual office, video conferencing and remote education.”
                                        Simplified development with complete tools and support
                                        TI continues to ease development with its scalable KeyStone architecture, comprehensive software platform and low-cost tools. In the past two years, TI has developed over 20 software compatible multicore devices, including variations of DSP-based solutions, ARM-based solutions and hybrid solutions with both DSP and ARM-based processing, all based on two generations of the KeyStone architecture. With compatible platforms across TI’s multicore DSPs and SoCs, customers can more easily design integrated, power and cost-efficient products for high-performance markets from a range of devices, starting at just $30 and operating at a clock rate of 850MHz all the way to 15GHz of total processing power.
TI is also making it easier for developers to quickly get started with its KeyStone multicore solutions by offering easy-to-use evaluation modules (EVMs) for less than $1K, reducing developers’ programming burdens and speeding development time with a robust ecosystem of multicore tools and software.
In addition, TI’s Design Network features a worldwide community of respected and well-established companies offering products and services that support TI multicore solutions. Companies offering supporting solutions to TI’s newest KeyStone-based multicore SoCs include 3L Ltd., 6WIND, Advantech, Aricent, Azcom Technology, Canonical, CriticalBlue, Enea, Ittiam Systems, Mentor Graphics, mimoOn, MontaVista Software, Nash Technologies, PolyCore Software and Wind River.
                                        Availability and pricing
                                        TI’s 66AK2Hx SoCs are currently available for sampling, with broader device availability in 1Q13 and EVM availability in 2Q13. AM5K2Ex and 66AK2Ex samples and EVMs will be available in the second half of 2013. Pricing for these devices will start at $49 for 1 KU.

                                        66AK2H14 (ACTIVE) Multicore DSP+ARM KeyStone II System-on-Chip (SoC) [TI.com, Nov 10, 2013]
The same as the 66AK2H12 SoC below, with the addition of:

                                        More Literature:

From that literature, the excerpt below is essential for understanding the added value over the 66AK2H12 SoC:

[image]

                                        Figure 1. TI’s KeyStone™ 66AK2H14 SoC

The 66AK2H14 SoC shown in Figure 1, with the raw computing power of eight C66x processors and quad ARM Cortex-A15s at over 1GHz performance, enables applications such as very large fast Fourier transforms (FFTs) in radar and multiple-camera image analytics, where a 10Gbit/s networking connection is needed. There are, and have been, several sophisticated technologies that have offered the bandwidth and additional features to fill this role. Some, such as Serial RapidIO® and InfiniBand, have been successful in application domains that Gigabit Ethernet could not address, and continue to make sense, but 10Gbit/s Ethernet will challenge their existence.
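
To give a feel for the “very large FFT” workload class mentioned above, the sketch below runs a 2**24-point complex FFT with NumPy and relates the frame size to a 10Gbit/s link. It is purely illustrative of the workload shape on a general-purpose host, not a benchmark of the 66AK2H14:

```python
import time
import numpy as np

# A "very large" 1D FFT of the kind a radar pipeline runs continuously.
n = 2**24  # ~16.8 million complex samples
x = (np.random.standard_normal(n) + 1j * np.random.standard_normal(n)).astype(np.complex64)

t0 = time.perf_counter()
X = np.fft.fft(x)
elapsed = time.perf_counter() - t0

frame_bytes = x.nbytes  # complex64 = 8 bytes per sample -> 128 MiB per frame
print(f"{n} points transformed in {elapsed:.2f} s; frame size {frame_bytes / 2**20:.0f} MiB")
print(f"Moving one frame over 10 Gbit/s takes {frame_bytes * 8 / 10e9:.3f} s")
```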

                                        66AK2H12 (ACTIVE) Multicore DSP+ARM KeyStone II System-on-Chip (SoC) [TI.com, created on Nov 8, 2012]

                                        Datasheet manual [351 pages]:

                                        More Literature:

                                        Description

The 66AK2Hx platform is TI’s first to combine the quad ARM® Cortex™-A15 MPCore™ processors with up to eight TMS320C66x high-performance DSPs using the KeyStone II architecture. Unlike previous ARM Cortex-A15 devices that were designed for consumer products, the 66AK2Hx platform provides up to 5.6 GHz of ARM and 11.2 GHz of DSP processing coupled with security and packet processing and Ethernet switching, all at lower power than multi-chip solutions, making it optimal for embedded infrastructure applications like cloud computing, media processing, high-performance computing, transcoding, security, gaming, analytics and virtual desktop. Using TI’s heterogeneous programming runtime software and tools, customers can easily develop differentiated products with 66AK2Hx SoCs.

[image]

                                        Taking Multicore to the Next Level: KeyStone II Architecture [Texas Instruments YouTube channel, Feb 26, 2012]

                                        TI’s scalable KeyStone II multicore architecture includes support for both TMS320C66x DSP cores and multiple cache coherent quad ARM Cortex™-A15 clusters, for a mixture of up to 32 DSP and RISC cores. With significant updates to its award-winning KeyStone architecture, TI is now paving the way for a new era of high performance 28-nm devices that meld signal processing, networking, security and control functionality, with KeyStone II. Ideal for applications that demand superior performance and low power, devices based on the KeyStone architecture are optimized for high performance markets including communications infrastructure, mission critical, test and automation, medical imaging and high performance and cloud computing. For more information, please visit http://www.ti.com/multicore.

                                        Introducing the EVMK2H [Texas Instruments YouTube channel, Nov 15, 2013]

                                        Introducing the EVMK2H evaluation module, the cost-efficient development tool from Texas Instruments that enables developers to quickly get started working on designs for the 66AK2H06, 66AK2H12, and 66AK2H14 multicore DSP + ARM devices based on the KeyStone architecture.

                                        Kick start development of high performance compute systems with TI’s new KeyStone™ SoC and evaluation module [TI press release, Nov 14, 2013]

                                        Combination of DSP + ARM® cores and high-speed peripherals offer developers an optimal compute solution at low power consumption

                                        DALLAS, Nov. 14, 2013 /PRNewswire/ — Further easing the development of processing-intensive applications, Texas Instruments (TI) (NASDAQ: TXN) is unveiling a new system-on-chip (SoC), the 66AK2H14, and evaluation module (EVM) for its KeyStoneTM-based 66AK2Hx family of SoCs. With the new 66AK2H14 device, developers designing high-performance compute systems now have access to a 10Gbps Ethernet switch-on-chip. The inclusion of the 10GigE switch, along with the other high-speed, on-chip interfaces, saves overall board space, reduces chip count and ultimately lowers system cost and power. The EVM enables developers to evaluate and benchmark faster and easier. The 66AK2H14 SoC provides industry-leading computational DSP performance at 307 GMACS/153 GFLOPS and 19600 DMIPS of ARM performance, making it ideal for a wide variety of applications such as video surveillance, radar processing, medical imaging, machine vision and geological exploration.
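
The headline figures decompose cleanly under standard per-cycle assumptions. A quick sanity check (the 1.2 GHz DSP clock matches the “up to 9.6 GHz of cumulative DSP processing” quoted below; the 32 MACs/cycle and 16 single-precision FLOPs/cycle per C66x core, the 1.4 GHz ARM clock and roughly 3.5 DMIPS/MHz per Cortex-A15 are my assumptions, not figures from the release):

```python
# Sanity check of the 66AK2H14 headline numbers under assumed clocks and
# per-cycle core throughputs (assumptions, not TI-quoted parameters).
dsp_cores, dsp_ghz = 8, 1.2    # 8 x 1.2 GHz = the 9.6 GHz cumulative DSP figure
arm_cores, arm_mhz = 4, 1400

gmacs  = dsp_cores * dsp_ghz * 32     # 32 16-bit MACs per C66x cycle
gflops = dsp_cores * dsp_ghz * 16     # 16 single-precision FLOPs per C66x cycle
dmips  = arm_cores * arm_mhz * 3.5    # ~3.5 DMIPS/MHz per Cortex-A15

print(f"{gmacs:.0f} GMACS, {gflops:.0f} GFLOPS, {dmips:.0f} DMIPS")
# -> 307 GMACS, 154 GFLOPS, 19600 DMIPS; the release quotes 307/153/19600
```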

                                        “Customers today require increased performance to process compute-intensive workloads using less energy in a smaller footprint,” said Paul Santeler, vice president and general manager, Hyperscale Business, HP. “As a partner in HP’s Moonshot ecosystem dedicated to the rapid development of new Moonshot servers, we believe TI’s KeyStone design will provide new capabilities across multiple disciplines to accelerate the pace of telecommunication innovations and geological exploration.”

                                        Meet TI’s new 10Gbps Ethernet DSP + ARM SoC
                                        TI’s newest silicon variant, the 66AK2H14, is the latest addition to its high-performance 66AK2Hx SoC family which integrates multiple ARM Cortex™-A15 MPCore™ processors and TI’s fixed- and floating-point TMS320C66x digital signal processor (DSP) generation cores. The 66AK2H14 offers developers exceptional capacity and performance (up to 9.6 GHz of cumulative DSP processing) at industry-leading size, weight, and power. In addition, the new SoC features a wide array of unique high-speed interfaces, including PCIe, RapidIO, Hyperlink, 1Gbps and 10Gbps Ethernet, achieving total I/O throughput of up to 154Gbps. These interfaces are all distinct and not multiplexed, allowing designers tremendous flexibility with uncompromising performance in their designs.
                                        Ease development and debugging with TI’s tools and software
                                        TI helps simplify the design process by offering developers highly optimized software for embedded HPC systems along with development and debugging tools for the EVMK2H – all for under $1,000. The EVMK2H features a single 66AK2H14 SoC, a status LCD, two 1Gbps Ethernet RJ-45 interfaces and on-board emulation. An optional EVM breakout card (available separately) also provides two 10Gbps Ethernet optical interfaces for 20Gbps backplane connectivity and optional wire rate switching in high density systems.
                                        The EVMK2H is bundled with TI’s Multicore Software Development Kit (MCSDK), enabling faster development with production ready foundational software. The MCSDK eases development and reduces time to market by providing highly-optimized bundles of foundational, platform-specific drivers, optimized libraries and demos.
                                        Complementary analog products to increase system performance
                                        TI offers a wide range of power management and analog signal chain components to increase the system performance of 66AK2H14 SoC-based designs. For example, the TPS53xx integrated FET DC/DC converters provide the highest level of power conversion efficiency even at light loads, while the LM10011 VID converter with dynamic voltage control helps reduce system power consumption. The CDCM6208 low-jitter clock generator also eliminates the need for external buffers, jitter cleaners and level translators.
                                        Availability and pricing
                                        TI’s EVMK2H is available now through TI distribution partners or TI.com for $995. In addition to TI’s Linux distribution provided in the MCSDK, Wind River® Linux is available now for the 66AK2Hxx family of SoCs. Green Hills® INTEGRITY® RTOS and Wind River VxWorks® RTOS support will each be available before the end of the year. Pricing for the 66AK2H14 SoC will start at $330 for 1 KU. The 10Gbps Ethernet breakout card will be available from Mistral.

                                        Ask the Expert: How can developers accelerate scientific computing with TI’s multicore DSPs? [Texas Instruments YouTube channel, Feb 7, 2012]

Dr. Arnon Friedmann is the business manager for TI’s high performance computing products in the multicore and media infrastructure business. In this video, he explains how TI’s multicore DSPs are well suited for computing applications in oil and gas exploration, financial modeling and molecular dynamics, where ultra-high performance, low power and easy programmability are critical requirements.

                                        Ask the Expert: Arnon Friedmann [Texas Instruments YouTube channel, Sept 6, 2012]

                                        How are TI’s latest multicore devices a fit for video surveillance and smart analytic camera applications? Dr. Arnon Friedmann, PhD, is a business manager for multicore processors at Texas Instruments. In this role, he is responsible for growing TI’s business in high performance computing, mission critical, test and measurement and imaging markets. Prior to his current role, Dr. Friedmann served as the marketing director for TI’s wireless base station infrastructure group, where he was responsible for all marketing and design activities. Throughout his 14 years of experience in digital communications research and development, Dr. Friedmann has accumulated patents in the areas of disk drive systems, ADSL modems and 3G/4G wireless communications. He holds a PhD in electrical engineering and bachelor of science in engineering physics, both from the University of California, San Diego.

                                        End of Updates as of Dec 6, 2013


                                        The original post (8 months ago):

                                        HP Moonshot: Designed for the Data Center, Built for the Planet [HP press kit, April 8, 2013]

                                        On April 8, 2013, HP unveiled the world’s first commercially available HP Moonshot system, delivering compelling new infrastructure economics by using up to 89 percent less energy, 80 percent less space and costing 77 percent less, compared to traditional servers. Today’s mega data centers are nearing a breaking point where further growth is restricted due to the current economics of traditional infrastructure. HP Moonshot servers are a first step organizations can take to address these constraints.

                                        For more details on the disruptive potential of HP Moonshot, visit TheDisruption.com

                                        Introducing HP Moonshot [HewlettPackardVideos April 11, 2013]

                                        See how HP is defining disruption with the introduction of HP Moonshot.

                                        HP’s Cutting Edge Data Center Innovation [Ramón Baez, Senior Vice President and Chief Information Officer (CIO) of HP, HP Next [launched on April 2], April 10, 2013]

                                        This is an exciting time to be in the IT industry right now. For those of you who have been around for a while — as I have — there have been dramatic shifts that have changed how businesses operate.
                                        From the early days of the mainframes, to the explosion of the Internet and now social networks, every so often very important game-changing innovation comes along. We’re in the midst of another sea change in technology.
Inside HP IT, we are testing the company’s Moonshot servers. Because these servers run the same chips found in smartphones and tablets, they use far less power, require considerably less cooling and have a smaller footprint.

                                        We currently are running some of our intensive hp.com applications on Moonshot and are seeing very encouraging results. Over half a billion people will visit hp.com this year, and the new Moonshot technology will run at a fraction of the space, power and cost – basically we expect to run HP.com off of the same amount of energy needed for a dozen 60-watt light bulbs.

                                        This technology will revolutionize data centers.
Within HP IT, we are fortunate in that over the past several years we have built a solid data center foundation to run our company. Like many companies, we were a victim of IT sprawl — with more than 85 data centers in 29 countries. We decided to make a change and took on a total network redesign, cutting our principal worldwide data centers down to six and housing all of them in the United States.
                                        With the addition of four new EcoPODs to our infrastructure and these new Moonshot servers, we are in the perfect position to build out our private cloud and provide our businesses with the speed and quality of innovation they need.
Moonshot is just the beginning. The product roadmap for Moonshot is extremely promising and I am excited to see what we can do with it within HP IT, and what benefits our customers will see.

What Calxeda is saying about HP Moonshot [HewlettPackardVideos YouTube channel, April 8, 2013], which is best to start with for its simple and efficient message, along with Intel targeting ARM based microservers: the Calxeda case [‘Experiencing the Cloud’ blog, Dec 14, 2012], already published earlier on this blog:

                                        Calxeda discusses HP’s Project Moonshot and the cost, space, and efficiency innovations being enabled through the Pathfinder Innovation Ecosystem. http://hp.com/go/moonshot

                                        Then we can turn to the Moonshot product launch by HP 2 days ago:

Note that the first three videos following here were released 3 days later, so don’t be surprised by the YouTube dates. In fact the same 3 videos (as well as the “Introducing HP Moonshot” embedded above) were delivered on the April 8 live webcast; see the first 18 minutes of that, and then follow HP’s flow of the presentation if you like. I would certainly recommend my own presentation compiled here.

                                        HP president and CEO Meg Whitman on the emergence of a new style of IT [HewlettPackardVideos YouTube channel, April 11, 2013]

                                        HP president and CEO Meg Whitman outlines the four megatrends causing strain on current infrastructure and how HP Project Moonshot servers are built to withstand data center challenges.

                                        EVP and GM of HP’s Enterprise Group Dave Donatelli discusses HP Moonshot [HewlettPackardVideos YouTube channel, April 11, 2013]

                                        EVP and GM of HP’s Enterprise Group Dave Donatelli details how HP Moonshot redefines the server market.

                                        Tour the Houston Discovery Lab — where the next generation of innovation is created [HewlettPackardVideos YouTube channel, April 11, 2013]

                                        SVP and GM of HP’s Industry Standard Servers and Software Mark Potter and VP and GM of HP’s Hyperscale Business Unit Paul Santeler tour HP’s Discovery Lab in Houston, Texas. HP’s Discovery Lab allows customers to test, tune and port their applications on HP Moonshot servers in-person and remotely.

                                        A new era of accelerated innovation [HP Moonshot minisite, April 8, 2013]

                                        Cloud, Mobility, Security, and Big Data are transforming what the business expects from IT resulting in a “New Style of IT.” The result of alternative thinking from a proven industry leader, HP Moonshot is the world’s first software defined server that will accelerate innovation while delivering breakthrough efficiency and scale.

Watch the unveiling: HP Moonshot – The Disruption [HP Event registration page at ‘thedisruption.com’] [image]

On the right is the Moonshot System with the very first Moonshot servers (“microservers/server appliances” as called by the industry) based on Intel® Atom S1200 processors and supporting web-hosting workloads (see also the right part of the image below). Currently there is also a storage cartridge (on the left of the image below) and a multinode for highly dense computing solutions (seen in the hands of the presenter in the image below). Many more are to come later on.

[image]

[image] With up to 180 servers inside the box (45 now) it was necessary to integrate network switching. There are two sockets (see left) for the network switch, so you can configure for redundancy. The downlink module, which talks to the cartridges, is on the left of the image below. This module is paired with an uplink module in the back of the chassis (shown taken out in the middle of the image below, and then together with the uplink module on the right). There will be more options available. [image]

                                        More information:
                                        Enterprise Information Library for Moonshot
                                        HP Moonshot System [Technical white paper from HP, April 5, 2013] from which I will include here the following excerpts for more information:

                                        HP Moonshot 1500 Chassis

                                        The HP Moonshot 1500 Chassis is a 4.3U form factor and slides out of the rack on a set of rails like a file cabinet drawer. It supports 45 HP ProLiant Moonshot Servers and an HP Moonshot-45G Switch Module that are serviceable from the top.
                                        It is a modern architecture engineered for the new style of IT that can support server cartridges, server and storage cartridges, storage only cartridges and a range of x86, ARM or accelerator based processor technologies.
As an initial offering, the HP Moonshot 1500 Chassis is fully populated with 45 HP ProLiant Moonshot Servers and one HP Moonshot-45G Switch Module; a second HP Moonshot-45G Switch Module can be purchased as an option. Future offerings will include quad server cartridges, resulting in up to 180 servers per chassis. The 4.3U form factor allows for 10 chassis per rack, which with the quad server cartridge amounts to 1,800 servers in a single rack.
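Just to spell out the density arithmetic behind those numbers, a minimal check:

```python
# Density arithmetic from the white paper figures quoted above.
cartridges_per_chassis = 45
servers_per_quad_cartridge = 4      # future quad server cartridges
chassis_per_rack = 10               # 10 x 4.3U chassis in a rack

servers_per_chassis = cartridges_per_chassis * servers_per_quad_cartridge
print(servers_per_chassis)                     # 180 servers per chassis
print(servers_per_chassis * chassis_per_rack)  # 1800 servers per rack
```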
                                        The Moonshot 1500 Chassis simplifies management with four iLO processors that share management responsibility for the 45 servers, power, cooling, and switches.

                                        Highly flexible fabric

                                        Built into the HP Moonshot 1500 Chassis architecture are four separate and independent fabrics that support a range of current and future capabilities:
                                        • Network fabric
                                        • Storage fabric
                                        • Management fabric
                                        • Integrated cluster fabric
                                        Network fabric
                                        The Network fabric provides the primary external communication path for the HP Moonshot 1500 Chassis.
                                        For communication within the chassis, the network switch has four communication channels to each of the 45 servers. Each channel supports a 1-GbE or 10-GbE interface. Each HP Moonshot-45G Switch Module supports 6 channels of 10GbE interface to the HP Moonshot-6SFP network uplink modules located in the rear of the chassis.
                                        Storage fabric
The Storage fabric provides dedicated SAS lanes between server and storage cartridges. We utilize HP Smart Storage firmware found in the ProLiant family of servers to enable multiple core-to-spindle ratios for specific solutions. A hard drive can be shared among multiple server cartridges to enable low-cost boot and logging, or attached to a node to provide storage expansion.
The current HP Moonshot System configuration targets light scale-out applications. To provide the best operating environment for these applications, it includes HP ProLiant Moonshot Servers with a hard disk drive (HDD) as part of the server architecture. Shared storage is not an advantage for these environments. Future releases of the servers that target different solutions will take advantage of the storage fabric.
                                        Management fabric
                                        We utilize the Integrated Lights-Out (iLO) application-specific integrated circuit (ASIC) standard in the HP ProLiant family of servers to provide the innovative management features in the HP Moonshot System. To handle the range of extreme low energy processors we provide a device neutral approach to management, which can be easily consumed by data center operators to deploy at scale.
                                        The Management fabric enables management of the HP Moonshot System components as one platform with a dedicated iLO network. Benefits of the management fabric include:
                                        • The iLO Chassis Manager aggregates data to a common set of management interfaces.
                                        • The HP Moonshot 1500 Chassis has a single Ethernet port gateway that is the single point of access for the Moonshot Chassis manager.
                                        • Intelligent Platform Management Interface (IPMI) and Serial Console for each server
                                        • True out-of-band firmware update services
                                        • SL-APM Rack Management spans rack or multiple racks
                                        Integrated Cluster fabric
                                        The Integrated Cluster fabric provides a high-speed interface among future server cartridge technologies that will benefit from high bandwidth node-to-node communication. North, south, east, and west lanes are provided between individual server cartridges.
The current HP ProLiant Moonshot Server targets light scale-out applications. These applications do not benefit from node-to-node communications, so the Integrated Cluster fabric is not utilized. Future cartridge releases that target workloads requiring low latency interconnects will take advantage of the Integrated Cluster fabric.
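To make the IPMI bullet above concrete, here is a minimal, hedged sketch of polling a server’s power state out-of-band with the standard ipmitool utility; the gateway host name and credentials are illustrative placeholders, and the actual iLO Chassis Manager interface may differ:

```python
# Hedged sketch: out-of-band power-state query over the dedicated management
# network using standard IPMI. Host, user and password are placeholders only.
import subprocess

def chassis_power_status(gateway: str, user: str, password: str) -> str:
    """Ask the management processor for the chassis power state via IPMI."""
    result = subprocess.run(
        ["ipmitool", "-I", "lanplus", "-H", gateway,
         "-U", user, "-P", password, "chassis", "power", "status"],
        capture_output=True, text=True, check=True)
    return result.stdout.strip()   # e.g. "Chassis Power is on"

print(chassis_power_status("moonshot-ilo.example.com", "admin", "secret"))
```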

                                        HP ProLiant Moonshot Server

                                        HP will bring a growing library of cartridges, utilizing cutting-edge technology from industry leading partners. Each server will target specific solutions that support emerging Web, Cloud, and Massive-Scale Environments, as well as Analytics and Telecommunications. We are continuing server development for other applications, including Big Data, High-Performance Computing, Gaming, Financial Services, Genomics, Facial Recognition, Video Analysis, and more.
                                        Figure 4. Cartridges target specific solutions

[image]

The first server cartridge now available is the HP ProLiant Moonshot Server, which includes the Intel® Atom Processor S1260. This is a low power processor that is right-sized for light workloads. It has dedicated memory and storage, with discrete resources. This server design is ideal for light scale-out applications. Light scale-out applications require relatively little processing but moderately high I/O and include environments that perform the following functions:
                                        • Dedicated web hosting
                                        • Simple content delivery
The HP ProLiant Moonshot Server is hot-pluggable in the HP Moonshot 1500 Chassis. If service is necessary, it can be removed without affecting the other servers in the chassis. Table 1 defines the HP ProLiant Moonshot Server specifications.
                                        Table 1. HP ProLiant Moonshot Server specifications

Processor: One Intel® Atom Processor S1260
Memory: 8 GB DDR3 ECC 1333 MHz
Networking: Integrated dual-port 1Gb Ethernet NIC
Storage: 500 GB or 1 TB HDD or SSD, non-hot-plug, small form factor
Operating systems: Canonical Ubuntu 12.04; Red Hat Enterprise Linux 6.4; SUSE Linux Enterprise Server 11 SP2

[image] With that came HP CEO Seeks Turnaround Unveiling ‘Moonshot’ Super-Server: Tech [Bloomberg, April 2013] as well as HP Moonshot: Say Goodbye to the Vanilla Server [Forbes, April 8, 2013]. HP, however, has its eye much more on the ARM based Moonshot servers which are expected to come later, because of the trends reflected on the left (source: HP). The software defined server concept is very general. [image]

There are a number of quite different server cartridges expected to come, all specialised by the server software installed on them. Typical specialised servers, for example, are the ones CyWee from Taiwan is working on with Texas Instruments’ new KeyStone II architecture, featuring both ARM Cortex-A15 CPU cores and TI’s own C66x DSP cores for a mixture of up to 32 DSP and RISC cores in TI’s new 66AK2Hx family of SoCs, the first of which is the TMS320TCI6636 implemented in 28nm foundry technology. Based on that, CyWee will deliver multimedia Moonshot server cartridges for cloud gaming, virtual office, video conferencing and remote education (see even the first KeyStone announcement). This CyWee involvement in the HP Moonshot effort is part of HP’s Pathfinder Partner Program, which Texas Instruments also joined recently to exploit a larger opportunity as:

                                        TI’s 66AK2Hx family and its integrated c66x multicore DSPs are applicable for workloads ranging from high performance computing, media processing, video conferencing, off-line image processing & analytics, video recorders (DVR/NVR), gaming, virtual desktop infrastructure and medical imaging.

But Intel was able to win the central piece of the Moonshot System launch (a project originally initiated by HP as “Project Moonshot” in November 2011 for disruption in terms of power and TCO for servers, actually with a Calxeda board used for research and development with other partners), at least as it was productized just two days ago:
                                        Raejeanne Skillern from Intel – HP Moonshot 2013 – theCUBE [siliconangle YouTube channel]

                                        Raejeanne Skillern, Intel Director of Marketing for Cloud Computing, at HP Moonshot 2013 with John Furrier and Dave Vellante

However, ARM was not left out either, just relegated in the beginning to highly advanced and/or specialised server roles with its SoC partners, coming later in the year:

• Applied Micro, with a networking and connectivity background, now also has the X-Gene ARM 64-bit Server on a Chip platform, which features 8 high-performance ARM 64-bit cores developed from scratch under an architecture license (i.e. not ARM’s own Cortex-A50 series cores) and clocked at up to 2.4GHz, plus 4 smaller cores for network and storage offloads (see AppliedMicro on the X-Gene ARM Server Platform and HP Moonshot [SiliconANGLE blog, April 9, 2013]). Sample reference boards were shipped to key customers in March (see Applied Micro’s cloud chip is an ARM-based, switch-killing machine [GigaOM, April 3, 2013]). In the latest X-Gene Arrives in Silicon [Open Compute Summit Winter 2013 presentation, Jan 16, 2013] video you can find the most recent strategic details (up to 2014 with a FinFET implementation of “software defined X-Gene based data center components”, presumably at 16nm). Here I will include a more product-oriented AppliedMicro Shows ARM 64-bit X-Gene Server on a Chip Hardware and Software [Charbax YouTube channel, Nov 3, 2012] overview video:
                                          Vinay Ravuri, Vice President and General Manager, Server Products at AppliedMicro gives an update on the 64bit ARM X-Gene Server Platform. At ARM Techcon 2012, AppliedMicro, ARM and several open-source software providers gave updates on their support of the ARM 64-bit X-Gene Server on a Chip Platform.

                                          More information: A 2013 Resolution for the Data Center [Applied Micro on Smart Connected Devices blog from ARM, Feb 4, 2013] about “plans from Oracle, Red Hat, Citrix and Cloudera to support this revolutionary architecture … Dell’s “Iron” server concept with X-Gene … an X-Gene based ARM server managed by the Dell DCS Software suite …” etc.

• Texas Instruments, with a digital signal processing (DSP) background, as already presented above.
• Calxeda, with a background in integrated storage fabrics and Internet switching, with details coming later, etc.:

This is what is emphasized by Lakshmi Mandyam from ARM – HP Moonshot 2013 – theCUBE [siliconangle YouTube channel, April 8, 2013]

                                        Lakshmi Mandyam, Director of Server Systems and Ecosystems, ARM, at HP Moonshot 2013, with John Furrier and Dave Vellante

In the talk she also mentions achievements which could put ARM and its SoC partners into the role which Intel now has with its general Atom S1200 based server cartridge product fitting into the Moonshot system. Perspective information on that is already available on my ‘Experiencing the Cloud’ blog here:
                                        The state of big.LITTLE processing [April 7, 2013]
                                        The future of mobile gaming at GDC 2013 and elsewhere [April 6, 2013]
                                        TSMC’s 16nm FinFET process to be further optimised with Imagination’s PowerVR Series6 GPUs and Cadence design infrastructure [April 8, 2013]
                                        With 28nm non-exclusive in 2013 TSMC tested first tape-out of an ARM Cortex™-A57 processor on 16nm FinFET process technology [April 3, 2013]

                                        The absence of Microsoft is even more interesting as AMD is also on this Moonshot bandwagon: Suresh Gopalakrishnan from AMD – HP Moonshot 2013 – theCUBE [siliconangle YouTube channel, April 8, 2013]

                                        Suresh Gopalakrishnan, Vice President and General Manager, Server Business, AMD, at HP Moonshot 2013, with John Furrier and Dave Vellante

already showing a Moonshot-fitting server cartridge with four of AMD’s next-generation SoCs (while Intel’s already productized cartridge is not yet at the SoC level). We know from CES 2013 that AMD Unveils Innovative New APUs and SoCs that Give Consumers a More Exciting and Immersive Experience [press release, Jan 7, 2013] with the:

“Temash” … elite low-power mobility processor for Windows 8 tablets and hybrids … to be the highest-performance SoC for tablets in the market, with 100 percent more graphics processing performance than its predecessor (codenamed “Hondo.”)
“Kabini” [SoC which] targets ultrathin notebooks with exceptional battery life and offers impressive levels of performance in both dual- and quad-core options. “Kabini” is expected to deliver an increase of more than 50 percent in performance over the previous generation of AMD essential computing APUs (codenamed “Brazos 2.0.”)
                                        Both APUs are scheduled to ship in the first half of 2013

                                        so AMD is really close to a server SoC to be delivered soon as well.

The “more information” sections which follow here are:

                                        1. The Announcement
                                        2. Software Partners
                                        3. Hardware Partners


                                        1. The Announcement

                                        HP Moonshot [MultiVuOnlineVideo YouTube channel, April 8, 2013]

                                        HP today unveiled the world’s first commercially available HP Moonshot system, delivering compelling new infrastructure economics by using up to 89 percent less energy, 80 percent less space and costing 77 percent less, compared to traditional servers. Today’s mega data centers are nearing a breaking point where further growth is restricted due to the current economics of traditional infrastructure. HP Moonshot servers are a first step organizations can take to address these constraints.

                                        HP Launches New Class of Server for Social, Mobile, Cloud and Big Data [press release, April 8, 2013]

                                        Software defined servers designed for the data center and built for the planet
                                        … Built from HP’s industry-leading server intellectual property (IP) and 10 years of extensive research from HP Labs, the company’s central research arm, HP Moonshot delivers a significant improvement in energy, space, cost and simplicity. …
                                        The HP Moonshot system consists of the HP Moonshot 1500 enclosure and application-optimized HP ProLiant Moonshot servers. These servers will offer processors from multiple HP partners, each targeting a specific workload.
With support for up to 1,800 servers per rack, HP Moonshot servers occupy one-eighth of the space required by traditional servers. This offers a compelling solution to the problem of physical data center space.(3) Each chassis shares traditional components including the fabric, HP Integrated Lights-Out (iLO) management, power supply and cooling fans. These shared components reduce complexity as well as add to the reduction in energy use and space.
                                        The first HP ProLiant Moonshot server is available with the Intel® Atom S1200 processor and supports web-hosting workloads. HP Moonshot 1500, a 4.3u server enclosure, is fully equipped with 45 Intel-based servers, one network switch and supporting components.
                                        HP also announced a comprehensive roadmap of workload-optimized HP ProLiant Moonshot servers incorporating processors from a broad ecosystem of HP partners including AMD, AppliedMicro, Calxeda, Intel and Texas Instruments Incorporated.

                                        Scheduled to be released in the second half of 2013, the new HP ProLiant Moonshot servers will support emerging web, cloud and massive scale environments, as well as analytics and telecommunications. Future servers will be delivered for big data, high-performance computing, gaming, financial services, genomics, facial recognition, video analysis and other applications.

                                        The HP Moonshot system is immediately available in the United States and Canada and will be available in Europe, Asia and Latin America beginning next month.
                                        Pricing begins at $61,875 for the enclosure, 45 HP ProLiant Moonshot servers and an integrated switch.(4)
                                        (4) Estimated U.S. street prices. Actual prices may vary.

                                        More information:
                                        HP Moonshot System [Family data sheet, April 8, 2013]
                                        HP Moonshot – The Disruption [HP Event registration page at ‘thedisruption.com’ with embedded video gallery, press kit and more, originally created on April 12, 2010, obviously updated for the April 8, 2013 event]

                                        Moonshot 101 [HewlettPackardVideos YouTube channel, April 8, 2013]

Paul Santeler, Vice President & GM of Hyperscale Business Unit at HP, discusses how HP Project Moonshot creates the new style of IT. http://hp.com/go/moonshot

                                        Alert for Microsoft:

[4:42] We defined the industry standard server market [reference to HP’s Compaq heritage] and we’ve been the leader for years. With Moonshot we’re redefining the market and taking it to the next level. [4:53]

                                        People Behind HP Moonshot [HP YouTube channel, April 10, 2013]

                                        HP Moonshot is a groundbreaking new class of server that requires less energy, less space and less cost. Built from HP’s industry-leading server IP and 10 years of research from HP Labs, HP Moonshot is an example of the best of HP working together. In the video: Gerald Kleyn, Director of Platform Research and Development, Hyperscale Business Unit, Industry Standard Servers; Scott Herbel, Worldwide Product Marketing Manager, Hyperscale Business Unit, Industry Standard Servers; Ron Mann, Director of Engineering, Industry Standard Servers; Kelly Pracht, Hardware Platform Manager R&D, Hyperscale Business Unit, Industry Standard Servers; Mike Sabotta, Distinguished Technologist, Hyperscale Business Unit, Industry Standard Servers; Dwight Barron, HP Fellow, Chief Technologist, Hyperscale Business Unit, Industry Standard Servers. For more information, visit http://www.hpnext.com.

                                        HP Moonshot System Tour [HewlettPackardVideos YouTube channel, April 8, 2013]

Kelly Pracht, Moonshot Hardware Platform Program Manager, HP, takes you on a private tour of the HP Moonshot System and introduces the foundational HW components of HP Project Moonshot. This video guides you around the entire system, highlighting the cartridges and switches. http://hp.com/go/moonshot

                                        HP Moonshot System is Hot Pluggable [HewlettPackardVideos YouTube channel, April 8, 2013]

“Show me around the HP Moonshot System!” Vicki Doehring, Moonshot Hardware Engineer, HP, shows us just how simple and intuitive it is to remove components in the HP Moonshot System. This video explains how HP’s hot pluggable technology works with the HP Moonshot System. http://hp.com/go/moonshot

                                        Alert for Microsoft: how and when will you have a system like this with all the bells and whistles as presented above, as well as the rich ecosystem of hardware and software partners given below 

                                        HP Pathfinder Innovation Ecosystem [HewlettPackardVideos YouTube channel, April 8, 2013]

A key element of HP Moonshot, the HP Pathfinder Innovation Ecosystem brings together industry leading software and hardware partners to accelerate the development of workload optimized applications. http://hp.com/go/moonshot

                                        Software partners:

                                        What Linaro is saying about HP Moonshot [HewlettPackardVideos YouTube channel, April 8, 2013]

                                        Linaro discusses HP’s Project Moonshot and the cost, space, and efficiency innovations being enabled through the Pathfinder Innovation Ecosystem. http://hp.com/go/moonshot

                                        Alert for Microsoft:

[0:11] In the HP approach Linaro is about forming an enterprise group. What they were hoping for, and what has happened, is to get a bunch of companies together who are interested in taking the ARM architecture into the server space. [0:26]

                                        Canonical joins Linaro Enterprise Group (LEG) and commits Ubuntu Hyperscale Availability for ARM V8 in 2013 [press release, Nov 1, 2012]

                                          • Canonical continues its leadership of commercial deployment for ARM-based servers through membership of Linaro Enterprise Group (LEG)
                                          • Ubuntu, the only commercially supported OS for ARM v7 today, commits to support ARM v8 server next year
  • Ubuntu extends its position as the natural choice for hyperscale server computing with long term support

                                        … “Canonical has been supporting our work optimising and consolidating the Linux kernel since our founding in June 2010”, said George Grey, CEO of Linaro. “We’re very happy to welcome them as a member of the Linaro Enterprise Group, building on our relationship to help accelerate development of the ARM server software ecosystem.” …

                                        … “Calxeda has been thrilled with Canonical’s leadership in developing the ARM ecosystem”,  said Karl Freund, VP marketing at Calxeda. “These guys get it. They are driving hard and fast, already delivering enterprise-class code and support for Calxeda’s 32-bit product today to our mutual clients.  Working together in LEG will enable us to continue to build on the momentum we have already created.” …

                                        What Canonical is saying about HP Moonshot [HewlettPackardVideos YouTube channel, April 8, 2013]

                                        HP Moonshot and Ubuntu work together [Ubuntu partner site, April 9, 2013]

                                        … Ubuntu, as the lead operating system platform for x86 and ARM-based HP Moonshot Systems, featured extensively at the launch of the program in April 2013. …
                                        Ubuntu Server is the only OS fully operational today across HP Moonshot x86 and ARM servers, launched in April 2013.
                                        Ubuntu is recognised as the leader in scale out and Hyperscale. Together, Canonical and HP are delivering massive reductions in data-center energy, space and costs. …

“Canonical has been working with HP for the past two years on HP Moonshot, and with Ubuntu, customers can achieve higher performance with greater manageability across both x86 and ARM chip sets” Paul Santeler, VP & GM, Hyperscale Business Unit, HP

                                        Ubuntu & HP’s project Moonshot [Canonical blog, Nov 2, 2011]

Today HP announced Project Moonshot – a programme to accelerate the use of low power processors in the data centre.
                                        The three elements of the announcement are the launch of Redstone – a development platform that harnesses low-power processors (both ARM & x86),  the opening of the HP Discovery lab in Houston and the Pathfinder partnership programme.
                                        Canonical is delighted to be involved in all three elements of HP’s Moonshot programme to reduce both power and complexity in data centres.
[image] The HP Redstone platform unveiled in Palo Alto showcases HP’s thinking around highly federated environments and Calxeda’s EnergyCore ARM processors. The Calxeda system on chip (SoC) design is powered by Calxeda’s own ARM based processor and combines mobile phone like power consumption with the attributes required to run a tangible proportion of hyperscale data centre workloads.
The promise of server-grade SoCs running at less than 5W and achieving per-rack density of 2800+ nodes is impressive, but what about the software stacks that are used to run the web and analyse big data – when will they be ready for this new architecture?
                                        Ubuntu Server is increasingly the operating system of choice for web, big data and cloud infrastructure workloads. Films like Avatar are rendered on Ubuntu, Hadoop is run on it and companies like Rackspace and HP are using Ubuntu Server as the foundation of their public cloud offerings.
The good news is that Canonical has been working with ARM and Calxeda for several years now, and we released the first version of Ubuntu Server ported for ARM Cortex-A9 class processors last month.
The Ubuntu 11.10 release (download) is a functioning port, and over the next six months we will be working hard to benchmark and optimize Ubuntu Server and the workloads that our users prioritize on ARM. This work, by us and by upstream open source projects, is going to be accelerated by today’s announcement and access to hardware in the HP Discovery lab.
As HP stated today, this is the beginning of a journey to re-inventing a power efficient and less complex data center. We look forward to working with HP and Calxeda on that journey.

The biggest enterprise alert for Microsoft, because of what was discussed in Will Microsoft Stand Out In the Big Data Fray? [Redmondmag.com, March 22, 2013], is What NuoDB is saying about HP Moonshot [HewlettPackardVideos YouTube channel, April 9, 2013], especially as it is a brand new offering; see NuoDB Announces General Availability of Industry’s First & Only Cloud Data Management System at Live-Streamed Event [press release, Jan 15, 2013], now available in archive at this link: http://go.nuodb.com/cdms-2013-register-e.html

                                        Barry Morris, founder and CEO of NuoDB discusses HP’s Project Moonshot and the database innovations delivered by the combined offering

                                        Extreme density on HP’s Project Moonshot [NuoDB Techblog, April 9, 2013]

                                        A few months ago HP came to us with something very cool. It’s called Project Moonshot, and it’s a new way of thinking about how you design infrastructure. Essentially, it’s a composable system that gives you serious flexibility and density.

                                        A single Moonshot System is 4.3u tall and holds 45 independent servers connected to each other via 1-Gig Ethernet. There’s a 10-Gig Ethernet interface to the system as a whole, and management interfaces for the system and each individual server. The long-term design is to have servers that provide specific capabilities (compute, storage, memory, etc.) and can scale to up to 180 nodes in a single 4.3u chassis.
                                        The initial system, announced this week, comes with a single server configuration: an Intel Atom S1260 processor, 8 Gigabytes of memory and either a 200GB SSD or a 500GB HDD. On its own, that’s not a powerful server, but when you put 45 of these into a 4.3 rack-unit space you get something in aggregate that has a lot of capacity while still drawing very little power (see below). The challenge, then, is how to really take advantage of this collection of servers.

                                        NuoDB on Project Moonshot: Density and Efficiency

We’ve shown how NuoDB can scale a single database to large transaction rates. For this new system, however, we decided to try a different approach. Rather than make a single database scale to large volume, we decided to see how many individual, smaller databases we could support at the same time. Essentially, could we take a fully-configured HP Project Moonshot System and turn it into a high-density, low-power, easy to manage hosting appliance?

To put this in context, think about a web site that hosts blogs. Typically, each blog is going to have a single database supporting it (just like this blog you’re reading). The problem is that while a few blogs will be active all the time, most of them see relatively light traffic. This is known as a long-tail pattern. Still, because the blogs always need to be available, the backing databases always need to be running too.

                                        This leads to a design trade-off. Do you map the blogs to a single database (breaking isolation and making management harder) or somehow try to juggle multiple database instances (which is hard to automate, expensive in resource-usage and makes migration difficult)? And what happens when a blog suddenly takes off in popularity? In other words, how do you make it easy to manage the databases and make resource-utilization as efficient as possible so you don’t over-spend on hardware?

                                        As I’ve discussed on this blog NuoDB is a multi-tenant system that manages individual databases dynamically and efficiently. That should mean that we’re a perfect fit for this very cool (pun intended) new system from HP.

                                        The Design

                                        After some initial profiling on a single server, we came up with a goal: support 7,200 active databases. You can read all about how we did the math, but essentially this was a balance between available CPU, Memory, Disk and bandwidth. In this case a “database” is a single Transaction Engine and Storage Manager pair, running on one of the 45 available servers.

                                        When we need to start a database, we pick the server that’s least-utilized. We choose this based on local monitoring at each server that is rolled up through the management tier to the Connection Brokers. It’s simple to do given all that NuoDB already provides, and because we know what each server supports it lets us calculate a single capacity percentage.
                                        It gets better. Because a NuoDB database is made of an agile collection of processes, it’s very inexpensive to start or stop a database. So, in addition to monitoring for server capacity we also watch what’s going on inside each database, and if we think it’s been idle long enough that something else could use the associated resources more effectively we shut it down. In other words, if a database isn’t doing anything active we stop it to make room for other databases.
When an SQL client needs to access that database, we simply re-start it where there are available resources. We call this mechanism hibernating and waking a database. This on-demand resource management means that while there are some number of databases actively running, we can really support a much larger number in total (remember, we’re talking about applications that exhibit a long-tail access pattern). With this capability, our original goal of 7,200 active databases translates into 72,000 total supported databases. On a single 4.3u System.
                                        The final piece we added is what we call database bursting. If a single database gets really popular it will start to take up too many resources on a single server. If you provision another server, separate from the Moonshot System, then we’ll temporarily “burst” a high-activity database to that new host until activity dies down. It’s automatic, quick and gives you on-demand capacity support when something gets suddenly hot.
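To picture the mechanics just described, here is an illustrative sketch (emphatically not NuoDB’s code) of least-utilized placement, hibernation and bursting; the per-server capacity of 160 databases (7,200 across 45 servers), the idle timeout and the burst threshold are all placeholder assumptions:

```python
# Illustrative sketch of the policy described above: place databases on the
# least-utilized server, hibernate idle ones, wake on demand, burst hot ones.
import time

IDLE_HIBERNATE_SECS = 300          # placeholder idle threshold
BURST_UTILIZATION = 0.90           # placeholder per-server pressure threshold
DBS_PER_SERVER = 160               # ~7,200 active databases across 45 servers

class Server:
    def __init__(self, name: str, external: bool = False):
        self.name, self.external = name, external
        self.databases: dict[str, float] = {}   # db name -> last access time

    def utilization(self) -> float:
        return len(self.databases) / DBS_PER_SERVER

class Grid:
    def __init__(self, servers: list[Server]):
        self.servers = servers

    def wake(self, db: str) -> Server:
        """Start (or restart) a database on the least-utilized internal server."""
        target = min((s for s in self.servers if not s.external),
                     key=Server.utilization)
        target.databases[db] = time.time()
        return target

    def touch(self, db: str) -> None:
        """Record SQL-client activity; wake the database if it was hibernated."""
        for s in self.servers:
            if db in s.databases:
                s.databases[db] = time.time()
                return
        self.wake(db)

    def hibernate_idle(self) -> None:
        """Stop databases idle beyond the threshold to free their resources."""
        now = time.time()
        for s in self.servers:
            for db, last in list(s.databases.items()):
                if now - last > IDLE_HIBERNATE_SECS:
                    del s.databases[db]

    def burst(self, db: str, home: Server, external: Server) -> None:
        """Temporarily move a hot database to a separate, external host."""
        if home.utilization() > BURST_UTILIZATION and db in home.databases:
            del home.databases[db]
            external.databases[db] = time.time()
```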
                                        The Tests
I’m not going to repeat too much here about how we drove our tests. That’s already covered in the discussion on how we’re trying to design a new kind of benchmark focused on density and efficiency. You should go check that out … it’s pretty neat. Suffice it to say, the really critical thing to us in all of this was that we were demonstrating something that solves a real-world problem under real-world load.
                                        You should also go read about how we setup and ran on a Moonshot System. The bottom-line is that the system worked just like you’d expect, and gave us the kinds of management and monitoring features to go beyond basic load testing.
                                        The Results
We were really lucky to be given access to a full Moonshot System. It gave us a chance to test out our ideas, and we actually were able to do better than our target. You can see this in the view from our management interface running against a real system under our benchmark load. You can see there that when we hit 7200 active databases we were only at about 70% utilization, so there was a lot more room to grow. Huge thanks to HP for giving us time on a real Moonshot System to see all those ideas work!

                                        Something that’s easy to lose track of in all this discussion is the question of power. Part of the value proposition from Project Moonshot is in energy efficiency, and we saw that in spades. Under load a single server only draws 18 Watts, and the system infrastructure is closer to 250 Watts. Taken together, that’s a seriously dense system that is using very little energy for each database.

                                        Bottom Line
                                        We were psyched to have the chance to test on a Moonshot System. It gave us the chance to prove out ideas around automation and efficiency that we’ll be folding into NuoDB over the next few releases. It also gave us the perfect platform to put our architecture through its paces and validate a lot about the flexibility of our core architecture.
We’re also seriously impressed by what we experienced from Project Moonshot itself. We were able to create something self-contained and easy to manage that solves a real-world problem. Couple that with the fact that a Moonshot System draws so little power, and the Total Cost of Ownership is impressively low. That’s probably the last point to make about all this: the combination of our two technologies gave us something where we could talk concretely about capacity and TCO, something that’s usually hard to do in such clear terms.
                                        In case it’s not obvious, we’re excited. We’ve already been posting this week about some ideas that came out of this work, and we’ll keep posting as the week goes on. Look for the moonshot tag and please follow-up with comments if you’re curious about anything specific and would like to hear more!

                                        Project Moonshot by the Numbers [NuoDB Techblog, April 9, 2013]

                                        To really understand the value from HP Project Moonshot you need to think beyond the list price of one system and focus instead on the Total Cost of Ownership. Figuring out the TCO for a server running arbitrary software is often a hard (and thankless?) task, so one of the things we’ve tried to do is not just demonstrate great technology but something that naturally lets you think about TCO in a simple way. We think the final metrics are pretty simple, but to get there requires a little math.

                                        Executive Summary

If you’re a CIO, and just want to know the bottom line, then we’ll ruin the suspense and cut to the chase. It will cost you about $70,500 up-front, $1,800 in your first year’s electricity bills and take 8.3 rack-units to support the web front-end and database back-end for 72,000 blogs under real-world load.

                                        Cost of a Single Database
Recall that we set the goal at 72,000 databases within a single system. At launch the list price for a fully-configured Moonshot System is around $60,000, so we start out at 83 cents per-database. In practice we’re seeing much higher capacity in our tests, but let’s start with this conservative number.
Now consider the power used by the system. From what we’ve measured through the iLO interfaces a single server draws no more than 18 Watts at peak load (measured against CPU and IO activity). The System itself (fans, switches etc.) draws around 250 Watts in our tests. That means that under full load each database is drawing about 0.015 Watts.
                                        NuoDB is a commercial software offering, which means that you pay up-front to deploy the software (and get support as part of that fee). For anyone who wants to run a Moonshot System in production as a super-dense NuoDB appliance we’ll offer you a flat-rate license.
                                        Put together, we can say that the cost per database-watt is 1.22 cents. That’s on a 4.3 rack-unit system. Awesome.
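Taking the quoted list price and measured power draws at face value, the per-database numbers reproduce as follows (reading “cost per database-watt” as the product of the two per-database figures):

```python
# Reproducing the post's per-database arithmetic from the quoted figures.
system_price = 60_000            # USD, fully configured Moonshot System
total_dbs = 72_000
system_watts = 45 * 18 + 250     # 45 servers at 18 W peak + ~250 W chassis

price_per_db = system_price / total_dbs    # ~$0.83 per database
watts_per_db = system_watts / total_dbs    # ~0.015 W per database
print(f"${price_per_db:.2f}/db, {watts_per_db:.4f} W/db")
# Prints 1.23; the post rounds the intermediate values to get 1.22.
print(f"{price_per_db * watts_per_db * 100:.2f} cents per database-watt")
```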
                                        Quantify the Supported Load
                                        As we discussed in our post on benchmarking, we’re trying to test under real-world load. As a simple starting-point we chose a profile based on WordPress because it’s fairly ubiquitous and has somewhat serious transactional requirements. In our benchmarking discussion we explain that a typical application action (post, read, comment) does around 20 SQL operations.
                                        Given 72,000 databases most of these are fairly inactive, so on average we’ll say that each database gets about 250 hits a day (generous by most reports I’ve seen). That’s 18,000,000 hits a day or 208 hits per-second. 4,166 SQL statements a second isn’t much for a single database, but it’s pretty significant given that we’re spreading it across many databases some of which might have to be “woken” on-demand.
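That load model is simple arithmetic to reproduce:

```python
# The supported-load arithmetic: 72,000 databases x 250 hits/day,
# ~20 SQL operations per application hit (the WordPress-like profile).
dbs, hits_per_day, sql_per_hit = 72_000, 250, 20

hits_per_sec = dbs * hits_per_day / 86_400            # 86,400 seconds per day
print(f"{hits_per_sec:.0f} hits/s")                   # 208
print(f"{hits_per_sec * sql_per_hit:.0f} SQL ops/s")  # 4167 (the post truncates to 4,166)
```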
                                        HP was generous enough not only to give us time on a Moonshot System but also access to some co-located servers for driving our load tests. In this case, 16 lower-powered ARM-based Calxeda systems that all went through the same 1-Gig ethernet connection to our Moonshot System. These came from HP’s Discovery Lab; check out our post about working with the Moonshot System for more details.
From these load-drivers we were able to run our benchmark application with up to 16 threads per server, simulating 128 simultaneous clients. In this case a typical “client” would be a web server trying to respond to a web client request. We averaged around 320 hits per-second, well above the target of 208. From what we could observe, we expect that given more capable network and client drivers we would be able to get 3 or 4 times that rate easily.
                                        Tangible Cost
                                        We have the cost of the Moonshot System itself. We also know that it can support expected load from a fairly small collection of low-end servers. In our own labs we use systems that cost around $10,000, fit in 3 rack-units and would be able to drive at least the same kind of load we’re citing here. Add a single switch at around $500 and you have a full system ready to serve blogs. That’s $70,500 total in 8.3 rack units, still under $1 per database.
I don’t know what power costs you have in your data center, but I’ve seen numbers ranging from 2.5 to 25 cents per Kilowatt-Hour. In our tests, where we saw 0.015 Watts per-database, if you assume an average rate of 13.75 cents per kWh that comes out to 0.00020625 cents per-hour per-database in energy costs. In one year, with no down-time, that would cost you $1,276.77 in total electricity fees.
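The annual electricity figure also checks out, assuming the 13.75 cents per kWh average rate picked in the post:

```python
# Annual electricity for one fully loaded system at the assumed average rate.
system_watts = 45 * 18 + 250          # 1,060 W under full load
rate_usd_per_kwh = 0.1375             # assumed average: 13.75 cents per kWh

yearly_cost = system_watts / 1000 * 24 * 365 * rate_usd_per_kwh
print(f"${yearly_cost:,.2f} per year")   # $1,276.77
```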
                                        Just as an aside, according to the New York Times, Facebook uses around 60,000,000 Watts a year!
                                        One of the great things about a Moonshot System is that the 45 servers are already being switched inside the chassis. This means that you don’t need to buy switches & cabling, and you don’t need to allocate all the associated space in your racks. For our systems administrator that alone would make him very happy.
                                        Intangible Cost
                                        What I haven’t been talking about in all of this are the intangible costs. This is where figuring out TCO becomes harder.
                                        For instance, one of the value-propositions here is that the Moonshot System is a self-contained, automated component. That means that systems administrators are freed up from the tasks of figuring out how to allocate and monitor databases, and how to size the data-center for growth. Database developers can focus more easily on their target applications. CIOs can spend less time staring at spreadsheets … or, at least, can allocate more time to spreadsheets on different topics.
                                        Providing a single number in terms of capacity makes it easy to figure out what you need in your datacenter. When a single server within a Moonshot System fails you can simply replace it, and in the meantime you know that the system will still run smoothly just with slightly lower capacity. From a provisioning point of view, all you need to figure out is where your ceiling is and how much stand-by capacity you need to have at the ready.
                                        NuoDB by its nature is dynamic, even when you’re doing upgrades. This means that you can roll through a running Moonshot System applying patches or new versions with no down-time. I don’t know how you calculate the value in saved cost here, but you probably do!
                                        Comparisons and Planned Optimizations
                                        It’s hard to do an “apples-to-apples” comparison against other database software here. Mostly, this is because other databases aren’t designed to be dynamic enough to support hibernation, bursting and capacity-based automated balancing. So, you can’t really get the same levels of density, and a lot of the “intangible” cost benefits would go away.
                                        Still, to be fair, we tried running MySQL on the same system and under the same benchmarks. We could indeed run 7200 instances, although that was already hitting the upper-bounds of memory/swap. In order to get the same density you would need 10 Moonshot Systems, or you would need larger-powered expensive servers. Either way, the power, density, automation and efficiency savings go out the window, and obviously there’s no support for bursting to more capable systems on-demand.
                                        Unsurprisingly, the response time was faster on-average (about half the time) from MySQL instances. I say “unsurprisingly” for two reasons. First, we tried to use schema/queries directly from WordPress to be fair in our comparison, and these are doing things that are still known to be less-optimized in NuoDB. They’re also in the path of what we’re currently optimizing and expect to be much faster in the near-term.
                                        The second is that NuoDB clients were originally designed assuming longer-running connections (or pooled connections) to databases that always run with security & encryption enabled. We ran all of our tests in our default modes to be fair. That means we’re spending more time on each action setting up & tearing down a connection. We’ve already been working on optimizations here that would shrink the gap pretty substantially.
                                        In the end, however, our response time is still on the order of a few hundred milliseconds worst-case, and is less important than the overall density and efficiency metrics that we proved out. We think the value in terms of ease of use, density, flexibility on load spikes and low-cost speaks for itself. This setup is inexpensive by comparison to deploying multiple servers and supports what we believe is real-world load. Just wait until the next generation of HP Project Moonshot servers roll out and we can start scaling out individual databases at the same time!

                                        More information:
                                        Benchmarking Density & Efficiency [NuoDB Techblog, April 9, 2013]
                                        Database Hibernation and Bursting [NuoDB Techblog, April 8, 2013]
An Enterprise Management UI for Project Moonshot [NuoDB Techblog, April 9, 2013]
Regarding the cloud-based version of NuoDB see:
                                        NuoDB Partners with Amazon [press release, March 26, 2013]
NuoDB Extends Database Leadership in Scalability & Performance on a Private Cloud [press release, March 14, 2013] “… the industry’s first and only patented, elastically scalable Cloud Data Management System (CDMS), announced performance of 1.84 million transactions per second (TPS) running on 32 machines. … With NuoDB Starlings release 1.0.1, available as of March 1, 2013, the company has made advancements in performance and scalability, and customers can now experience a 26% improvement in TPS per machine.”
                                        Google Compute Engine: interview with NuoDB [GoogleDevelopers YouTube channel, March 21, 2013]

                                        Meet engineers from NuoDB: an elastically scalable SQL database built for the cloud. We will learn about their approach to distributed SQL databases and get a live demo. We’ll cover the steps they took to get NuoDB running on Google Compute Engine, talk about how they evaluate infrastructure (both physical hardware and cloud), and reveal the results of their evaluation of Compute Engine performance.

Actually, Calxeda was best placed to explain the preeminence of software over the SoC itself:
                                        Karl Freund from Calxeda – HP Moonshot 2013 – theCUBE [siliconangle YouTube channel, April 8, 2013], see also HP Moonshot: It’s a lot closer than it looks! [Calxeda’s ‘ARM Servers, Now!’ blog, April 8, 2013]

                                        Karl Freund, VP of Marketing, Calxeda, at HP Moonshot 2013 with John Furrier and Dave Vellante.

as well as ending with Calxeda’s very practical, gradual approach to the ARM-based server market, with statements like:

[16:03] Our 2nd generation platform called Midway, which will be out later this year [in the 2nd half of the year], that’s probably the target for Big Data. Our current product is great for web serving, it’s great for media serving, it’s great for storage. It doesn’t have enough memory for Big Data … in a large. So we’ll be getting that 2nd generation product out, and that should be a really good Big Data platform. Why? Because it’s low power, it’s low cost, but it’s also got a lot of I/O. Big Data is all about moving a lot of data around. And if you do that more cost effectively you save a lot of money. [16:38]

mentioning also that their strategy is to use standard ARM cores like the Cortex-A57 for their H1 2014 product, and to focus on things like the fabric and the management, which allows them to work with a streamlined staff of around 150 people.

                                        Detailed background about Calxeda in a concise form:
Redefining Datacenter Efficiency: An Overview of Calxeda’s architecture and early performance measurements [Karl Freund, Nov 12, 2012], from which the core info is:

• Founded in 2008
• $103M Funding
• 1st Product Announced with HP, Nov 2011
• Initial Shipments in Q2 2012
• Volume production in Q4 2012


[figure] * The power consumed under normal operating conditions under full application load (i.e., 100% CPU utilization)

A small Calxeda Cluster: a Simple Example
• Start with four ServerNodes
• Consumes only 20W total power
• Connected via distributed fabric switches
• Connect up to 4 SATA drives per node
• Then scale this to thousands of ServerNodes (see the back-of-envelope sketch below)
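As a back-of-envelope check on that scaling claim, using only the figures above (four ServerNodes drawing about 20W in total):

```python
# ~20 W across four ServerNodes, i.e. about 5 W per node under load
WATTS_PER_NODE = 20 / 4

for nodes in (4, 48, 1_000):
    print(f"{nodes:>5} ServerNodes ~ {nodes * WATTS_PER_NODE:,.0f} W")
# 1,000 nodes come in around 5 kW; a naive estimate that ignores
# fabric, storage and cooling overheads
```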

EnergyCard: a Quad-Node Reference Design

• Four-node reference platform from Calxeda
• Available as product and/or design
• Plugs into OEM system board with passive fabric, no additional switch HW
• Delivers 80Gb of bandwidth to the system board (8 x 10Gb links)


It is also important to have a look at the Open Source Software Packages for Initial Calxeda Shipments [Calxeda’s ‘ARM Servers, Now!’ blog, May 24, 2012]:

                                        We are often asked what open-source software packages are available for initial shipments of Calxeda-based servers.

                                        Here’s the current list (changing frequently).  Let us know what else you need!

[image: list of supported open-source packages]

Then Perspectives From Linaro Connect [Calxeda’s ‘ARM Servers, Now!’ blog, March 20, 2013] sheds more light on the recent software alliances which help Calxeda deliver:

– From Larry Wikelius, Co-Founder and VP Ecosystems, Calxeda:

                                        The most recent Linaro Connect (Linaro Connect Asia 2013 – LCA), held in Hong Kong the first week of March, really put a spotlight on the incredible momentum around ARM based technology and products moving into the Data Center.  Yes – you read that correctly – the DATA CENTER!

When Linaro was originally launched almost three years ago the focus was exclusively on the mobile and client market – where ARM has been and continues to be dominant. However, as Calxeda has demonstrated, the opportunity for the ARM architecture goes well beyond devices that you carry in your pocket. Calxeda was a key driver in the formation of the Linaro Enterprise Group (LEG), which was publicly launched at the previous Linaro Connect event in Copenhagen in early November, 2012.

LEG has been an exciting development for Linaro and now has 13 member companies, including server vendors such as Calxeda, Linux distribution companies Red Hat and Canonical, OEM representation from HP and even Hyperscale Data Center end user Facebook. There were many sessions throughout the week that focused on server-specific topics such as UEFI, ACPI, Virtualization, Hyperscale Testing with LAVA and Distributed Storage. Calxeda was very active throughout the week, with the team participating directly in a number of roadmap definition sessions, presenting on Server RAS and providing guidance in key areas such as application optimization and compiler focus for Servers.

Linaro Connect is proving to be a tremendous catalyst for the growing ecosystem around the ARM software community as a whole and the server segment in particular. A great example of this was the keynote presentation given jointly by Mark Heath and Lars Kurth from Citrix on Tuesday morning. Mark is the VP of XenServer at Citrix and Lars is well known in the Open Source community for his work with Xen. The most exciting announcement coming out of Mark’s presentation is that Citrix will be joining Linaro as a member of LEG. Citrix will certainly prove to be another valuable member of the Linaro team, and during the week attendees were able to appreciate how serious Citrix is about supporting ARM servers. The Xen team has not only added full support for ARM V7 systems in the Xen 4.3 release but they have accomplished some very impressive optimizations for the ARM platform. The Xen team has leveraged Device Tree for optimal device discovery. Combined with a number of other code optimizations, they showed a dramatically smaller code base for the ARM platform. We at Calxeda are thrilled to welcome Citrix into LEG!

As an indication of the draw that the Linaro Connect conference is already having on the broader industry, the Open Compute Project (OCP) held their first International Event coincident with LCA at the same venue. The synergy between Linaro and OCP is significant, with the emphasis in both organizations on Open Source development (one software and one hardware) along with the dramatically changing design points for today’s Hyperscale Data Center. In fact the keynote at LCA on Wednesday morning really put a spotlight on how significant this is likely to be. Jason Taylor, Director of Capacity Engineering and Analysis at Facebook, presented on Facebook’s approach to ARM-based servers. Facebook’s consumption of Data Center equipment is quite stunning – Jason quoted from Facebook’s 10-Q filed in October 2012, which stated that “The first nine months of 2012 … $1.0 billion for capital expenditures” related to data center equipment and infrastructure. Clearly, with this level of investment, Facebook is extremely motivated to optimize where possible. Jason focused on the strategic opportunity for ARM-based servers in a disaggregated Data Center of the future to provide lower-cost computing capabilities with much greater flexibility.

Calxeda has been very active in building the server ecosystem for ARM-based servers. This week in Hong Kong really underscored how important that investment has become – not just for Calxeda but for the industry as a whole. Our commitment to Open Source software development in general and Linaro in particular has resulted in a thriving Linux infrastructure for ARM servers that allows Calxeda to leverage and focus on key differentiation for our end users. The Open Compute Project, which we are an active member in and have contributed to key projects such as the Knockout Storage design as well as the Open Slot Specification, demonstrates how the combination of an Open Source approach for both Software and Hardware can complement each other and can drive Data Center innovation. We are early in this journey but it is very exciting!

Calxeda will continue to invest aggressively in forums and industry groups such as these to drive the ARM-based server market. We look forward to continuing to work with the incredibly innovative partners that are members of these groups, and we are confident that more will join this exciting revolution. If you are interested in more information on these events and activities, please reach out to us directly at info@calxeda.com.

The next Linaro Connect is scheduled for early July in Dublin. We expect more exciting events and topics, and hope to see you there!

They also refer on their blog to Mobile, cloud computing spur tripling of micro server shipments this year [IHS iSuppli press release, Feb 6, 2013], which shows the general market situation well into the future:

                                        Driven by booming demand for new data center services for mobile platforms and cloud computing, shipments of micro servers are expected to more than triple this year, according to an IHS iSuppli Compute Platforms Topical Report from information and analytics provider IHS (NYSE: IHS).
                                        Shipments this year of micro servers are forecast to reach 291,000 units, up 230 percent from 88,000 units in 2012. Shipments of micro servers commenced in 2011 with just 19,000 units. However, shipments by the end of 2016 will rise to some 1.2 million units, as shown in the attached figure.

[figure: IHS iSuppli micro server shipment forecast, 2011–2016]

                                        The penetration of micro servers compared to total server shipments amounted to a negligible 0.2 percent in 2011. But by 2016, the machines will claim a penetration rate of more than 10 percent—a stunning fiftyfold jump.
Micro servers are general-purpose computers, housing single or multiple low-power microprocessors and usually consuming less than 45 watts on a single motherboard. The machines employ shared infrastructure such as power, cooling and cabling with other similar devices, allowing for an extremely dense configuration when micro servers are cascaded together.
                                        “Micro servers provide a solution to the challenge of increasing data-center usage driven by mobile platforms,” said Peter Lin, senior analyst for compute platforms at IHS. “With cloud computing and data centers in high demand in order to serve more smartphones, tablets and mobile PCs online, specific aspects of server design are becoming increasingly important, including maintenance, expandability, energy efficiency and low cost. Such factors are among the advantages delivered by micro servers compared to higher-end machines like mainframes, supercomputers and enterprise servers—all of which emphasize performance and reliability instead.”
                                        Server Salad Days
                                        Micro servers are not the only type of server that will experience rapid expansion in 2013 and the years to come. Other high-growth segments of the server market are cloud servers, blade servers and virtualization servers.
                                        The distinction of fastest-growing server segment, however, belongs solely to micro servers.
                                        The compound annual growth rate for micro servers from 2011 to 2016 stands at a remarkable 130 percent—higher than that of the entire server market by a factor of 26. Shipments will rise by double- and even triple-digit percentages for each year during the period.
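These growth figures are easy to sanity-check against each other; a quick sketch using the shipment numbers quoted above:

```python
# Sanity check of the IHS iSuppli shipment figures quoted above
units = {2011: 19_000, 2012: 88_000, 2013: 291_000, 2016: 1_200_000}

growth_2013 = units[2013] / units[2012] - 1                  # ~231%, the "up 230 percent"
cagr_2011_2016 = (units[2016] / units[2011]) ** (1 / 5) - 1  # ~129%, the quoted ~130%

print(f"2013 year-over-year growth: {growth_2013:.0%}")
print(f"2011-2016 CAGR: {cagr_2011_2016:.0%}")
```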
                                        Key Players Stand to Benefit
                                        Given the dazzling outlook for micro servers, makers with strong product portfolios of the machines will be well-positioned during the next five years—as will their component suppliers and contract manufacturers.
                                        A slew of hardware providers are in line to reap benefits, including microprocessor vendors like Intel, ARM and AMD; server original equipment manufacturers such as Dell and Hewlett-Packard; and server original development manufacturers including Taiwanese firms Quanta Computer and Wistron.
                                        Among software providers, the list of potential beneficiaries from the micro server boom extends to Microsoft, Red Hat, Citrix and Oracle. For the group of application or service providers that offer micro servers to the public, entities like Amazon, eBay, Google and Yahoo are foremost.
                                        The most aggressive bid for the micro server space comes from Intel and ARM.
                                        Intel first unveiled the micro server concept and reference design in 2009, ostensibly to block rival ARM from entering the field.
                                        ARM, the leader for many years in the mobile world with smartphone and tablet chips because of the low-power design of its central processing units, has been just as eager to enter the server arena—dominated by x86 chip architecture from the likes of Intel and a third chip player, AMD. ARM faces an uphill battle, as the majority of server software is written for x86 architecture. Shifting from x86 to ARM will also be difficult for legacy products.
                                        ARM, however, is gaining greater support from software and OS vendors, which could potentially put pressure on Intel in the coming years.
                                        Read More > Micro Servers: When Small is the Next Big Thing

Then there are a number of posts on Calxeda’s ‘ARM Servers, Now!’ blog that take on Intel directly:
                                        What is a “Server-Class” SOC? [Dec 12, 2012]
                                        Comparing Calxeda ECX1000 to Intel’s new S1200 Centerton chip [Dec 11, 2012]
which you can also find in my Intel targeting ARM based microservers: the Calxeda case [‘Experiencing the Cloud’ blog, Dec 14, 2012], with significantly wider additional information, up to binary translation from x86 to ARM with Linux

                                        See also:
                                        ARM Powered Servers: 2013 is off to a great start & it is only March! [Smart Connected Devices blog of ARM, March 6, 2013]
                                        Moonshot – a shot in the ARM for the 21st century data center [Smart Connected Devices blog of ARM, April 9, 2013]
                                        Are you running out of data center space? It may be time for a new server architecture: HP Moonshot [Hyperscale Computing Blog of HP, April 8, 2013]
                                        HP Moonshot: the HP Labs team that did some of the groundbreaking research [Innovation @ HP Labs blog of HP, April 9, 2013]
                                        HP Moonshot: An Accelerator for Hyperscale Workloads [Moor Insights White Paper, April 8, 2013]
Comparing Pattern Mining on a Billion Records with HP Vertica and Hadoop [HP Vertica blog, April 9, 2013], in which a team of HP Labs researchers shows how the Vertica Analytics Platform can be used to find patterns in a billion records in a couple of minutes, about 9x faster than Hadoop.
PCs and cloud clients are not parts of Hewlett-Packard’s strategy anymore [‘Experiencing the Cloud’, Aug 11, 2011 – Jan 17, 2012]; see the Autonomy IDOL-related content there
                                        ENCO Systems Selects HP Autonomy for Audio and Video Processing [HP Autonomy press release, April 8, 2013]

                                        HP Autonomy today announced that ENCO Systems, a global provider of radio automation and live television audio solutions, has selected Autonomy’s Intelligent Data Operating Layer (IDOL) to upgrade ENCO’s latest-generation enCaption product.

                                        ENCO Systems provides live automated captioning solutions to the broadcast industry, leveraging technology to deliver closed captioning by taking live audio data and turning it into text. ENCO Systems is capitalizing on IDOL’s unique ability to understand meaning, concepts and patterns within massive volumes of spoken and visual content to deliver more accurate speech analytics as part of enCaption3.

                                        “Many television stations count on ENCO to provide real-time closed captioning so that all of their viewers get news and information as it happens, regardless of their auditory limitations,” said Ken Frommert, director, Marketing, ENCO Systems. “Autonomy IDOL helps us provide industry-leading automated closed captioning for a fraction of the cost of traditional services.”
enCaption3 is the only fully automated speech recognition-based closed captioning system for live television that does not require speaker training. It gives broadcasters the ability to caption their programming, including breaking news and weather, any time, day or night, since it is always on and always available. enCaption3 provides captioning in near real time (with only a 3 to 6 second delay) in nearly 30 languages.
“Television networks are under increasing pressure to provide real-time closed captioning services – they face fines if they don’t, and their growing and diverse viewers demand it,” said Rohit de Souza, general manager, Power, HP Autonomy. “This is another example of a technology company integrating Autonomy IDOL to create a stronger, faster and more accurate product offering, and demonstrates yet another powerful way in which IDOL can be applied to help organizations succeed in the human information era.”

                                        Using Big Data to change the game in the Energy industry [Enterprise Services Blog of HP, Oct 24, 2012]

                                        … Tools like HP’s Autonomy that analyzes the unstructured data found in call recordings, survey responses, chat logs, e-mails, social media posts and more. Autonomy’s Intelligent Data Operating Layer (IDOL) technology uses sophisticated pattern-matching techniques and probabilistic modeling to interpret information in much the same way that humans do. …
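IDOL’s internals are proprietary, so purely as an illustration of the kind of probabilistic modeling described above, here is a toy naive-Bayes classifier in Python that scores short support-call transcripts. The training snippets and categories are invented; this is a sketch of the general technique, not Autonomy’s algorithms.

```python
import math
from collections import Counter

# Invented miniature training set: (text, label)
train = [
    ("the agent resolved my issue quickly", "positive"),
    ("great support and a fast answer", "positive"),
    ("call dropped and nobody followed up", "negative"),
    ("long hold time and a rude response", "negative"),
]

word_counts = {"positive": Counter(), "negative": Counter()}
doc_counts = Counter()
for text, label in train:
    word_counts[label].update(text.split())
    doc_counts[label] += 1

vocab = {w for counts in word_counts.values() for w in counts}

def log_score(text, label):
    """log P(label) + sum of log P(word | label), with add-one smoothing."""
    logp = math.log(doc_counts[label] / sum(doc_counts.values()))
    total = sum(word_counts[label].values())
    for w in text.split():
        logp += math.log((word_counts[label][w] + 1) / (total + len(vocab)))
    return logp

msg = "fast resolution from a great agent"
print(max(("positive", "negative"), key=lambda label: log_score(msg, label)))
```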

                                        Stouffer Egan turns the tables on computers in keynote address at HP Discover [Enterprise Services Blog of HP, June 8, 2012]

For decades now, the human mind has adjusted itself to computers by providing and retrieving structured data in two-dimensional worksheets with constraints on format, data types, list of values, etc. But this is not the way the human mind has been architected to work. Our minds have the uncanny ability to capture the essence of what is being conveyed in a facial expression in a photograph, the tone of voice or inflection in an audio recording and the body language in a video. At the HP Discover conference, Autonomy VP for the United States, Stouffer Egan, showed the audience how software can begin to do what the human mind has been doing since the dawn of time. In a demonstration where Iron Man came live out of a two-dimensional photograph, Egan turned the tables on computers. It is about time computers started thinking like us rather than forcing us to think like them.
Egan states that the “I” in IT is where the change is happening. We have a newfound wealth of data through various channels including video, social, click stream, audio, etc. However, data without any analysis is just that: raw data. For enterprises to realize business value from this unstructured data, we need tools that can process it across multiple media. Imagine software that recognizes the picture in a photograph and searches for a video matching the person in the picture. The cover page of a newspaper showing a basketball star doing a slam dunk suddenly turns live, pulling up the video of this superstar’s winning shot in last night’s game. …


                                        2. Software Partners

HP Moonshot is setting the roadmap for next generation data centers by changing the model for density, power, cost and innovation. Ubuntu has been designed to meet the needs of Hyperscale customers and, combined with its management tools, is ideally suited to be the operating system platform for HP Moonshot. Canonical has been working with HP since the beginning of the Moonshot Project, and Ubuntu is the only OS integrated and fully operational across the complete Moonshot System, covering x86 and ARM chip technologies.
                                        What Canonical is saying about HP Moonshot
Citrix:
                                        As mobile workstyles become the norm, the scalability needs of today’s applications and devices are increasingly challenging what traditional infrastructures can support. With HP’s Moonshot System, customers will be able to rapidly deploy, scale, and manage any workload with dramatically lower space and energy constraints. The HP Pathfinder Innovation Ecosystem is a prime opportunity for Citrix to help accelerate the development of innovative solutions that will benefit our enterprise cloud, virtualization and mobility customers.
Cloudera:
                                        We’re committed to helping enterprises achieve the most from their Big Data initiatives. Our partnership with HP enables joint customers to keep and query their data at scale so they can ask bigger questions and get bigger answers. By using HP’s Moonshot System, our customers can benefit from the improved resource utilization of next generation data center solutions that are workload optimized for specific applications.
                                         
Couchbase: Today’s interactive applications are accessed 24×365 by millions of web and mobile users, and the volume and velocity of data they generate is growing at an unprecedented rate. Traditional technologies are hard pressed to keep up with the scalability and performance demands of these new applications. Couchbase NoSQL database technology combined with HP’s Moonshot System is a powerful offering for customers who want to easily develop interactive web and mobile applications and run them reliably at scale.
CyWee: Our partnership with HP facilitates CyWee’s goal of offering solutions that merge the digital and physical worlds. With TI’s new SoCs, we are one step closer to making this a reality by pushing state-of-the-art video to specialized server environments. Together, CyWee and HP will deliver richer multimedia experiences in a variety of cloud-based markets, including cloud gaming, virtual office, video conferencing and remote education.
DataStax:
                                        HP’s new Moonshot System will enable organizations to increase the energy efficiency of their data centers while reducing costs. Our Cassandra-based database platform provides the massive scalability and multi-datacenter capabilities that are a perfect complement to this initiative, and we are excited to be working with HP to bring this solution to a wide range of customers.
Hortonworks:
Big data comes in a wide range of formats and types and is a result of the connected-everything world we live in. Through Project Moonshot, HP has enabled a new class of infrastructure to run more efficient workloads, like Apache Hadoop, and meet the market demand of more performance for less.
HP Autonomy:
                                        The unprecedented volume and variety of data introduces unique challenges to organizations today… By combining the HP Moonshot system with Autonomy IDOL’s unique ability to understand concepts in information, organizations can dramatically reduce the cost, space, and energy requirements for their big data initiatives, and at the same time gain insights that grow revenue, reduce risk, and increase their overall Return on Information.
HP Vertica:
                                        Big Data is not just for Big Companies – or Big Servers – anymore – it’s affecting all sectors of the market. At HP Vertica we’re very excited about the work we’ve been doing with the Moonshot team on innovative configurations and types of analytic appliances which will allow us to bring the benefits of real-time Big Data analytics to new segments of the market. The combination of the HP Vertica Analytics Platform and Moonshot is going to be a game-changer for many.
                                        HP worked closely with Linaro to establish the Linaro Enterprise Group (LEG). This will help accelerate the development of the software ecosystem around ARM Powered servers. HP’s Moonshot System is a great platform for innovation – encouraging a wide range of silicon vendors to offer competing ‘plug-and-play’ server solutions, which will give end users maximum choice for all their different workloads.
What Linaro is saying about HP Moonshot [HewlettPackardVideos YouTube channel, April 8, 2013]
MapR:
                                        Organizations are looking for ways to rapidly deploy, scale, and manage their infrastructure, with an architecture that is optimized for today’s application workloads. HP Moonshot System is an energy efficient, space saving, workload-optimized solution to meet these needs, and HP has partnered with MapR Technologies, a Hadoop technology leader, to accelerate innovation and deployment of Big Data solutions.
                                        NuoDB and HP are shattering the scalability and density barriers of a traditional database server. NuoDB on the HP Moonshot System delivers unparalleled database density, where customers can now run their applications across thousands of databases on a single box, significantly reducing the total cost across hardware, software, and power consumption. The flexible architecture of HP Moonshot coupled with NuoDB’s hyper-pluggable database design and its innovative “database hibernation” technology makes it possible to bring this unprecedented hardware and software combination to market.
                                        What NuoDB is saying about HP Moonshot [HewlettPackardVideos YouTube channel, April 9, 2013]
Parallels:
                                        As the leading solution provider for the hosting market, Parallels is excited to be collaborating in the HP Pathfinder Innovation Ecosystem. The HP Moonshot System in concert with Parallels Plesk Panel and Parallels Containers provides a flexible and efficient solution for cloud computing and hosting.
                                        Red Hat Enterprise Linux on HP’s converged infrastructure means predictability, consistency and stability. Companies around the globe rely on these attributes when deploying applications every day, and our value proposition is just as important in the Hyperscale segment. When customers require a standard operating environment based on Red Hat Enterprise Linux, I believe they will look to the HP Moonshot System as a strong platform for high-density Hyperscale implementations.
                                        What Red Hat is saying about HP Moonshot [HewlettPackardVideos YouTube channel, April 8, 2013]
                                        HP Project Moonshot’s promise of extreme low-energy servers is a game changer, and SUSE is pleased to partner with HP to bring this new innovation to market. For more than twenty years, SUSE has adapted its enterprise-grade Linux operating system to achieve ever-increasing performance needs that succeed both today and tomorrow in areas such as Big Data and cloud computing.
                                        What SUSE is saying about HP Moonshot [HewlettPackardVideos YouTube channel, April 8, 2013]


                                        3. Hardware Partners

                                        AMD is excited to continue our deep collaboration with HP to bring extreme low-energy, ultra dense, specialized server solutions to the market. Both companies share a passion to bring innovative workload optimized solutions to the market, enabling customers to scale-out to new levels within existing energy and space constraints. The new low-power x86 AMD Opteron™ APU is optimized in the HP Moonshot System to dramatically lower TCO in quickly emerging media oriented workloads.
                                        What AMD is saying about HP Moonshot

                                        It is exciting to see HP take the lead in innovating low-energy servers for the cloud. Applied Micro’s ARM 64-bit X-Gene Server on a Chip will enable performance levels seen in today’s deployments while offering higher densities, greatly improved I/O, and substantial reductions in the total cost of ownership. Together, we will unleash innovation unlike anything we’ve seen in the server market for decades.

                                        What Applied Micro is saying about HP Moonshot

In the current economic and power realities, today’s server infrastructure cannot meet the needs of the next billion data users, or the evolving needs of currently supported users. Customers need innovative SoC solutions which deliver more integration and optimization than has historically been required by traditional enterprise workloads. HP’s Moonshot System is a departure from the one-size-fits-all approach of traditional enterprise and embraces a range of ARM partner solutions that address different performance, workload and cost points.
What ARM is saying about HP Moonshot
Calxeda and HP’s new Moonshot System are a powerful combination, setting a new standard for ultra-efficient web and application serving. Fulfilling a journey started together in November 2011, Project Moonshot creates the foundation for the new age of application-specific computing.
                                        What Calxeda is saying about HP Moonshot
HP Moonshot System is a game changer for delivering optimized server solutions. It beautifully balances the need for mixing different processor solutions optimized for different workloads under a standard hardware and software framework. Cavium’s Project Thunder will provide a family of 64-bit ARM v8 processors with dense and scalable server-class performance at extremely attractive power and cost metrics. We are doing this by blending performance- and power-efficient compute, high-performance memory and networking into a single, highly integrated SoC.
                                        What Cavium is saying about HP Moonshot
Intel is proud to deliver the only server-class, 64-bit SoC technology that powers the first and only production-shipping HP ProLiant Moonshot Server today. The 64-bit Intel Atom processor S1200 family features extremely low power combined with required datacenter-class capabilities for lightweight web-scale workloads, such as low-end dedicated hosting and static web serving. In collaboration with HP, we have a strong roadmap of additional server solutions shipping later this year, including Intel’s 2nd generation 64-bit SoC, “Avoton”, based on leading 22nm manufacturing technology, that will deliver best-in-class energy efficiency and density for the HP Moonshot System.
                                        What Intel is saying about HP Moonshot
What Marvell is saying about HP Moonshot
                                        HP Moonshot System’s high density packaging coupled with integrated network capability provides the perfect platform to enable HP Pathfinder Innovation Ecosystem partners to deliver cutting edge technology to the hyper-scale market. SRC Computers is excited to bring its history of delivering paradigm shifting high-performance, low-power, reconfigurable processors to HP Project Moonshot’s vision of optimizing hardware for maximum application performance at lowest TCO.
                                        What SRC Computers is saying about HP Moonshot
                                        The scalability and high performance at low power offered through HP’s Moonshot System gives customers an unmatched ability to adapt their solutions to the ever-changing and demanding market needs in the high performance computing, cloud computing and communications infrastructure markets. The strong collaboration efforts between HP and TI through the HP Pathfinder Innovation Ecosystem ensure that customers understand and get the most benefit from the processors at a system-level.
                                        What TI is saying about HP Moonshot