
Category Archives: Cloud SW engineering

OpenStack adoption (by Q1 2016)

OpenStack Promise as per Moogsoft, June 3, 2015

For information on OpenStack provided earlier on this blog, see:
– Disaggregation in the next-generation datacenter and HP’s Moonshot approach for the upcoming HP CloudSystem “private cloud in-a-box” with the promised HP Cloud OS based on the 4 years old OpenStack effort with others, ‘Experiencing the Cloud’, Dec 10, 2013
– Red Hat Enterprise Linux OpenStack Platform 4 delivery and Dell as the first company to OEM it co-engineered on Dell infrastructure with Red Hat, ‘Experiencing the Cloud’, Feb 19, 2014
To understand the state of OpenStack technology development (at its "V4" level) as of June 25, 2015:
– go to my homepage: https://lazure2.wordpress.com/
– or to the OpenStack-related part of Microsoft Cloud state-of-the-art: Hyper-scale Azure with host SDN — IaaS 2.0 — Hybrid flexibility and freedom, ‘Experiencing the Cloud’, July 11, 2015

May 19, 2016:

Oh, the places you’ll go with OpenStack! by Mark Collier, OpenStack Foundation COO on ‘OpenStack Superuser’:

With OpenStack in tow you’ll go far — be it your house, your bank, your city or your car.

Just look at all of the exciting places we’re going:

From the phone in your pocket

The telecom industry is undergoing a massive shift, away from hundreds of proprietary devices in thousands of central offices accumulated over decades, to a much more efficient and flexible software-plus-commodity-hardware approach. While some carriers like AT&T have already begun routing traffic from their 4G networks over OpenStack-powered clouds to millions of cellphone users, the major wave of adoption is coming with the move to 5G, including plans from AT&T, Telefonica, SK Telecom, and Verizon.

We are on the cusp of a revolution that will completely re-imagine what it means to provide services in the trillion dollar telecom industry, with billions of connected devices riding on OpenStack-powered infrastructure in just a few years.

To the living room socket

The titans of TV like Comcast, DirecTV, and Time Warner Cable all rely on OpenStack to bring the latest entertainment to our homes efficiently, and innovators like DigitalFilm Tree are producing that content faster than ever thanks to cloud-based production workflows.

Your car, too, will get smart

Speaking of going places, back here on earth many of the world's top automakers, such as BMW and the Volkswagen Group, which includes Audi, Lamborghini, and even Bentley, are designing the future of transportation using OpenStack and big data. The hottest trends to watch in the auto world are electric zero-emissions cars and self-driving cars. Like the "smart city" efforts described below, a proliferation of sensors plus connectivity calls for distributed systems to bring it all together, creating a huge opportunity for OpenStack.

And your bank will take part

Money moves faster than ever, with digital payments from startups and established players alike competing for consumer attention. Against this backdrop of enormous market change, banks must meet an increasingly rigid set of regulatory rules, not to mention growing security threats. To empower their developers to innovate while staying diligent on regs and security, financial leaders like PayPal, FICO, TD Bank, American Express, and Visa are adopting OpenStack.

Your city must keep the pace

Powering the world’s cities is a complex task and here OpenStack is again driving automation, this time in the energy sector. State Grid Corporation, the world’s largest electric utility, serves over 120 million customers in China while relying on OpenStack in production.

Looking to the future, cities will be transformed by the proliferation of fast networks combined with cheap sensors. Unlocking the power of this mix are distributed systems, including OpenStack, to process, store, and move data. Case in point: tcpcloud in Prague is helping introduce "smart city" technology by utilizing inexpensive Raspberry Pis embedded in street poles, backed by a distributed system based on Kubernetes and OpenStack. These systems give city planners insight into traffic flows of both pedestrians and cars, and even measure air quality. By routing not just packets but people, cities are literally load balancing their way to lower congestion and pollution.

From inner to outer space

The greatest medical breakthroughs of the next decade will come from analyzing massive data sets, thanks to the proliferation of distributed systems that put supercomputer power into the hands of every scientist. And OpenStack has a huge role to play empowering researchers all over the globe: from Melbourne to Madrid, Chicago to Chennai, or Berkeley to Beijing, everywhere you look you’ll find OpenStack.

To explore this world, I recently visited the Texas Advanced Computing Center (TACC) at the University of Texas at Austin, where I toured a facility that houses one of the top 10 supercomputers in the world, code-named "Stampede."

But what really got me excited about the future was the sight of two large OpenStack clusters: one called Chameleon, and the newest addition, Jetstream, which put the power of more than 1,000 nodes and more than 15,000 cores into the hands of scientists at 350 universities. In fact, the Chameleon cloud was recently used in a class at the University of Arizona by students looking to discover exoplanets. Perhaps the next Neil deGrasse Tyson is out there using OpenStack to find a planet to explore for NASA's Jet Propulsion Laboratory.

Where should we go next?

Mark Collier is OpenStack co-founder, and currently the OpenStack Foundation COO. This article was first published in Superuser Magazine, distributed at the Austin Summit.

May 9, 2016:

From OpenStack Summit Austin, Part 1: Vendors digging in for long haul by Al Sadowski, 451 Research, LLC. This report provides highlights from the most recent OpenStack Summit.

THE 451 TAKE

OpenStack mindshare continues to grow for enterprises interested in deploying cloud-native applications in greenfield private cloud environments. However, its appeal is limited for legacy applications and enterprises sold on hyperscale multi-tenant cloud providers like AWS and Azure. There are several marquee enterprises with OpenStack as the central component of cloud transformations, but many are still leery of the perceived complexity of configuring, deploying and maintaining OpenStack-based architectures. Over the last few releases, processes for installation and upgrades, tooling, and API standardization across projects have improved as operators have become more vocal during the requirements phase. Community membership continues to grow on a global basis, and the supporting organization also depicts a similar geographic trend.

…  Horizontal scaling of Nova is much improved, based on input from CERN and Rackspace. CERN, an early OpenStack adopter, demonstrated the ability for the open source platform to scale – it now has 165,000 cores running OpenStack. However, Walmart, PayPal and eBay are operating larger OpenStack environments.

May 18, 2015:

Walmart's Cloud Journey by Amandeep Singh Juneja,
Sr. Director, Cloud Engineering and Operations, WalmartLabs: an introduction to the world's largest retailer and its journey to build a large private cloud.

Amandeep Singh Juneja is Senior Director for Cloud Operations and Engineering at WalmartLabs. In his current role, Amandeep is responsible for the build-out of the elastic cloud used by various Walmart eCommerce properties. Prior to joining WalmartLabs, Amandeep held various leadership roles at HP, WebOS (Palm) and eBay.

May 19, 2015:

OpenStack Update from eBay and PayPal by Subbu Allamaraju
Chief Engineer, Cloud, eBay Inc: the journey and future of OpenStack at eBay and PayPal.

Subbu is the Chief Engineer of cloud at eBay Inc. His team builds and operates a multi-tenant, geographically distributed, OpenStack-based private cloud. This cloud now serves 100% of PayPal web and mid-tier workloads, significant parts of the eBay front end and services, and thousands of users for their dev/test activities.

May 18, 2015:

Architecting Organizational Change at TD Bank by Graeme Peacock, VP Engineering, TD Bank Group

Graeme cut his teeth in the financial services consulting industry by designing and developing real-time Trading, Risk and Clearing applications. He then joined NatWest Markets and J.P. Morgan in executive level roles within the Equity Derivatives business lines.
Graeme then moved to a Silicon Valley Startup to expand his skillset as V.P. of Engineering at Application Networks. His responsibility extended to Strategy, Innovation, Product Development, Release Management and Support to some of the biggest names in the Financial Services Sector.
For the last 10 years, he has held Divisional CIO roles at Citigroup and Deutsche Bank, both of which saw him responsible for Credit, Securitized and Emerging Market businesses.
Graeme moved back to a V.P. of Engineering role at TD Bank Group several years ago. He currently oversees all Infrastructure Innovation — everything from Mobile and Desktop to Database, Middleware and Cloud. His focus is on the transformational: software development techniques, infrastructure design patterns, and DevOps processes.

TD Bank uses cloud as catalyst for cultural change in IT
May 18, 2015 Written by Jonathan Brandon for Business Cloud News

North American retail banking outfit TD Bank is using OpenStack among a range of other open source cloud technologies to help catalyse cultural change as it looks to reduce costs and technology redundancy, explained TD Bank group vice president of engineering Graeme Peacock.

TD Bank is one of Canada’s largest retail banks, having divested many of its investment banking divisions over the past ten years while buying up smaller American retail banks in a bid to offer cross-border banking services.
Peacock, who was speaking at the OpenStack Summit in Vancouver this week, said TD Bank is in the midst of a massive transition in how it procures, deploys and consumes technology. The bank aims to have about 80 per cent of its 4,000 application estate moved over to the cloud over the next five years.
“If they can’t build it on cloud they need to get my permission to obtain a physical server. Which is pretty hard to get,” he said.
But the company’s legacy of acquisition over the past decade has shaped the evolution of both the technology and systems in place at the bank as well as the IT culture and the way those systems and technologies are managed.
“Growing from acquisition means we’ve developed a very project-based culture, and you’re making a lot of transactional decisions within those projects. There are consequences to growing through acquisition – TD is very vendor-centric,” he explained.
"There are a lot of vendors here and I'm fairly certain we've bought at least one of everything you've ever made. That's led to the landscape that we've had, which has lots of customisation. It's very expensive and there is little reuse."
Peacock said much of what the bank wants to do is fairly straightforward: moving off highly customised, expensive equipment and services onto more open, standardised commodity platforms, with OpenStack being one infrastructure-centric tool helping the bank deliver on that goal (it's using it to stand up an internal private cloud). But the company also has to deal with other legacies of its recent string of acquisitions, including development teams that are still quite siloed, in order to reach its goals.
In order to standardise and reduce the number of services the firm's developers use, the bank created an engineering centre in Manhattan and assembled a team of engineers and developers (currently numbering 30, expected to hit roughly 50 by the end of the year) spread between Toronto and New York City, all focused on helping it embrace a cloud-first, slimmed-down application landscape.
The centre and the central engineering team work with other development teams and infrastructure specialists across the bank, collecting feedback through fortnightly Q&As and feeding that back into the solutions being developed and the platforms being procured. Solving developer team fragmentation will ultimately help the bank move forward on this new path sustainably, he explained.
“When your developer community is so siloed you don’t end up adopting standards… you end up with 27 versions of Softcat. Which we have, by the way,” he said.
“This is a big undertaking, and one that has to be continuous. Business lines also have to move with us to decompose those applications and help deliver against those commitments,” he added.

May 9, 2016: From OpenStack Summit Austin, Part 1: Vendors digging in for long haul continued:

While OpenStack may have been conceived as an open source multi-tenant IaaS, its future success will mainly come from hosted and on-premises private cloud deployments. Yes, there are many pockets of success with regional or vertical-focused public clouds based on OpenStack, but none with the scale of AWS or the growth of Microsoft Azure. Hewlett Packard Enterprise shuttered its OpenStack Helion-based public cloud, and Rackspace shifted engineering resources away from its own public cloud. Rackspace, the service provider with the largest share of OpenStack-related revenue, says its private cloud is growing in the ‘high double digits.’ Currently, 56% of OpenStack’s service-provider revenue total is public cloud-based, but we expect private cloud will account for a larger portion over the next few years.

October 21, 2015:

A new model to deliver public cloud by Bill Hill, SVP and GM, HP Cloud

Over the past several years, HP has built its strategy on the idea that a hybrid infrastructure is the future of enterprise IT. In doing so, we have committed to helping our customers seamlessly manage their business across traditional IT and private, managed or public cloud environments, allowing them to optimize their infrastructure for each application’s unique requirements.
The market for hybrid infrastructure is evolving quickly. Today, our customers are consistently telling us that in order to meet their full spectrum of needs, they want a hybrid combination of efficiently managed traditional IT and private cloud, as well as access to SaaS applications and public cloud capabilities for certain workloads. In addition, they are pushing for delivery of these solutions faster than ever before.
With these customer needs in mind, we have made the decision to double down on our private and managed cloud capabilities. For cloud-enabling software and solutions, we will continue to innovate and invest in our HP Helion OpenStack® platform. HP Helion OpenStack® has seen strong customer adoption and now runs our industry-leading private cloud solution, HP Helion CloudSystem, which continues to deliver strong double-digit revenue growth and win enterprise customers. On the cloud services side, we will focus our resources on our Managed and Virtual Private Cloud offerings. These offerings will continue to expand, and we will have some very exciting announcements on these fronts in the coming weeks.

Public cloud is also an important part of our customers’ hybrid cloud strategy, and our customers are telling us that the lines between all the different cloud manifestations are blurring. Customers tell us that they want the ability to bring together multiple cloud environments under a flexible and enterprise-grade hybrid cloud model. In order to deliver on this demand with best-of-breed public cloud offerings, we will move to a strategic, multiple partner-based model for public cloud capabilities, as a component of how we deliver these hybrid cloud solutions to enterprise customers.

Therefore, we will sunset our HP Helion Public Cloud offering on January 31, 2016. As we have before, we will help our customers design, build and run the best cloud environments suited to their needs – based on their workloads and their business and industry requirements.

To support this new model, we will continue to aggressively grow our partner ecosystem and integrate different public cloud environments. To enable this flexibility, we are helping customers build cloud-portable applications based on HP Helion OpenStack® and the HP Helion Development Platform. In Europe, we are leading the Cloud28+ initiative that is bringing together commercial and public sector IT vendors and EU regulators to develop common cloud service offerings across 28 different countries.
For customers who want access to existing large-scale public cloud providers, we have already added greater support for Amazon Web Services as part of our hybrid delivery with HP Helion Eucalyptus, and we have worked with Microsoft to support Office 365 and Azure. We also support our PaaS customers wherever they want to run our Cloud Foundry platform – in their own private clouds, in our managed cloud, or in a large-scale public cloud such as AWS or Azure.
All of these are key elements in helping our customers transform into a hybrid, multi-cloud IT world. We will continue to innovate and grow in our areas of strength, we will continue to help our partners and to help develop the broader open cloud ecosystem, and we will continue to listen to our customers to understand how we can help them with their entire end-to-end IT strategies.

 December 1, 2015:

Hewlett Packard Enterprise and Microsoft announce plans to deliver integrated hybrid IT infrastructure press release

London, U.K. – December 1, 2015 – Today at Hewlett Packard Enterprise Discover, HPE and Microsoft Corp. announced new innovation in Hybrid Cloud computing through Microsoft Azure, HPE infrastructure and services, and new program offerings. The extended partnership appoints Microsoft Azure as a preferred public cloud partner for HPE customers while HPE will serve as a preferred partner in providing infrastructure and services for Microsoft’s hybrid cloud offerings.

“Hewlett Packard Enterprise is committed to helping businesses transform to hybrid cloud environments in order to drive growth and value,” said Meg Whitman, President and CEO, Hewlett Packard Enterprise. “Public cloud services, like those Azure provides, are an important aspect of a hybrid cloud strategy and Microsoft Azure blends perfectly with HPE solutions to deliver what our customers need most.”
The partnering companies will collaborate across engineering and services to integrate innovative compute platforms that help customers optimize their IT environment, leverage new consumption models and accelerate their business further, faster.
“Our mission to empower every organization on the planet is a driving force behind our broad partnership with Hewlett Packard Enterprise that spans Microsoft Azure, Office 365 and Windows 10,” said Satya Nadella, CEO, Microsoft. “We are now extending our longstanding partnership by blending the power of Azure with HPE’s leading infrastructure, support and services to make the cloud more accessible to enterprises around the globe.”
Product Integration and Collaboration
HPE and Microsoft are introducing the first hyper-converged system with true hybrid cloud capabilities, the HPE Hyper-Converged 250 for Microsoft Cloud Platform System Standard. Bringing together industry-leading HPE ProLiant technology and Microsoft Azure innovation, the jointly engineered solution brings Azure services to customers' datacenters, empowering users to choose where and how they want to leverage the cloud. An Azure management portal enables business users to self-deploy Windows and Linux workloads, while ensuring IT has central oversight. Azure services provide reliable backup and disaster recovery, and with HPE OneView for Microsoft System Center, customers get an integrated management experience across all system components. HPE offers hardware and software support, installation and startup services to customers to speed deployment to just a matter of hours, lower risk and decrease total cost of ownership. The CS 250 is available to order today.
As part of the expanded partnership, HPE will enable Azure consumption and services on every HPE server, which allows customers to rapidly realize the benefits of hybrid cloud.
Extended Support and Services to Simplify Cloud
HPE and Microsoft will create HPE Azure Centers of Excellence in Palo Alto, Calif. and Houston, Texas, to ensure customers have a seamless hybrid cloud experience when leveraging Azure across HPE infrastructure, software and services. Through the work at these centers, both companies will invest in continuing advancements in Hybrid IT and Composable Infrastructure.
Because Azure is a preferred provider of public cloud for HPE customers, HPE also plans to certify an additional 5,000 Azure Cloud Architects through its Global Services Practice. This will extend its Enterprise Services offerings to bring customers an open, agile hybrid cloud with improved security that integrates with Azure.
Partner Program Collaboration
Microsoft will join the HPE Composable Infrastructure Partner Program to accelerate innovation for the next-generation infrastructure and advance the automation and integration of Microsoft System Center and HPE OneView orchestration tools with today’s infrastructure.
Likewise, HPE joined two Microsoft programs that help customers accelerate their hybrid cloud journey through end-to-end cloud, mobility, identity and productivity solutions. As a participant in Microsoft’s Cloud Solution Provider program, HPE will sell Microsoft cloud solutions across Azure, the Microsoft Enterprise Mobility Suite and Office 365.

May 9, 2016: From OpenStack Summit Austin, Part 1: Vendors digging in for long haul continued:

VENDOR DEVELOPMENTS

As of the Mitaka release, two new Gold Members were added: UnitedStack and EasyStack, both from China. Other service providers and vendors shared their customer momentum and product updates with 451 Research during the summit. Among the highlights are:

  • AT&T has cobbled together a DevOps team from 67 different organizations, in order to transform into a software company.
  • All of GoDaddy's new servers are going into its OpenStack environment. It is also using the Ironic (bare metal) project and exploring containers on OpenStack.
  • SwiftStack built a commercial product with an AWS-like consumption model using the Swift (object storage) project. It now has over 60 customers, including eBay, PayPal, Burton Snowboards and Ancestry.com.
  • OVH is based in France and operates a predominantly pan-European public cloud. It added Nova compute in 2014, and currently has 75PB on Swift storage.
  • Unitas Global says OpenStack-related enterprise engagements are a large part of its 100% Y/Y growth. While it does not contribute code, it is helping to develop operational efficiencies and working with Canonical to deploy 'vanilla' OpenStack using Juju charms. Tableau Software is a client.
  • DreamHost is operating an OpenStack public cloud, DreamCompute, and is a supporter of the Astara (network orchestration) project. It claims 2,000 customers for DreamCompute and 10,000 customers for its object storage product.
  • Platform9 is a unique OpenStack-as-SaaS startup with 20 paying customers. Clients bring their own hardware, and the software provides the management functions and takes care of patching and upgrades.
  • AppFormix is a software startup focused on cloud operators and application developers that has formed a licensing agreement with Rackspace. Its analytics and capacity-planning dashboard software will now be deployed on Rackspace's OpenStack private cloud. The software also works with Azure and AWS.
  • Tesora is leveraging the Trove project to offer DBaaS. The vendor built a plug-in for Mirantis' Fuel installer. The collaboration claims to make commercial, open source relational and NoSQL databases easier for administrators to deploy.

April 25, 2016:

AT&T’s Cloud Journey with OpenStack by Sorabh Saxena SVP, Software Development & Engineering, AT&T

OpenStack + AT&T Innovation = AT&T Integrated Cloud.

AT&T’s network has experienced enormous growth in traffic in the last several years and the trend continues unabated. Our software defined network initiative addresses the escalating traffic demands and brings greater agility and velocity to delivering features to end customers. The underlying fabric of this software defined network is AT&T Integrated Cloud (AIC).

Sorabh Saxena, AT&T’s SVP of Software Development & Engineering, will share several use cases that will highlight a multi-dimensional strategy for delivering an enterprise & service provider scale cloud. The use cases will illustrate OpenStack as the foundational element of AIC, AT&T solutions that complement it, and how it’s integrated with the larger AT&T ecosystem.

http://att.com/ecomp


As the Senior Vice President of Software Development and Engineering at AT&T, Sorabh Saxena is leading AT&T’s transformation to a software-based company.  Towards that goal, he is leading the development of platforms that include AT&T’s Integrated Cloud (AIC), API, Data, and Business Functions. Additionally, he manages delivery and production support of AT&T’s software defined network.

Sorabh and his organization are also responsible for technology solutions and architecture for all IT projects, AT&T Operation Support Systems and software-driven business transformation programs that are positioning AT&T to be a digital-first, integrated communications company with a best-in-class cost structure. Sorabh is also championing a cultural shift with a focus on workforce development and software & technology skills development.

Through Sorabh and his team’s efforts associated with AIC, AT&T is implementing an industry leading, highly complex and massively scaled OpenStack cloud.  He is an advocate of OpenStack and his organization contributes content to the community that represents the needs of large enterprises and communication services providers.

April 25, 2016:

And the Superuser Award goes to… AT&T takes the fourth annual Superuser Award.

AUSTIN, Texas — The OpenStack Austin Summit kicked off day one by awarding the Superuser Award to AT&T.

NTT, winners of the Tokyo edition, passed the baton onstage to the crew from AT&T.

AT&T is a legacy telco which is transforming itself by adopting virtual infrastructure and a software defined networking focus in order to compete in the market and create value for customers in the next five years and beyond. They have almost too many OpenStack accomplishments to list–read their full application here.


Sorabh Saxena gives a snapshot of AT&T's OpenStack projects during the keynote.

The OpenStack Foundation launched the Superuser Awards to recognize, support and celebrate teams of end-users and operators that use OpenStack to meaningfully improve their businesses while contributing back to the community.

The legacy telecom is in the top 20 percent for upstream contributions with plans to increase this significantly in 2016.

It’s time for the community to determine the winner of the Superuser Award to be presented at the OpenStack Austin Summit. Based on the nominations received, the Superuser Editorial Advisory Board conducted the first round of judging and narrowed the pool to four finalists.

Now, it’s your turn.

The team from AT&T is one of the four finalists. Review the nomination criteria below, check out the other nominees and cast your vote before the deadline, Friday, April 8 at 11:59 p.m. Pacific Daylight Time. Voting is limited to one ballot per person.

How has OpenStack transformed your business?

AT&T is a legacy telco which is transforming itself by adopting virtual infrastructure and a software defined networking focus in order to compete in the market and create value for customers in the next five years and beyond.

  1. Virtualization and virtual network functions (VNFs) are of critical importance to the telecom industry to address growth and agility. AT&T's Domain 2.0 Industry Whitepaper, released in 2013, outlines the need as well as the direction.
  2. AT&T chose OpenStack as the core foundation of its cloud and virtualization strategy.
  3. OpenStack has reinforced AT&T's open source strategy and strengthened our dedication to the community as we actively promote and invest resources in OpenStack.
  4. AT&T is committing staff and resources to drive vision and innovation in the OpenStack and OPNFV communities, helping establish OpenStack as the default cloud orchestrator for the telecom industry.
  5. AT&T, as a founding member of the ETSI ISG for network functions virtualization (NFV), helped drive OpenStack as the cloud orchestrator in the NFV platform framework. OpenStack was positioned as the VIM – Virtual Infrastructure Manager. This accelerated the convergence of the telco industry onto OpenStack.

OpenStack serves as a critical foundation for AT&T’s software-defined networking (SDN) and NFV future and we take pride in the following:

  • AT&T has deployed 70+ OpenStack (Juno- and Kilo-based) clouds globally, which are currently operational. Of the 70+ clouds, 57 are production application and network clouds.
  • AT&T plans 90% growth, going to 100+ production application and network clouds by the end of 2016.
  • AT&T connects more than 14 million wireless customers via virtualized networks, with significant subscriber cut-over planned again in 2016.
  • AT&T controls 5.7% of its network resources (29 telco production-grade VNFs) with OpenStack, with plans to reach 30% by the end of 2016 and 75% by 2020.
  • AT&T trained more than 100 staff in OpenStack in 2015.

AT&T plans to expand its community team to 50+ employees in 2016. As the chosen cloud platform, OpenStack has enabled AT&T in the following SDN- and NFV-related initiatives:

  • Our recently announced 5G field trials in Austin
  • Re-launch of unlimited data to mobility customers
  • Launch of AT&T Collaborate, a next-generation communication tool for enterprises
  • Provisioning of a Network on Demand platform to more than 500 enterprise customers
  • Connected Car and MVNO (Mobile Virtual Network Operator)
  • Mobile Call Recording
  • Internally, we are virtualizing control services like DNS, NAT, NTP, DHCP, RADIUS, firewalls, load balancers, and probes for fault and performance management.

Since 2012, AT&T has developed all of our significant new applications in a cloud native fashion hosted on OpenStack. We also architected OpenStack to support legacy apps.

  • AT&T’s SilverLining Cloud (predecessor to AIC) leveraged the OpenStack Diablo release, dating as far back as 2011
  • OpenStack currently resides on over 15,000 VMs worldwide, with the expectation of further, significant growth coming in 2016-17
  • AT&T’s OpenStack integrated Orchestration framework has resulted in a 75% reduction in turnaround time for requests for virtual resources
  • AT&T Plans to move 80% of our Legacy IT into the OpenStack based virtualized cloud environment within coming years
  • A uniform set of APIs exposed by OpenStack allows AT&T business units to leverage a "develop-once-run-everywhere" set of tools. OpenStack supports AT&T's strategy of adopting best-of-breed solutions at five-nines reliability for:
    • NFV
    • Internet-scale storage service
    • SDN
  • Putting all AT&T's workloads on one common platform

Deployment automation: OpenStack modules have enabled AT&T to cost-effectively manage the OpenStack configuration in an automated, holistic fashion.
  • Using OpenStack Heat, AT&T pushed rolling updates and incremental changes across 70+ OpenStack clouds; doing this manually would take many more people and a much longer schedule (a hedged sketch follows this list).
  • Using OpenStack Fuel as a pivotal component in its cloud deployments, AT&T accelerates the otherwise time-consuming, complex, and error-prone process of deploying, testing, and maintaining various configuration flavors of OpenStack at scale. AT&T was a major contributor to the Fuel 7.0 and Fuel 8.0 requirements.
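To make the Heat-driven rolling update concrete, here is a minimal sketch of how such fleet-wide stack updates could be scripted with the openstacksdk Python client. This is not AT&T's tooling: the cloud names, stack name and template file are illustrative assumptions, and the orchestration calls assume a recent openstacksdk.

```python
# Hypothetical sketch of a rolling Heat stack update across many clouds.
# Cloud names are assumed entries in a local clouds.yaml; "control-services"
# and its template are illustrative, not AT&T's actual stacks.
import openstack
import yaml

CLOUDS = ["aic-site-01", "aic-site-02", "aic-site-03"]

def rolling_update(stack_name, template_file):
    with open(template_file) as f:
        template = yaml.safe_load(f)
    for cloud in CLOUDS:
        conn = openstack.connect(cloud=cloud)
        stack = conn.orchestration.find_stack(stack_name)
        if stack is None:
            print(f"{cloud}: stack {stack_name} not found, skipping")
            continue
        # Heat computes and applies only the delta against the new template.
        conn.orchestration.update_stack(stack, template=template)
        conn.orchestration.wait_for_status(stack, status="UPDATE_COMPLETE")
        print(f"{cloud}: {stack_name} updated")

rolling_update("control-services", "control-services.yaml")
```

Updating clouds one at a time, as above, is what keeps the process "rolling": a bad template stops the loop before it reaches the whole fleet.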

OpenStack has also been a pivotal driver of AT&T's overall culture shift: the organization is in the midst of a massive transition from a legacy telco to a company where new skills, techniques and solutions are embraced. OpenStack has been a key driver of this transformation in the following ways:

  • AT&T is now building 50 percent of all software on open source technologies
  • Allowing for the adoption of a DevOps model that creates a more unified team working towards a better end product
  • Development transitioned from waterfall to cloud-native CI/CD methodologies
  • Developers continue to support OpenStack and make their applications cloud-native whenever possible.

How has the organization participated in or contributed to the OpenStack community?

AT&T was the first U.S. telecom service provider to sign up for and adopt the then early stage NASA-spawned OpenStack cloud initiative, back in 2011.

  • AT&T has been an active OpenStack contributor since the Bexar release.
  • AT&T has been a Platinum Member of the OpenStack Foundation since its origins in 2012 after helping to create its bylaws.
  • Toby Ford, AVP AT&T Cloud Technology, has provided vision, technology leadership, and innovation to the OpenStack ecosystem as an OpenStack Foundation board member since late 2012.
  • AT&T is a founding member of the ETSI NFV ISG and of OPNFV.
  • AT&T has invested in building an OpenStack upstream contribution team with 25 current employees and a target for 50+ employees by the end of 2016.
  • During the early years of OpenStack, AT&T brought many important use-cases to the community. AT&T worked towards solving those use-cases by leveraging various OpenStack modules, in turn encouraging other enterprises to have confidence in the young ecosystem.
  • AT&T drove the following telco-grade blueprint contributions to past releases of OpenStack:
    • VLAN-aware VMs (i.e. trunked vNICs)
    • Support for BGP VPN, and shared volumes between guest VMs
    • Complex query support for statistics in Ceilometer
    • Spell-checker gate job
    • Metering support for PCI/PCIe per VM tenant
    • PCI passthrough measurement in Ceilometer
    • Coverage measurement gate job
    • Nova using ephemeral storage with Cinder
    • Climate subscription mechanism
    • Access switch port discovery for bare-metal nodes
    • SLA enforcement per vNIC
    • MPLS VPNaaS
    • NIC-state-aware scheduling
  • Toby Ford has regularly been invited to present keynotes, sessions, and panel talks at a number of OpenStack Summits, for instance:
    • Role of OpenStack in a Telco: User Case Study – Atlanta Summit, May 2014
    • Leveraging OpenStack to Solve Telco Needs: Intro to SDN/NFV – Atlanta Summit, May 2014
    • Telco OpenStack Roadmap Panel Talk – Tokyo Summit, October 2015
    • OpenStack Roadmap Software Trajectory – Atlanta Summit, May 2014
    • Cloud Control to Major Telco – Paris Summit, November 2014
  • Greg Stiegler, assistant vice president, AT&T cloud tools & development organization, represented the AT&T technology development organization at the Tokyo Summit.
  • AT&T Cloud and D2 Architecture team members were invited to present various keynote sessions, summit sessions and panel talks, including:
    • Participation at the Women of OpenStack Event – Tokyo Summit, 2015
    • Empower Your Cloud Through Neutron Service Function Chaining – Tokyo Summit, October 2015
    • OPNFV Panel – Vancouver Summit, May 2015
    • OpenStack as a Platform for Innovation – keynote at OpenStack Silicon Valley, August 2015
    • Taking OpenStack From Zero to Production in a Fortune 500 – Tokyo Summit, October 2015
    • Operating at Web-scale: Containers and OpenStack Panel Talk – Tokyo Summit, October 2015
  • AT&T strives to collaborate with other leading industry partners in the OpenStack ecosystem. This has led to the entire community benefiting from AT&T's innovation.
  • Margaret Chiosi gives talks worldwide on AT&T's D2.0 vision at many telco conferences, ranging from optics (OFC) to SDN/NFV events, advocating OpenStack as the de facto cloud orchestrator.
  • AT&T Entertainment Group (DirecTV) architected a multi-hypervisor hybrid OpenStack cloud by designing a Neutron ML2 plugin. This innovation helped achieve integration between legacy virtualization and OpenStack.
  • AT&T is proud to drive OpenStack adoption by sharing knowledge back to the OpenStack community in the form of these summit sessions at the upcoming Austin summit:
    • Telco Cloud Requirements: What VNFs Are Asking For
    • Using a Service VM as an IPv6 vRouter
    • Service Function Chaining
    • Technology Analysis Perspective
    • Deploying Lots of Teeny Tiny Telco Clouds
    • Everything You Ever Wanted to Know about OpenStack At Scale
    • Valet: Holistic Data Center Optimization for OpenStack
    • Gluon: An Enabler for NFV
    • Among the Cloud: Open Source NFV + SDN Deployment
    • AT&T: Driving Enterprise Workloads on KVM and vCenter using OpenStack as the Unified Control Plane
    • Striving for High-Performance NFV Grid on OpenStack. Why you, and every OpenStack community member should be excited about it
    • OpenStack at Carrier Scale
  • AT&T is the "first to market" with deployment of OpenStack-supported carrier-grade Virtual Network Functions. We provide the community with integral data, information, and first-hand knowledge of the trials and tribulations experienced deploying NFV technology.
  • AT&T ranks in the top 20 percent of all companies in terms of upstream contribution (code, documentation, blueprints), with plans to increase this significantly in 2016.
    • Commits: 1200+
    • Lines of Code: 116,566
    • Change Requests: 618
    • Patch Sets: 1490
    • Draft Blueprints: 76
    • Completed Blueprints: 30
    • Filed Bugs: 350
    • Resolved Bugs: 250

What is the scale of the OpenStack deployment?

  • AT&T's OpenStack-based AIC is deployed at 70+ sites across the world. Of the 70+ sites, 57 are production app and network clouds.
  • AT&T plans 90% growth, going to 100+ production app and network clouds by the end of 2016.
  • AT&T connects more than 14 million of its 134.5 million wireless customers via virtualized networks, with significant subscriber cutover planned again in 2016.
  • AT&T controls 5.7% of its network resources (29 telco production-grade VNFs, with a goal of reaching the high 80s by end of 2016) on OpenStack.
  • Production workloads also include AT&T’s Connected Car, Network on Demand, and AT&T Collaborate among many more.

How is this team innovating with OpenStack?

  • AT&T and AT&T Labs are leveraging OpenStack to innovate with Containers and NFV technology.
  • Containers are a key part of AT&T's Cloud Native Architecture. AT&T chairs the Open Container Initiative (OCI) to drive the standardization around container formats.
  • AT&T is leading the effort to improve Nova and Neutron’s interface to SDN controllers.
  • Margaret Chiosi, an early design collaborator on Neutron and ETSI NFV, now serves as president of OPNFV. AT&T is utilizing its position with OPNFV to help shape the future of OpenStack/NFV. OpenStack has enabled AT&T to innovate extensively.

The following recent unique workloads would not be possible without the SDN and NFV capabilities which OpenStack enables:

  • Our recent announcement of 5G field trials in Austin
  • Re-launch of unlimited data to mobility customers
  • Launch of AT&T Collaborate
  • Network on Demand platform to more than 500 enterprise customers
  • Connected Car and MVNO (Mobile Virtual Network Operator)
  • Mobile Call Recording

New services by AT&T Entertainment Group (DirecTV) that would use OpenStack-based cloud infrastructure in coming years:

  • NFL Sunday Ticket with up to 8 simultaneous games
  • DirecTV streaming service without need for a satellite dish

In summary: the innovation with OpenStack is not just our unique workloads, but also supporting them together under the same framework, management systems, development/test and CI/CD pipelines, and deployment automation toolset(s).

Who are the team members?

  • AT&T Cloud and D2 architecture team
  • AT&T Integrated Cloud (AIC) members: Margaret Chiosi, distinguished member of technical staff, president of OPNFV; Toby Ford, AVP – AT&T cloud technology & D2 architecture – strategy, architecture & planning, and OpenStack Foundation board member; Sunil Jethwani, director, cloud & SDN architecture, AT&T Entertainment Group; Andrew Leasck, director, AT&T Integrated Cloud development; Janet Morris, director, AT&T Integrated Cloud development; Sorabh Saxena, senior vice president, AT&T software development & engineering organization; Praful Shanghavi, director, AT&T Integrated Cloud development; Bryan Sullivan, director member of technical staff; Ryan Van Wyk, executive director, AT&T Integrated Cloud development.
  • AT&T’s project teams top contributors: Paul Carver, Steve Wilkerson, John Tran, Joe D’andrea, Darren Shaw.

April 30, 2016:

Swisscom in Production with OpenStack and Cloud Foundry

Swisscom has one of the largest in-production, industry-standard platform-as-a-service offerings built on OpenStack. Their offering is focused on providing an enterprise-grade PaaS environment to customers worldwide, with various delivery models based on Cloud Foundry and OpenStack. Swisscom embarked early on the OpenStack journey to deploy their app cloud, partnering with Red Hat, Cloud Foundry, and PLUMgrid. With services such as MongoDB, MariaDB, RabbitMQ, ELK, and object storage, the PaaS cloud offers what developers need to get started right away. Join this panel for take-away lessons on Swisscom's journey, the technologies, partnerships, and the developers who are building apps every day on Swisscom's OpenStack cloud.

May 23, 2016:

How OpenStack public cloud + Cloud Foundry = a winning platform for telecoms, interview on 'OpenStack Superuser' with Marcel Härry, chief architect, PaaS at Swisscom

Swisscom has one of the largest in-production, industry-standard platform-as-a-service offerings built on OpenStack.

Their offering focuses on providing an enterprise-grade PaaS environment to customers worldwide and with various delivery models based on Cloud Foundry and OpenStack. Swisscom, Switzerland’s leading telecom provider, embarked early on the OpenStack journey to deploy their app cloud partnering with Red Hat, Cloud Foundry and PLUMgrid.

Superuser interviewed Marcel Härry, chief architect, PaaS at Swisscom and member of the Technical Advisory Board of the Cloud Foundry Foundation, to find out more.

How are you using OpenStack?

OpenStack has allowed us to rapidly develop and deploy our Cloud Foundry-based PaaS offering, as well as to rapidly develop new features within SDN and containers. OpenStack is the true enabler for rapid development and delivery.

An example: within half a year of the initial design and setup, we had already delivered two production instances of our PaaS offering built on multiple OpenStack installations on different sites. Today we are running multiple production deployments for high-profile customers, who further develop their SaaS offerings using our platform. Additionally, we are providing the infrastructure for numerous lab and development instances. These environments allow us to harden and stabilize new features while maintaining a rapid pace of innovation and still ensuring a solid environment.

We are running numerous OpenStack stacks, all limited – by design – to a single region and a single availability zone. Their size ranges from a handful of compute nodes to multiple dozens, scaled to the needs of the specific workloads. Our intention is not to build overly large deployments, but rather multiple smaller stacks hosting workloads that can be migrated between environments. These stacks host thousands of VMs, which in turn host tens of thousands of containers running production applications or service instances for our customers.
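As an editorial aside, this fleet-of-small-stacks pattern is easy to picture in code. Below is an illustrative sketch (not Swisscom's actual tooling) using the openstacksdk Python client; the cloud names are assumed entries in a local clouds.yaml:

```python
# Treat many small, single-region OpenStack stacks as one fleet.
# Cloud names are assumed clouds.yaml entries, not Swisscom's real ones.
import openstack

STACKS = ["paas-lab-a", "paas-lab-b", "paas-prod-1", "paas-prod-2"]

for cloud_name in STACKS:
    conn = openstack.connect(cloud=cloud_name)
    servers = list(conn.compute.servers())
    print(f"{cloud_name}: {len(servers)} VMs")
# Because workloads can migrate between stacks, capacity is managed by
# adding stacks rather than by growing any single deployment.
```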

What kinds of applications or workloads are you currently running on OpenStack?

We've been using OpenStack for almost three years now as our infrastructure orchestrator. Swisscom built its Elastic Cloud on top of OpenStack. On top of this we run Swisscom's Application Cloud, or PaaS, built on Cloud Foundry with PLUMgrid as the SDN layer. Together, the company's clouds deliver IaaS to IT architects, SaaS to end users and PaaS to app developers, among other services and applications. We mainly run our PaaS/Cloud Foundry environment on OpenStack, as well as the related managed services (i.e. a kind of DBaaS, message-service-as-a-service, etc.), which themselves run in Docker containers.

What challenges have you faced in your organization regarding OpenStack, and how did you overcome them?

The learning curve for OpenStack is pretty steep. When we started three years ago almost no reference architectures were available, especially none with enterprise-grade requirements such as dual-site, high availability (HA) capabilities on various levels and so forth. In addition, we went directly into the SDN and SDS levels of implementation, which was a big, but at the end of the day very successful, step.

What were your major milestones?

Swisscom's go-live for its first beta environment was in spring 2014, go-live for internal development use at Swisscom was in spring 2015, and go-live for its public Cloud Foundry environment fully hosted on OpenStack was in fall 2015. The go-live for enterprise-grade and business-critical workloads on top of our stack, from various multinational companies in verticals like finance and industry, is spring 2016, and Swisscom recently announced Swiss Re as one of its first large enterprise cloud customers.

What have been the biggest benefits to your organization as a result of using OpenStack?

Pluggability and multi-vendor interoperability (for instance with SDN like PLUMgrid or SDS like ScaleIO) to avoid vendor lock-in and create a seamless system. OpenStack enabled Swisscom to experiment with deployments utilizing a DevOps model and environment to deploy and develop applications faster. It simplified the move from PoC to production environments and enabled us to easily scale out services utilizing a distributed, cluster-based architecture.

What advice do you have for companies considering a move to OpenStack?

It’s hard in the beginning but it’s really worth it. Be wise when you select your partners and vendors, this will help you to be online in a very short amount of time. Think about driving your internal organization towards a dev-ops model to be ready for the first deployments, as well as enabling your firm to change deployment models (e.g. going cloud-native) for your workloads when needed.

How do you participate in the community?

This year's Austin event was our second OpenStack Summit where we provided insights into our deployment and architecture, contributing back to the community in terms of best practices, as well as providing real-world production use cases. Furthermore, we directly contribute patches and improvements to various OpenStack projects. Some of these patches have already been accepted, while a few are in the pipeline to be further polished for publishing. Additionally, we are working very closely with our vendors – Red Hat, EMC, ClusterHQ/Flocker, PLUMgrid as well as the Cloud Foundry Foundation – to further improve their integration and stability within the OpenStack project. For example, we worked closely with Flocker on their Cinder-based driver to orchestrate persistency among containers. Furthermore, we have provided many bug reports through our vendors and have worked together with them on fixes which then have made their way back into the OpenStack community.

What’s next?

We have a perfect solution for non-persistent container workloads for our customers. We are constantly evolving this product and are working especially hard to meet the enterprise- and finance-verticals requirements when it comes to the infrastructure orchestration of OpenStack.

Härry spoke about OpenStack in production at the recent Austin Summit, along with Pere Monclus of PLUMgrid, Chip Childers of the Cloud Foundry Foundation, Chris Wright of Red Hat and analyst Rosalyn Roseboro. 

May 10, 2016: Lenovo's Highly-Available OpenStack Enterprise Cloud Platform Practice with EasyStack, press release by EasyStack

BEIJING, May 10, 2016 /PRNewswire/ — In 2015, the Chinese IT superpower Lenovo chose EasyStack to build an OpenStack-based enterprise cloud platform to carry out their “Internet Strategy”. In six months, this platform has evolved into an enterprise-level OpenStack production environment of over 3000 cores with data growth peaking at 10TB/day. It is expected that by the end of 2016, 20% of the IT system will be migrated onto the Cloud.

OpenStack is the foundation for cloud and has arguably matured in overseas markets. In China, noteworthy OpenStack practices often come from the relatively new category of internet companies; though OpenStack has long been marketed as "enterprise-ready," traditional industries still tend to hold back from it. This article aims to turn that perception around by presenting an OpenStack practice from the Chinese IT superpower Lenovo, detailing its journey of transformation, in both the technology and business realms, to a private cloud built on OpenStack. Although OpenStack will still largely be a carrier for internet businesses, Lenovo plans to migrate 20% of its IT system onto the cloud before the end of 2016 – a much-applauded step forward.

Be it the traditional PC or the cellphone, technology is evolving fast amidst the move towards mobile and social networking, and competition is fierce. In response to rapidly changing market dynamics, the Lenovo Group moved from a product-oriented to a user-oriented strategy, one that can only be supported by an agile, flexible and scalable enterprise-level cloud platform capable of rapid iteration. After thorough consideration and careful evaluation, Lenovo chose OpenStack as the basis for the enterprise cloud platform to carry out this "Internet Strategy". After six months in operation, this platform has evolved into an enterprise-level OpenStack production environment of over 3000 cores with data growth peaking at 10TB/day. It is expected that 20% of the IT system will be migrated onto the cloud by the end of 2016.

Transformation and Picking the Right Cloud

In the past, internal IT at Lenovo was channel- and key-client-oriented, with a traditional architecture consisting of IBM Power, AIX, PowerVM, DB2 and, more recently, VMware virtualization. In the move towards becoming an internet company, such traditional architecture was far from able to support the user and business volumes brought by the B2C model. Cost-wise, Lenovo's large-scale deployments of commercial solutions were reliable but complex to scale and extremely expensive.

This traditional IT architecture was also inadequate in terms of operational efficiency, security and compliance, and unable to support Lenovo's transition towards eCommerce and mobile business. In 2015, Lenovo's IT entered a stage of infrastructure revamp, needing a cloud computing platform to support new businesses.

To find the right makeup for the cloud platform, Lenovo performed meticulous analyses and comparisons of mainstream x86 virtualization technologies, private cloud platforms, and public cloud platforms. After evaluating stability, usability, openness, and ecosystem vitality and comprehensiveness, Lenovo deemed the OpenStack cloud platform technology able to fulfill its enterprise needs and decided to use OpenStack as the infrastructural cloud platform supporting its constant business innovation.

Disaster recovery plans for virtual machines, cloud disks and databases were considered early in the OpenStack architectural design, to ensure prompt switchover when needed to maintain business availability.

A Highly Available Architectural Design

Logically, Lenovo's enterprise cloud platform manages infrastructure through a software-defined environment, using x86 servers and a 10GbE network at the base layer, alongside internet-style monitoring and maintenance, while employing the OpenStack platform for overall resource management.

To ensure high availability and improve the cloud platform's efficiency, Lenovo designed a hyper-converged physical architecture: powerful, highly configured servers combine compute, storage and network in a single box, and OpenStack integrates them into a single resource pool, placing compute and storage services on the same physical nodes.

Two-socket x3650 servers and four-socket ThinkServer RQ940 servers form the backbone of the hardware layer. Each node carries five SSDs and 12 SAS drives to make up the storage module. The SSDs act not only as a storage cache but also as a high-performance storage resource pool; virtual machines access the distributed storage to achieve high availability.

Lenovo had to resolve a number of problems and overcome numerous hurdles to elevate OpenStack to the enterprise-level.

Compute

Here, Lenovo utilized high-density virtual machine deployment. At the base is KVM virtualization technology, optimized in multiple ways to maximize physical server performance and to isolate CPU, memory and other hardware resources under the compute-storage converged architecture. The outcome is the ability to run over 50 VMs smoothly and efficiently on every dual-CPU compute node.
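One standard way to get this kind of CPU and memory isolation on dense KVM nodes is a Nova flavor with dedicated CPU pinning and hugepage-backed memory. The sketch below is illustrative only (the article does not say Lenovo used these exact extra specs); it assumes an admin connection and a recent openstacksdk that exposes create_flavor_extra_specs.

```python
# Hedged sketch: a Nova flavor using standard isolation extra specs.
# Flavor name and sizes are illustrative assumptions, not Lenovo's.
import openstack

conn = openstack.connect(cloud="lenovo-cloud")  # assumed clouds.yaml entry

flavor = conn.compute.create_flavor(
    name="dense.pinned.medium", ram=8192, vcpus=4, disk=40)

# Standard Nova scheduler hints (require host NUMA/hugepage configuration):
conn.compute.create_flavor_extra_specs(flavor, {
    "hw:cpu_policy": "dedicated",  # pin each guest vCPU to a host pCPU
    "hw:mem_page_size": "large",   # back guest RAM with hugepages
})
```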

In a cloud environment, high availability is best achieved through software solutions rather than specialized hardware. Yet some traditional applications still have requirements tied to a single host server. For such applications, which cannot achieve high availability on their own, Lenovo used compute-HA technology to provide high availability at the compute-node level: faults are detected through various methods, and virtual machines on a failed physical machine are migrated to other available physical machines. The entire process is automated, minimizing business disruption caused by physical machine failures.
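The detection half of such a compute-HA loop can be sketched against the standard Nova API. The following is a minimal illustration, not Lenovo's implementation: it finds nova-compute services reported as down and lists the VMs stranded there, which automation would then evacuate to healthy hosts.

```python
# Minimal sketch of compute-HA fault detection (illustrative only).
# Requires admin credentials; cloud name is an assumed clouds.yaml entry.
import openstack

conn = openstack.connect(cloud="lenovo-cloud")

down_hosts = [
    svc.host for svc in conn.compute.services()
    if svc.binary == "nova-compute" and svc.state == "down"
]

for host in down_hosts:
    for server in conn.compute.servers(all_projects=True, host=host):
        # Next step in a real system: trigger Nova's evacuate action so
        # the VM is rebuilt on a healthy host.
        print(f"{server.name} stranded on failed host {host}")
```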

Network

Network Isolation

Different NICs, switches and VLANs isolate the various networks – standalone OpenStack management, virtual production, storage, public, and PXE networks – so that interference is avoided, overall bandwidth is increased and network control is improved.
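In OpenStack terms, this maps naturally onto Neutron provider networks, one VLAN per traffic class. Below is a hedged sketch with openstacksdk; the physical network label and VLAN IDs are assumptions, not Lenovo's actual values.

```python
# Hedged sketch: one provider VLAN network per isolated traffic class.
import openstack

conn = openstack.connect(cloud="lenovo-cloud")  # assumed clouds.yaml entry

PLANES = {
    "mgmt-net": 101,     # OpenStack management traffic (assumed VLAN ID)
    "prod-net": 102,     # virtual production traffic
    "storage-net": 103,  # storage traffic
}

for name, vlan_id in PLANES.items():
    conn.network.create_network(
        name=name,
        provider_network_type="vlan",
        provider_physical_network="physnet1",  # assumed physnet mapping
        provider_segmentation_id=vlan_id,
    )
```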

Multi-Public Network

Multiple public networks provide network agility and allow security policies to be managed more precisely. The public networks from Unicom and Telecom, plus the office network, are some examples.

Network Optimization

The VLAN network model integrates well with the traditional data-center network; its packet processing was then optimized so that virtual machine bandwidth approaches that of the physical network.

Dual-NIC Bonding and Multiple Switches

High availability of the physical network is achieved by bonding dual NICs connected to different switches.

Network Node HA

Multiple network nodes provide public-network load balancing, high availability and high performance. At the router level, an active/standby methodology achieves HA, backed by independent router-monitoring services.
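Neutron's L3 HA extension implements exactly this router-level active/standby pattern (VRRP instances of the router on multiple network nodes). A minimal sketch follows, assuming admin credentials and the l3-ha extension enabled; whether Lenovo used this extension or its own mechanism is not stated in the article.

```python
# Hedged sketch: an HA router scheduled as active/standby across
# multiple network nodes via Neutron's l3-ha extension (assumed enabled).
import openstack

conn = openstack.connect(cloud="lenovo-cloud")  # assumed clouds.yaml entry

router = conn.network.create_router(
    name="public-gw",
    is_ha=True,  # VRRP active/standby instances on the network nodes
)
print(router.id)
```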

Storage

The Lenovo OpenStack cloud platform uses Ceph as the unified storage backend: storage for Glance images, Nova virtual machine system disks and Cinder volumes is all provided by Ceph RBD. By revising OpenStack code to use Ceph's copy-on-write cloning, virtual machines can be deployed within seconds.
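The mechanism behind those second-scale deploys is RBD's clone-from-snapshot. Here is an illustrative sketch using the Ceph python-rbd bindings directly, outside OpenStack; the pool and image names are assumptions.

```python
# Illustrative sketch of RBD copy-on-write cloning: a new VM disk is a
# clone of a protected snapshot, so no data is copied up front.
import rados
import rbd

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
images = cluster.open_ioctx("images")  # Glance pool (assumed name)
vms = cluster.open_ioctx("vms")        # Nova disk pool (assumed name)

with rbd.Image(images, "golden-image") as img:
    if "base" not in [s["name"] for s in img.list_snaps()]:
        img.create_snap("base")
        img.protect_snap("base")  # clones require a protected snapshot

# The clone shares all unmodified data with the parent snapshot.
rbd.RBD().clone(images, "golden-image", "base", vms, "vm-disk-0001",
                features=rbd.RBD_FEATURE_LAYERING)

vms.close()
images.close()
cluster.shutdown()
```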

With Ceph as the unified storage backend, its performance is undoubtedly a key metric of whether an enterprise's critical applications can be virtualized and made cloud-ready. In a hyper-converged deployment architecture where compute and storage run alongside each other, storage optimization must not only maximize storage performance but also ensure isolation between storage and compute resources to maintain system stability. Lenovo optimized the IO stack bottom-up, layer by layer:

On the Network

Enable jumbo frames to improve data-transfer efficiency, and use 10Gb Ethernet to carry the Ceph cluster network traffic, improving the efficiency of Ceph data replication.

On Performance

Use solid-state disks as the Ceph OSD journal to improve overall cluster IO performance, meeting the performance demands of critical businesses (for example, the eCommerce system's databases) while balancing performance against cost. SSDs offer low power consumption, prompt response, high IOPS, and high throughput. The Ceph journal is written by many threads concurrently, so replacing mechanical drives with SSDs fully exploits the SSD's strengths in random access, rapid response, and high IO throughput. Tuning the IO scheduling strategy to suit SSDs lowers overall IO latency further.

Purposeful Planning

Plan the number of Ceph OSDs on each hyper-converged node according to the virtual machine density on the server, and reserve CPU and memory resources in advance. Tools such as cgroups and taskset can then isolate the QEMU-KVM and Ceph OSD processes from each other.
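
A hedged sketch of such isolation on a hypothetical 32-core node: a cgroup-v1 cpuset confines the OSD daemons to one block of cores while taskset pins the QEMU processes to the rest. The core ranges, paths, and process name patterns are assumptions:

```python
import subprocess
from pathlib import Path

# Assumed split: cores 0-7 for Ceph OSDs, cores 8-31 for QEMU-KVM guests.
OSD_CORES, VM_CORES = '0-7', '8-31'

# Create a cpuset cgroup for the OSDs (cgroup v1 layout assumed).
osd_group = Path('/sys/fs/cgroup/cpuset/ceph-osd')
osd_group.mkdir(exist_ok=True)
(osd_group / 'cpuset.cpus').write_text(OSD_CORES)
(osd_group / 'cpuset.mems').write_text('0')  # single NUMA node assumed

def pids_of(pattern):
    """Return the PIDs of processes whose command line matches pattern."""
    out = subprocess.run(['pgrep', '-f', pattern],
                         capture_output=True, text=True)
    return out.stdout.split()

# Move every OSD daemon into the cpuset...
for pid in pids_of('ceph-osd'):
    (osd_group / 'tasks').write_text(pid)

# ...and pin the QEMU processes to the remaining cores with taskset.
for pid in pids_of('qemu-kvm'):
    subprocess.run(['taskset', '-pc', VM_CORES, pid], check=True)
```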

Parameter Tuning

For Ceph parameter tuning, performance can be improved measurably by adjusting parameters such as FileStore's default queue settings and the OSD op thread count. Further gains come from iterative testing to find the parameters best suited to the hardware environment at hand.
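
The option names below are genuine FileStore-era Ceph settings (Hammer/Jewel generation), but the values are only illustrative starting points, not Lenovo's measured optima; the iterative testing described above is what produces the final numbers:

```python
import configparser

# Illustrative FileStore-era tunables; merge the fragment into
# /etc/ceph/ceph.conf and restart the OSDs to apply.
conf = configparser.ConfigParser()
conf['osd'] = {
    'osd op threads': '8',                    # more OSD op worker threads
    'filestore op threads': '8',              # parallel FileStore operations
    'filestore max sync interval': '10',      # seconds between journal flushes
    'filestore queue max ops': '5000',        # deeper FileStore queue
    'journal max write bytes': '1073741824',  # larger journal writes to SSD
}

with open('ceph-osd-tuning.conf', 'w') as f:  # hypothetical fragment file
    conf.write(f)
```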

Data HA

For data HA, beyond the existing OpenStack data protection measures, Lenovo has planned a comprehensive disaster recovery scheme spanning three centers at two sites:

Over dedicated low-latency fiber-optic links, data is replicated synchronously to the local backup center and asynchronously to the remote center, maximizing data safety.

AD Integration

In addition, Lenovo folded its own business requirements into the OpenStack enterprise cloud platform. As a mega company with tens of thousands of employees, it needs Active Directory (AD) for authorization so that staff accounts do not have to be set up individually. Through customized development by its partner, Lenovo has successfully integrated AD into its OpenStack Enterprise Cloud Platform.

Overall Outcomes

Lenovo's transformation towards being "internet-driven" could begin once this OpenStack Enterprise Cloud Platform was in place. eCommerce, big data and analytics, IM, online mobile phone support, and other internet-based businesses are all supported by this cloud platform. Judging from the team's feedback, the Lenovo OpenStack Enterprise Cloud Platform is functioning as expected.

In building this OpenStack-based enterprise cloud platform, Lenovo chose EasyStack, the leading Chinese OpenStack company, to provide professional implementation and consulting services, helping build the initial platform and fostering a number of in-house OpenStack experts. For Lenovo, community compatibility, a continuous upgrade path, and experience delivering enterprise-level services were the main factors in choosing an OpenStack business partner.

Microsoft and partners to capitalize on Continuum for Phones instead of the exited Microsoft phone business

With "The Nokia phone business is to be relaunched via a $500M private startup with Android smartphones and tablets, in addition to the feature phones whose manufacturing, sales and distribution would be acquired from Microsoft by a subsidiary of Foxconn" published on this same 'Experiencing the Cloud' blog on May 20, 2016, I now dare to publish this follow-up to the original post that had been ready since October 13, 2015 under the title "Windows 10 enhancements for tablets and phones to achieve a powerful PC experience" (that original content appears in the final part of this post), which opened with the statement:

These are significant capabilities with which (although not only with these, but with quite a number of other innovations) Microsoft—for the first time in its history—was able to beat Apple at its own game. Can't believe it?

Unfortunately I felt a growing uncertainty about the future of the Microsoft device business and therefore decided to wait until the picture became clear. With the following Terry Myerson video appearing on the HP Business YouTube channel, I now feel certain enough to make the original information available in this current post:

June 2, 2016: HP Elite x3 and Windows 10: Terry Myerson

http://www.hp.com/go/elitex3 - Terry Myerson, Executive Vice President at Microsoft, talks about the collaboration between HP and Microsoft that brings to life the new HP Elite x3 with Windows 10 for business, pioneer in the 3-in-1 category.

My certainty was also supported by the Microsoft decision to exit the phone business as it had been acquired from Nokia:

May 25, 2016: Microsoft announces streamlining of smartphone hardware business

Microsoft Corp. on Wednesday announced plans to streamline the company’s smartphone hardware business, which will impact up to 1,850 jobs. As a result, the company will record an impairment and restructuring charge of approximately $950 million, of which approximately $200 million will relate to severance payments.

“We are focusing our phone efforts where we have differentiation — with enterprises that value security, manageability and our Continuum capability, and consumers who value the same,” said Satya Nadella, chief executive officer of Microsoft. “We will continue to innovate across devices and on our cloud services across all mobile platforms.”

Microsoft anticipates this will result in the reduction of up to 1,350 jobs at Microsoft Mobile Oy in Finland, as well as up to 500 additional jobs globally. Employees working for Microsoft Oy, a separate Microsoft sales subsidiary based in Espoo, are not in scope for the planned reductions.

As a result of the action, Microsoft will record a charge in the fourth quarter of fiscal 2016 for the impairment of assets in its More Personal Computing segment, related to these phone decisions.

The actions associated with today’s announcement are expected to be substantially complete by the end of the calendar year and fully completed by July 2017, the end of the company’s next fiscal year.

More information about these charges will be provided in Microsoft’s fourth-quarter earnings announcement on July 19, 2016, and in the company’s 2016 Annual Report on Form 10-K.

There was also the following sentence in the earlier "Microsoft selling feature phone business to FIH Mobile Ltd. and HMD Global, Oy" press release of May 18, 2016:

Microsoft will continue to develop Windows 10 Mobile and support Lumia phones such as the Lumia 650, Lumia 950 and Lumia 950 XL, and phones from OEM partners like Acer, Alcatel, HP, Trinity and VAIO.

That last statement was not enough for me at the time, as just 3 weeks ago I had a truly shocking experience upgrading my wife's Lumia 640 XL to the Windows 10 Mobile version released for that class of earlier Lumia phones last March. The software was buggier than anything I had seen in my life. I got so angry that I immediately bought her an Android-based Samsung Galaxy J5. However, I became confident again in the future of Windows 10 Mobile phones after her bad experience with the Android software's usability (e.g., too many steps for some vital functions versus what the Lumia needed) and after successfully restoring the earlier Windows Phone 8.1 release on the 640 XL.

Several other videos which appeared on the same HP Business YouTube channel a little earlier gave me the final assurance:

May 27, 2016: HP Elite x3 turned heads at Mobile World Congress 2016

http://www.hp.com/go/elitex3 - HP Elite x3 made a powerful first impression at Mobile World Congress 2016 in Barcelona, winning 24 awards and positive reviews from industry experts. Meet the new HP Elite x3, the one device that's every device.

June 2, 2016: Reinventing mobility: Dion Weisler

http://www.hp.com/go/elitex3 - Dion Weisler, President and Chief Executive Officer of HP Inc., introduces the revolution of mobility. Meet the new HP Elite x3, pioneer in the 3-in-1 category; the next generation of computing, designed specifically for business.

June 2, 2016: The new HP Elite x3: Michael Park

http://www.hp.com/go/elitex3 - Michael Park, Vice President for the Commercial Mobility & Software division at HP Inc., introduces the new HP Elite x3, pioneer in the 3-in-1 category that will transform business mobility.

June 2, 2016: HP Elite x3 and Qualcomm: Steve Mollenkopf

http://www.hp.com/go/elitex3 - Steve Mollenkopf, Chief Executive Officer of Qualcomm Incorporated, presents the power of the Snapdragon 820 processor in the HP Elite x3, part of Qualcomm's recent collaboration with HP. Meet the new HP Elite x3, pioneer in the 3-in-1 category; the next generation of computing, designed specifically for business.


Now a brief retrospective for the start:

From the Q&A part of the transcript of the Microsoft-Nokia transaction conference call with Steve Ballmer, Stephen Elop, Brad Smith, Terry Myerson and Amy Hood; September 3, 2013 [Microsoft, Sept 3, 2013]:

OPERATOR: Walter Pritchard, Citigroup, your line is open.
WALTER PRITCHARD: Great. Thanks for taking the question. Steve Ballmer, on the tablet side, obviously, we could say many of the same things as you’ve put into this slide deck as rationale for doing an acquisition on the phone side as we could say about the tablet side including picking up more gross margin.

I’m wondering how this transaction impacts the strategy going forward in tablets and whether or not you need to, in a sense, double down further on first-party hardware in the tablet market. And then just have one follow up.

STEVE BALLMER: Okay. Terry, do you want to talk a little bit about that? That would be great.

TERRY MYERSON: Well, phones and tablets are definitely a continuum. You know, we see the phone products growing up, the screen sizes and the user experience we have on the phones. We’ve now made that available in our Windows tablets, our application platform spans from phone to tablet. And I think it’s fair to say that our customers are expecting us to offer great tablets that look and feel and act in every way like our phones. We’ll be pursuing a strategy along those lines.

More information: Microsoft answers to the questions about Nokia devices and services acquisition: tablets, Windows downscaling, reorg effects, Windows Phone OEMs, cost rationalization, ‘One Microsoft’ empowerment, and supporting developers for an aggressive growth in market share ‘Experiencing the Cloud’, September 4, 2013

From the Microsoft Q4 2015 Earnings Call transcript by CEO Satya Nadella on July 21, 2015:

I am thrilled we are just days away from the start of Windows 10. It's the first step towards our goal of 1 billion Windows 10 active devices in the fiscal year 2018. Our aspiration with Windows 10 is to move people from needing to choosing to loving Windows. Based on feedback from more than 5 million people who have been using Windows 10, we believe people will love the familiarity of Windows 10 and the innovation. It's safe, secure, and always up to date. Windows 10 is more personal and more productive with Cortana, Office, universal apps, and Continuum. And Windows 10 will deliver innovative new experiences like Inking on Microsoft Edge and gaming across Xbox and PCs, and also opens up entirely new device categories such as HoloLens.

From Windows 10 available in 190 countries as a free upgrade Microsoft news release on July 28, 2015:

Windows 10 is more personal and productive, with voice, pen and gesture inputs for natural interaction with PCs. It’s designed to work with Office and Skype and allows you to switch between apps and stay organized with Snap and Task View. Windows 10 offers many innovative experiences and devices, including the following:

  • Cortana, the personal digital assistant, makes it easy to find the right information at the right time.
  • New Microsoft Edge browser lets people quickly browse, read, and mark up and share the Web.
  • The integrated Xbox app delivers the Xbox experience to Windows 10, bringing together friends, games and accomplishments across Xbox One and Windows 10 devices.
  • Continuum optimizes apps and experiences beautifully across touch and desktop modes.
  • Built-in apps including Photos; Maps; Microsoft’s new music app, Groove; and Movies & TV offer entertainment and productivity options. With OneDrive, files can be easily shared and kept up-to-date across all devices.
  • A Microsoft Phone Companion app enables iPhones, Android or Windows phones to work seamlessly with Windows 10 devices.
  • The all new Office Mobile apps for Windows 10 tablets are available today in the Windows Store. Built for work on-the-go, the Word, Excel and PowerPoint apps offer a consistent, touch-first experience for small tablets. For digital note-taking needs, the full-featured OneNote app comes pre-installed with Windows 10. The upcoming release of the Office desktop apps (Office 2016) will offer the richest feature set for professional content creation. Designed for the precision of a keyboard and mouse, these apps will be optimized for large-screen PCs, laptops and 2-in-1 devices such as the Surface Pro.

More information around the above 2 excerpts:
Windows 10 is here to help regain Microsoft’s leading position in ICT ‘Experiencing the Cloud’, July 31, 2015

From the 2015 Annual Report, "The ambitions that drive us", July 31, 2015:

Create more personal computing

Windows 10 is the cornerstone of our ambition to usher in an era of more personal computing. We see the launch of Windows 10 in July 2015 as a critical, transformative moment for the Company because we will move from an operating system that runs on a PC to a service that can power the full spectrum of devices in our customers’ lives. We developed Windows 10 not only to be familiar to our users, but more safe and secure, and always up-to-date. We believe Windows 10 is more personal and productive, working seamlessly with functionality such as Cortana, Office, Continuum, and universal applications. We designed Windows 10 to foster innovation – from us, our partners and developers – through experiences such as our new browser Microsoft Edge, across the range of existing devices, and into entirely new device categories.

Our future opportunity

There are several distinct areas of technology that we aim to drive forward. Our goal is to lead the industry in these areas over the long-term, which we expect will translate to sustained growth. We are investing significant resources in:

  • Delivering new productivity, entertainment, and business processes to improve how people communicate, collaborate, learn, work, play, and interact with one another.
  • Establishing the Windows platform across the PC, tablet, phone, server, other devices, and the cloud to drive a thriving ecosystem of developers, unify the cross-device user experience, and increase agility when bringing new advances to market.
  • Building and running cloud-based services in ways that unleash new experiences and opportunities for businesses and individuals.
  • Developing new devices that have increasingly natural ways to interact with them, including speech, pen, gesture, and augmented reality holograms.
  • Applying machine learning to make technology more intuitive and able to act on our behalf, instead of at our command.

January 14, 2016: Continuum for Phones: Making the Phone Work Like a PC, by a Principal Program Manager Lead

Imagine having a phone that works like a PC. Continuum for Phones makes this a reality, enabling Windows customers to get things done like never before.

Check out the ways this capability comes alive. You’ll be able to travel and leave your laptop at home, knowing you’re still equipped to complete your most common tasks. Walk into a meeting with just your smartphone – you’re fully equipped for seamlessly projecting PowerPoint presentations to a larger screen. Or take a seat in a business center where you plug your phone into a monitor and keyboard – you’ve instantly gained PC-like productivity using Office apps and the Microsoft Edge browser.

Continuum for Phones - Making the Phone Work Like a PC -- January 14, 2016

How it all started

The road to Continuum began three years ago with a simple observation: we take our phones everywhere, we depend on them, and we feel lost without them. Yet, when the time comes to do “real work,” we reach for a laptop or desktop PC. So we end up carrying our phones plus our laptops, or we wait until we are at our desks to do the heavy lifting.

The thing is, today’s phones have more than enough processing power to handle our most common tasks and activities. We knew this was especially true in emerging markets where people rely only on their mobile phones to get online.  So — with these thoughts top of mind — we set out on our mission to help people get real work done with just their phone.

Who are we? We are the small team of people who built Continuum for Phones with a passion to change the future of personal productivity.

What people want

We started by talking to customers to understand what they needed. We spoke to people around the globe – from Chicago to Shanghai – and found that most people wanted the same thing: a phone that did more. Here are the main insights from the research:

  • “My most important device”: people universally describe their smartphone as the center of their connected life.
  • Connect to a bigger screen: people rely on their laptops and desktops because their phone lacks a large screen, keyboard and mouse. They want to easily connect to larger screens for both work and entertainment.
  • Tech-savvy people expect more: as the processing power of phones has risen, so has the expectations of the tech-savvy.
  • Many people around the world don’t have PCs: because they can’t afford a PC, people have a TV and a phone and that’s it. So any computing work gets done on their phone.

We realized that people embraced the idea of having a phone that could work like a PC.

Getting it done

So we started building Continuum, and we soon realized that we faced many technical and design challenges.

For example, there were two paradigms for connecting to a second screen: (1) mirroring your phone’s screen to a larger screen or (2) connecting your PC to multiple monitors. We needed to create a new design paradigm with two independent experiences – one on the phone and a separate one on the second screen. This was important because customers wanted to continue to use their phone as a phone, even while having a PC-like experience on the second screen. We spent months iterating with paper and software prototypes to arrive at an experience that was easy to understand and use.

The technical hurdles were just as big. For example, we had to build support for keyboard and mouse into Windows 10 Mobile. And many substantial architecture changes were needed in Windows to make Continuum work.

At the //Build conference in April 2015, we did our first live demo, and at the Windows 10 launch in July, we showed the full power of a phone running Office* apps on a second screen. The response – which exceeded our expectations — motivated us to keep going, working relentlessly with hundreds of colleagues around the world to deliver an integrated solution that required major changes to Windows, new capabilities in the phones, and creation of docks such as the Microsoft Display Dock.

Announcing Continuum

So, with the debut of Continuum for Phones, you really can have something new in your pocket: a smartphone that has the power and ability to work like a PC. In the words of our CEO Satya Nadella: “This is the beginning of how we are going to change what the form and function of a phone is.”

Right now, this means that you can carry a smartphone – like the new Lumia 950 and Lumia 950XL – and use a small dock or wireless dongle to connect it to a keyboard, mouse and monitor for a familiar PC-like experience. Run Office* apps, browse the Web, edit photos, write email, and much more.

Continuum for Phones No 2 - Making the Phone Work Like a PC -- January 14, 2016

While you’re working on the larger screen, you won’t lose your phone’s unique abilities. Continuum multi-tasks flawlessly so you can keep using your phone as a phone for calls, emails, texts, or Candy Crush. Or if you don’t have a mouse, you can use your phone as the trackpad for the apps on the larger screen.

If you share my enthusiasm for Continuum for Phones, please check out all the details, including multiple usage scenarios, at windows.com.

* App experience may vary. Office 365 subscription required for some Office features.

June 4, 2016 snapshot: New features coming soon to Windows 10 Anniversary Update

This year's Windows 10 Anniversary Update will have great new innovative features, including:

The pen just got even mightier.

Windows 10 Anniversary Edition with a mightier Ink -- June 4, 2016

Turn thoughts into action with Windows Ink – using the pen, your fingertip, or both at once. Pair it with Office apps to effortlessly edit documents. With Windows Ink, you'll be able to access features like Sticky Notes with a simple click of the pen. When you start drawing a figure like a chart or graph, it'll turn into the real thing right before your eyes. And because Windows Ink stays active when your device is locked, you'll be able to jot down notes even when you don't have time to enter a password.

Cortana’s got you covered.

No time to enter your password but need some quick help? No problem — just ask. Cortana will now be at your service, even before you log in. Whether you want to make a note, play music or set a reminder, Cortana will have you covered.

The secret password is: you.

With Windows Hello, unlocking your PC and devices is as quick as looking or touching. But the new Windows Hello will also let you unlock your PC simply by tapping your Windows Hello-enabled phone. Beyond the hardware, Windows Hello will also give you instant access to paired apps and protected websites on Microsoft Edge – all while maintaining enterprise-level security. Windows Hello lets you say goodbye to cumbersome passwords.

Got game? We’ll deliver.

Windows 10 Anniversary Edition will deliver DirectX 12 games and Xbox Live features -- June 4, 2016

Windows 10 will deliver incredible DirectX 12 games and Xbox Live features that will transform what you expect from PC gaming. Now you can play and connect with gamers across Xbox One and Windows 10 devices. From the best casual games to the next generation of PC releases, you'll have more ways to play new games optimized for Windows.

And that's not all: Microsoft Studios is bringing a full portfolio of new games to Windows 10, including the forthcoming Forza Motorsport 6: Apex, which will be free for Windows 10 users.

Ongoing progress reports (only two latest ones are summarised here):

June 1, 2016: Announcing Windows 10 Mobile Insider Preview Build 14356

  • Cortana Improvements:
    – Get notifications from your phone to your PC
    – Send a photo from your phone to PC
    – New listening animation

May 26, 2016: Announcing Windows 10 Insider Preview Build 14352

  • Cortana Improvements:
    – Cortana, Your Personal DJ
    – Set a timer
  • Windows Ink:
    – Updated Sticky Notes
    – Compass on the ruler
    – General improvements to the Windows Ink experience
  • Other items of note:
    – Windows Game bar improved with full-screen support
    – Feedback Hub will now show Microsoft responses
    – Updated File Explorer icon
    – Deploying Windows Enterprise edition gets easier
    – Limited Period Scanning
    – Introducing Hyper-V Containers (ADDED 5/31)

For more information see: https://blogs.windows.com/windowsexperience/tag/windows-insider-program/

Particularly relevant recent information from A change in leadership for the Windows Insider Program, June 1, 2016, by the Corporate Vice President, Engineering Systems Team:

Since we first started the Windows Insider Program back in September 2014, Windows Insiders have helped us ship Windows 10 to over 300 million devices. We have released 35 PC builds and 22 Mobile builds to Insiders to date. This is a huge change from Windows 7 and Windows 8 which only had 2 and 3 public pre-release builds respectively. Windows Insiders have been more directly plugged in to our engineering processes for Windows than ever before, including participating in our first ever public Bug Bash this year. Windows Insiders contribute problem reports and suggestions which help us shape the platform, and are currently helping us get ready to ship the next major update to Windows 10 this summer – the Windows 10 Anniversary Update. This is just the beginning of the journey we’re on though. We really appreciate having such an amazing connection with our customers, and want Windows Insiders to continue to help shape Windows releases for years to come. With that in mind, I want to talk about a change to the Windows Insider Program going forward.

When I was introduced as leader of the Windows Insider Program over 18 months ago, I was responsible for the team that built our feedback and flighting systems for Windows. It made sense for me to be on the front lines talking with customers of the systems that my team was building to get Insider Preview Builds out and hear the feedback rolling in. In August of last year, I changed jobs to work on the Engineering Systems Team in WDG. In this role, I am responsible for the tools our engineers use to build Windows, including our planning and work management systems, source code management, build infrastructure, and test automation systems. …

Meet Dona Sarkar

I have worked with Dona for many years and think she is the perfect person to guide the Windows Insider Program forward. Her technical expertise, passion for customers, and commitment to listening to feedback is unmatched. …

You can follow Dona here on Twitter. Please welcome her as the new leader of the Windows Insider Program!

Get to know more about Dona here from Microsoft Stories!


Finally, here is more, and by now historic, information on this subject, which I had originally put together on October 13, 2015 and intended to publish under the title:

Windows 10 enhancements for tablets and phones to achieve a powerful PC experience

These are significant capabilities with which (although not only with these, but with quite a number of other innovations) Microsoft—for the first time in its history—was able to beat Apple at its own game. Can't believe it?

First watch these two very short videos from CNNMoney presenting Microsoft’s “ultimate laptop” in terms of its device innovations:
Hands-on with Microsoft Surface Book


See Microsoft’s reversible laptop in :60

Then follow with the information below, which presents one of the most important Windows 10 software innovations, called Continuum (Continuum tablet mode for touch-capable devices), which makes that "ultimate laptop" an "ultimate tablet" as well.

Then get acquainted with a similar Windows 10 software innovation, called Continuum for Phones (strictly speaking, for mobile devices), which allows an entry-level tablet or a premium phone to become a true PC when docked to an external large-size display.

Note that while the "ultimate laptop/ultimate tablet" hybrid is for the premium client market, the second innovation targets entry-level and emerging markets as well. In that scenario Microsoft hopes to capitalize on extremely low-cost tablets that can be enhanced to a PC-like experience with Continuum for Phones. Coupled with a similarly low-priced Windows 10 phone, the emerging-market user gets two devices for around $200 and a consistent Windows 10 experience that docks easily to a large display, thereby achieving a true PC experience.

Suggested other information:
– July 30, 2015: Docking – Windows 10 hardware dev, Microsoft Hardware Dev Center
– March 28, 2015: Display – Windows 10 hardware dev, Microsoft Hardware Dev Center
– March 28, 2015: Graphics – Windows 10 hardware dev, Microsoft Hardware Dev Center

Continuum tablet mode for touch-capable devices

The Continuum feature of Windows 10 desktop edition adapts between tablet and PC modes when docking/undocking. More generally: “Continuum is available on all Windows 10 desktop editions by manually turning “tablet mode” on and off through the Action Center. Tablets and 2-in-1s with GPIO indicators or those that have a laptop and slate indicator will be able to be configured to enter ‘tablet mode’ automatically.” Source: Windows 10 Specifications, Microsoft, June 1, 2015

May 4, 2015: Continuum For Windows 10 PCs and Tablets At Microsoft Ignite Event 2015

June 12, 2015: Continuum Overview – Windows 10 hardware dev, Microsoft Hardware Dev Center

Continuum is a new, adaptive user experience offered in Windows 10 that optimizes the look and behavior of apps and the Windows shell for the physical form factor and customer’s usage preferences. This document describes how to implement Continuum on 2-in-1 devices and tablets, specifically how to switch in and out of “tablet mode.”

Tablet Mode is a feature that switches your device experience between tablet mode and desktop mode. The primary way for a user to enter and exit "tablet mode" is manually through the Action Center. In addition, OEMs can report hardware transitions (for example, transformation of a 2-in-1 device from clamshell to tablet and vice versa), enabling automatic switching between the two modes. However, a key promise of Continuum is that the user remains in control of their experience at all times, so these hardware transitions are surfaced through a toast prompt that must be confirmed by the user. The user also has the option to set the default response.

Target Devices

  • Tablets: pure tablets and devices that can dock to an external monitor + keyboard + mouse.
  • Detachables: tablet-like devices with custom-designed detachable keyboards.
  • Convertibles: laptop-like devices with keyboards that fold or swivel away.

When the device switches to tablet mode, the following occur:

  • Start resizes across the entire screen, providing an immersive experience.
  • The title bars of Store apps auto-hide to remove unnecessary chrome and let content shine through.
  • Store apps and Win32 apps can optimize their layout to be touch-first when in Tablet Mode.
  • The user can close apps, even Win32 apps, by swiping down from the top edge.
  • The user can snap up to two apps side-by-side, including Win32 apps, and easily resize them simultaneously with their finger.
  • The taskbar transforms into a navigation and status bar that’s more appropriate for tablets.
  • The touch keyboard can be auto-invoked.

Of course, even in “tablet mode”, users can enjoy Windows 10 features such as Snap Assist, Task View and Action Center. On touch-enabled devices, customers have access to touch-friendly invocations for those features: they can swipe in from the left edge to bring up Task View, or swipe in from the right edge to bring up Action Center.

With "tablet mode", Continuum gives customers the flexibility to use their device in a way that is most comfortable for them. For example, a customer might want to use their 8" tablet in "tablet mode" exclusively until they dock it to an external monitor, mouse, and keyboard. At that point the customer will exit "tablet mode" and use all their apps as traditional windows on the desktop—the same way they have in previous versions of Windows. Similarly, a user of a convertible 2-in-1 device might want to enter and exit "tablet mode" as they use their device throughout the day (for example, commuting on a bus, sitting at a desk in their office), using signals from the hardware to suggest appropriate transition moments.

Imagine the overall smoothness of that combined laptop and tablet experience on the brand-new Microsoft Surface Book, announced just on October 6, 2015. Out of a plethora of videos reporting on that new device with quite some enthusiasm, I've selected the one that, in my view, is just right in its judgment and very concise at the same time.

Surface Book hands-on: Microsoft's first laptop is simply amazing by Mark Hachman, senior editor of PCWorld: "No one expected the Surface Book, and what they got was a true flagship for the Windows ecosystem."

And if you don't need the leading-edge ultrabook performance provided by the clever "more power (GPU, longer battery life, …) is in the detachable keyboard part" design of the Surface Book, then the 4th-generation Surface Pro 4 may be more than sufficient to provide state-of-the-art productivity capability, including the best pen computing available on the market (also available on the Surface Book; you can spot the same pen in the previous video), in addition to a new Type Cover for the tablet part. Here again the same source is the best to present all that.

Surface Pro 4: Hands on with Microsoft’s category-creating productivity tablet by Mark Hachman, senior editor of the PCWorld 

Continuum for phones

With Continuum for phones in Windows 10 Mobile edition, connecting a phone enables a screen to become like a PC. Additionally: “Continuum for phones limited to select premium phones at launch. External monitor must support HDMI input. Continuum-compatible accessories sold separately. App availability and experience varies by device and market. Office 365 subscription required for some features.” Source: Windows 10 Specifications, Microsoft, June 1, 2015

April 29, 2015: As part of the Universal Windows Platform Microsoft shared at Build 2015 how apps can scale using Continuum for phones, enabling people to use their phones like PCs for productivity or entertainment. With that your phone app can start using a full-sized monitor, mouse, and keyboard, giving you even more mileage from your universal app’s shared code and UI.

April 29, 2015: Windows Continuum for Phones – see how the new Windows Continuum functionality for mobile phones tailors the app experience across devices to transform a phone into a full-powered PC, TV or Smart TV

May 4, 2015: Continuum For Windows 10 For Phones At Microsoft Ignite Event 2015

Reference hardware: 7″ Tablet [Sept 17, 2015] vs. Premium Phone [March 29, 2015]

  • Key Features: Tablet: low cost, Cortana, Continuum for Phones. Phone: Cortana, Windows Hello, Continuum for Phones.
  • Operating System: Windows 10 Mobile on both.
  • Recommended CPU: Tablet: supported entry SoC. Phone: supported premium SoC.
  • RAM/Storage: Tablet: 1-2GB / 8-32GB eMMC with SD card. Phone: 2-4GB / 32-64GB with SD slot.
  • Display: Tablet: 7″ 480×800 or 1280×720 with touch. Phone: 4.5-5.5″+ at FHD-WQHD.
  • Dimensions: Tablet: <9mm and <0.36kg. Phone: <7.5mm and <160g.
  • Battery: Tablet: 10+ hours. Phone: 2500+ mAh (1 day of active use).
  • Connectivity: Tablet: 802.11ac+, 1 micro USB 2.0, mini HDMI, BT, LTE option. Phone: LTE Cat 4+, 802.11b/g/n/ac 2×2, USB, 3.5mm jack, BT LE, NFC.
  • Audio/Video/Camera: Tablet: front camera, speakers, headphones. Phone: 20MP rear camera with OIS/flash, 5MP front-facing camera.

Oct 6, 2015: Windows 10 Continuum for Phones demo on Lumia 950 and Lumia 950 XL by Bryan Roper, Microsoft marketing manager, at Microsoft Windows 10 Devices Event 2015

 

DataStax: a fully distributed and highly secure transactional database platform that is “always on”

When an open-source database written in Java that runs primarily in production on Linux becomes THE solution for Microsoft's cloud platform (i.e., Azure) in the fully distributed, highly secure and "always on" transactional database space, we should take special note. This is the case with DataStax:

July 15, 2015: Building the intelligent cloud, Scott Guthrie's keynote at the Microsoft Worldwide Partner Conference 2015; the DataStax-related segment is only 7 minutes

Transcript:

SCOTT GUTHRIE, EVP of Microsoft Cloud and Enterprise:  What I’d like to do is invite three different partners now on stage, one an ISV, one an SI, and one a managed service provider to talk about how they’re taking advantage of our cloud offerings to accelerate their businesses and make their customers even more successful.

First, and I think, you know, being able to take advantage of all of these different capabilities that we now offer.

Now, the first partner I want to bring on stage is DataStax.  DataStax delivers an enterprise-grade NoSQL offering based on Apache Cassandra.  And they enable customers to build solutions that can scale across literally thousands of servers, which is perfect for a hyper-scale cloud environment.

And one of the customers that they’re working with is First American, who are deploying a solution on Microsoft Azure to provide richer insurance and settlement services to their customers.

What I’d like to do is invite Billy Bosworth, the CEO of DataStax, on stage to join me to talk about the partnership that we’ve had and how some of the great solutions that we’re building together.  Here’s Billy.  (Applause.)

Well, thanks for joining me, Billy.  And it’s great to have you here.

BILLY BOSWORTH, CEO of DataStax:  Thank you.  It’s a real privilege to be here today.

SCOTT GUTHRIE:  So tell us a little bit about DataStax and the technology you guys build.

BILLY BOSWORTH:  Sure.  At DataStax, we deliver Apache Cassandra in a database platform that is really purpose-built for the new performance and availability demands that are being generated by today’s Web, mobile and IOT applications.

With DataStax Enterprise, we give our customers a fully distributed and highly secure transactional database platform.

Now, that probably sounds like a lot of other database vendors out there as well.  But, Scott, we have something that’s really different and really important to us and our customers, and that’s the notion of being always on.  And when you talk about “always on” and transactional databases, things can get pretty complicated pretty fast, as you well know.

The reason for that is in an always-on world, the datacenter itself becomes a single point of failure.  And that means you have to build an architecture that is going to be comprehensive and include multiple datacenters.  That’s tough enough with almost any other piece of the software stack.  But for transactional databases, that is really problematic.

Fortunately, we have a masterless architecture in Apache Cassandra that allows us to have DataStax enterprise scale in a single datacenter or across multiple datacenters, and yet at the same time remain operationally simple.  So that’s really the core of what we do.

SCOTT GUTHRIE:  Is the always-on angle the key differentiator in terms of the customer fit with Azure?

BILLY BOSWORTH:  So if you think about deployment to multiple datacenters, especially and including Azure, it creates an immediate benefit.  Going back to your hybrid clouds comment, we see a lot of our customers that begin their journey on premises.  So they take their local datacenter, they install DataStax Enterprise, it’s an active database up and running.  And then they extend that database into Azure.

Now, when I say that, I don’t mean they do so for disaster recovery or failover, it is active everywhere.  So it is taking full read-write requests on premises and in Azure at the same time.

So if you lose connectivity to your physical datacenter, then the Azure active nodes simply take over.  And that’s great, and that solves the always-on problem.

But that’s not the only thing that Azure helps to solve.  Our applications, because of their nature, tend to drive incredibly high throughput.  So for us, hundreds of millions or even tens and hundreds of billions of transactions a day is actually quite common.

You guys are pretty good, Scott, but I don’t think you’ve changed the laws of physics yet.  And so the way that you get that kind of throughput with unbelievable performance demands, because our customers demand millisecond and microsecond response times, is you push the data closer to the end points.  You geographically distribute it.

Now, what our customers are realizing is they can try and build 19 datacenters across the world, which I’m sure was really cheap and easy to do, or they can just look at what you’ve already done and turn to a partnership like ours to say, “Help us understand how we do this with Azure.”

So not only do you get the always-on benefit, which is critical, but there’s also a very important performance element to this type of architecture as well.

SCOTT GUTHRIE:  Can you tell us a little bit about the work you did with First American on Azure?

BILLY BOSWORTH:  Yeah.  First American is a leading name in the title insurance and settlement services businesses.  In fact, they manage more titles on more properties than anybody in the world.

Every title comes with an associated set of metadata.  And that metadata becomes very important in the new way that they want to do business because each element of that needs to be transacted, searched, and done in real-time analysis to provide better information back to the customer in real time.

And so for that on the database side, because of the type of data and because of the scale, they needed something like DataStax Enterprise, which we’ve delivered.  But they didn’t want to fight all those battles of the architecture that we discussed on their own, and that’s where they turned to our partnership to incorporate Microsoft Azure as the infrastructure with DataStax Enterprise running on top.

And this is one of many engagements that you know we have going on in the field that are really, really exciting and indicative of the way customers are thinking about transforming their business.

SCOTT GUTHRIE:  So what’s it like working with Microsoft as a partner?

BILLY BOSWORTH:  I tell you, it’s unbelievable.  Or, maybe put differently, highly improbable that you and I are on stage together.  I want you guys to think about this.  Here’s the type of company we are.  We’re an open-source database written in Java that runs primarily in production on Linux.

Now, Scott, Microsoft has a couple of pretty good databases, of which I’m very familiar from my past, and open source and Java and Linux haven’t always been synonymous with Microsoft, right?

So I would say the odds of us being on stage were almost none.  But over the past year or two, the way that you guys have opened up your aperture to include technologies like ours — and I don’t just say “include.”  His team has embraced us in a way that is truly incredible.  For a company the size of Microsoft to make us feel the way we do is just remarkable given the fact that none of our technologies have been something that Microsoft has traditionally said is part of their family.

So I want to thank you and your team for all the work you’ve done.  It’s been a great experience, but we are architecting systems that are going to drive businesses for the coming decades.  And that is super exciting to have a partner like you engaged with us.

SCOTT GUTHRIE:  Fantastic.  Well, thank you so much for joining us on stage.

BILLY BOSWORTH:  Thanks, Scott.  (Applause.)

The typical data-framework capabilities of DataStax are best understood via the following webinar, which presents Apache Spark as part of the complete data platform solution:
– Apache Cassandra is the leading distributed database in use at thousands of sites with the world’s most demanding scalability and availability requirements.
– Apache Spark is a distributed data-analytics computing framework that has gained a lot of traction for processing large amounts of data in an efficient and user-friendly manner.
Combining the two provides a powerful pairing of real-time data collection with analytics.
After a brief overview of Cassandra (till 16:39) and Spark (till 19:25), the class dives into various aspects of the integration (from 19:26).
August 19, 2015: Big Data Analytics with Cassandra and Spark by Brian Hess, Senior Product Manager of Analytics, DataStax

September 23, 2015: DataStax Announces Strategic Collaboration with Microsoft, company press release

  • DataStax delivers a leading fully-distributed database for public and private cloud deployments
  • DataStax Enterprise on Microsoft Azure enables developers to develop, deploy and monitor enterprise-ready IoT, Web and mobile applications spanning public and private clouds
  • Scott Guthrie, EVP Cloud and Enterprise, Microsoft, to co-deliver Cassandra Summit 2015 keynote

SANTA CLARA, CA – September 23, 2015 – (Cassandra Summit 2015) DataStax, the company that delivers Apache Cassandra™ to the enterprise, today announced a strategic collaboration with Microsoft to deliver Internet of Things (IoT), Web and mobile applications in public, private or hybrid cloud environments. With DataStax Enterprise (DSE), a leading fully-distributed database platform, available on Azure, Microsoft’s cloud computing platform, enterprises can quickly build high-performance applications that can massively scale and remain operationally simple across public and private clouds, with ease and at lightning speed.


PERSPECTIVES ON THE NEWS

"At Microsoft we're focused on enabling customers to run their businesses more productively and successfully," said Scott Guthrie, Executive Vice President, Cloud and Enterprise, Microsoft. "As more organizations build their critical business applications in the cloud, DataStax has proved to be a natural Azure partner through their ability to enable enterprises to build solutions that can scale across thousands of servers which is necessary in today's hyper-scale cloud environment."

“We are witnessing an increased adoption of DataStax Enterprise deployments in hybrid cloud environments, so closely aligning with Microsoft benefits any organization looking to quickly and easily build high-performance IoT, Web and mobile apps,” said Billy Bosworth, CEO, DataStax. “Working with a world-class organization like Microsoft has been an incredible experience and we look forward to continuing to work together to meet the needs of enterprises looking to successfully transition their business to the cloud.”

"As a leader in providing information and insight in critical areas that shape today's business landscape, we knew it was critical to transform our back-end business processes to address scale and flexibility," said Graham Lammers, Director, IHS. "With DataStax Enterprise on Azure we are now able to create a next generation big data application to support the decision-making process of our customers across the globe."

BUILD SIMPLE, SCALABLE AND ALWAYS-ON APPS AT RECORD SPEED

To address the ever-increasing demands of modern businesses transitioning from on-premise to hybrid cloud environments, the DataStax Enterprise on Azure on-demand cloud database solution provides enterprises with development- and production-ready Bring Your Own License (BYOL) DSE clusters that can be launched in minutes on the Microsoft Azure Marketplace using Azure Resource Manager (ARM) templates (a scripted-deployment sketch follows the list below). This enables the building of high-performance IoT, Web and mobile applications that can predictably scale across global Azure data centers with ease and at remarkable speed. Additional benefits include:

  • Hybrid Deployment: Easily move DSE workloads between data centers, service providers and Azure, and build hybrid applications that leverage resources across all three.
  • Simplicity: Easily manage, develop, deploy and monitor database clusters by eliminating data management complexities.
  • Scalability: Quickly replicate online applications globally across multiple data centers into the cloud/hybrid cloud environment.
  • Continuous Availability: DSE’s peer-to-peer architecture offers no single point of failure. DSE also provides maximum flexibility to distribute data where it’s needed most by replicating data across multiple data centers, the cloud and mixed cloud/on-premise environments.
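
The Marketplace flow itself is point-and-click, but the same ARM-template route can be scripted. A hedged sketch driving the Azure CLI from Python; the resource names, template URI, and parameter names are placeholders rather than the actual Marketplace template:

```python
import json
import subprocess

# Deployment parameters; names and values are illustrative only.
params = json.dumps({'nodeCount': {'value': 6},
                     'adminUserName': {'value': 'dseadmin'}})

# Create a resource group to hold the DSE cluster.
subprocess.run(['az', 'group', 'create',
                '--name', 'dse-rg', '--location', 'westus'], check=True)

# Hand Azure Resource Manager the cluster template ('az group deployment
# create' was the CLI verb of this era; newer CLIs use 'az deployment group').
subprocess.run(['az', 'group', 'deployment', 'create',
                '--resource-group', 'dse-rg',
                '--name', 'dse-cluster-deploy',
                '--template-uri', 'https://example.com/dse-cluster.json',
                '--parameters', params], check=True)
```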

MICROSOFT ENTERPRISE CLOUD ALLIANCE & FAST START PROGRAM

DataStax also announced it has joined Microsoft's Enterprise Cloud Alliance, a collaboration that reinforces DataStax's commitment to provide the best set of on-premise, hosted and public cloud database solutions in the industry. The goal of Microsoft's Enterprise Cloud Alliance partner program is to create, nurture and grow a strong partner ecosystem across a broad set of Enterprise Cloud Products delivering the best on-premise, hosted and Public Cloud solutions in the industry. Through this alliance, DataStax and Microsoft are working together to create enhanced enterprise-grade offerings for the Azure Marketplace that reduce the complexities of deployment and provisioning through automated ARM scripting capabilities.

Additionally, as a member of Microsoft Azure’s Fast Start program, created to help users quickly deploy new cloud workloads, DataStax users receive immediate access to the DataStax Enterprise Sandbox on Azure for a hands-on experience testing out DSE on Azure capabilities. DataStax Enterprise Sandbox on Azure can be found here.

Cassandra Summit 2015, the world’s largest gathering of Cassandra users, is taking place this week and Microsoft Cloud and Enterprise Executive Vice President Scott Guthrie, DataStax CEO Billy Bosworth, and Apache Cassandra Project Chair and DataStax Co-founder and CTO Jonathan Ellis, will deliver the conference keynote at 10 a.m. PT on Wednesday, September 23. The keynote can be viewed at DataStax.com.

ABOUT DATASTAX

DataStax delivers Apache Cassandra™ in a database platform purpose-built for the performance and availability demands for IoT, Web and mobile applications. This gives enterprises a secure, always-on database technology that remains operationally simple when scaling in a single datacenter or across multiple datacenters and clouds.

With more than 500 customers in over 50 countries, DataStax is the database technology of choice for the world’s most innovative companies, such as Netflix, Safeway, ING, Adobe, Intuit and eBay. Based in Santa Clara, Calif., DataStax is backed by industry-leading investors including Comcast Ventures, Crosslink Capital, Lightspeed Venture Partners, Kleiner Perkins Caufield & Byers, Meritech Capital, Premji Invest and Scale Venture Partners. For more information, visit DataStax.com or follow us @DataStax.

September 30, 2014: Why Datastax’s increasing presence threatens Oracle’s database by Anne Shields at Market Realist 


Datastax databases are built on open-source technologies

Datastax is a California-based database management company. It offers an enterprise-grade NoSQL database that seamlessly and securely integrates real-time data with Apache Cassandra. Databases built on Apache Cassandra offer more flexibility than traditional databases. Even in case of calamities and uncertainties, like floods and earthquakes, data is available due to its replication at other data centers. NoSQL and Cassandra are open-source software.

Cassandra was developed at Facebook (FB) to handle its enormous volumes of data, building on technology developed by Amazon (AMZN) and Google (GOOGL). Oracle's MySQL (ORCL), Microsoft's SQL Server (MSFT), and IBM's DB2 (IBM) are the traditional databases present in the market.

The chart in the original article shows how NoSQL databases, NewSQL databases, and data grid/cache products fit into the wider data-management landscape.

Huge amounts of funds raised in the open-source technology database space

Datastax raised $106 million in September 2014 to expand its database operations. MongoDB Inc. and Couchbase Inc.—both open-source NoSQL database developers—raised $231 million and $115 million, respectively, in 2014. According to Market Research Media, a consultancy firm, spending on NoSQL technology in 2013 was less than $1 billion. It’s expected to reach $3.4 billion by 2020. This explains why this segment is attracting such huge investments.

Oracle’s dominance in the database market is uncertain

Oracle claims it's a market leader in the relational database market, with a revenue share of 48.3%. In 2013, it launched Oracle Database 12c. According to Oracle, "Oracle Database 12c introduces a new multitenant architecture that simplifies the process of consolidating databases onto the cloud; enabling customers to manage many databases as one — without changing their applications." To learn about Database 12c in detail, please click here.

In July 2013, DataStax announced that dozens of companies have migrated from Oracle databases to DataStax databases. Customers cited scalability, disaster avoidance, and cost savings as the reasons for shifting databases. Datastax databases’ rising popularity jeopardizes Oracle’s dominant position in the database market.



September 24, 2015: Building a better experience for Azure and DataStax customers by Matt Rollender, VP Cloud Strategy, DataStax, Inc. on the Microsoft Azure blog

Cassandra Summit is in high gear this week in Santa Clara, CA, representing the largest NoSQL event of its kind! This is the largest Cassandra Summit to date. With more than 7,000 attendees (both onsite and virtual), this is the first time the Summit is a three-day event with over 135 speaking sessions. This is also the first time DataStax will debut a formalized Apache Cassandra™ training and certification program in conjunction with O’Reilly Media. All incredibly exciting milestones!

We are excited to share another milestone. Yesterday, we announced our formal strategic collaboration with Microsoft. Dedicated DataStax and Microsoft teams have been collaborating closely behind the scenes for more than a year on product integration, QA testing, platform optimization, automated provisioning, and characterization of DataStax Enterprise (DSE) on Azure, and more to ensure product validation and a great customer experience for users of DataStax Enterprise on the Azure cloud. There is strong coordination across the two organizations – very close executive, field, and technical alignment – all critical components for a strong partnership.

This partnership is driven and shaped by our joint customers. Our customers oftentimes begin their journey with on-premise deployments of our database technology and then have a requirement to move to the cloud – Microsoft is a fantastic partner to help provide the flexibility of a true hybrid environment along with the ability to migrate to and scale applications in the cloud. Additionally, Microsoft has significant breadth regarding their data centers – customers can deploy in numerous Azure data centers around the globe, in order to be ‘closer’ to their end users. This is highly complementary to DataStax Enterprise software as we are a peer-to-peer distributed database and our customers need to be close to their end users with their always-on, always available enterprise applications.

To highlight a couple of joint customers and use cases, we have First American Title and IHS, Inc. First American is a leading provider of title insurance and settlement services with revenue over $5B. They ingest and store the largest number (billions) of real estate property records in the industry. Accessing, searching and analyzing large data sets to get relevant details quickly is the new way they want to do business: to provide better information back to their customers in real time and allow end users to easily search through the property records online. They chose DSE and Azure because of the large data requirements and the need to continue to scale the application.

A second great customer and use case is IHS, Inc., a $2B revenue-company that provides information and analysis to support the decision-making process of businesses and governments. This is a transformational project for IHS as they are building out an ‘internet age’ parts catalog – it’s a next generation big data application, using NoSQL, non-relational technology and they want to deploy in the cloud to bring the application to market faster.

As you can see, we are enabling enterprises to engage their customers like never before with always-on, highly available and distributed applications. Stay tuned for more as we move forward together in the coming months!

For additional information, go to http://www.datastax.com/marketplace-microsoft-azure to try out the DataStax Enterprise Sandbox on Azure.

See also DataStax Enterprise Cluster Production on Microsoft Azure Marketplace

September 23, 2015: Making Cassandra Do Azure, But Not Windows by the Co-Editor and Co-Founder of The Next Platform

When Microsoft says that it is embracing Linux as a peer to Windows, it is not kidding. The company has created its own Linux distribution for switches used to build the Azure cloud, and it has embraced Spark in-memory processing and Cassandra as the data store for its first major open source big data project – in this case to help improve the quality of its Office 365 user experience. And now, Microsoft is embracing Cassandra, the NoSQL data store originally created by Facebook when it could no longer scale the MySQL relational database to suit its needs, on the Azure public cloud.

Billy Bosworth, CEO at DataStax, the entity that took over steering development of and providing commercial support for Cassandra, tells The Next Platform that the deal with Microsoft has a number of facets, all of which should help boost the adoption of the enterprise-grade version of Cassandra. But the key one is that the Global 2000 customers that DataStax wants to sell support and services to are already quite familiar with Windows Server in their datacenters, and they are looking to burst out to the Azure cloud on a global scale.

“We are seeing a rapidly increasing number of our customers who need hybrid cloud, keeping pieces of our DataStax Enterprise on premises in their own datacenters, and they also want to take pieces of that same live transactional data – not replication, but live data – and [run it] in the Azure cloud as well,” says Bosworth. “They have some unique capabilities, and one of the major requirements of customers is that even if they use cloud infrastructure, it still has to be distributed by the cloud provider. They can’t just run Cassandra in one availability zone in one region. They have to span data across the globe, and Microsoft has done a tremendous job of investing in its datacenters.”

With the Microsoft agreement, DataStax is now running its wares on all three big clouds, with Amazon Web Services and Google Compute Engine already certified to run the production-grade Cassandra. And interestingly enough, Microsoft is supporting the DataStax implementation of Cassandra on top of Linux, not Windows. Bosworth says that while Cassandra can be run on Windows servers, DataStax does not recommend putting DataStax Enterprise (DSE), the commercial release, on Windows. (It does have a few customers who do, nonetheless, and it supports them.) Bosworth adds that DataStax and the Cassandra community have been “working diligently” for the past year to get a Windows port of DSE completed, and that there has been “zero pressure” from the Microsoft Azure team to run DSE on anything other than Linux.

It is important to make the distinction between running Cassandra and other elements of DSE on Windows and having optimized drivers for Cassandra for the .NET programming environment for Windows.

“All we are really talking about is the ability to run the back-end Cassandra on Linux or Windows, and to the developer, it is irrelevant what that back end is running on,” explains Bosworth. “This takes away some of that friction, and what we find is that on the back end, we just don’t find religious conviction about whether it should run on Windows or Linux, and this is different from five years ago. We sell mostly to enterprises, and we have not had one customer raise their hand and say they can’t use DSE because it does not run on Windows.”

What is more important is the ability to seamlessly put Cassandra on public clouds and spread transactional data around for performance and resiliency reasons – the same reasons Facebook created Cassandra in the first place.
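To ground the back-end/driver distinction Bosworth describes, here is a minimal sketch using the DataStax Python driver; the contact points, keyspace, and table are hypothetical, and the point is that the same client code works whether the Cassandra nodes underneath run on Linux or Windows:

from cassandra.cluster import Cluster

# Contact points are hypothetical; the driver discovers the rest of the ring.
cluster = Cluster(["10.0.0.1", "10.0.0.2"])
session = cluster.connect("portal")  # illustrative keyspace name

# The client never sees (or cares) what OS the back-end nodes run on.
for row in session.execute("SELECT user_id, last_login FROM users LIMIT 10"):
    print(row.user_id, row.last_login)

cluster.shutdown()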

What Is In The Stack, Who Uses It, And How

The DataStax Enterprise distribution does not just include the Apache Cassandra data store; it also has an integrated search engine that is API-compatible with the open source Solr search engine, and in-memory extensions that can speed up data access by anywhere from 30X to 100X compared to server clusters using flash SSDs or disk drives. The Cassandra data store can be used to underpin Hadoop, allowing it to be queried by MapReduce, Hive, Pig, and Mahout, and it can also underpin Spark and Spark Streaming as their data store if customers decide not to go with the Hadoop Distributed File System that is commonly packaged with a Hadoop distribution.
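As a rough illustration of Cassandra standing in for HDFS as a Spark data store, here is a PySpark sketch using the open source Spark Cassandra connector. The article dates from the Spark 1.4 era, so this sketch uses the later DataFrame API, and the connector version, host address, and keyspace/table names are all assumptions:

from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("cassandra-analytics")
         .config("spark.jars.packages",
                 "com.datastax.spark:spark-cassandra-connector_2.12:3.4.1")
         .config("spark.cassandra.connection.host", "10.0.0.1")
         .getOrCreate())

# Read a Cassandra table as a DataFrame instead of going through HDFS.
events = (spark.read.format("org.apache.spark.sql.cassandra")
          .options(keyspace="telemetry", table="events")
          .load())

events.groupBy("device_id").count().show()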

It is hard to say for sure how many organizations are running Cassandra today, but Bosworth reckons that it is on the order of tens of thousands worldwide, based on a number of factors. DataStax does not do any tracking of its DataStax Community edition because it wants a “frictionless download” like many open source projects have. (Developers don’t want software companies to see what tools they are playing with, even though they might love open source code.) DataStax provides free training for Cassandra, however, where it does keep track, and developers are consuming over 10,000 units of this training per month, so that probably indicates that the Cassandra installed base (including tests, prototypes, and production) is in the five figures.

[Figure: DataStax momentum]

DataStax itself has over 500 paying customers – now including Microsoft, which first tried to build its own Spark-Cassandra cluster using open source code and then decided that the supported versions were better, thanks to the extra goodies that DataStax puts into its distro. DataStax has 30 of the Fortune 100 using its distribution of Cassandra in one form or another, and it is always for transactional, rather than batch analytic, jobs, and in most cases also for distributed data stores that make use of the “eventual consistency” features of Cassandra to replicate data across multiple clusters. The company has another 600 firms participating in its startup program, which gives young companies freebie support on the DSE distro until they hit a certain size and can afford to start kicking some cash into the kitty.

The largest installation of Cassandra is running at Apple, which as we previously reported has over 75,000 nodes, with clusters ranging in size from hundreds to over 1,000 nodes and a total capacity in the petabytes range. Netflix, which used to employ the open source Cassandra, switched to DSE last May and has over 80 clusters with more than 2,500 nodes supporting various aspects of its video distribution business. In both cases, Cassandra is very likely housing user session state data as well as feeding product lists or playlists and recommendations, or doing faceted search for their online customers.

We are always intrigued to learn how customers are actually deploying tools such as Cassandra in production and how they scale it. Bosworth says that it is not uncommon to run a prototype project on as few as ten nodes and, when the project goes into production, to see it grow to dozens or hundreds of nodes. Midrange DSE clusters range from maybe 500 to 1,000 nodes, and there are some that get well over 1,000 nodes for large-scale workloads like those running at Apple.

In general, Cassandra does not run on disk-heavy nodes the way Hadoop does. Remember, the system was designed to support hot transactional data, not to become a lake with a mix of warm and cold data sifted in batch mode, as is still done with MapReduce running atop Hadoop.

The typical node configuration has changed as Cassandra has evolved and improved, says Robin Schumacher, vice president of products at DataStax. But before getting into feeds and speeds, Schumacher offered this advice. “There are two golden rules for Cassandra. First, get your data model right, and second, get your storage system right. If you get those two things right, you can do a lot wrong with your configuration or your hardware and Cassandra will still treat you right. Whenever we have to dive in and help someone out, it is because they have just moved over a relational data model or they have hooked their servers up to a NAS or a SAN or something like that, which is absolutely not recommended.”
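To make the first golden rule concrete, here is a minimal query-first data-modeling sketch via the DataStax Python driver (keyspace, table, and column names are hypothetical). Instead of porting a normalized relational schema, you design one table per query pattern, with the partition key chosen to match how the data is read:

from cassandra.cluster import Cluster

session = Cluster(["127.0.0.1"]).connect()

# Query-first design: this table answers one question, "give me the
# recent events for a user," so it is partitioned by user_id and
# clustered by time, newest first. Assumes a 'portal' keyspace exists.
session.execute("""
    CREATE TABLE IF NOT EXISTS portal.events_by_user (
        user_id  uuid,
        event_ts timeuuid,
        payload  text,
        PRIMARY KEY ((user_id), event_ts)
    ) WITH CLUSTERING ORDER BY (event_ts DESC)
""")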

[Table: typical Cassandra node configurations]

Only four years ago, because of the limitations in Cassandra (which like Hadoop and many other analytics tools is coded in Java), the rule of thumb was to put no more than 512 GB of disk capacity onto a single node. (It is hard to imagine such small disk capacities these days, with 8 TB and 10 TB disks.) The typical Cassandra node has two processors, with somewhere between 12 and 24 cores, and has between 64 GB and 128 GB of main memory. Customers who want the best performance tend to go with flash SSDs, although you can do all-disk setups, too.

Fast forward to today, and Cassandra can make use of a server node with maybe 5 TB of capacity for a mix of reads and writes, and if you have a write-intensive application, you can push that up to 20 TB. (DataStax has done this in its labs, says Schumacher, without any performance degradation.) Pushing the capacity up is important because it reduces the server node count for a given amount of storage, which cuts hardware and software licensing and support costs; storing 100 TB at 5 TB per node takes 20 nodes, where the old 512 GB rule of thumb would have required roughly 200. Incidentally, only a quarter of DSE customers surveyed said they were using spinning disks, but disk drives are fine for certain kinds of log data. SSDs are used for most transactional data, but the bits that are most latency-sensitive can use DSE to store data on PCI-Express flash cards, which have lower latency.

Schumacher says that in most cases the commercial-grade DSE Cassandra is used for a Web or mobile application, and a DSE cluster is not set up for hosting multiple applications; rather, companies have a different cluster for each use case. (As you can see is the case with Apple and Netflix.) Most DSE shops make use of the eventual consistency replication features of Cassandra to span multiple datacenters with their data stores, some spanning anywhere from eight to twelve datacenters with their transactional data.
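That datacenter spanning is declared per keyspace in Cassandra. A minimal sketch of the replication setting involved follows; the datacenter names must match what the cluster’s snitch reports, and the names and replica counts here are hypothetical:

from cassandra.cluster import Cluster

session = Cluster(["10.0.0.1"]).connect()

# Three replicas of every row in each of three datacenters; Cassandra's
# eventual consistency keeps the copies converging across regions.
session.execute("""
    CREATE KEYSPACE IF NOT EXISTS portal
    WITH replication = {
        'class': 'NetworkTopologyStrategy',
        'us_east': 3, 'eu_west': 3, 'ap_south': 3
    }
""")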

Here’s where it gets interesting, and why Microsoft is relevant to DataStax. Only about 30 percent of the DSE installations are running on premises. The remaining 70 percent are running on public clouds. About half of DSE customers are running on Amazon Web Services, with the remaining 20 percent split more or less evenly between Google Compute Engine and Microsoft Azure. If DataStax wants to grow its business, the easiest way to do that is to grow along with AWS, Compute Engine, and Azure.

So Microsoft and DataStax are sharing their roadmaps and coordinating development of their respective wares, and will be doing product validation, benchmarking, and optimization. The two will be working on demand generation and marketing together, too, and aligning their compensation to sell DSE on top of Azure and, eventually, on top of Windows Server for those who want to run it on premises.

In addition to announcing the Microsoft partnership at the Cassandra Summit this week, DataStax is also releasing its DSE 4.8 stack, which includes certification for Cassandra to be used as the back end for the new Spark 1.4 in-memory analytics tool. DSE Search gets a performance boost for live indexing, and running DSE instances inside Docker containers has been improved. The stack also includes Titan 1.0, the graph database overlay for Cassandra, HBase, and BerkeleyDB that DataStax got through its acquisition of Aurelius back in February. DataStax is also previewing Cassandra 3.0, which will include support for JSON documents, role-based access control, and a lot of little tweaks that will make the storage more efficient, DataStax says. It is expected to ship later this year.

 

Julia Liuson: “Microsoft must transform from a company that throws a box with software into the market … into a company that offers pure services”

April 9, 2015: Microsoft Changes Course by Hsiao-wen Wang
from CommonWealth Magazine, Taiwan

On Oct 21, 2015 Julia Liuson, corporate vice president of Microsoft, expanded her role as head of Visual Studio and .NET engineering to take in all the rest of the former DevDiv, except Brian Harry’s Visual Studio Online team (responsible both for Microsoft’s 3rd-party developer services and for the new One Microsoft Engineering System, 1ES). So now product management and cross-platform developer tools belong to her as well. See the announcement below. Note that in August 2013 she was the manager of Brian Harry’s TFS (Team Foundation Server) dev team. In her …get more girls in STEM disciplines (STEM = science, technology, engineering and mathematics) article of Sept 10, 2012 she was already a corporate vice president in Microsoft’s Developer Tools business. In June 2010 she was Visual Studio Business Applications and Server & Tools Business (STB) China co-General Manager.

Microsoft remains a technology giant that is able to post net earnings of more than NT$100 billion [$3B USD] per quarter. The giant is currently transforming itself and redefining its battlefield. Industry insiders wonder how Microsoft will make money if it can no longer rely on software licensing.

It seems that commercial cloud services are Microsoft’s answer. One of the crucial parties in overcoming in-house resistance to turning Microsoft software, notably the .NET framework, into open source has been Julia Liuson. Born in Shanghai, Liuson grew up in Beijing. Upon obtaining a bachelor of science in electrical engineering from the University of Washington, she joined Microsoft in 1992, holding various technical and managerial positions when the company was still in its heyday. Liuson [as a corporate vice president] works closely with Microsoft’s new CEO Satya Nadella and oversees software development for Visual Studio and the .NET framework.

Q: When taking the helm of Microsoft, Nadella said, “Our industry does not respect tradition — it only respects innovation.” How has Microsoft changed since Nadella called for the company’s transformation when taking office more than a year ago?

A: There has been a very big change in terms of acceptance for going open-source. In terms of operating procedures, we have also seen massive changes. In the past we used to release major software updates every three years, as if we were selling a precious encyclopedia set. But in a speed-hungry Internet business environment, someone needs to run and maintain [software], racing against time 24 hours a day. It is like having to update one encyclopedia page per day, updating a chapter every week.

We have also changed our organization’s operating model. In the past, the ratio of software developers to software testing personnel was one to one. When the developers had developed new software, they would throw it over the wall to the testing staff, where it was no longer the developers’ business. Now, the real work begins when the developers have written the software and release it into the market, because we need to pay attention to customer feedback before we go back to make modifications.

In order to tear down the fences between developers and other departments, we reorganized our staff in work teams of eight to twelve members so that planning, development, testing, marketing and sales as well as customer support can communicate closely with each other and shorten the time needed for product updates and new releases.

INSERT from Oct 1, 2015: Our DevOps Journey – Microsoft Engineering Stories

… In the past, we had three distinct roles on what we call “feature teams”: program managers, developers, and testers. We wanted to reduce delays in handoffs between developers and testers and focus on quality for all software created, so we combined the traditional developer and tester roles into one discipline: software engineers. Software engineers are now responsible for every aspect of making their features come to life and performing well in production. … One of our first steps was to bring the operations teams into the same organization. Before, our ops teams were organizationally distant from the engineering team. … [Now] we call our operations team “Service Engineers.” Service Engineers have to know the application architecture to be more efficient troubleshooters, suggest architectural changes to the infrastructure, be able to develop and test things like infrastructure as code and automation scripts, and make high-value contributions that impact the service design or management. …

In addition to the Our DevOps Journey – Microsoft Engineering Stories briefing from Microsoft, see also the background information at the end of this post under the “DevOps Journey” title.
END OF THE INSERT


Q: As Microsoft transforms, what attitudes and skills are needed most?

A: Microsoft must learn to listen more closely to its customers; that’s a huge change.

Just the Beginning

Corresponding to these attitudinal changes, everything is different from before: the requirements of the products, the analysis of customer behavior, and the collection of big data.

Previously, we only needed to sell our products and everything was fine; we didn’t need to look at what the user wanted. However, now that I need to collect [data] on the behavior of these users, how am I going to go about my product support? How do I analyze the data I’ve gathered? These have all been huge transformations at Microsoft.

We cannot dig moats like before to protect the high market share of our products, Windows and Office. Now we are a challenger, a new service that starts with zero market share and zero users. We need to win over every single customer.

We need to adjust our own mindset: If I were a small startup, what would I do? This is completely different from our mindset in the past, when Microsoft was the industry leader with a market share above 90 percent.

Q: What keeps you awake at night?

A: Everything (laughs)! Just kidding. Come to think of it, I am in charge of Microsoft software, which has millions of users around the globe. But I don’t know who they are and how they use our software. If you told this to the people at Amazon, they would laugh at you.

Microsoft must transform from a company that throws a box with software into the market – a company that does not know who its customers are – into a company that offers pure services, one that knows who every single customer is and how they use its services. This is what keeps me awake at night.

There are still many things that need to be done. How much I wish it was still yesterday. Then I would have another 24 hours to get things done (laughs).

Dec 22, 2011: page cached by Zoominfo from the Microsoft Chinese Employee Association – 微软华人协会 > Julia Liuson

Julia Liuson (潘正磊) is the General Manager for Visual Studio Business Applications. Her teams are responsible for enabling developers to easily build business applications on Microsoft platforms: reinvigorating development paradigms for building LOB applications, delivering first-class tooling for Office server and client, and bringing .NET programmability to all ISV applications.

Julia joined Microsoft in 1992 as a software developer on Access 1.0. After the successful launch of Access 1.0, 1.1, and 2.0, she became development lead for the database and web project tools in Visual InterDev 1.0 and 2.0. In 1998, she assumed the role of development manager for Visual Basic .NET, and led the development effort for Visual Basic .NET 2002 and 2003. Julia then served as Director of Development for the entire Visual Studio product line, and tackled division-wide process and engineering excellence issues.

As the Partner Product Unit Manager of Visual Studio Team Architect, she was a core member of the leadership team that led the successful development and launch of Visual Studio Team System in 2005.

In 2006, she became the Partner Product Unit Manager for Visual Basic, responsible for delivering the most productive development tool on .NET for professional developers, and for moving millions of VB6 users forward to the .NET platform.

Oct 21, 2015: Microsoft Executive VP of the Cloud and Enterprise Group [C+E] Scott Guthrie:

Today we are announcing some organizational changes within C+E that will enable us to further accelerate our customer momentum and move even faster as an organization.  Our new C+E structure will be aligned around our key strategic businesses (Cloud Infrastructure, Data and Analytics, Business Applications and App Platform, Enterprise Mobility, Developer).  As part of today’s changes we are also bringing several teams even closer together to enable us to make deeper shared technology bets.

Each team in C+E will have a clear, focused charter.  Our culture will continue to be grounded in a Growth Mindset.  We’ll exercise this by being Customer-Obsessed, Diverse and Inclusive, and by working as One Microsoft to Make a Difference for our customers and partners. We’ll embrace data driven decision making and optimize for continuous learning and improvement.

Developer Tools and Services

Our Visual Studio Family of developer tools and services provides a complete solution for building modern cloud and mobile applications.

The Visual Studio Tools and .NET Team will be led by Julia Liuson.  John Montgomery, who leads the Visual Studio and .NET PM team, will report to Julia going forward.  The VS Code Team, led by Shanku Niyogi, which is responsible for our cross-platform developer tools, also joins the Visual Studio Tools and .NET Team today, with Shanku also reporting to Julia.

The Visual Studio Online Team will continue to be led by Brian Harry.  The VSO team is responsible both for our 3rd-party developer services and for the new One Microsoft Engineering System.

[Chart: VS Online adoption by major Microsoft organization]

“… TFS on-prem[ises] is growing slowly because it’s already huge. VS Online usage is growing more rapidly but is still far smaller than TFS on-prem[ises]. … Here’s a month-by-month trend of VS Online adoption by major organization. The numbers look a little larger than they really are because adoption is still early and people are using only subsets of the functionality or using VS Online as a supplement to on-prem TFS.” ASG = Application & Services Group, for the “Reinvent productivity and business processes” ambition; C&E = Cloud & Enterprise, for the “Build the intelligent cloud platform” ambition; OSG = Operating Systems Group, for the “Create more personal computing” ambition. Source: Team Foundation Server and VS Online adoption at Microsoft by Brian Harry, June 3, 2015

Oct 24, 2014 excerpt from the web, found via a “Visual Studio Online” “One Microsoft Engineering System” search:

… Visual Studio Online’s goal is to become the single place for all developer targeted services – for both the internal One Microsoft Engineering System and for customers. It provides software development teams with capabilities of project planning, work item management, version control, build automation, test lab management, elastic load test, Application Insights and more. We ship new features every 3 weeks at http://www.visualstudio.com and our adoption is growing at a very rapid clip. Ultimately, our audience is Engineers like YOU! Come onboard to build one of the most mission-critical services that will set the tone for all future engineering practices – inside Microsoft and outside in the developer community!

VS Online makes use of a wide range of technologies on premises and in the cloud, so you’ll have the opportunity to learn new stuff and go deep in many domains. Our key technologies are Azure, SQL Azure, AAD, and ASP.NET MVC on the back end. On the front end we use Knockout to build out an awesome user experience on the web, WPF for VS, and SWT for Eclipse. …

Sept 1, 2015: cached Software Engineer II career posting

As Microsoft transforms into a devices + services company, Visual Studio continues to evolve and adapt in significant ways to support this transformation, requiring a strong team to deliver great engineering tools and systems. The Visual Studio Engineering Tools and Systems team is driving big, bold improvements for current and future releases in the ability to operate at a faster cadence: improving daily engineer productivity, speeding up builds, and making other advancements in how the software is built and delivered. This team is tasked with creating the next-generation engineering system that aligns with the One Microsoft Engineering System (1ES) vision – an engineering system that allows hundreds of people to work together efficiently and be very productive on one of the most important products at Microsoft, Visual Studio. This team is responsible for designing, creating, implementing, and managing the tools, services, and processes that arm the Developer Division engineers to do their best work.

As of Oct 24, 2015: Principal Software Engineer Manager – C+E career posting

… The Tools for Software Engineers team (TSE) has set out to maximize the productivity of all Microsoft engineers and reduce the time from idea to production.

In Satya’s memo to the company he states “In order to deliver the experiences our customers need for the mobile-first and cloud-first world, we will modernize our engineering processes to be customer-obsessed, data-driven, speed-oriented and quality-focused.” Come join us to be a part of this change!

TSE develops and operates a set of engineering tools and services including build tools, build languages (MSBuild), CloudBuild service, drop and artifact services, verification services including unit test execution and code review tools, engineering reporting and analysis services; all working towards a unified, world-class engineering system offering for internal Microsoft needs and third parties.

CloudBuild is at the center of Microsoft 1ES and is helping major groups within the company build faster, more reliable and at scale. CloudBuild serves thousands of developers and builds millions of targets daily in a highly scalable and distributed service running at scale in multiple Data Centers across the world. …

July 31, 2015: 2015 Annual Report > The ambitions that drive us

To carry out our strategy, our research and development efforts focus on three interconnected ambitions:

  • Reinvent productivity and business processes.

  • Build the intelligent cloud platform.

  • Create more personal computing.

Reinvent productivity and business processes

We believe we can significantly enhance the lives of our customers using our broad portfolio of communication, productivity, and information services that spans devices and platforms. Productivity will be the first and foremost objective, to enable people to meet and collaborate more easily, and to effectively express ideas in new ways. We will design applications as dual-use with the intelligence to partition data between work and life while respecting each person’s privacy choices. The foundation for these efforts will rest on advancing our leading productivity, collaboration, and business process tools including Skype, OneDrive, OneNote, Outlook, Word, Excel, PowerPoint, Bing, and Dynamics. With Office 365, we provide these familiar industry-leading productivity and business process tools as cloud services, enabling access from anywhere and any device. This creates an opportunity to reach new customers, and expand the usage of our services by our existing customers.

We see opportunity in combining our offerings in new ways that are more contextual and personal, while ensuring people, rather than their devices, remain at the center of the digital experience. We will offer our services across ecosystems and devices outside our own. As people move from device to device, so will their content and the richness of their services. We are engineering our applications so users can find, try, and buy them in friction-free ways.

Build the intelligent cloud platform

In deploying technology that advances business strategy, enterprises decide what solutions will make employees more productive, collaborative, and satisfied, and connect with customers in new and compelling ways. They work to unlock business insights from a world of data. To achieve these objectives, increasingly businesses look to leverage the benefits of the cloud. Helping businesses move to the cloud is one of our largest opportunities, and we believe we work from a position of strength.

The shift to the cloud is driven by three important economies of scale: larger datacenters can deploy computational resources at significantly lower cost per unit than smaller ones; larger datacenters can coordinate and aggregate diverse customer, geographic, and application demand patterns, improving the utilization of computing, storage, and network resources; and multi-tenancy lowers application maintenance labor costs for large public clouds. As one of the largest providers of cloud computing at scale, we are well-positioned to help businesses move to the cloud so that businesses can focus on innovation while leaving non-differentiating activities to reliable and cost-effective providers like Microsoft.

With Azure, we are one of very few cloud vendors that run at a scale that meets the needs of businesses of all sizes and complexities. We believe the combination of Azure and Windows Server makes us the only company with a public, private, and hybrid cloud platform that can power modern business. We are working to enhance the return on information technology (“IT”) investment by enabling enterprises to combine their existing datacenters and our public cloud into a single cohesive infrastructure. Businesses can deploy applications in their own datacenter, a partner’s datacenter, or in our datacenters with common security, management, and administration across all environments, with the flexibility and scale they want.

We enable organizations to securely adopt software-as-a-service applications (both our own and third-party) and integrate them with their existing security and management infrastructure. We will continue to innovate with higher-level services including identity and directory services that manage employee corporate identity and manage and secure corporate information accessed and stored across a growing number of devices, rich data storage and analytics services, machine learning services, media services, web and mobile backend services, and developer productivity services. To foster a rich developer ecosystem, our digital work and life experiences will also be extensible, enabling customers and partners to further customize and enhance our solutions, achieving even more value. This strategy requires continuing investment in datacenters and other infrastructure to support our devices and services.

Create more personal computing

Windows 10 is the cornerstone of our ambition to usher in an era of more personal computing. We see the launch of Windows 10 in July 2015 as a critical, transformative moment for the Company because we will move from an operating system that runs on a PC to a service that can power the full spectrum of devices in our customers’ lives. We developed Windows 10 not only to be familiar to our users, but more safe and secure, and always up-to-date. We believe Windows 10 is more personal and productive, working seamlessly with functionality such as Cortana, Office, Continuum, and universal applications. We designed Windows 10 to foster innovation – from us, our partners and developers – through experiences such as our new browser Microsoft Edge, across the range of existing devices, and into entirely new device categories.

Our ambition for Windows 10 is to broaden our economic opportunity through three key levers: an original equipment manufacturer (“OEM”) ecosystem that creates exciting new hardware designs for Windows 10; our own commitment to the health and profitability of our first-party premium device portfolio; and monetization opportunities such as services, subscriptions, gaming, and search. Our OEM partners are investing in an extensive portfolio of hardware designs and configurations as they ready for Windows 10. By December 2015, we anticipate the widest range of Windows hardware ever to be available.

With the launch of Windows 10, we are realizing our vision of a single, unified Windows operating system on which developers and OEMs can contribute to a thriving Windows ecosystem. We invest heavily to make Windows the most secure, manageable, and capable operating system for the needs of a modern workforce. We are working to create a broad developer opportunity by unifying the installed base to Windows 10 through upgrades and ongoing updates, and by enabling universal Windows applications to run across all device targets. As part of our strategic objectives, we are committed to designing and marketing first-party devices to help drive innovation, create new categories, and stimulate demand in the Windows ecosystem, including across PCs, phones, tablets, consoles, wearables, large multi-touch displays, and new categories such as the HoloLens holographic computing platform. We are developing new input/output methods like speech, pen, gesture, and augmented reality holograms to power more personal computing experiences with Windows 10.

Our future opportunity

There are several distinct areas of technology that we aim to drive forward. Our goal is to lead the industry in these areas over the long-term, which we expect will translate to sustained growth. We are investing significant resources in:

  • Delivering new productivity, entertainment, and business processes to improve how people communicate, collaborate, learn, work, play, and interact with one another.
  • Establishing the Windows platform across the PC, tablet, phone, server, other devices, and the cloud to drive a thriving ecosystem of developers, unify the cross-device user experience, and increase agility when bringing new advances to market.
  • Building and running cloud-based services in ways that unleash new experiences and opportunities for businesses and individuals.
  • Developing new devices that have increasingly natural ways to interact with them, including speech, pen, gesture, and augmented reality holograms.
  • Applying machine learning to make technology more intuitive and able to act on our behalf, instead of at our command.

We believe the breadth of our products and services portfolio, our large global partner and customer base, our growing ecosystem, and our ongoing investment in innovation position us to be a leader in these areas and differentiate ourselves from competitors.

Regarding the digital work and life experiences, see my earlier Satya Nadella on “Digital Work and Life Experiences” supported by “Cloud OS” and “Device OS and Hardware” platforms – all from Microsoft post of July 23, 2014.

Those ambitions are also reporting segments now.

Oct 22, 2015: Earnings Release FY16 Q1

Revenue in Productivity and Business Processes declined 3% (up 4% in constant currency) to $6.3 billion, with the following business highlights:

  • Office commercial products and cloud services revenue grew 5% in constant currency with Office 365 revenue growth of nearly 70% in constant currency and continued user growth across our productivity offerings
  • Office 365 consumer subscribers increased to 18.2 million, with approximately 3 million subscribers added in the quarter
  • Dynamics revenue grew 12% in constant currency, with the Dynamics CRM Online enterprise installed base growing more than 3x year-over-year

Revenue in Intelligent Cloud grew 8% (up 14% in constant currency) to $5.9 billion, with the following business highlights:

  • Server products and cloud services revenue grew 13% in constant currency, with revenue from premium products and services growing double-digits
  • Azure revenue and compute usage more than doubled year-over-year
  • Enterprise Mobility customers more than doubled year-over-year to over 20,000, and the installed base grew nearly 6x year-over-year

Revenue in More Personal Computing declined 17% (down 13% in constant currency) to $9.4 billion, with the following business highlights:

  • Windows OEM revenue declined 6%, performing better than the overall PC market, as the Windows 10 launch spurred PC ecosystem innovation and helped drive hardware mix toward premium devices
  • Phone revenue declined 54% in constant currency reflecting our updated strategy
  • Search advertising revenue excluding traffic acquisition costs grew 29% in constant currency with Bing US market share benefiting from Windows 10 usage
  • Xbox Live monthly active users grew 28% to 39 million

July 9, 2014: Upcoming VS Online Licensing Changes by Brian Harry

Through the fall and spring, we transitioned VS Online from Preview to General Availability.  That process included changes to branding, the SLA, the announcement of pricing, the end of the early adopter program and more.  We’ve been working closely with customers to understand where the friction is and what we can do to make adopting VS Online as easy as possible.  This is a continuing process and includes discussions about product functionality, compliance and privacy, pricing and licensing, etc.  This is a journey and we’ll keep taking feedback and adjusting.

Today I want to talk about one set of adjustments that we want to make to licensing.

As we ended the early adopter period, we got a lot of questions from customers about how to apply the licensing to their situation.  We also watched as people assigned licenses to their users: What kind of licenses did they choose?  How many people did they choose to remove from their account?  Etc.

From all of this learning, we’ve decided to roll out 2 licensing changes in the next couple of months:

Stakeholders

A common question we saw was “What do I do with all of the stakeholders in my organization?”  While the early adopter program was in effect and all users were free, customers were liberal with adding people to their account.  People who just wanted to track progress, or to file a bug or a suggestion occasionally, were included.  As the early adopter period ended, customers had to decide: is this really worth $20/user/month (minus appropriate Azure discounts)?  The result was that many of these “stakeholders” were removed from the VS Online accounts in the transition, just adding more friction for the development teams.

As a result of all this feedback we proposed a new “Stakeholder” license for VS Online.  Based on the scenarios we wanted to address, we designed a set of features that matched the needs most customers have.  These include:

    • Full read/write/create on all work items
    • Create, run and save (to “My Queries”) work item queries
    • View project and team home pages
    • Access to the backlog, including add and update (but no ability to reprioritize the work)
    • Ability to receive work item alerts

Some of the explicitly excluded items are:

    • No access to Code, Build or Test hubs.
    • No access to Team Rooms
    • No access to any administrative functionality (Team membership, license administration, permissions, area/iterations configuration, sprint configuration, home page configuration, creation of shared queries, etc.)

We then surveyed our “Top Customers” and tuned the list of features (to arrive at what I listed above).  One of the conversations we had with them was about the price/value of this feature set.  We tested 3 different price points – $5/user/month, $2/user/month and free.  Many thought it was worth $5.  Every single one thought it was worth $2.  However, one of the questions we asked was “How many stakeholders would you add to your account at each of these price points?”  The result was 3X more stakeholders if it’s free than if it’s $2.  That told us that any amount of money, even if it is perceived as “worth it”, is too much friction.  Our goal is to enable everyone who has a stake to participate in the development process (and, of course, to run a business in the process).  Ultimately, in balancing the goals of enabling everyone to participate and running a business, we concluded that “free” is the right answer.

As a result, any VS Online account will be able to have an unlimited number of “Stakeholder” users with access to the functionality listed above, at no charge.

Access to the Test Hub

Another point of friction that emerged in the transition was access to the Test hub.  During the Preview, all users had access to the Test hub but, at the end of the early adopter program, the only way to get access to the Test hub was by purchasing Visual Studio Test Professional with MSDN (or one of the other products that include it, like VS Premium or VS Ultimate).

We got ample feedback that there was a class of users who really only need access to the web-based Test functionality and don’t need all that’s in VS Test Professional.

Because of this, we’ve decided to include access to all of the Test hub functionality in the Visual Studio Online Advanced plan.

Timing

I’m letting you know now so that, if you are currently planning your future, you know what is coming.  I’m always loath to get too specific about dates in the future because, as we all know, stuff happens.  However, we are working hard to implement these licensing changes now and my expectation is that we’ve got about 2 sprints of work to do to get it all finished.  That would put the effective date somewhere in the neighborhood of mid-August.  I’ll update you with more certainty as the date gets a little closer.

What about Team Foundation Server?

In general, our goal is to keep the licensing for VS Online and Team Foundation Server as “parallel” as we can – to limit how confusing it could be.  As a result, we will be evolving the current “Work Item Web Access” TFS CAL exemption (currently known as “Limited” users in TFS) to match the “Stakeholder” capabilities.  That will result in significantly more functionality available to TFS users without CALs.  My hope is to get that change made for Team Foundation Server 2013 Update 4.  It’s too early yet to be sure that’s going to be possible but I’m hopeful.  We do not, currently, plan to provide an alternate license for the Test Hub functionality in TFS, though it’s certainly something we’re looking at and may have a solution in a future TFS version.

Conclusion

As I said, it’s a journey and we’ll keep listening.  It was interesting to me to watch the phenomenon of the transition from Preview to GA.  Despite announcing the planned pricing many months in advance, the feedback didn’t get really intense until, literally, the week before the end of the early adopter period when everyone had to finish choosing licenses.

One of the things that I’m proud of is that we were able to absorb that feedback, create a plan, review it with enough people, create an engineering plan and (assuming our timelines hold) deliver it in about 3 months.  In years past that kind of change would take a year or two.

Hopefully you’ll find this change valuable.  We’ll keep listening to feedback and tuning our offering to create the best, most friction-free solution that we can.

Thanks,

Brian

July 7, 2014: TFS Adoption at Microsoft – July 2014 by Brian Harry

Years ago, I used to do monthly updates on TFS adoption at Microsoft.  Eventually, the numbers got so astronomical that it just seemed silly so I stopped doing them.  It’s been long enough and there’s some changes happening that I figured it was worth updating you all on where we are.

First of all, adoption has continued to grow steadily year over year.  We’ve continued to onboard more teams and to deepen the feature set teams are using.  Any major change in the ALM solution of an organization of our size and complexity is a journey.

Let’s start with some stats:

As of today, we have 68 TFS “instances”.  Instance sizes vary from modest hardware up to very large scaled out hardware for the larger teams.  We have over 60K monthly active users and that number is still growing rapidly.  Growth varies month to month and the growth below seems unusually high (over 10%).  I grabbed the latest data I could get my hands on – and that happened to be from April.  The numbers are really staggeringly large.

                Current          30-day growth
Unique users    62,553           7,256
TPCs            788              46
Projects        15,581           187
Work items      42,088,748       5,572,355
Source files    320,224,466      11,959,935
Builds/month    568,190          109,764
Test cases      9,483,760        1,172,495

In addition we’ve started to make progress recently with Windows and Office – two of the Microsoft teams with the oldest and most entrenched engineering systems.  They’ve both used TFS in the past for work planning but recently Windows has also adopted TFS for all work management (including bugs) and Office is planning a move.  We’re also working with them on plans to move their source code over.

In the first couple of years of adoption of TFS at Microsoft, I remember a lot of fire drills.  Bringing on so many people and so much data with such mission critical needs really pushed the system and we spent a lot of time chasing down performance (and occasionally availability) problems.  These days things run pretty smoothly.  The system is scaled out enough and the code, and our dev processes have been tuned enough, that for the most part, the system just works.  We upgrade it pretty regularly (a couple of times a year for the breadth of the service, as often as every 3 weeks for our own instances).

As we close in on completing the first leg of our journey – getting all teams at Microsoft onto TFS – we are now beginning the second.  A few months ago, the TFS team and a few engineering systems teams working closely with them moved all of their assets into VS Online – code, work items, builds, etc.  This is a big step and, I think, foreshadows the future for the entire company.  At this point it’s only a few hundred people accessing it, but it’s already the largest and most active account on VS Online and it will continue to grow.

It was a big decision for us – and we went through a lot of the same anxieties I hear from anyone wanting to adopt a cloud solution for a mission-critical need.  Will my intellectual property be safe?  What happens when the service goes down?  Will I lose any data?  Will performance be good?  Etc.  At the same time, it was important to us to live the life that we are suggesting our customers live – taking the same risks and working to ensure that all of those risks are mitigated.

The benefits of moving are already visible.  I’ve had countless people remark to me how much they’ve enjoyed having access to their work – work items, build status, code reviews, etc from any device, anywhere.  No messing with remote desktop or any other connectivity technology.  As part of this, we also bound the account to the Microsoft Active Directory tenant so we can log in using the same corporate credentials as we do for everything else.  Combining this with a move to Office 365/SharePoint Online for our other collaboration workflows has created for us a fantastic mobile, cloud experience.

I’ll see about starting to post some statistics on our move to the cloud.  As I say, at this point it’s a few hundred people and mostly just the TFS codebase – which is pretty large at this point.  Over time that will grow, but I expect it will be slow – getting larger year over year into a distant future when all of Microsoft has moved to the cloud for our engineering system tools.

I know I have to say this because people will ask.  No, we are not abandoning on-prem TFS.  The vast majority of our customers still use it, and the overwhelming majority of our internal teams still use it (the few hundred people using VS Online are still a rounding error on the more than 60K people using TFS on premises).  We continue to share a codebase between VS Online and TFS and the vast majority of the work we do accrues to both scenarios – and that will continue to be the case.  TFS is here to stay and we’ll keep using it ourselves for a very long time.  At the same time, VS Online is here to stay too, and our use of it will grow rapidly in the coming years.  It will be a big milestone when the first big product engineering team not associated with building VS Online/TFS moves over to VSO for all of their core engineering system needs – I’ll be sure to let you know when that happens.

Brian

DevOps Journey

Sept 2, 2015: DevOps – Enabling DevOps on the Microsoft Stack by Michael Learned, a Visual Studio ALM Ranger currently focused on DevOps and Microsoft Azure

There’s a lot of buzz around DevOps right now. An organization’s custom software is critical to providing rich experiences and useful data to its business users. Rapidly delivering quality software is no longer an option; it’s a requirement. Gone are the days of lengthy planning sessions and development iterations. Cloud platforms such as Microsoft Azure have removed traditional bottlenecks and helped commoditize infrastructure. Software reigns in every business as the key differentiator and factor in business outcomes. No organization, developer or IT worker can or should avoid the DevOps movement.

DevOps is defined from numerous points of view, but most often refers to removing both cultural and technology barriers between development and operations teams so software can move into production as efficiently as possible. Once software is running in production you need to ensure you can capture rich usage data and feed that data back into development teams and decision makers.

There are many technologies and tools that can help with DevOps. These tools and processes support rapid release cycles and data collection on production applications. On the Microsoft stack, tools such as Release Management drive rapid, predictable releases, and Application Insights helps capture rich app usage data. This article will explore and shed some light on critical tools and techniques used in DevOps, as well as the various aspects of DevOps (as shown in Figure 1).

[Figure 1: The Various Aspects of DevOps]

The Role of DevOps

Most organizations want to improve their DevOps story in the following areas:

  • Automated release pipelines in which you can reliably test and release on much shorter cycles.
  • Once the application is running in production, you need the ability to respond quickly to change requests and defects.
  • You must capture telemetry and usage data from running production applications and leverage that for data-driven decision making versus “crystal ball” decision making.

Are there silos in your organization blocking those aspects of DevOps? These silos exist in many forms, such as differing tools, scripting languages, politics and departmental boundaries. They are intended to provide separation of duties and to keep security controls and stability in production.

Despite their intentions, these silos can sometimes impede an organization from achieving many DevOps goals, such as speedy, reliable releases and handling and responding to production defects. In many cases, this silo structure generates an alarming amount of waste. Developers and operations workers have traditionally worked on different teams with different goals. Those teams spend cycles fixing issues caused by these barriers and less time focused on driving the business.

Corporate decision makers need to take a fresh look at the various boundaries to evaluate the true ROI or benefits these silos intend to provide. It’s becoming clear the more you can remove those barriers, the easier it will be to implement DevOps solutions and reduce waste.

It’s a challenge to maintain proper security, controls, compliance and so on while balancing agility needs. Enterprise security teams must ensure data is kept secure and private. Security is arguably as important as anything else an organization does.

However, there’s an associated cost for every security boundary you build. If security boundaries are causing your teams waste and friction, those boundaries deserve a fresh look to ensure they generate ROI. You can be the most secure organization in the world, but if you can’t release software on time you’ll have a competitive disadvantage.

Balancing these priorities isn’t a new challenge, but it’s time for a fresh and honest look at the various processes and silos your organization has built. Teams should all be focused on business value over individual goals.

The Release Pipeline

The release pipeline is where your code is born with version control, then travels through various environments and is eventually released to production. Along the way, you perform automated build and testing. The pipeline should be in a state where moving changes to production is transparent, repeatable, reliable and fast. This will no doubt involve automation. The release pipeline might also include provisioning the application host environment.

Your release pipeline might not be optimized if these factors are present:

  • Tool and process mismatches, whereby you have different tools and processes in place per environment. (For example, the dev teams deploy with one tool and ops deploy with another.)
  • Manual steps, which can introduce error.
  • Re-building just to deploy to the next environment.
  • You lack traceability and have issues understanding which versions have been released.
  • Release cycles are lengthy, even for hotfixes.

Provisioning

Provisioning containers is sometimes considered an optional part of a release pipeline. A classic on-premises scenario often exists in which an environment is already running to host a Web application. The IIS Web server or other host and the back-end SQL Server have been running through numerous iterations. Rapid releases into these environments deploy only the application code and the subsequent SQL schema and data changes needed to move to the appropriate update levels. In this case, you’re not provisioning fresh infrastructure (both IIS and SQL) to host the application. You’re using a release pipeline that disregards provisioning and focuses only on the application code itself.

There are other scenarios in which you might want to change various container configuration settings. You might need to tweak some app pool settings in IIS. You could implement that as part of the release pipeline or handle it manually. Then you may opt to track those changes in some type of versioning system with an Infrastructure-as-Code (IaC) strategy.

There are several other scenarios in which you would want to provision as part of an automated release pipeline. For example, early in development cycles, you might wish to tear down and rebuild new SQL databases for each release to fully and automatically test the environment.

Cloud computing platforms such as Azure let you pay only for what you need. Using automated setup and tear down can be cost-effective. By automating provisioning and environmental changes, you can avoid error and control the entire application environment. Scenarios like these make it compelling to include provisioning as part of a holistic release management system.
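As one hedged illustration of that pay-for-what-you-use pattern, the sketch below spins up a resource group for a test pass and tears it down afterward with the Azure SDK for Python (azure-identity and azure-mgmt-resource); the subscription ID, group name, and region are placeholders, and the 2015-era tooling described in this article would have used PowerShell or cloud deployment projects instead:

from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

credential = DefaultAzureCredential()
client = ResourceManagementClient(credential, "<subscription-id>")

# Spin up a disposable environment for a test pass...
client.resource_groups.create_or_update(
    "rg-release-test", {"location": "westus"})

# ...deploy the application and run the automated tests here...

# ...then tear the whole group down so you stop paying for it.
client.resource_groups.begin_delete("rg-release-test").wait()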

There are many options and techniques for including provisioning as part of your release pipeline. These will differ based on the types of applications you’re hosting and where you host them. One example is hosting a classic ASP.NET Web application versus an Azure Web app or some other Platform-as-a-Service (PaaS) application such as Azure Cloud Services. The containers for those applications are different and require different tooling techniques to support the provisioning steps.

Infrastructure as Code

One popular provisioning technique is IaC. An application is not just an executable – compiled code, scripts and so on – but that code combined with an operational environment, and treating the environment as code yields many benefits.

Microsoft recently had Forrester Research Inc. conduct a research study on the impact of IaC (see bit.ly/1IiGRk1). The research showed IaC is a critical DevOps component. It also showed provisioning and configuration to be a major point of friction for teams delivering software. You’ll need to leverage automation and IaC techniques if you intend to completely fulfill your DevOps goals.

One of the traditional operational challenges is automating the ability to provide appropriate environments in which to execute applications and services, and keeping those environments in known good states. Virtualization and other automation techniques are beneficial, but still have problems keeping nodes in sync and managing configuration drift. Operations and development teams continue to struggle with different toolsets, expertise and processes.

IaC is based on the premise that we should be able to describe, version, execute and test our infrastructure code via an automated release pipeline. For example, you can easily create a Windows virtual machine (VM) configured with IIS using a simple Windows PowerShell script. Operations should be able to use the same ALM tools to script, version and test the infrastructure.
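To make that concrete, here is a minimal sketch (assuming the Hyper-V PowerShell module on the host, an existing Windows Server VHD, and illustrative names and paths):

# On the host: create and start a VM from an existing VHD
New-VM -Name "WebDev01" -MemoryStartupBytes 2GB -VHDPath "C:\VHDs\WinServer.vhdx"
Start-VM -Name "WebDev01"
# Inside the guest OS: install the IIS role
Install-WindowsFeature -Name Web-Server -IncludeManagementTools

A script like this can be versioned, code-reviewed and executed by the pipeline just like application code.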

Other benefits include being able to spin up and tear down known versions of your environments. You can avoid troublesome issues because of environmental differences between development and production. You can express the application environment-specific dependencies in code and carry them along in version control. In short, you can eliminate manual processes and ensure you’ve tested reliable automated environment containers for your applications. Development and operations can use common scripting languages and tools and achieve those efficiencies.

The application type and intended host location will dictate the tooling involved for executing your infrastructure code. There are several tools gaining popularity to support these techniques, including Desired State Configuration (DSC), Puppet, Chef and more. Each helps you achieve similar goals based on the scenario at hand.

The code piece of IaC could be one of several things. It could simply be Windows PowerShell scripts that provision resources. Again, the application types and hosting environment will dictate your choices here.

For Azure, you can use Cloud Deployment Projects that leverage the Azure Resource Manager APIs to create and manage Azure Resource Groups. This lets you describe your environments with JSON. Azure Resource Groups also let you manage related resources together, such as Web sites and SQL databases. With cloud deployment projects, you can store your provisioning requirements in version control and perform Azure provisioning as part of an automated release pipeline. Here are the sections that make up the basic structure of a provisioning template:

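What follows is a minimal sketch of that structure (the section names follow the public Azure Resource Manager template schema; the schema URL and version values are illustrative placeholders to fill in per environment):

{
  "$schema": "http://schema.management.azure.com/schemas/2014-04-01-preview/deploymentTemplate.json",
  "contentVersion": "1.0.0.0",
  "parameters": { },
  "variables": { },
  "resources": [ ],
  "outputs": { }
}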

For more information on templates, go to bit.ly/1RQ3gvg, and for more on cloud deployment projects, check out bit.ly/1flDH3m.

The scripting languages and tooling are only part of the changes needed to successfully adopt an IaC strategy. Development and operations teams must work together to integrate their work streams toward a common set of goals. This can be challenging because historically operations teams have focused on keeping environments stable and development teams are more focused on introducing new features into those environments. Sophisticated technologies are emerging, but the foundation of a successful IaC implementation will depend on the ability of the operations and development teams to effectively collaborate.

Release Orchestration

Release Management is a technology in the Visual Studio ALM stack, but it's really more of a concept whereby you can orchestrate the various objects and tasks that encompass a software release. A few of these artifacts include the payload or package produced by a build system, the automated testing that happens as part of a release pipeline, approval workflows, notifications and security governance to control environments closer to production.

You can use technologies such as DSC, Windows PowerShell scripts, Azure Resource Manager, Chef, and so on to manage environment state and install software and dependencies into running environments. In terms of tooling provided by Visual Studio ALM, think of Release Management as the service that wraps around whatever technologies and tools you need to execute the deployments. Release Management might leverage simple command-line or Windows PowerShell scripts, use DSC, or even execute your own custom tools. You should aim to use the simplest solution possible to execute your releases.

It’s also a good practice to rely on Windows PowerShell because it’s ubiquitous. For example, you can use Windows PowerShell scripts as part of a release pipeline to deploy Azure Cloud Services. There are a lot of out-of-the-box tools with Release Management (see Figure 2), but you also have the flexibility to create your own.

Figure 2 Tools and Options Available for Release Management
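As a hedged sketch of the Windows PowerShell route mentioned above, this uses the classic Azure Service Management cmdlets (the service name, package and configuration paths are illustrative):

# Push a packaged Azure Cloud Service to the production slot
New-AzureDeployment -ServiceName "MyCloudService" `
  -Package ".\MyApp.cspkg" `
  -Configuration ".\ServiceConfiguration.Cloud.cscfg" `
  -Slot "Production"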

Release Management can help you elegantly create an automated release pipeline and produce reliable automated application releases. You can also opt to include provisioning.  The Release Management tooling with Visual Studio and Team Foundation Server can help you orchestrate these artifacts into the overall release transaction. It also provides rich dashboard-style views into your current and historical states. There’s also rich integration with Team Foundation Server and Visual Studio Online.

Where Does DSC Fit In?

There has been a lot of press about DSC lately. DSC is not, however, some all-encompassing tool that can handle everything. You’ll use DSC as one of the tools in your DevOps structure, not the only tool.

You can use DSC in pull or push modes, and then the "make it so" phase controls the server state. Controlling that state can be as simple as ensuring a file or directory exists, or something more complex such as modifying the registry, stopping or starting services, or running scripts to deploy an application. You can do this repeatedly without error. You can also define your own DSC resources or leverage a large number of built-in resources.

DSC is implemented as a Local Configuration Manager (LCM) running on a target node, accepting a Managed Object Format (MOF) configuration file and using it to apply configuration to the node itself. So there's no hard-coupled tool. You don't even have to use Windows PowerShell to produce the MOF.

To start using DSC, simply produce the MOF file. It describes the various resources to execute, and those resources end up written mostly in Windows PowerShell. One of the big advantages of DSC on Windows Server-based systems is that the LCM is native to the OS, giving you the concept of a built-in agent. There are also scenarios for leveraging DSC with Linux. See Figure 3 for an example of separating the configuration data for the DSC script.

Figure 3 Separate Configuration Data Within a DSC Script
Configuration InstallWebSite
{
  Node $AllNodes.NodeName
  {
    # Ensure the IIS role (Web-Server) is present on each node
    WindowsFeature InstallIIS
    {
      Ensure = "Present"
      Name = "Web-Server"
    }
  }
}
InstallWebSite -ConfigurationData .\config.psd1
Where config.psd1 contains the configuration data hashtable:
@{
  AllNodes = @(
    @{
      NodeName = "localhost"
    }
  )
}
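A minimal sketch of compiling and applying that configuration (assuming the snippet above is saved as InstallWebSite.ps1 and run from an elevated PowerShell session):

# Dot-source the script; its last line compiles the MOF to .\InstallWebSite\localhost.mof
. .\InstallWebSite.ps1
# Apply ("make it so") the compiled configuration on the local node
Start-DscConfiguration -Path .\InstallWebSite -Wait -Verbose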

DSC can be an important piece of a release pipeline if it has the resources available to help support your deployment. With on-premises or IaaS applications, DSC is an excellent choice to help control the environment configuration and support your deployment scenarios.

Still, DSC isn't meant to be used for every scenario. To put this in context, if you're deploying Azure PaaS resources, it's recommended you use Azure Resource Manager to get the VMs started and the networking configured. This isn't something DSC is designed for. Once the VMs are running, you can use DSC to get the local configuration the way you want it and ensure the configuration elements you care about don't change.

Monitor with Application Insights

Once an application and environment are in production, it's critical to collect data and monitor the operational health. You also need to understand usage patterns. This data is critical to managing a healthy service, and collecting and monitoring it is an important piece of DevOps. For example, Microsoft has used production data to improve Visual Studio Online. This rich data helps the Visual Studio Online teams ensure service availability, shows them how developers are using the service and informs decisions on feature prioritization. You can read more about the Microsoft DevOps journey at bit.ly/1AzDL9V.

Visual Studio Application Insights adds an SDK to your appli­cation and sends telemetry to the Azure Portal. It supports many different platforms and languages, including iOS, Android, ASP.NET and Java. You can capture performance data, application uptime and various usage analytics. You can show this rich data to decision makers and stakeholders to help make better decisions, detect issues and continuously improve your applications. You can read more about Application Insights at bit.ly/1IbRnrF.

Figure 4 and Figure 5 show examples of the types of data collected by Application Insights.

Figure 4 Application Insights Can Provide Data on Users and Page Views

Figure 5 Application Insights Also Monitors Web Tests

Wrapping Up

DevOps helps teams drive toward continuous delivery and leverage data from running applications to help make better-informed decisions. This article has examined various prominent Microsoft technologies you can use to achieve these goals:

  • Release Management lets you use any technology to drive deployments. These technologies include simple Windows PowerShell scripts, DSC configurations or even third-party tools such as Puppet.
  • Infrastructure-as-Code strategies help development and operations teams efficiently work together.
  • Visual Studio Application Insights gives you a mechanism to capture rich data from running applications, to help stakeholders understand application health and examine usage patterns to drive informed decision making.

These technologies can help you greatly improve your DevOps maturity. You’ll also need to blend an appropriate set of technologies while working to overcome cultural barriers.

Additional Resources

  • To learn more about Infrastructure as Code, listen to Brian Keller’s discussion on Channel 9 at bit.ly/1IiNqmr.
  • To learn more about Azure Resource Group Deployment Projects, check out bit.ly/1flDH3m.
  • To learn more about TFS Planning, Disaster Avoidance and Recovery, and TFS on Azure IaaS, check out the guide at vsarplanningguide.codeplex.com.
  • To learn more about Config as Code for DevOps and ALM practitioners, check out vsardevops.codeplex.com.

Micheal Learned is a Visual Studio ALM Ranger currently focused on DevOps and Microsoft Azure. He has worked on numerous software projects inside and outside of Microsoft for more than 15 years. He lives in central Illinois and devotes his free time to helping the community, as well as relaxing with his daughter, two sons and wife. Reach him on Twitter at twitter.com/mlhoop.

Thanks to the following technical experts for reviewing this article: Donovan Brown (Microsoft), Wouter de Kort (Independent Developer), Marcus Fernandez (Microsoft), Richard Hundhausen (Accentient), Willy-Peter Schaub (Microsoft) and Giulio Vian (Independent Developer)

Satya Nadella on “Digital Work and Life Experiences” supported by “Cloud OS” and “Device OS and Hardware” platforms–all from Microsoft

Update: Gates Says He’s Very Happy With Microsoft’s Nadella [Bloomberg TV, Oct 2, 2014] + Bill Gates is trying to make Microsoft Office ‘dramatically better’ [The Verge, Oct 3, 2014]

This is the essence of the Microsoft Fiscal Year 2014 Fourth Quarter Earnings Conference Call (see also the Press Release and Download Files) for me, as the new, extremely encouraging, overall setup of Microsoft in strategic terms (the below table is mine, based on what Satya Nadella said on the conference call):

[image: table summarizing the new strategic setup]

These are extremely encouraging strategic advancements vis-à-vis previously publicized ones in the following Microsoft-related posts of mine:

I see, however, the continuation of the Lumia story as particularly challenging under the above strategy, as with the previous, combined Ballmer/Elop (Nokia) strategy the results were extremely weak:

[image: Lumia results]

It is worthwhile to include here the videos Bloomberg published simultaneously with the Microsoft Fourth Quarter Earnings Conference Call:

Inside Microsoft’s Secret Surface Labs [Bloomberg News, July 22, 2014]

July 22 (Bloomberg) — When Microsoft CEO Satya Nadella defined the future of his company in a memo to his 127,100 employees, he singled out the struggling Surface tablet as key to a future built around the cloud and productivity. Microsoft assembled an elite team of designers, engineers, and programmers to spend years holed up in Redmond, Washington to come up with a tablet to take on Apple, Samsung, and Amazon. Bloomberg’s Cory Johnson got an inside look at the Surface labs.

Will Microsoft Kinect Be a Medical Game-Changer? [Bloomberg News, July 22, 2014]

July 23 (Bloomberg) — Microsoft’s motion detecting camera was thought to be a game changer for the video gaming world when it was launched in 2010. While appetite for it has since decreased, Microsoft sees the technology as vital in its broader offering as it explores other sectors like 3D mapping and live surgery. (Source: Bloomberg)

Why Microsoft Puts GPS In Meat For Alligators [Bloomberg News, July 22, 2014]

July 23 (Bloomberg) — At the Microsoft Research Lab in Cambridge, scientists track animals and map climate change all on the off chance they’ll stumble across the next big thing. (Source: Bloomberg)

To this it is important to add: How Pier 1 is using the Microsoft Cloud to build a better relationship with their customers [Microsoft Server and Cloud YouTube channel, July 21, 2014]

In this video, Pier 1 Imports discuss how they are using Microsoft Cloud technologies such as Azure Machine Learning to predict which product the customer might want to purchase next, helping to build a better relationship with their customers. Learn more: http://www.azure.com/ml

as well as:
Microsoft Surface Pro 3 vs. MacBook Air 13″ 2014 [CNET YouTube channel, July 21, 2014]

http://cnet.co/1nOygqh Microsoft made a direct comparison between the Surface Pro 3 and the MacBook Air 13″, so we’re throwing them into the Prizefight Ring to settle the score once and for all. Let’s get it on!

Surface Pro 3 vs. MacBook Air (2014) [CTNtechnologynews YouTube channel, July 1, 2014]

The Surface Pro 3 may not be the perfect laptop. But Apple’s MacBook Air is pretty boring. Let’s see which is the better device!

In addition, here are some explanatory quotes (on the new overall setup of Microsoft) from the Q&A part of Microsoft’s (MSFT) CEO Satya Nadella on Q4 2014 Results – Earnings Call Transcript [Seeking Alpha, Jul. 22, 2014 10:59 PM ET]

Mark Moerdler – Sanford Bernstein

Thank you. And Amy one quick question, we saw a significant acceleration this quarter in cloud revenue, or I guess Amy or Satya. You saw acceleration in cloud revenue year-over-year what’s – is this Office for the iPad, is this Azure, what’s driving the acceleration and how long do you think we can keep this going?

Amy Hood

Mark, I will take it and if Satya wants to add, obviously, he should do that. In general, I wouldn’t point to one product area. It was across Office 365, Azure and even CRM online. I think some of the important dynamics that you could point to particularly in Office 365; I really think over the course of the year, we saw an acceleration in moving the product down the market into increasing what we would call the mid-market and even small business at a pace. That’s a particular place I would tie back to some of the things Satya mentioned in the answer to your first question.

Improvements to analytics, improvements to understanding the use scenarios, improving the product in real-time, understanding trial ease of use, ease of sign-up all of these things actually can afford us the ability to go to different categories, go to different geos into different segments. And in addition, I think what you will see more as we initially moved many of our customers to Office 365, it came on one workload. And I think what we’ve increasingly seen is our ability to add more workloads and sell the entirety of the suite through that process. I also mentioned in Azure, our increased ability to sell some of these higher value services. So while, I can speak broadly but all of them, I think I would generally think about the strength of being both completion of our product suite ability to enter new segments and ability to sell new workloads.

Satya Nadella

The only thing I would add is it’s the combination of our SaaS like Dynamics in Office 365, a public cloud offering in Azure. But also our private and hybrid cloud infrastructure which also benefits, because they run on our servers, cloud runs on our servers. So it’s that combination which makes us both unique and reinforcing. And the best example is what we are doing with Azure active directory, the fact that somebody gets on-boarded to Office 365 means that tenant information is in Azure AD that fact that the tenant information is in Azure AD is what makes EMS or our Enterprise Mobility Suite more attractive to a customer manager iOS, Android or Windows devices. That network effect is really now helping us a lot across all of our cloud efforts.

Keith Weiss – Morgan Stanley

Excellent, thank you for the question and a very nice quarter. First, I think to talk a little bit about the growth strategy of Nokia, you guys look to cut expenses pretty aggressively there, but this is – particularly smartphones is a very competitive marketplace, can you tell us a little bit about sort of the strategy to how you actually start to gain share with Lumia on a going forward basis? And may be give us an idea of what levels of share or what levels of kind unit volumes are you going to need to hit to get to that breakeven in FY16?

Satya Nadella

Let me start and Amy you can even add. So overall, we are very focused on I would say thinking about mobility share across the entire Windows family. I already talked about in my remarks about how mobility for us even goes beyond devices, but for this specific question I would even say that, we want to think about mobility not just one form factor of a mobile device because I think that’s where the ultimate price is.

But that said, we are even year-over-year basis seen increased volume for Lumia, it’s coming at the low end in the entry smartphone market and we are pleased with it. It’s come in many markets we now have over 10% that’s the first market I would sort of say that we need to track country-by-country. And the key places where we are going to differentiate is looking at productivity scenarios or the digital work and life scenario that we can light up on our phone in unique ways.

When I can take my Office Lens App use the camera on the phone take a picture of anything and have it automatically OCR recognized and into OneNote in searchable fashion that’s the unique scenario. What we have done with Surface and PPI shows us the way that there is a lot more we can do with phones by broadly thinking about productivity. So this is not about just a Word or Excel on your phone, it is about thinking about Cortana and Office Lens and those kinds of scenarios in compelling ways. And that’s what at the end of the day is going to drive our differentiation and higher end Lumia phones.

Amy Hood

And Keith to answer your specific question, regarding FY16, I think we’ve made the difficult choices to get the cost base to a place where we can deliver, on the exact scenario Satya as outlined, and we do assume that we continue to grow our units through the year and into 2016 in order to get to breakeven.

Rick Sherlund – Nomura

Thanks. I’m wondering if you could talk about the Office for a moment. I’m curious whether you think we’ve seen the worst for Office here with the consumer fall off. In Office 365 growth in margins expanding their – just sort of if you can look through the dynamics and give us a sense, do you think you are actually turned the corner there and we may be seeing the worse in terms of Office growth and margins?

Satya Nadella

Rick, let me just start qualitatively in terms of how I view Office, the category and how it relates to productivity broadly and then I’ll have Amy even specifically talk about margins and what we are seeing in terms of I’m assuming Office renewals is that probably the question. First of all, I believe the category that Office is in, which is productivity broadly for people, the group as well as organization is something that we are investing significantly and seeing significant growth in.

On one end you have new things that we are doing like Cortana. This is for individuals on new form factors like the phones where it’s not about anything that application, but an intelligent agent that knows everything about my calendar, everything about my life and tries to help me with my everyday task.

On the other end, it’s something like Delve which is a completely new tool that’s taking some – what is enterprise search and making it more like the Facebook news feed where it has a graph of all my artifacts, all my people, all my group and uses that graph to give me relevant information and discover. Same thing with Power Q&A and Power BI, it’s a part of Office 365. So we have a pretty expansive view of how we look at Office and what it can do. So that’s the growth strategy and now specifically on Office renewals.

Amy Hood

And I would say in general, let me make two comments. In terms of Office on the consumer side between what we sold on prem as well as the Home and Personal we feel quite good with attach continuing to grow and increasing the value prop. So I think that’s to address the consumer portion.

On the commercial portion, we actually saw Office grow as you said this quarter; I think the broader definition that Satya spoke to the Office value prop and we continued to see Office renewed in our enterprise agreement. So in general, I think I feel like we’re in a growth phase for that franchise.

Walter Pritchard – Citigroup

Hi, thanks. Satya, I wanted to ask you about two statements that you made, one around responsibly making the market for Windows Phone, just kind of following on Keith’s question here. And that’s a – it’s a really competitive market it feels like ultimately you need to be a very, very meaningful share player in that market to have value for developer to leverage the universal apps that you’re talking about in terms of presentations you’ve given and build in and so forth.

And I’m trying to understand how you can do both of those things once and in terms of responsibly making the market for Windows Phone, it feels difficult given your nearest competitors there are doing things that you might argue or irresponsible in terms of making their market given that they monetize it in different ways?

Satya Nadella

Yes. One of beauties of universal Windows app is, it aggregates for the first time for us all of our Windows volume. The fact that even what is an app that runs with a mouse and keyboard on the desktop can be in the store and you can have the same app run in the touch-first on a mobile-first way gives developers the entire volume of Windows which is 300 plus million units as opposed to just our 4% share of mobile in the U.S. or 10% in some country.

So that’s really the reason why we are actively making sure that universal Windows apps is available and developers are taking advantage of it, we have great tooling. Because that’s the way we are going to be able to create the broadest opportunity to your very point about developers getting an ROI for building to Windows. For that’s how I think we will do it in a responsible way.

Heather Bellini – Goldman Sachs

Great. Thank you so much for your time. I wanted to ask a question about – Satya your comments about combining the next version of Windows and to one for all devices and just wondering if you look out, I mean you’ve got kind of different SKU segmentations right now, you’ve got enterprise, you’ve got consumer less than 9 inches for free, the offering that you mentioned earlier that you recently announced. How do we think about when you come out with this one version for all devices, how do you see this changing kind of the go-to-market and also kind of a traditional SKU segmentation and pricing that we’ve seen in the past?

Satya Nadella

Yes. My statement Heather was more to do with just even the engineering approach. The reality is that we actually did not have one Windows; we had multiple Windows operating systems inside of Microsoft. We had one for phone, one for tablets and PCs, one for Xbox, one for even embedded. So we had many, many of these efforts. So now we have one team with the layered architecture that enables us to in fact one for developers bring that collective opportunity with one store, one commerce system, one discoverability mechanism. It also allows us to scale the UI across all screen sizes; it allows us to create this notion of universal Windows apps and being coherent there.

So that’s what more I was referencing and our SKU strategy will remain by segment, we will have multiple SKUs for enterprises, we will have for OEM, we will have for end-users. And so we will – be disclosing and talking about our SKUs as we get further along, but this my statement was more to do with how we are bringing teams together to approach Windows as one ecosystem very differently than we ourselves have done in the past.

Ed Maguire – CLSA

Hi, good afternoon. Satya you made some comments about harmonizing some of the different products across consumer and enterprise and I was curious what your approach is to viewing your different hardware offerings both in phone and with Surface, how you’re go-to-market may change around that and also since you decided to make the operating system for sub 9-inch devices free, how you see the value proposition and your ability to monetize that user base evolving over time?

Satya Nadella

Yes. The statement I made about bringing together our productivity applications across work and life is to really reflect the notion of dual use because when I think about productivity it doesn’t separate out what I use as a tool for communication with my family and what I use to collaborate at work. So that’s why having this one team that thinks about outlook.com as well as Exchange helps us think about those dual use. Same thing with files and OneDrive and OneDrive for business because we want to have the software have the smart about separating out the state carrying about IT control and data protection while me as an end user get to have the experiences that I want. That’s how we are thinking about harmonizing those digital life and work experiences.

On the hardware side, we would continue to build hardware that fits with these experiences if I understand your question right, which is how will be differentiate our first party hardware, we will build first party hardware that’s creating category, a good example is what we have done with Surface Pro 3. And in other places where we have really changed the Windows business model to encourage a plethora of OEMs to build great hardware and we are seeing that in fact in this holiday season, I think you will see a lot of value notebooks, you will see clamshells. So we will have the full price range of our hardware offering enabled by this new windows business model.

And I think the last part was how will we monetize? Of course, we will again have a combination, we will have our OEM monetization and some of these new business models are about monetizing on the backend with Bing integration as well as our services attached and that’s the reason fundamentally why we have these zero-priced Windows SKUs today.

Justin Rosenstein of Asana: Be happy in a project-oriented teamwork environment made free of e-mail based communication hassle

Get Organized: Using Asana in Business [PCMag YouTube channel, Febr 24, 2014]

Tired of “work about work?” Some businesses are using Asana to streamline their communication and workflow. Here’s a bit about the tool and how it works

Steven Sinofsky, former head of Microsoft Office and (later) Windows at Microsoft:

We’ve all seen examples of the collaborative process playing out poorly by using email. There’s too much email and no ability to track and manage the overall work using the tool. Despite calls to ban the process, what is really needed is a new tool. So Asana is one of many companies working to build tools that are better suited to the work than one we currently all collectively seem to complain about.
in Don’t ban email—change how you work! [Learning by Shipping, Jan 31, 2014]

Asana is a simple example of an easy-to-use and modern tool that decreases (to zero) email flow, allows for everyone to contribute and align on what needs to be done, and to have a global view of what is left to do.
in You’re doing it wrong [Learning by Shipping, April 10, 2014] and Shipping is a Feature: Some Guiding Principles for People That Build Things [Learning by Shipping, April 17, 2014]

Making e-mail communication easier [Fox Business Video]
May. 06, 2014 – 3:22 – Asana co-founder Justin Rosenstein weighs in on his new email business.

How To Collaborate Effectively With Asana [Forbes YouTube channel, Feb 26, 2013]

Collaboration tool Asana provides a shared task list for companies to get work done. That means taking on the challenge of slaying the email inbox.

Dustin Moskovitz: How Asana Gets Work Done [Forbes YouTube channel, Feb 26, 2013]

Asana cofounder Dustin Moskovitz, who previously cofounded Facebook, talks about Asana’s company culture, which includes an emphasis on transparency, a company-wide one week strategy session every four months and employee perks.

Do Great Things: Keynote by Justin Rosenstein of Asana | Disrupt NY 2014 [TechCrunch YouTube channel, May 5, 2014]

Asana’s Justin Rosenstein thinks we’re poised to make the greatest change possible for the largest number of people: what are we going to do with that potential? What should we do? For the full interview click here: http://techcrunch.com/video/do-great-things-keynote-by-justin-rosenstein-of-asana/518220046/

Asana’s Justin Rosenstein: “I Flew Coach Here.” | Disrupt NY 2014 [TechCrunch YouTube channel, May 5, 2014]

At the end of their chat, Asana’s Justin Rosenstein and TechCrunch’s Alex Wilhelm failed to reconcile their views, but manage to land a high five, Click here to watch the full interview: http://techcrunch.com/video/do-great-things-keynote-by-justin-rosenstein-of-asana/518220046/

How we use Asana [asana blog, Oct 9, 2013]

We love to push the boundaries of what Asana can do. From creating meeting agendas to tracking bugs to maintaining snacks in the refrigerator, the Asana product is (unsurprisingly) integral to everything we do at Asana. We find many customers are also pushing the boundaries of Asana to fit their teams’ needs and processes. Since Asana was created to be flexible and powerful enough for every team, nothing makes us more excited than hearing about these unique use cases.

Recently, we invited some of our Bay Area-based customers to our San Francisco HQ to share best practices with one another and hear from our cofounder Justin Rosenstein about the ways we use Asana at Asana. We’re excited to pass on this knowledge through some video highlights from the event. You can watch the entire video here: The Asana Way to Coordinate Ambitious Projects with Less Effort

Capture steps in a Project
“The first thing we always do is create a Project that names what we’re trying to accomplish. Then we’ll get together as a team and think of, ‘What is every single thing we need to accomplish between now and the completion of that Project?’ Over the course of the Project, all of the Tasks end up getting assigned.”

Organize yourself
“Typically when I start my day, I’ll start by looking at all the things that are assigned to me. I’ll choose a few that I want to work on today. I try to be as realistic as possible, which means adding half as many things as I am tempted to add. After putting those into my ‘Today’ view, there are often a couple of other things I need to do. I just hit enter and add a few more tasks.”

Forward emails to Asana
“Because I want Asana to be the source of truth for everything I do, I want to put emails into my task list and prioritize them. I’ll just take the email and forward it to x@mail.asana.com. We chose ‘x’ so it wouldn’t conflict with anything else in your address book. Once I send that, it will show up in Asana with the attachments and everything right intact.”

Run great meetings
“We maintain one Project per meeting. If I’m looking at my Task list and see a Task I want to discuss at the meeting, I’ll just use Quick Add (tab + Q) to put the Task into the correct Project. Then when the meeting comes around, everything that everyone wants to talk about has already been constructed ahead of time.”

Track responsibility
“Often a problem comes up and someone asks, ‘Who’s responsible for that?’ So instead, we’ve built out a list of areas of responsibility (AoRs), which is all the things that someone at the company has to be responsible for. By having AoRs, we distribute responsibility. We can allow managers to focus on things that are more specific to management and empower everyone at the company to be a leader in their own field.”


Background on https://asana.com/


How did it all start and progress?

asana demo & vision talk [Robert Marquardt YouTube channel, Feb 15, 2011]

“First public demo of Asana and deep-dived into the nuances of the product, the long-term mission that drives us, how the beta’s going, and more. We were really excited to be able to share what we’ve been working on and why we’re so passionate about it, and hope you enjoy this video of the talk.” http://blog.asana.com/2011/02/asana-demo-vision-talk/

The Asana Vision & Demo [asana blog, Feb 7, 2011]

We recently hosted an open house at our offices in San Francisco, where we showed the first public demo of Asana and deep-dived into the nuances of the product, the long-term mission that drives us, how the beta’s going, and more. We were really excited to be able to share what we’ve been working on and why we’re so passionate about it, and hope you enjoy the above video of the talk:

Asana will be available more broadly later this year. In the meantime,

  • if you’re interested in participating in the beta program, sign up here.
  • if these sound like problems you’d like to help tackle, we’re hiring.
  • and if you’d just like to receive updates about Asana going forward, use the form in the upper right of this page.

Introducing Asana: The Modern Way to Work Together [asana blog, Nov 2, 2011]


Asana is a modern web application that keeps teams in sync, a shared task list where everyone can capture, organize, track, and communicate what they are working on in service of their common goal. Rather than trying to stay organized through the tedious grind of emails and meetings, teams using Asana can move faster and do more — or even take on bigger and more interesting goals.

How Asana Works:

Asana re-imagines the way we work together by putting the fundamental unit of productivity – the task – at the center. Breaking down ambitious goals into small pieces, assigning ownership of those tasks, and tracking them to completion is how things get built, from software to skyscrapers. With Asana, you can:

  • capture everything your team is planning and doing in one place. When tasks and the conversations about them are collected together, instead of spread around emails, documents, whiteboards, and notebooks, they become the shared, trusted, collective memory for your organization.
  • keep your team in sync on the priorities, and what everyone is working on. When you have a single shared view of a project’s priorities, along with an accurate view into what each person is working on and when, everyone on the team knows exactly what matters, and what work remains between here and the goal.
  • get the right information at the right time. Follow tasks, and you’ll receive emails as their status evolves. Search, and you’ll see the full activity feed of all the discussions and changes to a task over its history. Now, it’s easy to stay on top of the details — without asking people to forward you a bunch of email threads.

Building tools for teamwork [asana blog, Nov 22, 2013]

Our co-founder, Justin, recently wrote in Wired about why we need to rethink the tools we use to work together. The article generated a lot of interesting comments, from ideas on knowledge management to fatigue with the “meeting lifestyle,” to this protest on the typical office culture:

“Isn’t the root of this problem that, within our own organizations, we fiercely guard information and our decision-making processes? Email exchanges and invite-only meetings shut out others– forcing the need for follow-up conversations, summary reports, and a trail of other status/staff meetings to relay content already covered some place/some time before.”

To reach its goals, we think a team needs clarity of purpose, plan and responsibility. Technology and tools can help us reach that kind of clarity, but only if they target the right problem. From their roles at Facebook, Asana’s founders have extensive knowledge of social networks, and the social graph technology they rely on. But Asana isn’t a social network. Why? Because, as Justin outlines, the social graph doesn’t target the problem of work:

[image: the social graph vs. the work graph]

Our personal and professional lives, even if they overlap, have two distinct goals — and they require different “graphs.”

For our personal lives, the goal is love (authentic interpersonal connection), and that requires a social graph with people at the center. For our work lives, the goal is creation (working together to realize our collective potential), and that requires a work graph, with the work at the center.

Don’t get me wrong: Human connection is valuable within a business. But it should be in service to the organizational function of getting work done, and doesn’t need to be the center of the graph.

So, how does this change the experience for you and your teammates? A work graph means having all the information you need when you need it. Instead of blasting messages at the whole team, like “Hey, has anyone started working on this yet?”, you should be able to efficiently find out exactly who’s working on that task and how much progress they’ve made. That’s the target Asana is aiming for. Read Justin’s full Wired article.

Organizations in Asana [asana blog, May 1, 2013]

Today, we’re excited to be launching a collection of new features aimed at helping companies use and support Asana across their entire enterprise. We call it Organizations.

Since we began, Asana has been on a mission to help great teams achieve more ambitious goals. We started 18 months ago with our free service, targeted at smaller teams and even individuals – helping them get and stay organized.

When we launched our first premium tiers six months later, we enabled medium sized teams and companies – think 10s to 100s of people – to go further with Asana. In the year between then and now, we’ve been continuously amazed by all the places and ways Asana is being used to organize a team: in industries as diverse as education, healthcare, finance, technology, and manufacturing; in companies from two-person partnerships to Fortune 100 enterprises; and in dozens of countries representing every continent but the frozen one. There’s a lot of important work being organized in Asana.

But we’re still just getting started – there remain teams that we haven’t been ready to support: the largest teams, those that grow from 100s to 1,000s of people. While it would be remarkable if it only took a small number of coworkers to design and manufacture electric cars, synthesize DNA, or deliver healthcare to villages across the globe – these missions are complex, and require more people to be involved in them to succeed. Many of the teams using Asana today are inside these bigger organizations, and they’ve been asking for Asana to work at enterprise-scale. So for the past several months, we’ve been working on just that.

Stories from our first year [asana blog, Nov 12, 2012]

… When we launched a year go, we had an ambitious mission: to create a shared task management platform that empowers teams of like-minded people to do great things. … In the course of our first year, tens of thousands of teams looking for a better way to work together have adopted Asana. …

… we collected three of these stories from three distinct kinds of teams:
– a tech startup [Foursquare],
– a fast-growing organic food company [Bare Fruit & Sundia] and
– a leading Pacific Coast aquarium [Aquarium of the Bay].

Foursquare Launches 5.0

Right around the time Foursquare passed 100 employees over the last year, we started building Foursquare 5.0. This update was a big deal: we were overhauling Foursquare’s core mechanics, evolving from check-ins towards the spontaneous discovery of local businesses. As we built the new app, we needed a way to gather feedback from the entire team.

We tried what felt like every collaboration tool around. Group emails were a mess. Google Docs was impossible to parse. We’d heard about Asana and decided to give it a shot.

Using Asana, we were easily able to collect product feedback and bugs from everyone in the company, then parse, discuss, distribute and prioritize the work. It became an indispensable group communication tool.

Foursquare 5.0 was a giant success, and we couldn’t have done it without Asana.

Noah Weiss, Product Manager

Then, Of Course, There Is Us

It’s an understatement to say that we rely on Asana. We use our own product to manage every function of our business. Asana is where we plan, capture ideas, build meeting agendas, prioritize our product roadmap, document which bugs to fix and list the snacks to buy. It’s our CRM, our editorial calendar, our Applicant Tracking System, and our new-hire orientation system. Every team in the company – from product, design, and engineering to sales and marketing to recruiting and user operations – relies on the product we are building to stay in sync, connect our individual tasks to the bigger picture and accomplish our collective goals.

Q&A: Rising Realty Partners builds their business with Asana [asana blog, Feb 7, 2014]

…The Los Angeles development firm Rising Realty Partners shared with us how they used Asana, and our integration with Dropbox, to close a massive ten-property deal.

As our business expanded, we found ourselves relying heavily on email, faxes, and even FedEx to communicate with each other and collaborate with outside parties. We needed a better way to organize, prioritize and communicate around our work, and we found the answer in Asana.


I can’t imagine how complex our communications would have been if we weren’t using Asana. We had dozens of people internally, and more than 50 people externally, all involved in making this deal happen. Having all of that communication in Asana significantly cut down on the craziness.

Because of Asana’s Dropbox integration, our workflow is now fast, intuitive and organized — something that was impossible to achieve over email. For the acquisition, we used Asana and Dropbox simultaneously to keep track of everything; from what each team member was doing, to the current status of each transaction, to keeping a history of all related documents. We had more than 18,000 items in Dropbox that we would link to in Asana instead of attaching them in email. We removed more than 30 gigabytes of information per recipient from our inboxes and everything was neatly organized around the work we were doing in Asana. This meant that the whole team always had the latest and most relevant information.
 

For this entire project, maybe one percent of our total internal communication was happening in email. With Asana, anyone in the company could look at any aspect of the project, see where it stood, and add their input. No one had to remember to ‘cc’ or ‘reply all’.

….
The success of this deal was largely due to Asana and we plan to use it in future acquisitions –Asana has become essential to our team’s success.
….

Our iPhone App Levels Up [asana blog, Sept 6, 2012]

Until recently, we’ve focused most of our energy on the browser-based version of Asana. But, in the last few months, even as we’ve launched major new features in our web application, we’ve been putting much more time into improving the mobile experience. In June, we made several meaningful architectural improvements to pave the way for bigger and better things and hinted that these changes were in the works.

Today, we’ve taken the next step in that direction: Version 2.0 of our iPhone app is in the App Store now. We are really proud of this effort – almost everyone at Asana played a part in this release. This new version is a top-to-bottom redesign that really puts the power of the desktop web version of Asana right in your pocket.

Asana comes to Android [asana blog, Feb 28, 2013]

Five months ago, we launched our first bonafide mobile app, for the iPhone, and we’ve been steadily improving it ever since. Focusing on a single platform at first allowed us to be meticulous about our mobile experience, adding new features and honing the design until we knew it was something people loved. After strong positive feedback from our customers and a solid rating in the iTunes App Store, we knew it was time.

Today, we are happy to announce that Asana for Android is here. You can get it right now in the Google Play store


As of today (May 8, 2014) there are 70 employees and 15 open positions. The company has 4 investors: Benchmark Capital, Andreessen Horowitz, Founders Fund and Peter Thiel. The first two put in $9 million in November 2009. Then Founders Fund and Peter Thiel added $28 million to that in July 2012, as Reuters reported in Facebook alumni line up $28 million for workplace app Asana [July 23, 2012]:

Asana, a Silicon Valley start-up, has lined up $28 million in a financing round led by PayPal co-founder Peter Thiel and his Founders Fund, the company said.

The funding round values the workplace-collaboration company at $280 million, a person familiar with the matter said.

“This investment allows us to attract the best and brightest designers and engineers,” said Asana co-founder Justin Rosenstein, who said that in turn would help the company build on its goal of making interaction among its client-companies’ employees easier.

Asana launched the free version last year of its company management software that makes it easier to collaborate on projects. It introduced a paid, premium service earlier this year. It declined to give revenue figures, but said “hundreds” of customers had upgraded to the premium version.

Although Rosenstein and co-founder Dustin Moskovitz are alumni of social network Facebook – Moskovitz co-founded the service with his Harvard roommate Mark Zuckerberg – they were quick to distance Asana from social networking.

Instead, they say, they view the company as an alternative to email, in-person meetings, physical whiteboards, and spreadsheets.

“That’s what we see as our competition,” said Rosenstein. “Replacing those technologies.”

With its latest funding round, Asana has now raised a total of $38 million from investors including Benchmark Capital and Andreessen Horowitz.

Thiel, who got to know Moskovitz and Rosenstein thanks to his early backing of Facebook, had already invested in Asana when it raised its “angel” round in early 2009. Now, his high-profile Founders Fund is investing and Thiel is joining Asana’s board.

Facebook has 901 million monthly users and revenue last year of $3.7 billion. But its May initial public offering disappointed many investors after it priced at $38 per share and then quickly fell. It closed on Friday at $28.76.

Many investors speculate that start-ups will have to accept lower valuations in the wake of the Facebook IPO. The Asana co-founders said the terms of their latest funding round were set before Facebook debuted on public markets.

A few of Facebook’s longtime employees have gone on to work on their own ventures.

Bret Taylor, formerly chief technology officer, said last month he was leaving to start his own company.

Dave Morin, who joined Facebook in 2008 from Apple, left in 2010 to found social network Path. Facebook alumni Adam D’Angelo and Charlie Cheever left in 2009 to start Quora, their question-and-answer company, which is also backed by Thiel.

Another former roommate of Zuckerberg’s, Chris Hughes, also left a few years ago and coordinated online organizing for Barack Obama’s 2008 presidential campaign. Now, he is publisher of the New Republic magazine.

Matt Cohler, who joined Facebook from LinkedIn early in 2005, joined venture capital firm Benchmark Capital in 2008. His investments there include Asana and Quora.

Core technology used

Luna, our in-house framework for writing great web apps really quickly [asana blog, Feb 2, 2010]

At Asana, we’re building a Collaborative Information Manager that we believe will make it radically easier for groups of people to get work done. Writing a complex web application, we experienced pain all too familiar to authors of “Web 2.0″ software (and interactive software in general): there were all kinds of extremely difficult programming tasks that we were doing over and over again for every feature we wanted to write. So we’re developing Lunascript — an in-house programming language for writing rich web applications in about 10% of the time and code it takes today.

Check out the video we made » 
[rather an article about Luna as of Nov 2, 2011]

Update: For now we’ve tabled using the custom DSL syntax in favor of a set of Javascript idioms and conventions on top of the “Luna” runtime. So while the contents of this post still accurately present the motivation and capabilities of the Luna framework, we’re using a slightly more cumbersome (JavaScript) syntax than what you see below, in exchange for having more control over the “object code” (primarily for hand-tuning performance).

Release the Kraken! An open-source pub/sub server for the real-time web [asana blog, March 5, 2013]

Today, we are releasing Kraken, the distributed pub/sub server we wrote to handle the performance and scalability demands of real-time web apps like Asana.

Before building Kraken, we searched for an existing open-source pub/sub solution that would satisfy our needs. At the time, we discovered that most solutions in this space were designed to solve a much wider set of problems than we had, and yet none were particularly well-suited to solve the specific requirements of real-time apps like Asana. Our team had experience writing routing-based infrastructure and ultimately decided to build a custom service that did exactly what we needed – and nothing more.

The decision to build Kraken paid off. For the last three years, Kraken has been fearlessly routing messages between our servers to keep your team in sync. During this time, it has yet to crash even once. We’re excited to finally release Kraken to the community!

Issues Moving to Amazon’s Elastic Load Balancer [asana blog, June 5, 2012]


Asana’s infrastructure runs almost entirely on top of Amazon Web Services (AWS). AWS provides us with the ability to launch managed production infrastructure in minutes with simple API calls. We use AWS for servers, databases, monitoring, and more. In general, we’ve been very happy with AWS. A month ago, we decided to use Amazon’s Elastic Load Balancer service to balance traffic between our own software load balancers.

Announcing the Asana API [asana blog, April 19, 2012]

Today we are excited to share that you can now add and access Asana data programmatically using our simple REST API.

The Asana API lets you build a variety of applications and scripts to integrate Asana with your business systems, show Asana data in other contexts, and create tasks from various locations.

Here are some examples of the things you can build:

  • Source Control Integration to mark a Task as complete and add a link to the code submission as a comment when submitting code.
  • A desktop app that shows the Tasks assigned to you
  • A dashboard page that shows a visual representation of complete and incomplete Tasks in a project
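As a hedged sketch of calling that REST API (the /users/me endpoint is from Asana’s public API documentation; the token variable is a placeholder, and Bearer-token authentication is an assumption of this sketch):

# Fetch the authenticated user's record from the Asana REST API
$token = "<your Asana access token>"   # placeholder, not a real credential
$me = Invoke-RestMethod -Uri "https://app.asana.com/api/1.0/users/me" -Headers @{ Authorization = "Bearer $token" }
$me.data.name   # the API wraps results in a "data" envelope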

Asana comes to Internet Explorer [asana blog, Oct 16, 2013]


Asana is a fast and versatile web-based application that pushes the boundaries of what’s possible inside a browser. Our sophisticated JavaScript app requires a modern browser platform, and up until now we could only provide the right user experience on Chrome, Firefox, and Safari. With IE10, Internet Explorer has drastically improved their developer tools and made a marked improvement in standards compliance. With these improvements, we were able to confidently develop Asana for IE10, and we’ve been pleasantly surprised by the process. Check out the blog post on our developer site to see what we learned during this project.

Microsoft BUILD 2014 Day 2: “rebranding” to Microsoft Azure and moving toward a comprehensive set of fully-integrated backend services

  1. “Rebranding” into Microsoft Azure from the previous Windows Azure
  2. Microsoft Azure Momentum on the Market
  3. The new Azure Management Portal (preview)
  4. New Azure features: IaaS, web, mobile and data announcements

Microsoft Announces New Features for Cloud Computing Service [CCTV America YouTube channel, April 3, 2014]

Day two of the Microsoft Build developer conference in San Francisco wrapped up with the company announcing 44 new services. Most of those are based on Microsoft Azure – its cloud computing platform that manages applications across data centers. CCTV’s Mark Niu reports from San Francisco.

Watch the first 10 minutes of this presentation for a brief summary of the latest state of Microsoft Azure: #ChefConf 2014: Mark Russinovich, “Microsoft Azure Group” [Chef YouTube channel, April 16, 2014]

Mark Russinovich is a Technical Fellow in the Windows Azure Group at Microsoft working on Microsoft’s cloud platform. He is a widely recognized expert in operating systems, distributed systems, and cybersecurity. In this keynote from #ChefConf 2014, he gives an overview of Microsoft Azure and a demonstration of the integration between Azure and Chef

Then here is a quick talk and Q&A on Azure with Scott Guthrie after his keynote presentation at BUILD 2014:
Cloud Cover Live – Ask the Gu! [jlongo62 YouTube channel, published on April 21, 2014]

With Scott Guthrie, Executive Vice President Microsoft Cloud and Enterprise group

The original: Cloud Cover Live – Ask the Gu! [Channel 9, April 3, 2014]

Details:

  1. “Rebranding” into Microsoft Azure from the previous Windows Azure
  2. Microsoft Azure Momentum on the Market
  3. The new Azure Management Portal (preview)
  4. New Azure features: IaaS, web, mobile and data announcements

The [2:45:47]-long video record of the Microsoft Build Conference 2014 Day 2 Keynote [MSFT Technology News YouTube channel, recorded on April 3, published on April 7, 2014]

Keynote – April 2-4, 2014 San Francisco, CA 8:30AM to 11:30AM

The original video record on Channel 9
Day 2 Keynote transcript by Microsoft


1. “Rebranding” into Microsoft Azure from the previous Windows Azure

Yes, you’ve noticed right: the Windows prefix has gone, and the full name is now only Microsoft Azure! The change happened on April 3, as evidenced by the change of the cover photo on the Facebook site, now also called Microsoft Azure:

[image: the new Microsoft Azure cover photo]

from this cover photo used from July 23, 2013 on:

[image: the previous Windows Azure cover photo]

And it happened without any announcement or explanation, as even the last, April 1 Microsoft video carried the Windows prefix: Tuesdays with Corey //build Edition

We can’t believe he said that! This week, Corey gets us in trouble by spilling all sorts of //build secrets. Check it out!

as well as the last, March 14 video ad: Get Your Big Bad Wolf On (Extended)

Go get your big bad wolf on, today: http://po.st/01rkCL


2. Microsoft Azure Momentum on the Market

The day began with Scott Guthrie, Executive Vice President, Microsoft Cloud and Enterprise group, touting Microsoft’s progress with Azure over the last 18 months, when:

… we talked about our new strategy with Azure and our new approach, a strategy that enables me to use both infrastructure as a service and platform as a service capabilities together, a strategy that enables developers to use the best of the Windows ecosystem and the best of the Linux ecosystem together, and one that delivers unparalleled developer productivity and enables you to build great applications and services that work with every device

  • Last year … shipped more than 300 significant new features and releases
  • … we’ve also been hard at work expanding the footprint of Azure around the world. The green circles you see on the slide here represent Azure regions, which are clusters of datacenters close together, and where you can go ahead and run your application code. Just last week, we opened two new regions, one in Shanghai and one in Beijing. Today, we’re the only global, major cloud provider that operates in mainland China. And by the end of the year, we’ll have more than 16 public regions available around the world, enabling you to run your applications closer to your customers than ever before.
  • More than 57 percent of the Fortune 500 companies are now deployed on Azure.
  • Customers run more than 250,000 public-facing websites on Azure, and we now host more than 1 million SQL databases on Azure.
  • More than 20 trillion objects are now stored in the Azure storage system. We have more than 300 million users, many of them — most of them, actually, enterprise users, registered with Azure Active Directory, and we process now more than 13 billion authentications per week.
  • We have now more than 1 million developers registered with our Visual Studio Online service, which is a new service we launched just last November.

Let’s go beyond the big numbers, though, and look at some of the great experiences that have recently launched and are using the full power of Azure and the cloud.

“Titanfall” was one of the most eagerly anticipated games of the year, and had a very successful launch a few weeks ago. “Titanfall” delivers an unparalleled multiplayer gaming experience, powered using Azure.

Let’s see a video of it in action, and hear what the developers who built it have to say.

[Titanfall and the Power of the Cloud [xbox YouTube channel, April 3, 2014]]

Developers from Respawn Studios and Xbox discuss how cloud computing helps take Titanfall to the next level.

One of the key bets the developers of “Titanfall” made was for all game sessions on the cloud. In fact, you can’t play the game without the cloud, and that bet really paid off.

As you heard in the video, it enables much, much richer gaming experiences. Much richer AI experiences. And the ability to tune and adapt the game as more users use it.

To give you a taste of the scale, “Titanfall” had more than 100,000 virtual machines deployed and running on Azure on launch day. Which is sort of an unparalleled size in terms of a game launch experience, and the reviews of the game have been absolutely phenomenal.

Another amazing experience that recently launched and was powered using Azure was the Sochi Olympics delivered by NBC Sports.

NBC used Azure to stream all of the games both live and on demand to both Web and mobile devices. This was the first large-scale live event that was delivered entirely in the cloud with all of the streaming and encoding happening using Azure.

Traditionally, with live encoding, you typically run in an on-premises environment because it’s so latency dependent. With the Sochi Olympics, Azure enabled NBC to not only live encode in the cloud, but also do it across multiple Azure regions to deliver high-availability redundancy.

More than 100 million people watched the online experience, and more than 2.1 million viewers alone watched it concurrently during the U.S. versus Canada men’s hockey match, a new world record for online HD streaming.

RICK CORDELLA [Senior Vice President and General Manager of NBC Sports Digital]: The company bets about $1 billion on the Olympics each time it goes off. And we have 17 days to recoup that investment. Needless to say, there is no safety net when it comes to putting this content out there for America to enjoy. We need to make sure that content is out there, that it’s quality, that our advertisers and advertisements are being delivered to it. There really is no going back if something goes wrong.

The decision for that was taken more than a year ago: Windows Azure Teams Up With NBC Sports Group [Microsoft Azure YouTube channel, April 9, 2013]

Rick Cordella, senior vice president and general manager of digital media at NBC Sports Group discusses how they use Windows Azure across their digital platforms


3. The new Azure Management Portal (preview)

But in fact a new way of providing a comprehensive set of fully-integrated backend services had a significantly bigger impact on the audience of developers. According to Microsoft announces new cloud experience and tools to deliver the cloud without complexity [The Official Microsoft Blog, April 3, 2014]

The following post is from Scott Guthrie, Executive Vice President, Cloud and Enterprise Group, Microsoft.

On Thursday at Build in San Francisco, we took an important step by unveiling a first-of-its-kind cloud environment within Microsoft Azure that provides a fully integrated cloud experience – bringing together cross-platform technologies, services and tools that enable developers and businesses to innovate with enterprise-grade scalability at startup speed. Announced today, our new Microsoft Azure Preview [Management] Portal is an important step forward in delivering our promise of the cloud without complexity.

image

When cloud computing was born, it was hailed as the solution that developers and business had been waiting for – the promise of a quick and easy way to get more from your business-critical apps without the hassle and cost of infrastructure. But as the industry transitions toward mobile-first, cloud-first business models and scenarios, the promise of “quick and easy” is now at stake. There’s no question that developing for a world that is both mobile-first and cloud-first is complicated. Developers are managing thousands of virtual machines, cobbling together management and automation solutions, and working in unfamiliar environments just to make their apps work in the cloud – driving down productivity as a result.

Many cloud vendors tout the ease and cost savings of the cloud, but they leave customers without the tools or capabilities to navigate the complex realities of cloud computing. That’s why today we are continuing down a path of rapid innovation. In addition to our groundbreaking new Microsoft Azure Preview [Management] Portal, we announced several enhancements our customers need to fully tap into the power of the cloud. These include:

  • Dozens of enhancements to our Azure services across Web, mobile, data and our infrastructure services
  • Further commitment to building the most open and flexible cloud with Azure support for automation software from Puppet Labs and Chef.
  • We’ve removed the throttle off our Application Insights preview, making it easier for all developers to build, manage and iterate on their apps in the cloud with seamless integration into the IDE

<For details see the separate section 4. New Azure features: IaaS, web, mobile and data announcements>

Here is a brief presentation by a Brazilian specialist: Microsoft Azure [Management] Portal First Touch [Bruno Vieira YouTube channel, April 3, 2014]

From Microsoft evolves the cloud experience for customers [press release, April 3, 2014]

… Thursday at Build 2014, Microsoft Corp. announced a first-of-its-kind cloud experience that brings together cross-platform technologies, services and tools, enabling developers and businesses to innovate at startup speed via a new Microsoft Azure Preview [Management] Portal.

In addition, the company announced several new milestones in Visual Studio Online and .NET that give developers access to the most complete platform and tools for building in the cloud. Thursday’s announcements are part of Microsoft’s broader vision to erase the boundaries of cloud development and operational management for customers.

“Developing for a mobile-first, cloud-first world is complicated, and Microsoft is working to simplify this world without sacrificing speed, choice, cost or quality,” said Scott Guthrie, executive vice president at Microsoft. “Imagine a world where infrastructure and platform services blend together in one seamless experience, so developers and IT professionals no longer have to work in disparate environments in the cloud. Microsoft has been rapidly innovating to solve this problem, and we have taken a big step toward that vision today.”

One simplified cloud experience

The new Microsoft Azure Preview [Management] Portal provides a fully integrated experience that will enable customers to develop and manage an application in one place, using the platform and tools of their choice. The new portal combines all the components of a cloud application into a single development and management experience. New components include the following:

  • Simplified Resource Management. Rather than managing standalone resources such as Microsoft Azure Web Sites, Visual Studio Projects or databases, customers can now create, manage and analyze their entire application as a single resource group in a unified, customized experience, greatly reducing complexity while enabling scale. Today, the new Azure Manager is also being released through the latest Azure SDK for customers to automate their deployment and management from any client or device. [See the conceptual sketch after this list.]

  • Integrated billing. A new integrated billing experience enables developers and IT pros to take control of their costs and optimize their resources for maximum business advantage.

  • Gallery. A rich gallery of applications and services from Microsoft and the open source community, this integrated marketplace of free and paid services enables customers to leverage the ecosystem to be more agile and productive.

  • Visual Studio Online. Microsoft announced key enhancements through the Microsoft Azure Preview [Management] Portal, available Thursday. This includes Team Projects supporting greater agility for application lifecycle management and the lightweight editor code-named “Monaco” for modifying and committing Web project code changes without leaving Azure. Also included is Application Insights, an analytics solution that collects telemetry data such as availability, performance and usage information to track an application’s health. Visual Studio integration enables developers to surface this data from new applications with a single click.
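
To make the resource group concept above more tangible, here is a minimal sketch in Python. Everything in it is invented for illustration: the dict layout, the function names, all of it. This is not the Azure SDK or the actual template format, just the shape of the idea of creating, managing and deleting an entire application as one named group of resources:

```python
# Hypothetical sketch of the resource group idea; the dict layout and the
# deploy/teardown functions are invented for illustration and are not the
# Azure SDK, the portal, or the actual template format.

app = {
    "resource_group": "travel-site",
    "resources": [
        {"type": "website",      "name": "travel-web",  "tier": "Standard"},
        {"type": "sql_database", "name": "travel-db",   "max_size_gb": 5},
        {"type": "team_project", "name": "travel-team"},
    ],
}

def deploy(group: dict) -> None:
    """Create and manage the whole application as one unit."""
    print(f"Deploying resource group '{group['resource_group']}':")
    for r in group["resources"]:
        print(f"  provisioning {r['type']} '{r['name']}'")

def teardown(group: dict) -> None:
    """Deleting the group deletes every resource belonging to the app."""
    print(f"Deleting resource group '{group['resource_group']}' "
          f"and its {len(group['resources'])} resources")

deploy(app)
teardown(app)
```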

Building an open cloud ecosystem

Showcasing Microsoft’s commitment to choice and flexibility, the company announced new open source partnerships with Chef and Puppet Labs to run configuration management technologies in Azure Virtual Machines. Using these community-driven technologies, customers will now be able to more easily deploy and configure in the cloud. In addition, today Microsoft announced the release of Java Applications to Microsoft Azure Web Sites, giving Microsoft even broader support for Web applications.

From BUILD Day 2: Keynote Summary [by Steve Fox – DPE (MSFT) on MSDN Blogs, April 3, 2014]

….
Bill Staples then came on stage to show off the new Azure [management] portal design and features. Bill walked through a number of the new innovations in the portal, such as improved UX, app insights, “blade” views [the “blade” term is used for the dropdown that allows a drilldown], etc. A screen shot of the new portal is shown below.

image

image

Bill also walked through the comprehensive analytics (such as compute and billing) that are now available on the portal. He also walked through “Application Insights,” which is a great way to instrument your code in both the portal and in your code with easy-to-use, pre-defined code snippets. He completed his demo walkthrough by showing the Azure [management] portal as a “NOC” [Network Operations Center] view on a big-screen TV.

image

The above image is at the [1:44:24] point in time of the keynote video record on Channel 9, and it gives more information if we also quote here the part of the transcript around it:

BILL STAPLES at [1:43:39]: Now, to conclude the operations part of this demo, I wanted to show you an experience for how the new Azure Portal works on a different device. You’ve seen it on the desktop, but it works equally well on a tablet device, that is really touch friendly. Check it out on your Surface or your iPad, it works great on both devices.

But we’re thinking as well if you’ve got a big-screen TV or a projector lying around your team room, you might want to think about putting the Microsoft Azure portal as your own personal NOC.

In this case, I’ve asked the Office developer team if we could have access to their live site log. So they made me promise, do not hit the stop button or the delete button, which I promised to do.

[1:44:24] This is actually the Office developer log site. And you can see it’s got almost 10 million hits already today running on Azure Websites. So very high traffic.

They’ve customized it to show off the browser usage on their website. Imagine we’re in a team Scrum with the Office developer guys and we check out, you know, how is the website doing? We’ve got some interesting trends here.

In fact, there was a spike of sessions it looks like going on about a week ago. And page views, that’s kind of a small part. It would be nice to know which page it was that spiked a week ago. Let’s go ahead and customize that.

This screen is kind of special because it has touch screen. So I can go ahead and let’s make that automatically expand there. Now we see a bigger view. Wow, that was a really big spike last week. What page was that? We can click into it. We get the full navigation experience, same on the desktop, as well as, oh, look at that. There’s a really popular blog post that happened about a week ago. What was that? Something about announcing Office on the iPad you love. Makes sense, huh? So we can see the Azure Portal in action here as the Office developer team might imagine it. [1:45:44]

The last thing I want to show is the Azure Gallery.

image

We populated the gallery with all of the first-party Microsoft Azure services, as well as the [services from] great partners that we’ve worked with so far in creating this gallery.

image

And what you’re seeing right here is just the beginning. We’ve got the core set of DevOps experiences built out, as well as websites, SQL, and MySQL support. But over the coming months, we’ll be integrating all of the developer and IT services in Microsoft as well as the partner services into this experience.

Let me just conclude by reminding us what we’ve seen. We’ve seen a first-of-its-kind experience from Microsoft that fuses our world-class developer services together with Azure to provide an amazing dev-ops experience where you can enjoy the entire lifecycle from development, deployment, operations, gathering analytics, and iterating right here in one experience.

We’ve seen an application-centric experience that brings together all the dev platform and infrastructure services you know and love into one common shell. And we’ve seen a new application model that you can describe declaratively. And through the command line or programmatically, build out services in the cloud with tremendous ease. [1:47:12]

More information on the new Azure [Management] Portal:

Today, at Build, we unveiled a new Azure [Management] Portal experience we are building.  I want to give you some insights into the work that the VS Online team is doing to help with it.  I’m not on the Azure team and am no expert on how they’d like to describe it to the world, so please take any comments I make here about the new Azure portal as my perspective on it and not necessarily an official one.

Bill Staples first presented to me almost a year ago an idea of creating a new portal experience for Azure designed to be an optimal experience for DevOps.  It would provide everything a DevOps team needs to do modern cloud based development.  Capabilities to provision dev and test resources, development and collaboration capabilities, build, release and deployment capabilities, application telemetry and management capabilities and more.  Pretty quickly it became clear to me that if we could do it, it would be awesome.  An incredibly productive and easy way for devs to do soup to nuts app development.

What we demoed today (and made available via http://portal.azure.com) is the first incarnation of that.  My team (the VS Online Team) has worked very hard over the past many months with the Azure team to build the beginnings of the experience we hope to bring to you.  It’s very early and it’s nowhere near done but it’s definitely something we’d love to start getting some feedback on.

For now, it’s limited to Azure websites, SQL databases and a subset of the VS Online capabilities.  If you are a VS Online/TFS user, think of this as a companion to Visual Studio, Visual Studio Online and all of the tools you are used to.  When you create a team project in the Azure portal, it’s a VS Online Team Project like any other and is accessible from the Azure portal, the VS Online web UI, Visual Studio, Eclipse and all the other ways your Visual Studio Online assets are available.  For now, though, there are a few limitations – which we are working hard to address.  We are in the middle of adding Azure Active Directory support to Visual Studio Online and, for a variety of reasons, chose to limit the new portal to only work with VS Online accounts linked to Azure Active Directory.

The best way to ensure this is just to create a new Team Project and a new VS Online account from within the new Azure portal.  You will need to be logged in to the Azure portal with an identity known to your Azure Active Directory tenant, and to add new users, rather than add them directly in Visual Studio Online, you will add them through Azure Active Directory.  One of the ramifications of this, for now, is that you can’t use an existing VS Online account in the new portal – you must create a new one.  Clearly that’s a big limitation and one we are working hard to remove.  We will enable you to link existing VS Online accounts to Active Directory; we just don’t have it yet – stay tuned.

I’ll do a very simple tour.  You can also watch Brian Keller’s Channel9 video.

Brian Keller talks with Jonah Sterling and Vishal Joshi about the new Microsoft Azure portal preview. This Preview portal is a big step forward in the journey toward integrated DevOps tools, technologies, and cloud services. See how you can deliver and scale business-ready apps for every platform more easily and rapidly—using what you already know and whatever toolset you like most

Further information:


4. New Azure features: IaaS, web, mobile and data announcements

According to Scott Guthrie, Executive Vice President, Microsoft Cloud and Enterprise group:

image

[IaaS] First up, let’s look at some of the improvements we’re making with our infrastructure features and some of the great things we’re enabling with virtual machines.

Azure enables you to run both Windows and Linux virtual machines in the cloud. You can run them as stand-alone servers, or join them together to a virtual network, including one that you can optionally bridge to an on-premises networking environment.

This week, we’re making it even easier for developers to create and manage virtual machines in Visual Studio without having to leave the VS IDE: You can now create, destroy, manage and debug any number of VMs in the cloud. (Applause.)

Prior to today, it was possible to create reusable VM image templates, but you had to write scripts and manually attach things like storage drives to them. Today, we’re releasing support that makes it super-easy to capture images that can contain any number of storage drives. Once you have this image, you can then very easily take it and create any number of VM instances from it, really fast, and really easy. (Applause.)
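
A minimal sketch of what such image capture amounts to conceptually, assuming made-up types rather than the real Azure image API: the image records the OS disk together with all attached data disks, and new VMs are then stamped out from it:

```python
# Illustrative model of image capture (invented types, not the Azure API):
# the image keeps the OS disk *and* every attached data disk, and any number
# of VM instances can then be created from that one image.
from dataclasses import dataclass, field
from typing import List

@dataclass(frozen=True)
class VMImage:
    name: str
    os_disk: str
    data_disks: List[str] = field(default_factory=list)

@dataclass
class VirtualMachine:
    name: str
    os_disk: str
    data_disks: List[str]

def capture_image(vm: VirtualMachine, image_name: str) -> VMImage:
    """Capture the OS disk and all attached data disks in one step."""
    return VMImage(image_name, vm.os_disk, list(vm.data_disks))

def create_from_image(image: VMImage, vm_name: str) -> VirtualMachine:
    """Stamp out a new VM whose disks are copies of the image's disks."""
    return VirtualMachine(vm_name, f"{vm_name}-{image.os_disk}",
                          [f"{vm_name}-{d}" for d in image.data_disks])

golden = capture_image(
    VirtualMachine("build-server", "os.vhd", ["data1.vhd", "data2.vhd"]),
    "golden-image")
farm = [create_from_image(golden, f"web-{i}") for i in range(3)]
print([vm.name for vm in farm])   # ['web-0', 'web-1', 'web-2']
```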

Starting today, you can also now easily configure VM images using popular frameworks like Puppet, Chef, and our own PowerShell DSC [Desired State Configuration] tools. These tools enable you to avoid having to create and manage lots of separate VM images. Instead, you can define common settings and functionality using modules that can cut across every type of VM you use.

You can also create modules that define role-specific behavior, and all these modules can be checked into source control and they can also then be deployed to a Puppet Master or Chef server.

And one of the things we’re doing this week is making it incredibly easy within Azure to basically spin up a server farm and be able to automatically deploy, provision and manage all of these machines using these popular tools.
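
The following conceptual sketch, in Python rather than actual Puppet/Chef/DSC syntax, illustrates the idea of common plus role-specific modules replacing per-role VM images (all module and role names are invented):

```python
# Conceptual sketch of configuration management in the spirit of Puppet, Chef,
# or PowerShell DSC: one base image plus declarative modules per role, instead
# of maintaining a separate VM image for every combination of settings.

COMMON = ["ntp", "monitoring-agent"]          # settings shared by every VM
ROLES = {
    "web": ["iis", "app-code"],
    "db":  ["sql-server", "backup-agent"],
}

def configure(vm_name: str, role: str) -> list[str]:
    """Return the ordered list of modules a config server would apply."""
    return COMMON + ROLES[role]

# Spin up a small "server farm" and let the tooling converge each machine.
farm = [("web-0", "web"), ("web-1", "web"), ("db-0", "db")]
for name, role in farm:
    print(name, "->", configure(name, role))
```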

We’re also excited to announce the general availability of our auto-scale service, as well as a bunch of great virtual networking capabilities including point-to-site VPN support going GA, new dynamic routing, subnet migration, as well as static internal IP address. And we think the combination of this really gives you a very flexible environment, as you saw, a very open environment, and lets you run pretty much any Windows or Linux workload in the cloud.

So we think infrastructure as a service is super-flexible, and it really kind of enables you to manage your environments however you want.


We also, though, provide prebuilt services and runtime environments that you can use to assemble your applications as well, and we call these platform as a service [PaaS] capabilities.

One of the benefits of these prebuilt services is that they enable you to focus on your application and not have to worry about the infrastructure underneath it.

We handle patching, load balancing, high availability and auto scale for you. And this enables you to work faster and do more.

What I want to do is just spend a little bit of time talking through some of these platform as a service capabilities, so we’re going to start talking about our Web functionality here today.

image

[Web] One of the most popular PaaS services that we now have on Windows Azure is something we call the Azure Website Service. This enables you to very easily deploy Web applications written in a variety of different languages and host them in the cloud. We support .NET, Node.js, PHP, Python, and we’re excited this week to also announce that we’re adding Java language support as well.

image

This enables you as a developer to basically push any type of application into Azure into our runtime environment, and basically host it to any number of users in the cloud.

Couple of the great features we have with Azure include auto-scale capability. What this means is you can start off running your application, for example, in a single VM. As more load increases to it, we can then automatically scale up multiple VMs for you without you having to write any script or take any action yourself. And if you get a lot of load, we can scale up even more.

You can basically configure how many VMs you maximally want to use, as well as what the burn-down rate is. And as your traffic — and this is great because it enables you to not only handle large traffic spikes and make sure that your apps are always responsive, but the nice thing about auto scale is that when the traffic drops off, or maybe during the night when it’s a little bit less, we can automatically scale down the number of machines that you need, which means that you end up saving money and not having to pay as much.
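
A toy version of such an auto-scale rule might look as follows; the thresholds and the one-VM-at-a-time step are assumptions for illustration, not Azure’s actual algorithm:

```python
# Toy autoscale rule illustrating the behavior described above (not Azure's
# actual algorithm): scale out under load, scale back in when traffic drops,
# always staying between a configured minimum and maximum instance count.

MIN_VMS, MAX_VMS = 1, 8
SCALE_OUT_CPU, SCALE_IN_CPU = 75.0, 25.0   # average CPU % thresholds (assumed)

def next_instance_count(current: int, avg_cpu: float) -> int:
    if avg_cpu > SCALE_OUT_CPU:
        return min(current + 1, MAX_VMS)   # add a VM, capped at the maximum
    if avg_cpu < SCALE_IN_CPU:
        return max(current - 1, MIN_VMS)   # remove a VM to save money
    return current                         # load is normal; do nothing

vms = 1
for hour, cpu in enumerate([30, 80, 85, 90, 60, 20, 10]):
    vms = next_instance_count(vms, cpu)
    print(f"hour {hour}: avg CPU {cpu}% -> {vms} VM(s)")
```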

One of the really cool features that we’ve recently introduced with websites is something we call our staging support. This solves kind of a pretty common problem with any Web app today, which is there’s always someone hitting it. And how do you stage the deployments of new code that you roll out so that you don’t ever have a site in an intermediate state and that you can actually deploy with confidence at any point in the day?

And what staging support enables inside of Azure is for you to create a new staging version of your Web app with a private URL that you can access and use to test. And this allows you to basically deploy your application to the staging environment, get it ready, test it out before you finally send users to it, and then basically you can push one button or send a single command called swap where we’ll basically rotate the incoming traffic from the old production site to the new staged version.

What’s nice is we still keep your old version around. So if you discover once you go live you still have a bug that you missed, you can always swap back to the previous state. Again, this allows you to deploy with a lot of confidence and make sure that your users are always seeing a consistent experience when they hit your app.
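
Conceptually, the staging/swap mechanism behaves roughly like the following sketch (slot names and versions are illustrative only):

```python
# Minimal model of the staging/swap idea: deploy to a private staging slot,
# test it, then atomically swap which deployment receives production traffic.
# Slot names and versions are invented for illustration.

slots = {
    "production": "v1.0",   # what users currently see
    "staging": None,        # private URL for testing new code
}

def deploy_to_staging(version: str) -> None:
    slots["staging"] = version
    print(f"deployed {version} to staging (private URL for testing)")

def swap() -> None:
    """Rotate traffic: staging becomes production; the old version is kept."""
    slots["production"], slots["staging"] = slots["staging"], slots["production"]
    print(f"production now serves {slots['production']}, "
          f"previous version {slots['staging']} kept for rollback")

deploy_to_staging("v1.1")
swap()        # go live
swap()        # found a bug? swap back to the previous state
```

The key property is that a swap is a routing change, not a redeployment, which is why swapping back to the previous state is immediate.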

Another cool feature that we’ve recently introduced is a feature we call Web Jobs. And this enables you to run background tasks that are non-HTTP responsive that you can actually run in the background. So if it takes a while to run it, this is a great way you can offload that work so that you’re not stalling your actual request response thread pool.

Basically, you know, common scenario we see for a lot of people is if they want to process something in the background, when someone submits something, for example, to the website, they can go ahead and simply drop an item into a queue or into the storage account, respond back down to the user, and then with one of these Web jobs, you can very easily run background code that can pull that queue message and actually process it in an offline way.

And what’s nice about Web jobs is you can run them now in the same virtual machines that host your websites. What that means is you don’t have to spin up your own separate set of virtual machines, and again, enables you to save money and provides a really nice management experience for it.
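
Here is a minimal sketch of that Web Jobs pattern, using Python’s standard queue and threading modules as stand-ins for an Azure storage queue and the background job host:

```python
# Sketch of the Web Jobs pattern: the request thread only enqueues work and
# responds immediately; a background worker running alongside the website
# drains the queue. queue.Queue stands in for an Azure storage queue here.

import queue
import threading
import time

work = queue.Queue()

def handle_request(order_id: int) -> str:
    work.put(order_id)                   # drop a message into the queue...
    return f"order {order_id} accepted"  # ...and respond to the user at once

def web_job() -> None:
    while True:
        order_id = work.get()
        if order_id is None:             # sentinel to stop the worker
            break
        time.sleep(0.1)                  # pretend this is slow processing
        print(f"processed order {order_id} in the background")

worker = threading.Thread(target=web_job)
worker.start()
for i in range(3):
    print(handle_request(i))
work.put(None)
worker.join()
```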

The last cool feature that we’ve recently introduced is something we call traffic manager support. With Traffic Manager, you can take advantage of the fact that Azure runs around the world, and you can spin up multiple instances of your website in multiple different regions around the world with Azure.

What you can then do is use Traffic Manager so you can have a single DNS entry that you then map to the different instances around the world. And what Traffic Manager does is gives you a really nice way that you can actually automatically, for example, route all your North America users to one of the North American versions of your app, while people in Europe will go routed to the European version of your app. That gives you better performance, response and latency.

Traffic Manager is also smart enough so that if you ever have an issue with one of the instances of your app, it can automatically remove it from those rotations and send users to one of the other active apps within the system. So this gives you also a nice way you can fail over in the event of an outage.

And the great thing about Traffic Manager, now, is you can use it not just for virtual machines and cloud services, but we’ve also now enabled it to work fully with websites.
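
A toy model of that routing behavior, with entirely made-up region names and URLs, could look like this:

```python
# Toy version of the Traffic Manager behavior described above: one logical DNS
# name resolves to the nearest *healthy* regional deployment, and unhealthy
# endpoints are dropped from rotation automatically. All names are invented.

ENDPOINTS = {
    "North America": {"url": "na.myapp.example.net", "healthy": True},
    "Europe":        {"url": "eu.myapp.example.net", "healthy": True},
}

def resolve(user_region: str) -> str:
    preferred = ENDPOINTS.get(user_region)
    if preferred and preferred["healthy"]:
        return preferred["url"]
    # Failover: route to any other healthy endpoint still in rotation.
    for region, ep in ENDPOINTS.items():
        if ep["healthy"]:
            return ep["url"]
    raise RuntimeError("no healthy endpoints")

print(resolve("Europe"))                  # eu.myapp.example.net
ENDPOINTS["Europe"]["healthy"] = False    # regional outage
print(resolve("Europe"))                  # fails over to na.myapp.example.net
```

This matches the two behaviors called out above: region-based routing for performance, and automatic removal of failed endpoints for failover.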

[From BUILD Day 2: Keynote Summary [by Steve Fox [MSFT] on MSDN Blogs, April 3, 2014]]
Scott then invited Mads Kristensen on stage to walk through a few of the features that Scott discussed at a higher level. Specifically, he walked through the new ASP.NET templates emphasizing the creation of the DB layer and then showing PowerShell integration to manage your web site. He then showed Angular integration with Azure Web sites, emphasizing easy and dynamic ways to update your site, and deep browser and Visual Studio integration (Browser Link), where updates made in the browser show up in the code in Visual Studio. Very cool!!
image
He also showed how you can manage staging and production sites by using the “swap” functionality built into the Azure Web sites service. He also showed Web Jobs to show how you can also run background jobs and Traffic Manager functionality to ensure your customers have the best performing web site in their regions.

So as Mads showed, there are a lot of great features that we’re kind of unveiling this week. A lot of great announcements that go with it.

These include the general availability release of auto-scale support for websites, as well as the general availability release of our new Traffic Manager support for websites as well. As you saw there, we also have Web Job support, and one of the things that we didn’t get to demo which is also very cool is backup support so that automatically we can have both your content as well as your databases backed up when you run them in our Websites environment as well.

Lots of great improvements also coming in terms of from an offer perspective. One thing a lot of people have asked us for with Websites is the ability not only to use SSL, but to use SSL without having to pay for it. So one of the cool things that we’re adding with Websites and it goes live today is we’re including one IP address-based SSL certificate and five SNI-based SSL certificates at no additional cost to every Website instance. (Applause.)

Throughout the event here, you’re also going to hear a bunch of great sessions on some of the improvements we’re making to ASP.NET. In terms of from a Web framework perspective, we’ve got general availability release of ASP.NET MVC 5.1, Web API 2.1, Identity 2.0, as well as Web Pages 3.1. So a lot of great, new features to take advantage of.

As you saw Mads demo, a lot of great features inside Visual Studio including the ability every time you create an ASP.NET project now to automatically create an Azure Website as part of that flow. Remember, every Azure customer gets 10 free Azure Websites that you can use forever. So even if you’re not an MSDN customer, you can take advantage of that feature in order to set up a Web environment literally every time you create a new project. So pretty exciting stuff.

So that was one example of some of the PaaS capabilities that we have inside Azure.


[Mobile] I’m going to move now into the mobile space and talk about some of the great improvements that we’re making there as well.

One of the great things about Azure is the fact that it makes it really easy for you to build back ends for your mobile applications and devices. And one of the cool things you can do now is you can develop those back ends with both .NET as well as Node.js, and you can use Visual Studio or any other text editor on any other operating system to actually deploy those applications into Azure.

And once they’re deployed, we make it really easy for you to go ahead and connect them to any type of device out there in the world.

image

Now, some of the great things you can do with this is take advantage of some of the features that we have, which provide very flexible data handling. So we have built-in support for Azure storage, as well as our SQL database, which is our PaaS database offering for relational databases, as well as take advantage of things like MongoDB and other popular NoSQL solutions.

image

We support the ability not only to reply to messages that come to us, but also to push messages to devices as well. One of the cool features that Mobile Services can take advantage of — and it’s also available as a stand-alone feature — is something we call notification hubs. And this basically allows you to send a single message to a notification hub and then broadcast it to, in some cases, millions of devices that might be registered to it.
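
The fan-out idea behind notification hubs can be sketched as follows; this is a simulation with invented names, not the Notification Hubs API:

```python
# Conceptual fan-out of a notification hub: the app sends one message to the
# hub; the hub broadcasts it to every registered device across platforms.
# This is a simulation, not the actual Notification Hubs API.

from collections import defaultdict

class NotificationHub:
    def __init__(self) -> None:
        self.registrations = defaultdict(list)   # tag -> device handles

    def register(self, device: str, tag: str = "all") -> None:
        self.registrations[tag].append(device)

    def send(self, message: str, tag: str = "all") -> None:
        for device in self.registrations[tag]:
            print(f"push to {device}: {message}")  # one send, many deliveries

hub = NotificationHub()
hub.register("windows-phone-123")
hub.register("ios-456")
hub.register("android-789")
hub.send("Breaking news!")   # a single call broadcasts to every device
```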

We also support with Mobile Services a variety of flexible authentication options. So when we first launched mobile services, we added support for things like Facebook login, Google ID, Twitter ID, as well as Microsoft Accounts.

One of the things we’re excited to demo here today is Active Directory support as well. So this enables you to build new applications that you can target, for example, your employees or partners, to enable them to sign in using the same enterprise credentials that they use in an on-premises Active Directory environment.

What’s great is we’re using standard OAuth tokens as part of that. So once you authenticate, you can take that token, you can use it to also provide authorization access to your own custom back-end logic or data stores that you host inside Azure.

We’re also making it really easy so that you can also take that same token and you can use it to access Office 365 APIs and be able to integrate that user’s data as well as functionality inside your application as well.

The beauty about all of this is it works with any device. So whether it’s a Windows device or an iOS device or an Android device, you can go ahead and take advantage of this capability.
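
A much-simplified picture of that token flow, with every function below standing in for the real Azure AD endpoints and SDKs, might read:

```python
# Simplified picture of the token flow described above (not the real Azure AD
# wire protocol or SDK): the device signs in once, gets a standard OAuth
# token, and presents that same token both to your own Azure-hosted backend
# and to Office 365 APIs.

def sign_in(username: str, password: str) -> str:
    """Stand-in for Azure Active Directory issuing an OAuth access token."""
    assert password == "correct horse"          # pretend-validation only
    return f"token-for-{username}"

def call_backend(token: str) -> str:
    return f"backend accepted {token}"          # authorize custom logic/data

def call_office365(token: str) -> str:
    return f"Office 365 API accepted {token}"   # reuse the very same token

token = sign_in("alice@contoso.com", "correct horse")
print(call_backend(token))
print(call_office365(token))
```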

[From BUILD Day 2: Keynote Summary [by Steve Fox [MSFT] on MSDN Blogs, April 3, 2014]]
Yavor Georgiev then came on stage to walk through a Mobile Services demo. He showed off a new Mobile Services Visual Studio template, test pages with API docs, local and remote debugging capabilities, and a LOB app that enables Facilities departments to manage service requests—this showed off a lot of the core ASP.NET/MVC features along with a quick publish service to your Mobile Services service in Azure. Through this app, he showed how to use Active Directory to build the app—which prompts you to log into the app with your corp/AD credentials to use the app. He then showed how the app integrates with SharePoint/O365 such that the request leverages the SharePoint REST APIs to publish a doc to a Facilities doc repository. He also showed how you can re-use the core code through Xamarin to repurpose the code for iOS.
The app is shown here native in Visual Studio.

image

This app view is the cross-platform build using Xamarin.

image

Kudos to Yavor! This was an awesome demo that showcases how far Mobile Services has come in a short period of time—love the extensibility and the cross-platform capabilities. Very nice!

One of the things that kind of Yavor showed there is just sort of how easy it is now to build enterprise-grade mobile applications using Azure and Visual Studio.

And one of the key kind of lynchpins in terms of from a technology standpoint that really makes this possible is our Azure Active Directory Service. This basically provides an Active Directory in the cloud that you can use to authenticate any device. What makes it powerful is the fact that you can synchronize it with your existing on-premises Active Directory. And we support both synch options, including back to Windows Server 2003 instances, so it doesn’t even require a relatively new Windows Server, it works with anything you’ve got.

We also support a federate option as well if you want to use ADFS. Once you set that environment up, then all your users are available to be authenticated in the cloud and what’s great is we ship SDKs that work with all different types of devices, and enables you to integrate authentication into those applications. And so you don’t even have to have your back end hosted on Azure, you can take advantage of this capability to enable single sign-on with any enterprise credential.

And what’s great is once you get that token, that same token can then be used to program against Office 365 APIs as well as the other services across Microsoft. So this provides a really great opportunity not only for building enterprise line-of-business apps, but also for ISVs that want to be able to build SaaS solutions as well as mobile device apps that integrate and target enterprise customers as well.

[From BUILD Day 2: Keynote Summary [by Steve Fox [MSFT] on MSDN Blogs, April 3, 2014]]
Scott then invited Grant Peterson from DocuSign on stage to discuss how they are using Azure, who demoed AD integration with DocuSign’s iOS app. Nice!

image

image

This is really huge for those of you building apps that are cross-platform but have big investments in AD and also provides you as developers a way to reach enterprise audiences.

So I think one of the things that’s pretty cool about that scenario is both the opportunity it offers every developer that wants to reach an enterprise audience. The great thing is all of those 300 million users that are in Azure Active Directory today and the millions of enterprises that have already federated with it are now available for you to build both mobile and Web applications against and be able to offer to them an enterprise-grade solution to all of your ISV-based applications.

That really kind of changes one of the biggest concerns that people end up having with enterprise apps with SaaS into a real asset where you can make it super-easy for them to go ahead and integrate and be able to do it from any device.

And one of the things you might have noticed there in the code that Grant showed was that it was actually all done on the client using Objective-C, and that’s because we have a new Azure Active Directory iOS SDK as well as an Android SDK in addition to our Windows SDK. And so you can use and integrate with Azure Active Directory from any device, any language, any tool.

Here’s a quick summary of some of the great mobile announcements that we’re making today. Yavor showed we now have .NET backend support, single sign-on with Active Directory.

One of the features we didn’t get a chance to show, but you can learn more about in the breakout talk is offline data sync. So we also now have built into Mobile Services the ability to sync and handle disconnected states with data. And then, obviously, the Visual Studio and remote debugging capabilities as well.

We’ve got not only the Azure SDKs for Azure Active Directory, but we also now have Office 365 API integration. We’re also really excited to announce the general availability of our Azure AD Premium release. This provides enterprises with management capabilities that they can actually also use and integrate with your applications, and enables IT to also feel like they can trust the applications and the SaaS solutions that their users are using.

And then we have a bunch of great improvements with notification hubs including Kindle support as well as Visual Studio integration.

So a lot of great features. You can learn about all of them in the breakout talks this week.

So we’ve talked about Web, we’ve talked about mobile when we talk about PaaS.


[Data] I want to switch gears now and talk a little bit about data, which is pretty fundamental and integral to building any type of application.

image

And with Azure, we support a variety of rich ways to handle data, ranging from unstructured to semistructured to relational. One of the most popular services you heard me talk about at the beginning of the talk is our SQL database story. We’ve got over a million SQL databases now hosted on Azure. And it’s a really easy way for you to spin up a database, and better yet, it’s a way that we then manage for you. So we do handle things like high availability and patching.

You don’t have to worry about that. Instead, you can focus on your application and really be productive.

We’ve got a whole bunch of great SQL improvements that we’re excited to announce this week. I’m going to walk through a couple of them real quickly.

One of them is we’re increasing the database size that we support with SQL databases. Previously, we only supported up to 150 gigs. We’re excited to announce that we’re increasing that to support 500 gigabytes going forward. And we’re also delivering a new 99.95 percent SLA as part of that. So this now enables you to run even bigger applications and be able to do it with high confidence in the cloud. (Applause.)

Another cool feature we’re adding is something we call Self-Service Restore. I don’t know if you ever worked on a database application where you’ve written code like this, hit go, and then suddenly had a very bad feeling because you realized you omitted the where clause and you just deleted your entire table. (Laughter.)

And sometimes you can go and hopefully you have backups. This is usually the point when you discover that you don’t have backups.

And one of the things that we built in as part of the Self-Service Restore feature is automatic backups for you. And we actually let you literally roll back the clock, and you can choose what time of the day you want to roll it back to. We save up to I think 31 days of backups. And you can basically rehydrate a new database based on whatever time of the day you wanted to actually restore from. And then, hopefully, your life ends up being a lot better than it started out.

This is just a built-in feature. You don’t have to turn it on. It’s just sort of built in, something you can take advantage of. (Applause.)
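
The following toy model illustrates the point-in-time restore idea, including the roughly 31-day retention mentioned above; the data structures are invented and stand in for real databases and backups:

```python
# Toy model of point-in-time restore: automatic backups are retained (up to
# 31 days, per the talk), and a *new* database is rehydrated from the chosen
# moment, so the "DELETE without a WHERE clause" accident is recoverable.

import copy
from datetime import datetime, timedelta

backups: dict[datetime, dict] = {}
db = {"customers": ["alice", "bob", "carol"]}

def automatic_backup(now: datetime) -> None:
    backups[now] = copy.deepcopy(db)       # built in; nothing to turn on

def restore_to(point: datetime) -> dict:
    """Rehydrate a new database from the newest backup at or before `point`."""
    eligible = [t for t in backups if t <= point]
    return copy.deepcopy(backups[max(eligible)])

t0 = datetime(2014, 4, 3, 9, 0)
automatic_backup(t0)
db["customers"].clear()                    # oops: DELETE FROM customers
restored = restore_to(t0 + timedelta(minutes=5))
print(restored)                            # {'customers': ['alice', 'bob', 'carol']}
```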

Another great feature that we’re building in is something we call active geo-replication. What this lets you do now is you can actually go ahead and run SQL databases in multiple Azure regions around the world. And you can set it up to automatically replicate your databases for you.

And this is basically an asynchronous replication. You can basically have your primary in rewrite mode, and then you can actually have your secondary and you can have multiple secondaries in read-only mode. So you can still actually be accessing the data in read-only mode elsewhere.

In the event that you have a catastrophic issue in, say, one region, say a natural disaster hits, you can go ahead and you can initiate the failover automatically to one of your secondary regions. This basically allows you to continue moving on without having to worry about data loss and gives you kind of a really nice, high-availability solution that you can take advantage of.

One of the things that’s nice about Azure’s regions is we try to make sure we have multiple regions in each geography. So, for example, we have two regions that are at least 500 miles apart in Europe, and in North America, and similarly with Australia, Japan and China. And what that means is that you know if you do need to fail over, your data is never leaving the geo-political area that it’s based in. And if you’re hosted in Europe, you don’t have to worry about your data ever leaving Europe, similarly for the other geo-political entities that are out there.

So this gives you a way now with high confidence that you can store your data and know that you can fail over at any point in time.
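
Sketched in Python (again purely illustratively, not the SQL Database API), active geo-replication with failover behaves roughly like this:

```python
# Sketch of active geo-replication as described: asynchronous copies flow from
# a read-write primary to one or more read-only secondaries in other regions,
# and a failover promotes a secondary to primary. Region names are examples.

class Region:
    def __init__(self, name: str) -> None:
        self.name, self.data, self.read_only = name, {}, True

primary = Region("North Europe")
primary.read_only = False
secondary = Region("West Europe")            # same geography, far apart

def write(key: str, value: str) -> None:
    assert not primary.read_only
    primary.data[key] = value
    secondary.data[key] = value              # replicated asynchronously

def failover() -> None:
    global primary, secondary
    primary, secondary = secondary, primary  # promote the secondary
    primary.read_only, secondary.read_only = False, True

write("order:1", "paid")
print(secondary.name, "can serve reads:", secondary.data)
failover()                                   # disaster in the primary region
print("new primary:", primary.name)
```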

In addition to some of these improvements with SQL databases, we also have a host of great improvements coming with HDInsight, which is our big data analytics engine. This runs a standard Hadoop instance as a managed service, so we do all the patching and management for you.

We’re excited to announce the GA of Hadoop 2.2 support. We also have now .NET 4.5 installed and APIs available so you can now write your MapReduce jobs using .NET 4.5.

We’re also adding audit and operation history support, a bunch of great improvements with Hive, and we’re now Yarn-enabling the cluster so you can actually run more software on it as well.
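
For readers new to the model, here is a toy word count in plain Python showing the map and reduce phases that such jobs implement; this is only the programming model in miniature, not the HDInsight .NET API:

```python
# Toy MapReduce word count, illustrating the programming model that HDInsight
# runs at scale on Hadoop (plain Python here, not the HDInsight .NET SDK).
from collections import defaultdict

def map_phase(line: str):
    for word in line.split():
        yield word.lower(), 1                 # emit (key, value) pairs

def reduce_phase(pairs):
    totals = defaultdict(int)
    for word, count in pairs:                 # group by key and aggregate
        totals[word] += count
    return dict(totals)

lines = ["Azure runs Hadoop", "Hadoop runs MapReduce jobs"]
pairs = [kv for line in lines for kv in map_phase(line)]
print(reduce_phase(pairs))   # {'azure': 1, 'runs': 2, 'hadoop': 2, ...}
```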

And we’re also excited to announce a bunch of improvements in the storage space, including the general availability of our read-access geo-redundant storage option.

So we’ve kind of done a whole bunch of kind of deep dives into a whole bunch of the Azure features.

More information:

It has been a really busy last 10 days for the Azure team. This blog post quickly recaps a few of the significant enhancements we’ve made.  These include:

  • [Web] Web Sites: SSL included, Traffic Manager, Java Support, Basic Tier
  • [IaaS] Virtual Machines: Support for Chef and Puppet extensions, Basic Pricing tier for Compute Instances
  • [IaaS] Virtual Network: General Availability of DynamicRouting VPN Gateways and Point-to-Site VPN
  • [Mobile] Mobile Services: Preview of Visual Studio support for .NET, Azure Active Directory integration and Offline support;
  • [Mobile] Notification Hubs: Support for Kindle Fire devices and Visual Studio Server Explorer integration
  • [IaaS] [Web] Autoscale: General Availability release
  • [Data] Storage: General Availability release of Read Access Geo Redundant Storage
  • [Mobile] Active Directory Premium: General Availability release
  • Scheduler service: General Availability release
  • Automation: Preview release of new Azure Automation service

All of these improvements are now available to use immediately (note that some features are still in preview).  Below are more details about them:

… With the April updates to Microsoft Azure, Azure Web Sites offers a new pricing tier called Basic.  The Basic pricing tier is designated for production sites, supporting smaller sites, as well as development and testing scenarios. … Which pricing tier is right for me? … The new pricing tier is a great benefit to many customers, offering some high-end features at a reasonable cost. We hope this new offering will enable a better deployment for all of you.

Microsoft is launching support for Java-based web sites on Azure Web Sites.  This capability is intended to satisfy many common Java scenarios combined with the manageability and easy scaling options from Azure Web Sites.

The addition of Java is available immediately on all tiers for no additional cost.  It offers new possibilities to host your pre-existing Java web applications.  New Java web site development on Azure is easy using the Java Azure SDK which provides integration with Azure services.

With the latest release of Azure Web Sites and the new Azure Portal Preview we are introducing a new concept: Web Hosting Plans. A Web Hosting Plan (WHP) allows you to group and scale sites independently within a subscription.

Microsoft Azure offers load balancing services for [IaaS] virtual machines (IaaS) and [Web] cloud services (PaaS) hosted in the Microsoft Azure cloud. Load balancing allows your application to scale and provides resiliency to application failures among other benefits.

The load balancing services can be accessed by specifying input endpoints on your services either via the Microsoft Azure Portal or via the service model of your application. Once a hosted service with one or more input endpoints is deployed in Microsoft Azure, it automatically configures the load balancing services offered by Microsoft Azure platform. To get the benefit of resiliency / redundancy of your services, you need to have at least two virtual machines serving the same endpoint.
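
As a rough illustration of that behavior, here is a toy endpoint with two VMs behind it, round-robin routing, and probe-based removal from rotation (all names invented):

```python
# Minimal picture of the load-balancing behavior described above: an input
# endpoint spreads requests across at least two VMs serving the same port,
# and a VM that fails its health probe stops receiving traffic.

import itertools

class Endpoint:
    def __init__(self, vms: list[str]) -> None:
        self.vms, self.healthy = vms, set(vms)
        self._rr = itertools.cycle(vms)

    def route(self) -> str:
        for _ in range(len(self.vms)):
            vm = next(self._rr)
            if vm in self.healthy:           # skip VMs that failed the probe
                return vm
        raise RuntimeError("no healthy VMs behind this endpoint")

ep = Endpoint(["web-vm-0", "web-vm-1"])      # two VMs -> resiliency
print([ep.route() for _ in range(4)])        # round-robin across both
ep.healthy.discard("web-vm-0")               # probe failure
print([ep.route() for _ in range(2)])        # all traffic to web-vm-1
```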

The web marches on, and so does Visual Studio and ASP.NET, with a renewed commitment to making a great IDE for web developers of all kinds. Join Scott & Scott for this dive into VS2013 Update 2 and beyond. We’ll see new features in ASP.NET, new ideas in front end web development, as well as a peek into ASP.NET’s future.

When creating an Azure Mobile Service, a Notification Hub is automatically created as well, enabling large-scale push notifications to devices across any mobile platform (Android, iOS, Windows Store apps, and Windows Phone). For a background on Notification Hubs, see this overview as well as these tutorials and guides, and Scott Guthrie’s blog Broadcast push notifications to millions of mobile devices using Windows Azure Notification Hubs.

Let’s look at how devices register for notification and how to send notifications to registered devices using the .NET backend.

New tiers improve customer experience and provide more business continuity options

To better serve your needs for more flexibility, Microsoft Azure SQL Database is adding new service tiers, Basic and Standard, to work alongside its Premium tier, which is currently in preview. Together these service tiers will help you more easily support the needs of database workloads and application patterns built on Microsoft Azure. … Previews for all three tiers are available today.

The Basic, Standard, and Premium tiers are designed to deliver more predictable performance for light-weight to heavy-weight transactional application demands. Additionally, the new tiers offer a spectrum of business continuity features, a [Data] stronger uptime SLA at 99.95%, and larger database sizes up to 500 GB for less cost. The new tiers will also help remove costly workarounds and offer an improved billing experience for you.

… [Data] Active Geo-Replication: …

… [Data] Self-service Restore: …

Stay tuned to the Azure blog for more details on SQL Database later this month!

Also, if you haven’t tried Azure SQL Database yet, it’s a great time to start and try the Premium tier! Learn more today!

Azure HDInsight now supports [Data] Hadoop 2.2 with HDInsight cluster version 3.0 and takes full advantage of this platform to provide a range of significant benefits to customers. These include, most notably:

  • Microsoft Avro Library: …
  • [Data] YARN: A new, general-purpose, distributed, application management framework that has replaced the classic Apache Hadoop MapReduce framework for processing data in Hadoop clusters. It effectively serves as the Hadoop operating system, and takes Hadoop from a single-use data platform for batch processing to a multi-use platform that enables batch, interactive, online and stream processing. This new management framework improves scalability and cluster utilization according to criteria such as capacity guarantees, fairness, and service-level agreements.

  • High Availability: …

  • [Data] Hive performance: Order of magnitude improvements to Hive query response times (up to 40x) and to data compression (up to 80%) using the Optimized Row Columnar (ORC) format.

  • Pig, Sqoop, Oozie, Ambari: …