Category Archives: Cloud SW engineering
For information on OpenStack provided earlier on this blog see:
– Disaggregation in the next-generation datacenter and HP’s Moonshot approach for the upcoming HP CloudSystem “private cloud in-a-box” with the promised HP Cloud OS based on the 4 years old OpenStack effort with others, ‘Experiencing the Cloud’, Dec 10, 2013
– Red Hat Enterprise Linux OpenStack Platform 4 delivery and Dell as the first company to OEM it co-engineered on Dell infrastructure with Red Hat, ‘Experiencing the Cloud’, Feb 19, 2014
To understand the state of OpenStack technology development at the V4 level as of June 25, 2015:
– go to my homepage: https://lazure2.wordpress.com/
– or to the OpenStack related part of Microsoft Cloud state-of-the-art: Hyper-scale Azure with host SDN — IaaS 2.0 — Hybrid flexibility and freedom, ‘Experiencing the Cloud’, July 11, 2015
May 19, 2016:
With OpenStack in tow you’ll go far — be it your house, your bank, your city or your car.
Just look at all of the exciting places we’re going:
From the phone in your pocket
The telecom industry is undergoing a massive shift, away from hundreds of proprietary devices in thousands of central offices accumulated over decades, to a much more efficient and flexible software plus commodity hardware approach. While some carriers like AT&T have already begun routing traffic from the 4G networks over OpenStack powered clouds to millions of cellphone users, the major wave of adoption is coming with the move to 5G, including plans from AT&T, Telefonica, SK Telekom, and Verizon.
We are on the cusp of a revolution that will completely re-imagine what it means to provide services in the trillion dollar telecom industry, with billions of connected devices riding on OpenStack-powered infrastructure in just a few years.
To the living room socket
The titans of TV like Comcast, DirecTV, and Time Warner Cable all rely on OpenStack to bring the latest entertainment to our homes efficiently, and innovators like DigitalFilm Tree are producing that content faster than ever thanks to cloud-based production workflows.
Your car, too, will get smart
Speaking of going places, back here on earth many of the world’s top automakers, such as BMW and the Volkswagen group, which includes Audi, Lamborghini, and even Bentley, are designing the future of transportation using OpenStack and big data. The hottest trends to watch in the auto world are electric zero-emissions cars and self-driving cars. Like the “smart city” discussed below, a proliferation of sensors plus connectivity calls for distributed systems to bring it all together, creating a huge opportunity for OpenStack.
And your bank will take part
Money moves faster than ever, with digital payments from startups and established players alike competing for consumer attention. Against this backdrop of enormous market change, banks must meet an increasingly rigid set of regulatory rules, not to mention growing security threats. To empower their developers to innovate while staying diligent on regs and security, financial leaders like PayPal, FICO, TD Bank, American Express, and Visa are adopting OpenStack.
Your city must keep the pace
Powering the world’s cities is a complex task and here OpenStack is again driving automation, this time in the energy sector. State Grid Corporation, the world’s largest electric utility, serves over 120 million customers in China while relying on OpenStack in production.
Looking to the future, cities will be transformed by the proliferation of fast networks combined with cheap sensors. Unlocking the power of this mix are distributed systems, including OpenStack, to process, store, and move data. Case in point: tcpcloud in Prague is helping introduce “smart city” technology by utilizing inexpensive Raspberry Pis embedded in street poles, backed by a distributed system based on Kubernetes and OpenStack. These systems give city planners insight into traffic flows of both pedestrians and cars, and even measure weather conditions. By routing not just packets but people, cities are literally load balancing their way to lower congestion and pollution.
From inner to outer space
The greatest medical breakthroughs of the next decade will come from analyzing massive data sets, thanks to the proliferation of distributed systems that put supercomputer power into the hands of every scientist. And OpenStack has a huge role to play empowering researchers all over the globe: from Melbourne to Madrid, Chicago to Chennai, or Berkeley to Beijing, everywhere you look you’ll find OpenStack.
To explore this world, I recently visited the Texas Advanced Computing Center (TACC) at the University of Texas at Austin, where I toured a facility that houses one of the top 10 supercomputers in the world, code-named “Stampede.”
But what really got me excited about the future was the sight of two large OpenStack clusters: one called Chameleon, and the newest addition, Jetstream, which together put the power of more than 1,000 nodes and more than 15,000 cores into the hands of scientists at 350 universities. In fact, the Chameleon cloud was recently used in a class at the University of Arizona by students looking to discover exoplanets. Perhaps the next Neil deGrasse Tyson is out there using OpenStack to find a planet to explore for NASA’s Jet Propulsion Laboratory.
Where should we go next?
Mark Collier is OpenStack co-founder, and currently the OpenStack Foundation COO. This article was first published in Superuser Magazine, distributed at the Austin Summit.
May 9, 2016:
From OpenStack Summit Austin, Part 1: Vendors digging in for long haul by Al Sadowski, 451 Research, LLC: This report provides highlights from the most recent OpenStack Summit.
THE 451 TAKE OpenStack mindshare continues to grow for enterprises interested in deploying cloud-native applications in greenfield private cloud environments. However, its appeal is limited for legacy applications and enterprises sold on hyperscale multi-tenant cloud providers like AWS and Azure. There are several marquee enterprises with OpenStack as the central component of cloud transformations, but many are still leery of the perceived complexity of configuring, deploying and maintaining OpenStack-based architectures. Over the last few releases, processes for installation and upgrades, tooling, and API standardization across projects have improved as operators have become more vocal during the requirements phase. Community membership continues to grow on a global basis, and the roster of supporting organizations shows a similar geographic trend.
… Horizontal scaling of Nova is much improved, based on input from CERN and Rackspace. CERN, an early OpenStack adopter, demonstrated the ability for the open source platform to scale – it now has 165,000 cores running OpenStack. However, Walmart, PayPal and eBay are operating larger OpenStack environments.
May 18, 2015:
Walmart’s Cloud Journey by Amandeep Singh Juneja
May 19, 2015:
OpenStack Update from eBay and PayPal by Subbu Allamaraju
May 18, 2015:
Architecting Organizational Change at TD Bank by Graeme Peacock, VP Engineering, TD Bank Group
TD Bank uses cloud as catalyst for cultural change in IT
May 9, 2016: From OpenStack Summit Austin, Part 1: Vendors digging in for long haul continued:
While OpenStack may have been conceived as an open source multi-tenant IaaS, its future success will mainly come from hosted and on-premises private cloud deployments. Yes, there are many pockets of success with regional or vertical-focused public clouds based on OpenStack, but none with the scale of AWS or the growth of Microsoft Azure. Hewlett Packard Enterprise shuttered its OpenStack Helion-based public cloud, and Rackspace shifted engineering resources away from its own public cloud. Rackspace, the service provider with the largest share of OpenStack-related revenue, says its private cloud is growing in the ‘high double digits.’ Currently, 56% of OpenStack’s service-provider revenue total is public cloud-based, but we expect private cloud will account for a larger portion over the next few years.
October 21, 2015:
A new model to deliver public cloud by Bill Hill, SVP and GM, HP Cloud
May 9, 2016: From OpenStack Summit Austin, Part 1: Vendors digging in for long haul continued:
As of the Mitaka release, two new gold members were added: UnitedStack and EasyStack, both from China. Other service providers and vendors shared their customer momentum and product updates with 451 Research during the summit. Among the highlights are:
- AT&T has cobbled together a DevOps team from 67 different organizations, in order to transform into a software company.
- All of GoDaddy’s new servers are going into its OpenStack environment. It is also using the Ironic (bare metal) project and exploring containers on OpenStack.
- SwiftStack built a commercial product with an AWS-like consumption model using the Swift (object storage) project. It now has over 60 customers, including eBay, PayPal, Burton Snowboards and Ancestry.com.
- OVH is based in France and operates a predominantly pan-European public cloud. It added Nova compute in 2014, and currently has 75PB on Swift storage.
- Unitas Global says OpenStack-related enterprise engagements are a large part of its 100% Y/Y growth. While it does not contribute code, it is helping to develop operational efficiencies and working with Canonical to deploy ‘vanilla’ OpenStack using Juju charms. Tableau Software is a client.
- DreamHost is operating an OpenStack public cloud, DreamCompute, and is a supporter of the Astara (network orchestration) project. It claims 2,000 customers for DreamCompute and 10,000 customers for its object storage product.
- Platform9 is a unique startup that delivers OpenStack as SaaS, with 20 paying customers. Clients bring their own hardware, and Platform9’s software provides the management functions and takes care of patching and upgrades.
- AppFormix is a software startup focused on cloud operators and application developers that has formed a licensing agreement with Rackspace. Its analytics and capacity-planning dashboard software will now be deployed on Rackspace’s OpenStack private cloud. The software also works with Azure and AWS.
- Tesora is leveraging the Trove project to offer DBaaS. The vendor built a plug-in for Mirantis’ Fuel installer. The collaboration claims to make commercial, open source relational and NoSQL databases easier for administrators to deploy.
April 25, 2016:
AT&T’s Cloud Journey with OpenStack by Sorabh Saxena SVP, Software Development & Engineering, AT&T
OpenStack + AT&T Innovation = AT&T Integrated Cloud.
AT&T’s network has experienced enormous growth in traffic in the last several years and the trend continues unabated. Our software defined network initiative addresses the escalating traffic demands and brings greater agility and velocity to delivering features to end customers. The underlying fabric of this software defined network is AT&T Integrated Cloud (AIC).
Sorabh Saxena, AT&T’s SVP of Software Development & Engineering, will share several use cases that will highlight a multi-dimensional strategy for delivering an enterprise & service provider scale cloud. The use cases will illustrate OpenStack as the foundational element of AIC, AT&T solutions that complement it, and how it’s integrated with the larger AT&T ecosystem.
As the Senior Vice President of Software Development and Engineering at AT&T, Sorabh Saxena is leading AT&T’s transformation to a software-based company. Towards that goal, he is leading the development of platforms that include AT&T’s Integrated Cloud (AIC), API, Data, and Business Functions. Additionally, he manages delivery and production support of AT&T’s software defined network.
Sorabh and his organization are also responsible for technology solutions and architecture for all IT projects, AT&T Operation Support Systems and software driven business transformation programs that are positioning AT&T to be a digital first, integrated communications company with a best in class cost structure. Sorabh is also championing a cultural shift with a focus on workforce development and software & technology skills development.
Through Sorabh and his team’s efforts associated with AIC, AT&T is implementing an industry leading, highly complex and massively scaled OpenStack cloud. He is an advocate of OpenStack and his organization contributes content to the community that represents the needs of large enterprises and communication services providers.
April 25, 2016: And the Superuser Award goes to… AT&T takes the fourth annual Superuser Award.
AUSTIN, Texas — The OpenStack Austin Summit kicked off day one by awarding the Superuser Award to AT&T.
NTT, winners of the Tokyo edition, passed the baton onstage to the crew from AT&T.
AT&T is a legacy telco which is transforming itself by adopting virtual infrastructure and a software-defined networking focus in order to compete in the market and create value for customers in the next five years and beyond. They have almost too many OpenStack accomplishments to list; read their full application here. The OpenStack Foundation launched the Superuser Awards to recognize, support and celebrate teams of end users and operators that use OpenStack to meaningfully improve their businesses while contributing back to the community.
April 1, 2016: Austin Superuser Awards Finalist: AT&T
The legacy telecom is in the top 20 percent for upstream contributions with plans to increase this significantly in 2016.
It’s time for the community to determine the winner of the Superuser Award to be presented at the OpenStack Austin Summit. Based on the nominations received, the Superuser Editorial Advisory Board conducted the first round of judging and narrowed the pool to four finalists.
Now, it’s your turn.
The team from AT&T is one of the four finalists. Review the nomination criteria below, check out the other nominees and cast your vote before the deadline, Friday, April 8 at 11:59 p.m. Pacific Daylight Time. Voting is limited to one ballot per person.
How has OpenStack transformed your business?
AT&T is a legacy telco which is transforming itself by adopting virtual infrastructure and a software defined networking focus in order to compete in the market and create value for customers in the next five years and beyond.
- Virtualization and virtual network functions (VNFs) are of critical importance to the Telecom industry to address growth and agility. AT&T’s Domain 2.0 Industry Whitepaper released in 2013 outlines the need as well as direction.
- AT&T chose OpenStack as the core foundation of their cloud and virtualization strategy
- OpenStack has reinforced AT&T’s open source strategy and strengthened our dedication to the community as we actively promote and invest resources in OpenStack
- AT&T is committing staff and resources to drive the vision and innovation in the OpenStack and OPNFV communities to help drive OpenStack as the default cloud orchestrator for the Telecom industry
- AT&T as a founding member of the ETSI ISG network functions virtualization (NFV) helped drive OpenStack as the cloud orchestrator in the NFV platform framework. OpenStack was positioned as the VIM – Virtual Infrastructure Manager. This accelerated the convergence of the Telco industry onto OpenStack.
OpenStack serves as a critical foundation for AT&T’s software-defined networking (SDN) and NFV future and we take pride in the following:
- AT&T has deployed 70+ OpenStack (Juno & Kilo based) clouds globally, which are currently operational. Of the 70+ clouds, 57 are production application and network clouds.
- AT&T plans 90% growth, going to 100+ production application and network clouds by the end of 2016.
- AT&T connects more than 14 million wireless customers via virtualized networks, with significant subscriber cut-over planned again in 2016
- AT&T controls 5.7% of our network resources (29 Telco production grade VNFs) with OpenStack, with plans to reach 30% by the end of 2016 and 75% by 2020.
- AT&T trained more than 100 staff in OpenStack in 2015
AT&T plans to expand its community team of 50+ employees in 2016.
As the chosen cloud platform, OpenStack enabled AT&T in the following SDN- and NFV-related initiatives:
- Our recently announced 5G field trials in Austin
- Re-launch of unlimited data to mobility customers
- Launch of AT&T Collaborate, a next-generation communication tool for enterprise
- Provisioning of a Network on Demand platform to more than 500 enterprise customers
- Connected Car and MVNO (Mobile Virtual Network Operator)
- Mobile Call Recording
- Internally we are virtualizing our control services like DNS, NAT, NTP, DHCP, radius, firewalls, load balancers and probes for fault and performance management.
Since 2012, AT&T has developed all of our significant new applications in a cloud native fashion hosted on OpenStack. We also architected OpenStack to support legacy apps.
- AT&T’s SilverLining Cloud (predecessor to AIC) leveraged the OpenStack Diablo release, dating as far back as 2011
- OpenStack currently resides on over 15,000 VMs worldwide, with the expectation of further, significant growth coming in 2016-17
- AT&T’s OpenStack integrated Orchestration framework has resulted in a 75% reduction in turnaround time for requests for virtual resources
- AT&T Plans to move 80% of our Legacy IT into the OpenStack based virtualized cloud environment within coming years
- A uniform set of APIs exposed by OpenStack allows AT&T business units to leverage a “develop-once-run-everywhere” set of tools.
OpenStack supports AT&T’s strategy of beginning to adopt best-of-breed solutions at five nines of reliability for:
- Internet-scale storage service
- Putting all AT&T’s workloads on one common platform
Deployment automation: OpenStack modules have enabled AT&T to cost-effectively manage its OpenStack configuration in an automated, holistic fashion.
- Using OpenStack Heat, AT&T pushed rolling updates and incremental changes across 70+ OpenStack clouds. Doing this manually would take many more people and a much longer schedule.
- Using OpenStack Fuel as a pivotal component in its cloud deployments, AT&T accelerates the otherwise time-consuming, complex, and error-prone process of deploying, testing, and maintaining various configuration flavors of OpenStack at scale. AT&T was a major contributor to the Fuel 7.0 and Fuel 8.0 requirements.
OpenStack has been a pivotal driver of AT&T’s overall culture shift. AT&T as an organization is in the midst of a massive culture shift from a legacy telco to a company where new skills, techniques and solutions are embraced.
OpenStack has been a key driver of this transformation in the following ways:
- AT&T is now building 50 percent of all software on open source technologies
- Allowing for the adoption of a DevOps model that creates a more unified team working towards a better end product
- Development transitioned from waterfall to cloud-native CI/CD methodologies
- Developers continue to support OpenStack and make their applications cloud-native whenever possible.
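The rolling-update pattern described above (pushing incremental changes across 70+ clouds rather than updating them by hand) can be sketched in a few lines. This is a hypothetical illustration, not AT&T's tooling: the site names and the `apply_change`/`verify` callbacks are placeholders for what a real pipeline would do via Heat stack updates or Fuel tasks.

```python
# Minimal sketch of a canary-first, batched rolling update across many
# cloud sites. Site names and the apply/verify stubs are hypothetical;
# a real pipeline would drive Heat or Fuel instead of these callbacks.

def rolling_update(sites, apply_change, verify, batch_size=5):
    """Apply a change to one canary site first, then in fixed-size batches.

    Halts and returns the list of already-updated sites as soon as any
    batch fails verification, so a bad change never reaches the whole fleet.
    """
    updated = []
    # One canary batch, then the remaining sites in batches.
    batches = [sites[:1]] + [
        sites[i:i + batch_size] for i in range(1, len(sites), batch_size)
    ]
    for batch in batches:
        for site in batch:
            apply_change(site)
        if not all(verify(site) for site in batch):
            return updated  # stop the rollout on failure
        updated.extend(batch)
    return updated

if __name__ == "__main__":
    sites = [f"aic-site-{n:02d}" for n in range(70)]  # 70+ clouds, as in the text
    done = rolling_update(sites, apply_change=lambda s: None, verify=lambda s: True)
    print(f"updated {len(done)} of {len(sites)} sites")
```

The canary batch is the key design choice: a change that fails verification touches exactly one site instead of the entire fleet.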
How has the organization participated in or contributed to the OpenStack community?
AT&T was the first U.S. telecom service provider to sign up for and adopt the then early stage NASA-spawned OpenStack cloud initiative, back in 2011.
- AT&T has been an active OpenStack contributor since the Bexar release.
- AT&T has been a Platinum Member of the OpenStack Foundation since its origins in 2012 after helping to create its bylaws.
- Toby Ford, AVP AT&T Cloud Technology has provided vision, technology leadership, and innovation to OpenStack ecosystem as an OpenStack Foundation board member since late 2012.
- AT&T is a founding member of the ETSI NFV ISG and of OPNFV.
- AT&T has invested in building an OpenStack upstream contribution team with 25 current employees and a target for 50+ employees by the end of 2016.
- During the early years of OpenStack, AT&T brought many important use-cases to the community. AT&T worked towards solving those use-cases by leveraging various OpenStack modules, in turn encouraging other enterprises to have confidence in the young ecosystem.
- AT&T drove these following Telco-grade blueprint contributions to past releases of OpenStack:
- VLAN-aware VMs (i.e., trunked vNICs)
- Support for BGP VPN, and shared volumes between guest VMs
- Complex query support for statistics in Ceilometer
- Spell checker gate job
- Metering support for PCI/PCIe per VM tenant
- PCI passthrough measurement in Ceilometer
- Coverage measurement gate job
- Nova using ephemeral storage with cinder
- Climate subscription mechanism
- Access switch port discovery for bare metal nodes
- SLA enforcement per vNIC
- MPLS VPNaaS
- NIC-state aware scheduling
- Toby Ford has regularly been invited to present keynotes, sessions, and panel talks at a number of OpenStack Summits, for instance: Role of OpenStack in a Telco: User Case Study (Atlanta Summit, May 2014); Leveraging OpenStack to Solve Telco Needs: Intro to SDN/NFV (Atlanta Summit, May 2014); Telco OpenStack Roadmap Panel Talk (Tokyo Summit, October 2015); OpenStack Roadmap Software Trajectory (Atlanta Summit, May 2014); Cloud Control to Major Telco (Paris Summit, November 2014).
- Greg Stiegler, assistant vice president – AT&T cloud tools & development organization represented the AT&T technology development organization at the Tokyo Summit.
- AT&T Cloud and D2 Architecture team members were invited to present various keynote sessions, summit sessions and panel talks, including: Participation at the Women of OpenStack Event (Tokyo Summit, 2015); Empower Your Cloud Through Neutron Service Function Chaining (Tokyo Summit, October 2015); OPNFV Panel (Vancouver Summit, May 2015); OpenStack as a Platform for Innovation (keynote at OpenStack Silicon Valley, August 2015); Taking OpenStack From Zero to Production in a Fortune 500 (Tokyo Summit, October 2015); Operating at Web-scale: Containers and OpenStack Panel Talk (Tokyo Summit, October 2015).
- AT&T strives to collaborate with other leading industry partners in the OpenStack ecosystem. This has led to the entire community benefiting from AT&T’s innovation.
- Margaret Chiosi gives talks worldwide on AT&T’s D2.0 vision at many Telco conferences ranging from Optics (OFC) to SDN/NFV conferences advocating OpenStack as the de-facto cloud orchestrator.
- AT&T Entertainment Group (DirecTV) architected a multi-hypervisor hybrid OpenStack cloud by designing a Neutron ML2 plugin. This innovation helped achieve integration between legacy virtualization and OpenStack.
- AT&T is proud to drive OpenStack adoption by sharing knowledge back to the OpenStack community in the form of these summit sessions at the upcoming Austin summit:
- Telco Cloud Requirements: What VNFs Are Asking For
- Using a Service VM as an IPv6 vRouter
- Service Function Chaining
- Technology Analysis Perspective
- Deploying Lots of Teeny Tiny Telco Clouds
- Everything You Ever Wanted to Know about OpenStack At Scale
- Valet: Holistic Data Center Optimization for OpenStack
- Gluon: An Enabler for NFV
- Among the Cloud: Open Source NFV + SDN Deployment
- AT&T: Driving Enterprise Workloads on KVM and vCenter using OpenStack as the Unified Control Plane
- Striving for High-Performance NFV Grid on OpenStack. Why you, and every OpenStack community member should be excited about it
- OpenStack at Carrier Scale
- AT&T is the “first to market” with deployment of OpenStack supported carrier-grade Virtual Network Functions. We provide the community with integral data, information, and first-hand knowledge on the trials and tribulations experienced deploying NFV technology.
- AT&T ranks in the top 20 percent of all companies in terms of upstream contribution (code, documentation, blueprints), with plans to increase this significantly in 2016.
- Commits: 1200+
- Lines of Code: 116,566
- Change Requests: 618
- Patch Sets: 1490
- Draft Blueprints: 76
- Completed Blueprints: 30
- Filed Bugs: 350
- Resolved Bugs: 250
What is the scale of the OpenStack deployment?
- AT&T’s OpenStack-based AIC is deployed at 70+ sites across the world. Of the 70+ sites, 57 are production app and network clouds.
- AT&T plans 90% growth, going to 100+ production app and network clouds by end of 2016.
- AT&T connects more than 14 million of its 134.5 million wireless customers via virtualized networks, with significant subscriber cut-over planned again in 2016
- AT&T controls 5.7% of its network resources (29 Telco production-grade VNFs, with a goal of reaching the high 80s by end of 2016) on OpenStack.
- Production workloads also include AT&T’s Connected Car, Network on Demand, and AT&T Collaborate among many more.
How is this team innovating with OpenStack?
- AT&T and AT&T Labs are leveraging OpenStack to innovate with Containers and NFV technology.
- Containers are a key part of AT&T’s Cloud Native Architecture. AT&T chairs the Open Container Initiative (OCI) to drive the standardization around container formats.
- AT&T is leading the effort to improve Nova and Neutron’s interface to SDN controllers.
- Margaret Chiosi, an early design collaborator on Neutron and ETSI NFV, now serves as president of OPNFV. AT&T is utilizing its position with OPNFV to help shape the future of OpenStack/NFV. OpenStack has enabled AT&T to innovate extensively.
The following recent unique workloads would not be possible without the SDN and NFV capabilities which OpenStack enables:
- Our recent announcements of 5G field trials in Austin
- Re-launch of unlimited data to mobility customers
- Launch of AT&T Collaborate
- Network on Demand platform to more than 500 enterprise customers
- Connected Car and MVNO (Mobile Virtual Network Operator)
- Mobile Call Recording
New services by AT&T Entertainment Group (DirecTV) that would use OpenStack-based cloud infrastructure in coming years:
- NFL Sunday Ticket with up to 8 simultaneous games
- DirecTV streaming service without need for a satellite dish
In summary – the innovation with OpenStack is not just our unique workloads, but also to support them together under the same framework, management systems, development/test, CI/CD pipelines, and deployment automation toolset(s).
Who are the team members?
- AT&T Cloud and D2 architecture team
- AT&T Integrated Cloud (AIC) members: Margaret Chiosi, distinguished member of technical staff, president of OPNFV; Toby Ford, AVP, AT&T cloud technology & D2 architecture (strategy, architecture & planning), and OpenStack Foundation board member; Sunil Jethwani, director, cloud & SDN architecture, AT&T Entertainment Group; Andrew Leasck, director, AT&T Integrated Cloud development; Janet Morris, director, AT&T Integrated Cloud development; Sorabh Saxena, senior vice president, AT&T software development & engineering organization; Praful Shanghavi, director, AT&T Integrated Cloud development; Bryan Sullivan, director member of technical staff; Ryan Van Wyk, executive director, AT&T Integrated Cloud development.
- AT&T’s project teams top contributors: Paul Carver, Steve Wilkerson, John Tran, Joe D’andrea, Darren Shaw.
April 30, 2016: Swisscom in Production with OpenStack and Cloud Foundry
Swisscom has one of the largest in-production industry standard Platform as a Service built on OpenStack. Their offering is focused on providing an enterprise-grade PaaS environment to customers worldwide and with various delivery models based on Cloud Foundry and OpenStack. Swisscom embarked early on the OpenStack journey to deploy their app cloud partnering with Red Hat, Cloud Foundry, and PLUMgrid. With services such as MongoDB, MariaDB, RabbitMQ, ELK, and an object storage, the PaaS cloud offers what developers need to get started right away. Join this panel for take-away lessons on Swisscom’s journey, the technologies, partnerships, and developers who are building apps everyday on Swisscom’s OpenStack cloud.
May 23, 2016: How OpenStack public cloud + Cloud Foundry = a winning platform for telecoms interview on ‘OpenStack Superuser’ with Marcel Härry, chief architect, PaaS at Swisscom
Swisscom has one of the largest in-production industry standard platform-as-a-service built on OpenStack.
Their offering focuses on providing an enterprise-grade PaaS environment to customers worldwide and with various delivery models based on Cloud Foundry and OpenStack. Swisscom, Switzerland’s leading telecom provider, embarked early on the OpenStack journey to deploy their app cloud partnering with Red Hat, Cloud Foundry and PLUMgrid.
Superuser interviewed Marcel Härry, chief architect, PaaS at Swisscom and member of the Technical Advisory Board of the Cloud Foundry Foundation to find out more.
How are you using OpenStack?
OpenStack has allowed us to rapidly develop and deploy our Cloud Foundry-based PaaS offering, as well as to rapidly develop new features within SDN and containers. OpenStack is the true enabler for rapid development and delivery.
An example: after half a year from the initial design and setup, we already delivered two production instances of our PaaS offering built on multiple OpenStack installations on different sites. Today we are already running multiple production deployments for high-profile customers, who further develop their SaaS offerings using our platform. Additionally, we are providing the infrastructure for numerous lab and development instances. These environments allow us to harden and stabilize new features while maintaining a rapid pace of innovation, while still ensuring a solid environment.
We are running numerous OpenStack stacks, all limited – by design – to a single region, and single availability zone. Their size ranges from a handful of compute nodes, to multiple dozens of compute nodes, scaled based on the needs of the specific workloads. Our intention is not to build overly large deployments, but rather to build multiple smaller stacks, hosting workloads that can be migrated between environments. These stacks are hosting thousands of VMs, which in turn are hosting tens of thousands of containers to run production applications or service instances for our customers.
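The many-small-stacks design described above raises a simple scheduling question: when a new workload arrives, which stack should host it? A minimal sketch of one reasonable answer (least-loaded placement) is shown below; the stack names, capacities, and the `pick_stack` helper are hypothetical illustrations, not Swisscom's actual tooling.

```python
# Hypothetical sketch: choose a target stack for a new workload when
# capacity is deliberately spread over many small single-region,
# single-AZ OpenStack stacks rather than one large deployment.

def pick_stack(stacks, vcpus_needed):
    """Return the name of the stack with the most free vCPUs that can
    still fit the workload, or None if nothing fits."""
    candidates = [
        (name, free) for name, free in stacks.items() if free >= vcpus_needed
    ]
    if not candidates:
        return None  # e.g. provision another small stack, or migrate workloads
    # Least-loaded placement keeps headroom spread evenly across the fleet.
    return max(candidates, key=lambda c: c[1])[0]

if __name__ == "__main__":
    stacks = {"zrh-1": 120, "zrh-2": 40, "gva-1": 300}  # free vCPUs, illustrative
    print(pick_stack(stacks, 64))
```

Because workloads are designed to be migratable between stacks, a placement that later turns out to be poor can be corrected by moving the workload rather than resizing the stack.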
What kinds of applications or workloads are you currently running on OpenStack?
We’ve been using OpenStack for almost three years now as our infrastructure orchestrator. Swisscom built its Elastic Cloud on top of OpenStack. On top of this we run Swisscom’s Application Cloud, or PaaS, built on Cloud Foundry with PLUMgrid as the SDN layer. Together, the company’s clouds deliver IaaS to IT architects, SaaS to end users and PaaS to app developers among other services and applications. We mainly run our PaaS/Cloud Foundry environment on OpenStack as well as the correlated managed services (i.e. a kind of DBaaS, Message Service aaS etc.) which are running themselves in Docker containers.
What challenges have you faced in your organization regarding OpenStack, and how did you overcome them?
The learning curve for OpenStack is pretty steep. When we started three years ago almost no reference architectures were available, especially none with enterprise-grade requirements such as dual-site, high availability (HA) capabilities on various levels and so forth. In addition, we went directly into the SDN, SDS levels of implementation which was a big, but very successful step at the end of the day.
What were your major milestones?
Swisscom’s go-live for its first beta environment was in spring of 2014, go live for an internal development (at Swisscom) was spring of 2015, and the go-live for its public Cloud Foundry environment fully hosted on OpenStack was in the fall of 2015. The go-live date for enterprise-grade and business-critical workloads on top of our stack from various multinational companies in verticals like finance or industry is spring, 2016, and Swisscom recently announced Swiss Re as one of its first large enterprise cloud customers.
What have been the biggest benefits to your organization as a result of using OpenStack?
Pluggability and multi-vendor interoperability (for instance with an SDN like PLUMgrid or an SDS like ScaleIO) to avoid vendor lock-in and create a seamless system. OpenStack enabled Swisscom to experiment with deployments utilizing a DevOps model and environment to deploy and develop applications faster. It simplified the move from PoC to production environments and enabled us to easily scale out services utilizing a distributed cluster-based architecture.
What advice do you have for companies considering a move to OpenStack?
It’s hard in the beginning, but it’s really worth it. Be wise when you select your partners and vendors; this will help you get online in a very short amount of time. Think about driving your internal organization toward a DevOps model to be ready for the first deployments, as well as enabling your firm to change deployment models (e.g. going cloud-native) for your workloads when needed.
How do you participate in the community?
This year’s Austin event was our second OpenStack Summit where we provided insights into our deployment and architecture, contributing back to the community in terms of best practices as well as real-world production use cases. Furthermore, we directly contribute patches and improvements to various OpenStack projects. Some of these patches have already been accepted, while a few are in the pipeline to be further polished for publishing. Additionally, we work very closely with our vendors – Red Hat, EMC, ClusterHQ/Flocker, PLUMgrid, as well as the Cloud Foundry Foundation – to further improve their integration and stability within the OpenStack project. For example, we worked closely with Flocker on their Cinder-based driver to orchestrate persistence among containers. Furthermore, we have filed many bug reports through our vendors and have worked with them on fixes which have then made their way back into the OpenStack community.
We have a perfect solution for non-persistent container workloads for our customers. We are constantly evolving this product and are working especially hard to meet the enterprise and finance verticals’ requirements when it comes to infrastructure orchestration with OpenStack.
Härry spoke about OpenStack in production at the recent Austin Summit, along with Pere Monclus of PLUMgrid, Chip Childers of the Cloud Foundry Foundation, Chris Wright of Red Hat and analyst Rosalyn Roseboro.
May 10, 2016: Lenovo‘s Highly-Available OpenStack Enterprise Cloud Platform Practice with EasyStack press release by EasyStack
Microsoft and partners to capitalize on Continuum for Phones instead of the exited Microsoft phone business
With The Nokia phone business is to be relaunched via a $500M private startup with Android smartphones and tablets, in addition to the feature phones whose manufacturing, sales and distribution would be acquired from Microsoft by a subsidiary of Foxconn, published on this same ‘Experiencing the Cloud’ blog on May 20, 2016, I now dare to publish this follow-up to the original message, which had already been available on October 13, 2015 under the title “Windows 10 enhancements for tablets and phones to achieve a powerful PC experience” (that original content appears in the final part of this post), starting with this statement:
These are significant capabilities with which (although not only with these but with quite a number of other innovations) Microsoft, for the first time in its history, was able to beat Apple at its own game. Can you believe it?
Unfortunately, I had felt growing uncertainty about the future of the Microsoft device business and therefore decided to wait until the picture became clear. With the following Terry Myerson video appearing on the HP Business YouTube channel, I now feel certain enough to make the original information available in this current post:
June 2, 2016: HP Elite x3 and Windows 10: Terry Myerson
http://www.hp.com/go/elitex3 –Terry Myerson, Executive Vice President at Microsoft, talks about the collaboration between HP and Microsoft that brings to life the new HP Elite x3 with Windows 10 for business, pioneer in the 3-in-1 category.
My certainty was also supported by the Microsoft decision to exit the phone business as it had been acquired from Nokia:
Microsoft Corp. on Wednesday announced plans to streamline the company’s smartphone hardware business, which will impact up to 1,850 jobs. As a result, the company will record an impairment and restructuring charge of approximately $950 million, of which approximately $200 million will relate to severance payments.
“We are focusing our phone efforts where we have differentiation — with enterprises that value security, manageability and our Continuum capability, and consumers who value the same,” said Satya Nadella, chief executive officer of Microsoft. “We will continue to innovate across devices and on our cloud services across all mobile platforms.”
Microsoft anticipates this will result in the reduction of up to 1,350 jobs at Microsoft Mobile Oy in Finland, as well as up to 500 additional jobs globally. Employees working for Microsoft Oy, a separate Microsoft sales subsidiary based in Espoo, are not in scope for the planned reductions.
As a result of the action, Microsoft will record a charge in the fourth quarter of fiscal 2016 for the impairment of assets in its More Personal Computing segment, related to these phone decisions.
The actions associated with today’s announcement are expected to be substantially complete by the end of the calendar year and fully completed by July 2017, the end of the company’s next fiscal year.
More information about these charges will be provided in Microsoft’s fourth-quarter earnings announcement on July 19, 2016, and in the company’s 2016 Annual Report on Form 10-K.
In addition to the following sentence in the previous Microsoft selling feature phone business to FIH Mobile Ltd. and HMD Global, Oy press release on May 18, 2016:
Microsoft will continue to develop Windows 10 Mobile and support Lumia phones such as the Lumia 650, Lumia 950 and Lumia 950 XL, and phones from OEM partners like Acer, Alcatel, HP, Trinity and VAIO.
That last statement was not enough for me at that time, just three weeks ago, as I had a truly shocking experience upgrading my wife’s Lumia 640 XL to the Windows 10 Mobile version that had been released for that type of earlier Lumia phones last March. The software was buggier than any I had ever seen. I got so angry that I immediately bought an Android-based Samsung Galaxy J5 for her. However, I became confident again in the future of Windows 10 Mobile based phones after her bad experience with that Android software in terms of functionality (e.g. too many steps needed for some vital functions vs. what was needed on the Lumia) and after successfully restoring the earlier Windows Phone 8.1 release on the 640 XL.
Several other videos which appeared on the same HP Business YouTube channel a little earlier gave me the final assurance:
May 27, 2016: HP Elite x3 turned heads at Mobile World Congress 2016
http://www.hp.com/go/elitex3 -HP Elite x3 made a powerful first impression at Mobile World Congress 2016 in Barcelona, winning 24 awards and positive reviews from industry experts. Meet the new HP Elite x3, the one device that’s every device.
June 2, 2016: Reinventing mobility: Dion Weisler
http://www.hp.com/go/elitex3 -Dion Weisler, President and Chief Executive Officer of HP Inc., introduces the revolution in mobility. Meet the new HP Elite x3, pioneer in the 3-in-1 category; the next generation of computing, designed specifically for business.
June 2, 2016: The new HP Elite x3: Michael Park
http://www.hp.com/go/elitex3 –Michael Park, Vice President for Commercial Mobility & Software division at HP Inc., introduces the new HP Elite x3, pioneer in the 3-in-1 category that will transform business mobility.
June 2, 2016: HP Elite x3 and Qualcomm: Steve Mollenkopf
http://www.hp.com/go/elitex3 -Steve Mollenkopf, Chief Executive Officer of Qualcomm Incorporated, presents the power of Snapdragon 820 processor in HP Elite x3, as part of the recent collaboration with HP. Meet the new HP Elite x3, pioneer in the 3-in-1 category; the next generation of computing, designed specifically for business.
Now a brief retrospective for the start:
From the full text of Q&A part of the Transcript of Microsoft Nokia Transaction Conference Call: Steve Ballmer, Stephen Elop, Brad Smith, Terry Myerson, Amy Hood; September 3, 2013 [Microsoft, Sept 3, 2013]
OPERATOR: Walter Pritchard, Citigroup, your line is open.
WALTER PRITCHARD: Great. Thanks for taking the question. Steve Ballmer, on the tablet side, obviously, we could say many of the same things as you’ve put into this slide deck as rationale for doing an acquisition on the phone side as we could say about the tablet side including picking up more gross margin.
I’m wondering how this transaction impacts the strategy going forward in tablets and whether or not you need to, in a sense, double down further on first-party hardware in the tablet market. And then just have one follow up.
STEVE BALLMER: Okay. Terry, do you want to talk a little bit about that? That would be great.
TERRY MYERSON: Well, phones and tablets are definitely a continuum. You know, we see the phone products growing up, the screen sizes and the user experience we have on the phones. We’ve now made that available in our Windows tablets, our application platform spans from phone to tablet. And I think it’s fair to say that our customers are expecting us to offer great tablets that look and feel and act in every way like our phones. We’ll be pursuing a strategy along those lines.
More information: Microsoft answers to the questions about Nokia devices and services acquisition: tablets, Windows downscaling, reorg effects, Windows Phone OEMs, cost rationalization, ‘One Microsoft’ empowerment, and supporting developers for an aggressive growth in market share ‘Experiencing the Cloud’, September 4, 2013
From the Microsoft Q4 2015 Earnings Call transcript by CEO Satya Nadella on July 21, 2015:
I am thrilled we are just days away from the start of Windows 10. It’s the first step towards our goal of 1 billion Windows 10 active devices in fiscal year 2018. Our aspiration with Windows 10 is to move people from needing to choosing to loving Windows. Based on feedback from more than 5 million people who have been using Windows 10, we believe people will love the familiarity of Windows 10 and the innovation. It’s safe, secure, and always up to date. Windows 10 is more personal and more productive with Cortana, Office, universal apps, and Continuum. And Windows 10 will deliver innovative new experiences like Inking on Microsoft Edge and gaming across Xbox and PCs, and also opens up entirely new device categories such as HoloLens.
From Windows 10 available in 190 countries as a free upgrade Microsoft news release on July 28, 2015:
Windows 10 is more personal and productive, with voice, pen and gesture inputs for natural interaction with PCs. It’s designed to work with Office and Skype and allows you to switch between apps and stay organized with Snap and Task View. Windows 10 offers many innovative experiences and devices, including the following:
- Cortana, the personal digital assistant, makes it easy to find the right information at the right time.
- New Microsoft Edge browser lets people quickly browse, read, and mark up and share the Web.
- The integrated Xbox app delivers the Xbox experience to Windows 10, bringing together friends, games and accomplishments across Xbox One and Windows 10 devices.
- Continuum optimizes apps and experiences beautifully across touch and desktop modes.
- Built-in apps including Photos; Maps; Microsoft’s new music app, Groove; and Movies & TV offer entertainment and productivity options. With OneDrive, files can be easily shared and kept up-to-date across all devices.
- A Microsoft Phone Companion app enables iPhones, Android or Windows phones to work seamlessly with Windows 10 devices.
- The all new Office Mobile apps for Windows 10 tablets are available today in the Windows Store.4 Built for work on-the-go, the Word, Excel and PowerPoint apps offer a consistent, touch-first experience for small tablets. For digital note-taking needs, the full-featured OneNote app comes pre-installed with Windows 10. The upcoming release of the Office desktop apps (Office 2016) will offer the richest feature set for professional content creation. Designed for the precision of a keyboard and mouse, these apps will be optimized for large-screen PCs, laptops and 2-in-1 devices such as the Surface Pro.
More information around the above 2 excerpts:
Windows 10 is here to help regain Microsoft’s leading position in ICT ‘Experiencing the Cloud’, July 31, 2015
From 2015 Annual Report>The ambitions that drive us on July 31, 2015:
Create more personal computing
Windows 10 is the cornerstone of our ambition to usher in an era of more personal computing. We see the launch of Windows 10 in July 2015 as a critical, transformative moment for the Company because we will move from an operating system that runs on a PC to a service that can power the full spectrum of devices in our customers’ lives. We developed Windows 10 not only to be familiar to our users, but more safe and secure, and always up-to-date. We believe Windows 10 is more personal and productive, working seamlessly with functionality such as Cortana, Office, Continuum, and universal applications. We designed Windows 10 to foster innovation – from us, our partners and developers – through experiences such as our new browser Microsoft Edge, across the range of existing devices, and into entirely new device categories.
Our future opportunity
There are several distinct areas of technology that we aim to drive forward. Our goal is to lead the industry in these areas over the long-term, which we expect will translate to sustained growth. We are investing significant resources in:
- Delivering new productivity, entertainment, and business processes to improve how people communicate, collaborate, learn, work, play, and interact with one another.
- Establishing the Windows platform across the PC, tablet, phone, server, other devices, and the cloud to drive a thriving ecosystem of developers, unify the cross-device user experience, and increase agility when bringing new advances to market.
- Building and running cloud-based services in ways that unleash new experiences and opportunities for businesses and individuals.
- Developing new devices that have increasingly natural ways to interact with them, including speech, pen, gesture, and augmented reality holograms.
- Applying machine learning to make technology more intuitive and able to act on our behalf, instead of at our command.
January 14, 2016: Continuum for Phones: Making the Phone Work Like a PC by Keri Moran / Principal Program Manager Lead
Imagine having a phone that works like a PC. Continuum for Phones makes this a reality, enabling Windows customers to get things done like never before.
Check out the ways this capability comes alive. You’ll be able to travel and leave your laptop at home, knowing you’re still equipped to complete your most common tasks. Walk into a meeting with just your smartphone – you’re fully equipped for seamlessly projecting PowerPoint presentations to a larger screen. Or take a seat in a business center where you plug your phone into a monitor and keyboard – you’ve instantly gained PC-like productivity using Office apps and the Microsoft Edge browser.
How it all started
The road to Continuum began three years ago with a simple observation: we take our phones everywhere, we depend on them, and we feel lost without them. Yet, when the time comes to do “real work,” we reach for a laptop or desktop PC. So we end up carrying our phones plus our laptops, or we wait until we are at our desks to do the heavy lifting.
The thing is, today’s phones have more than enough processing power to handle our most common tasks and activities. We knew this was especially true in emerging markets where people rely only on their mobile phones to get online. So — with these thoughts top of mind — we set out on our mission to help people get real work done with just their phone.
Who are we? We are the small team of people who built Continuum for Phones with a passion to change the future of personal productivity.
What people want
We started by talking to customers to understand what they needed. We spoke to people around the globe – from Chicago to Shanghai – and found that most people wanted the same thing: a phone that did more. Here are the main insights from the research:
- “My most important device”: people universally describe their smartphone as the center of their connected life.
- Connect to a bigger screen: people rely on their laptops and desktops because their phone lacks a large screen, keyboard and mouse. They want to easily connect to larger screens for both work and entertainment.
- Tech-savvy people expect more: as the processing power of phones has risen, so have the expectations of the tech-savvy.
- Many people around the world don’t have PCs: because they can’t afford a PC, people have a TV and a phone and that’s it. So any computing work gets done on their phone.
We realized that people embraced the idea of having a phone that could work like a PC.
Getting it done
So we started building Continuum, and we soon realized that we faced many technical and design challenges.
For example, there were two paradigms for connecting to a second screen: (1) mirroring your phone’s screen to a larger screen or (2) connecting your PC to multiple monitors. We needed to create a new design paradigm with two independent experiences – one on the phone and a separate one on the second screen. This was important because customers wanted to continue to use their phone as a phone, even while having a PC-like experience on the second screen. We spent months iterating with paper and software prototypes to arrive at an experience that was easy to understand and use.
The technical hurdles were just as big. For example, we had to build support for keyboard and mouse into Windows 10 Mobile. And many substantial architecture changes were needed in Windows to make Continuum work.
At the //Build conference in April 2015, we did our first live demo, and at the Windows 10 launch in July, we showed the full power of a phone running Office* apps on a second screen. The response – which exceeded our expectations – motivated us to keep going, working relentlessly with hundreds of colleagues around the world to deliver an integrated solution that required major changes to Windows, new capabilities in the phones, and creation of docks such as the Microsoft Display Dock.
So, with the debut of Continuum for Phones, you really can have something new in your pocket: a smartphone that has the power and ability to work like a PC. In the words of our CEO Satya Nadella: “This is the beginning of how we are going to change what the form and function of a phone is.”
Right now, this means that you can carry a smartphone – like the new Lumia 950 and Lumia 950XL – and use a small dock or wireless dongle to connect it to a keyboard, mouse and monitor for a familiar PC-like experience. Run Office* apps, browse the Web, edit photos, write email, and much more.
While you’re working on the larger screen, you won’t lose your phone’s unique abilities. Continuum multi-tasks flawlessly so you can keep using your phone as a phone for calls, emails, texts, or Candy Crush. Or if you don’t have a mouse, you can use your phone as the trackpad for the apps on the larger screen.
If you share my enthusiasm for Continuum for Phones, please check out all the details, including multiple usage scenarios, at windows.com.
* App experience may vary. Office 365 subscription required for some Office features.
June 4, 2016 snapshot: New features coming soon to Windows 10 Anniversary Update
This year’s Windows 10 Anniversary Update will have great new innovative features including:1
The pen just got even mightier.
Turn thoughts into action with Windows Ink – using the pen, your fingertip, or both at once.2 Pair it with Office apps to effortlessly edit documents. With Windows Ink, you’ll be able to access features like Sticky Notes with a simple click of the pen.3 When you start drawing a figure like a chart or graph, it’ll turn into the real thing right before your eyes. And because Windows Ink stays active when your device is locked, you’ll be able to jot down notes even when you don’t have time to enter a password.
Cortana’s got you covered.
No time to enter your password but need some quick help? No problem — just ask. Cortana4 will now be at your service, even before you log in. Whether you want to make a note, play music or set a reminder, Cortana will have you covered.
The secret password is: you.
With Windows Hello, unlocking your PC and devices is as quick as looking or touching.5 But the new Windows Hello will also let you unlock your PC simply by tapping your Windows Hello enabled phone.6 Beyond the hardware, Windows Hello will also give you instant access to paired apps and protected websites on Microsoft Edge – all while maintaining enterprise-level security. Windows Hello lets you say goodbye to cumbersome passwords.
Got game? We’ll deliver.
Windows 10 will deliver incredible DirectX 12 games and Xbox Live features that will transform what you expect from PC gaming. Now you can play and connect with gamers across Xbox One and Windows 10 devices. From the best casual games to the next generation of PC releases, you’ll have more ways to play new games optimized for Windows.7
And that’s not all: Microsoft Studios is bringing a full portfolio of new games to Windows 10, including the forthcoming Forza Motorsport 6: Apex, which will be free for Windows 10 users.
Ongoing progress reports (only two latest ones are summarised here):
June 1, 2016: Announcing Windows 10 Mobile Insider Preview Build 14356
- Cortana Improvements:
– Get notifications from your phone to your PC
– Send a photo from your phone to PC
– New listening animation
May 26, 2016: Announcing Windows 10 Insider Preview Build 14352
- Cortana Improvements:
– Cortana, Your Personal DJ
– Set a timer
- Windows Ink:
– Updated Sticky Notes
– Compass on the ruler
– General improvements to the Windows Ink experience
- Other items of note:
– Windows Game bar improved with full-screen support
– Feedback Hub will now show Microsoft responses
– Updated File Explorer icon
– Deploying Windows Enterprise edition gets easier
– Limited Period Scanning
– Introducing Hyper-V Containers (ADDED 5/31)
For more information see: https://blogs.windows.com/windowsexperience/tag/windows-insider-program/
Particularly relevant recent information from A change in leadership for the Windows Insider Program on June 1, 2016 by Gabe Aul / Corporate Vice President, Engineering Systems Team:
Since we first started the Windows Insider Program back in September 2014, Windows Insiders have helped us ship Windows 10 to over 300 million devices. We have released 35 PC builds and 22 Mobile builds to Insiders to date. This is a huge change from Windows 7 and Windows 8 which only had 2 and 3 public pre-release builds respectively. Windows Insiders have been more directly plugged in to our engineering processes for Windows than ever before, including participating in our first ever public Bug Bash this year. Windows Insiders contribute problem reports and suggestions which help us shape the platform, and are currently helping us get ready to ship the next major update to Windows 10 this summer – the Windows 10 Anniversary Update. This is just the beginning of the journey we’re on though. We really appreciate having such an amazing connection with our customers, and want Windows Insiders to continue to help shape Windows releases for years to come. With that in mind, I want to talk about a change to the Windows Insider Program going forward.
When I was introduced as leader of the Windows Insider Program over 18 months ago, I was responsible for the team that built our feedback and flighting systems for Windows. It made sense for me to be on the front lines talking with customers of the systems that my team was building to get Insider Preview Builds out and hear the feedback rolling in. In August of last year, I changed jobs to work on the Engineering Systems Team in WDG. In this role, I am responsible for the tools our engineers use to build Windows, including our planning and work management systems, source code management, build infrastructure, and test automation systems. …
Meet Dona Sarkar
I have worked with Dona for many years and think she is the perfect person to guide the Windows Insider Program forward. Her technical expertise, passion for customers, and commitment to listening to feedback is unmatched. …
You can follow Dona here on Twitter. Please welcome her as the new leader of the Windows Insider Program!
Get to know more about Dona here from Microsoft Stories!
Finally, here is more, as well as historic, information on this subject, which I had originally put together on October 13, 2015 and intended to publish under the title:
Windows 10 enhancements for tablets and phones to achieve a powerful PC experience
These are significant capabilities with which (although not only with these but with quite a number of other innovations) Microsoft, for the first time in its history, was able to beat Apple at its own game. Can you believe it?
First watch these two very short videos from CNNMoney presenting Microsoft’s “ultimate laptop” in terms of its device innovations:
Hands-on with Microsoft Surface Book
Then follow with the information below, which presents one of the most important Windows 10 software innovations, called Continuum (Continuum tablet mode for touch-capable devices), which makes that “ultimate laptop” an “ultimate tablet” as well.
Then get acquainted with a similar Windows 10 software innovation, called Continuum for Phones (it is rather for mobile devices), which allows an entry-level tablet or a premium phone to become a true PC by extending to an external large display after docking to it.
Note that while the “ultimate laptop/ultimate tablet” hybrid is for the premium client market, the second innovation is also targeted at entry-level emerging markets. In that scenario Microsoft is hoping to capitalize on the availability of extremely low-cost tablets which could be enhanced to a PC-like experience with Continuum for Phones. When coupled with a similarly low-priced Windows 10 phone, the emerging-market user will have two devices for around $200 and a consistent Windows 10 experience, easily dockable to a large display, and with that easily achieving a true PC experience.
Suggested other information:
– July 30, 2015: Docking – Windows 10 hardware dev, Microsoft Hardware Dev Center
– March 28, 2015: Display – Windows 10 hardware dev, Microsoft Hardware Dev Center
– March 28, 2015: Graphics – Windows 10 hardware dev, Microsoft Hardware Dev Center
Continuum tablet mode for touch-capable devices
The Continuum feature of Windows 10 desktop edition adapts between tablet and PC modes when docking/undocking. More generally: “Continuum is available on all Windows 10 desktop editions by manually turning “tablet mode” on and off through the Action Center. Tablets and 2-in-1s with GPIO indicators or those that have a laptop and slate indicator will be able to be configured to enter ‘tablet mode’ automatically.” Source: Windows 10 Specifications, Microsoft, June 1, 2015
June 12, 2015: Continuum Overview – Windows 10 hardware dev, Microsoft Hardware Dev Center
Continuum is a new, adaptive user experience offered in Windows 10 that optimizes the look and behavior of apps and the Windows shell for the physical form factor and customer’s usage preferences. This document describes how to implement Continuum on 2-in-1 devices and tablets, specifically how to switch in and out of “tablet mode.”
Tablet Mode is a feature that switches your device experience from tablet mode to desktop mode and back. The primary way for a user to enter and exit “tablet mode” is manually through the Action Center. In addition, OEMs can report hardware transitions (for example, transformation of a 2-in-1 device from clamshell to tablet and vice versa), enabling automatic switching between the two modes. However, a key promise of Continuum is that the user remains in control of their experience at all times, so these hardware transitions are surfaced through a toast prompt that must be confirmed by the user. The user also has the option to set the default response.
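The switching rules described above (manual Action Center toggle, OEM-reported hardware transitions, confirmation toast, user-set default response) can be sketched as a small state model. The following is an illustrative Python model of the described behavior only, not Windows code; all names are hypothetical.

```python
from dataclasses import dataclass

# Illustrative model of the "tablet mode" switching rules (NOT Windows
# source code; all names are hypothetical):
# - the user can always toggle manually via the Action Center;
# - OEM-reported hardware transitions only take effect after a
#   confirmation prompt, unless the user has set a default response.

ASK = "ask"        # surface a toast prompt and wait for the user
ALWAYS = "always"  # switch without asking
NEVER = "never"    # ignore hardware transitions

@dataclass
class TabletModeModel:
    tablet_mode: bool = False
    default_response: str = ASK  # user-configurable setting

    def manual_toggle(self) -> bool:
        """Action Center toggle: the user is always in control."""
        self.tablet_mode = not self.tablet_mode
        return self.tablet_mode

    def hardware_transition(self, slate: bool, user_confirms: bool = False) -> bool:
        """OEM-reported transition (e.g. clamshell <-> slate).

        `user_confirms` stands in for the toast-prompt answer that is
        required when the default response is ASK.
        """
        if self.default_response == NEVER:
            return self.tablet_mode
        if self.default_response == ALWAYS or user_confirms:
            self.tablet_mode = slate
        return self.tablet_mode

dev = TabletModeModel()
dev.hardware_transition(slate=True, user_confirms=True)  # user accepts the prompt
print(dev.tablet_mode)  # True
dev.manual_toggle()  # exit "tablet mode" from the Action Center
print(dev.tablet_mode)  # False
```

The key design point the model captures is that hardware signals only suggest a transition; the user confirmation (or a previously chosen default) is what actually flips the mode.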
- Tablets: pure tablets and devices that can dock to an external monitor + keyboard + mouse.
- Detachables: tablet-like devices with custom-designed detachable keyboards.
- Convertibles: laptop-like devices with keyboards that fold or swivel away.
When the device switches to tablet mode, the following occur:
- Start resizes across the entire screen, providing an immersive experience.
- The title bars of Store apps auto-hide to remove unnecessary chrome and let content shine through.
- Store apps and Win32 apps can optimize their layout to be touch-first when in Tablet Mode.
- The user can close apps, even Win32 apps, by swiping down from the top edge.
- The user can snap up to two apps side-by-side, including Win32 apps, and easily resize them simultaneously with their finger.
- The taskbar transforms into a navigation and status bar that’s more appropriate for tablets.
- The touch keyboard can be auto-invoked.
Of course, even in “tablet mode”, users can enjoy Windows 10 features such as Snap Assist, Task View and Action Center. On touch-enabled devices, customers have access to touch-friendly invocations for those features: they can swipe in from the left edge to bring up Task View, or swipe in from the right edge to bring up Action Center.
With “tablet mode”, Continuum gives customers the flexibility to use their device in a way that is most comfortable for them. For example, a customer might want to use their 8” tablet in “tablet mode” exclusively until they dock it to an external monitor, mouse, and keyboard. At that point the customer will exit “tablet mode” and use all their apps as traditional windows on the desktop—the same way they have in previous versions of Windows. Similarly, a user of a convertible 2-in-1 device might want to enter and exit “tablet mode” as they use their device throughout the day (for example, commuting on a bus, or sitting at a desk in their office), using signals from the hardware to suggest appropriate transition moments.
Imagine the overall smoothness of that combined laptop and tablet experience on the brand new Microsoft Surface Book announced just on October 6, 2015. Out of a plethora of videos reporting on that new device with quite an enthusiasm, I’ve selected the one which, in my view, is just right in its judgment and very concise at the same time.
Surface Book hands-on: Microsoft’s first laptop is simply amazing by Mark Hachman, senior editor of the PCWorld: “No one expected the Surface Book, and what they got was a true flagship for the Windows ecosystem.“
And if you don’t need the leading-edge ultrabook performance provided by the clever design of the Surface Book, where more power (GPU, longer battery life, etc.) sits in the detachable keyboard part, then the 4th-generation Surface Pro 4 may be more than sufficient to provide state-of-the-art productivity, including the best pen computing available on the market (also present on the Surface Book; you could notice the same pen in the previous video), in addition to a new Type Cover for the tablet part. Here again the same source has been the best at presenting all that.
Surface Pro 4: Hands on with Microsoft’s category-creating productivity tablet by Mark Hachman, senior editor of the PCWorld
Continuum for phones
With Continuum for phones in the Windows 10 Mobile edition, connecting a phone to an external screen lets the phone be used like a PC. Additionally: “Continuum for phones limited to select premium phones at launch. External monitor must support HDMI input. Continuum-compatible accessories sold separately. App availability and experience varies by device and market. Office 365 subscription required for some features.” Source: Windows 10 Specifications, Microsoft, June 1, 2015
April 29, 2015: As part of the Universal Windows Platform Microsoft shared at Build 2015 how apps can scale using Continuum for phones, enabling people to use their phones like PCs for productivity or entertainment. With that your phone app can start using a full-sized monitor, mouse, and keyboard, giving you even more mileage from your universal app’s shared code and UI.
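The idea of a single universal app tailoring its UI to the connected display can be sketched in a few lines. This is an illustrative stand-in only: the function, the size thresholds, and the layout names are my assumptions, not any Microsoft API.

```python
# Hypothetical sketch of Continuum-style adaptation: one shared code path
# picks a layout based on the display it is driving. Thresholds are
# illustrative assumptions, not Windows values.

def choose_layout(screen_width_px: int, docked: bool) -> str:
    """Pick a UI layout for the same shared app code."""
    if docked or screen_width_px >= 1920:
        return "desktop"   # full-sized monitor + mouse + keyboard
    if screen_width_px >= 720:
        return "tablet"
    return "phone"

print(choose_layout(540, docked=False))   # phone
print(choose_layout(540, docked=True))    # desktop
```

The point mirrored here is that the shared app code never forks per device; only the presentation decision does.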
April 29, 2015: Windows Continuum for Phones See how new Windows Continuum functionality for mobile phones tailors the app experience across devices to transform a phone into a full-powered PC, TV or a Smart TV
Continuum for Phones hardware specifications, Windows 10 Mobile:

Entry device [Sept 17, 2015]:
- Supported entry SoC
- 1-2GB RAM / 8-32GB eMMC w/SD card
- 7” 480×800 or 1280×720 w/touch
- <9mm & <0.36kg
- 2500+ mAh (1 day active use)
- 802.11ac+, 1 micro USB 2.0, mini HDMI, BT, LTE option
- Front camera, speakers, headphones

Premium device [March 29, 2015]:
- Supported premium SoC
- 2-4GB RAM / 32-64GB with SD slot
- 4.5-5.5”+ / FHD-WQHD
- <7.5mm & <160g
- LTE/Cat 4+ / 802.11b/g/n/ac 2×2, USB, 3.5mm jack, BT LE, NFC
- 20MP with OIS/flash; 5MP FFC
Oct 6, 2015: Windows 10 Continuum for Phones demo on Lumia 950 and Lumia 950 XL by Bryan Roper, Microsoft marketing manager, at Microsoft Windows 10 Devices Event 2015
When an open-source database written in Java that runs primarily in production on Linux becomes THE solution for Microsoft’s cloud platform (i.e. Azure) in the fully distributed, highly secure and “always on” transactional database space, we should take special note. That is the case with DataStax:
July 15, 2015: Building the intelligent cloud Scott Guthrie’s keynote on the Microsoft Worldwide Partner Conference 2015, the DataStax related segment in 7 minutes only
SCOTT GUTHRIE, EVP of Microsoft Cloud and Enterprise: What I’d like to do is invite three different partners now on stage, one an ISV, one an SI, and one a managed service provider to talk about how they’re taking advantage of our cloud offerings to accelerate their businesses and make their customers even more successful.
First, and I think, you know, being able to take advantage of all of these different capabilities that we now offer.
Now, the first partner I want to bring on stage is DataStax. DataStax delivers an enterprise-grade NoSQL offering based on Apache Cassandra. And they enable customers to build solutions that can scale across literally thousands of servers, which is perfect for a hyper-scale cloud environment.
And one of the customers that they’re working with is First American, who are deploying a solution on Microsoft Azure to provide richer insurance and settlement services to their customers.
What I’d like to do is invite Billy Bosworth, the CEO of DataStax, on stage to join me to talk about the partnership that we’ve had and how some of the great solutions that we’re building together. Here’s Billy. (Applause.)
Well, thanks for joining me, Billy. And it’s great to have you here.
BILLY BOSWORTH, CEO of DataStax: Thank you. It’s a real privilege to be here today.
SCOTT GUTHRIE: So tell us a little bit about DataStax and the technology you guys build.
BILLY BOSWORTH: Sure. At DataStax, we deliver Apache Cassandra in a database platform that is really purpose-built for the new performance and availability demands that are being generated by today’s Web, mobile and IOT applications.
With DataStax Enterprise, we give our customers a fully distributed and highly secure transactional database platform.
Now, that probably sounds like a lot of other database vendors out there as well. But, Scott, we have something that’s really different and really important to us and our customers, and that’s the notion of being always on. And when you talk about “always on” and transactional databases, things can get pretty complicated pretty fast, as you well know.
The reason for that is in an always-on world, the datacenter itself becomes a single point of failure. And that means you have to build an architecture that is going to be comprehensive and include multiple datacenters. That’s tough enough with almost any other piece of the software stack. But for transactional databases, that is really problematic.
Fortunately, we have a masterless architecture in Apache Cassandra that allows us to have DataStax enterprise scale in a single datacenter or across multiple datacenters, and yet at the same time remain operationally simple. So that’s really the core of what we do.
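The masterless design Bosworth describes can be sketched roughly as a consistent-hash ring: every node owns a slice of the key space, any node can locate a key's replicas, and there is no coordinator to fail. This is an illustration of the idea only, not DataStax code; the node names and replica count are made up.

```python
# Toy consistent-hash ring, illustrating (not implementing) Cassandra's
# masterless placement: replicas for a key are found by walking the ring,
# with no master node involved.
import bisect
import hashlib

class HashRing:
    def __init__(self, nodes):
        # Place each node at a deterministic point on a 0..2^32 ring.
        self.ring = sorted((self._hash(n), n) for n in nodes)
        self.points = [p for p, _ in self.ring]

    @staticmethod
    def _hash(key: str) -> int:
        return int(hashlib.md5(key.encode()).hexdigest(), 16) % (2 ** 32)

    def replicas(self, key: str, n: int = 3):
        """Walk clockwise from the key's position to collect n replica nodes."""
        start = bisect.bisect(self.points, self._hash(key)) % len(self.ring)
        return [self.ring[(start + i) % len(self.ring)][1] for i in range(n)]

ring = HashRing(["dc1-node1", "dc1-node2", "dc2-node1", "dc2-node2"])
print(ring.replicas("customer:42"))  # three owning nodes, no master asked
```

Because placement is a pure function of the key, every node computes the same answer, which is what keeps a multi-datacenter cluster "operationally simple".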
SCOTT GUTHRIE: Is the always-on angle the key differentiator in terms of the customer fit with Azure?
BILLY BOSWORTH: So if you think about deployment to multiple datacenters, especially and including Azure, it creates an immediate benefit. Going back to your hybrid clouds comment, we see a lot of our customers that begin their journey on premises. So they take their local datacenter, they install DataStax Enterprise, it’s an active database up and running. And then they extend that database into Azure.
Now, when I say that, I don’t mean they do so for disaster recovery or failover, it is active everywhere. So it is taking full read-write requests on premises and in Azure at the same time.
So if you lose connectivity to your physical datacenter, then the Azure active nodes simply take over. And that’s great, and that solves the always-on problem.
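Why losing a whole datacenter does not stop writes can be shown with a toy quorum rule (my simplification, not DataStax's implementation): an operation succeeds as long as a majority of a key's replicas acknowledge it.

```python
# Toy model of quorum-based availability: with replication factor N,
# an operation needs N//2 + 1 acknowledgements. Losing one site out of
# several still leaves a quorum, so the database stays "always on".

def quorum_ok(replicas_up: int, replication_factor: int = 3) -> bool:
    quorum = replication_factor // 2 + 1
    return replicas_up >= quorum

# RF=3 spread across on-prem and Azure: one site down, two replicas up.
print(quorum_ok(replicas_up=2))  # True: reads and writes continue
print(quorum_ok(replicas_up=1))  # False: not enough acknowledgements
```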
But that’s not the only thing that Azure helps to solve. Our applications, because of their nature, tend to drive incredibly high throughput. So for us, hundreds of millions or even tens and hundreds of billions of transactions a day is actually quite common.
You guys are pretty good, Scott, but I don’t think you’ve changed the laws of physics yet. And so the way that you get that kind of throughput with unbelievable performance demands, because our customers demand millisecond and microsecond response times, is you push the data closer to the end points. You geographically distribute it.
Now, what our customers are realizing is they can try and build 19 datacenters across the world, which I’m sure was really cheap and easy to do, or they can just look at what you’ve already done and turn to a partnership like ours to say, “Help us understand how we do this with Azure.”
So not only do you get the always-on benefit, which is critical, but there’s also a very important performance element to this type of architecture as well.
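The "push the data closer to the end points" argument reduces to routing each client to its lowest-latency replica region. A minimal sketch, with made-up region names and latencies:

```python
# Illustrative routing rule: with active replicas in many Azure regions,
# serve each client from the region it can reach fastest. Latency numbers
# here are invented for the example.

def nearest_region(latency_ms_by_region: dict) -> str:
    return min(latency_ms_by_region, key=latency_ms_by_region.get)

client_view = {"azure-westus": 12.0, "azure-europe": 95.0, "azure-asia": 160.0}
print(nearest_region(client_view))  # azure-westus
```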
SCOTT GUTHRIE: Can you tell us a little bit about the work you did with First American on Azure?
BILLY BOSWORTH: Yeah. First American is a leading name in the title insurance and settlement services businesses. In fact, they manage more titles on more properties than anybody in the world.
Every title comes with an associated set of metadata. And that metadata becomes very important in the new way that they want to do business because each element of that needs to be transacted, searched, and done in real-time analysis to provide better information back to the customer in real time.
And so for that on the database side, because of the type of data and because of the scale, they needed something like DataStax Enterprise, which we’ve delivered. But they didn’t want to fight all those battles of the architecture that we discussed on their own, and that’s where they turned to our partnership to incorporate Microsoft Azure as the infrastructure with DataStax Enterprise running on top.
And this is one of many engagements that you know we have going on in the field that are really, really exciting and indicative of the way customers are thinking about transforming their business.
SCOTT GUTHRIE: So what’s it like working with Microsoft as a partner?
BILLY BOSWORTH: I tell you, it’s unbelievable. Or, maybe put differently, highly improbable that you and I are on stage together. I want you guys to think about this. Here’s the type of company we are. We’re an open-source database written in Java that runs primarily in production on Linux.
Now, Scott, Microsoft has a couple of pretty good databases, of which I’m very familiar from my past, and open source and Java and Linux haven’t always been synonymous with Microsoft, right?
So I would say the odds of us being on stage were almost none. But over the past year or two, the way that you guys have opened up your aperture to include technologies like ours — and I don’t just say “include.” His team has embraced us in a way that is truly incredible. For a company the size of Microsoft to make us feel the way we do is just remarkable given the fact that none of our technologies have been something that Microsoft has traditionally said is part of their family.
So I want to thank you and your team for all the work you’ve done. It’s been a great experience, but we are architecting systems that are going to drive businesses for the coming decades. And that is super exciting to have a partner like you engaged with us.
SCOTT GUTHRIE: Fantastic. Well, thank you so much for joining us on stage.
BILLY BOSWORTH: Thanks, Scott. (Applause.)
The typical data framework capabilities of DataStax are best understood via the following webinar, which presents Apache Spark as well, as part of the complete data platform solution:
– Apache Cassandra is the leading distributed database in use at thousands of sites with the world’s most demanding scalability and availability requirements.
– Apache Spark is a distributed data analytics computing framework that has gained a lot of traction in processing large amounts of data in an efficient and user-friendly manner.
– The joining of both provides a powerful combination of real-time data collection with analytics.
After a brief overview of Cassandra and Spark, (Cassandra till 16:39, Spark till 19:25) this class will dive into various aspects of the integration (from 19:26).
August 19, 2015: Big Data Analytics with Cassandra and Spark by Brian Hess, Senior Product Manager of Analytics, DataStax
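The division of labor the webinar covers, real-time collection on the Cassandra side and batch analytics on the Spark side, can be mimicked with a stdlib-only stand-in. None of this is the actual Spark or Cassandra API; the partitioned dict and the helper functions are my illustrative substitutes.

```python
# Stdlib stand-in for the Cassandra + Spark combination: events land in a
# store partitioned by key in real time, then an analytics pass runs
# map/reduce-style over all partitions.
from collections import defaultdict

# "Cassandra" side: real-time writes, partitioned by sensor id.
events_by_sensor = defaultdict(list)

def write_event(sensor_id: str, reading: float) -> None:
    events_by_sensor[sensor_id].append(reading)

# "Spark" side: a batch analytics pass over all partitions.
def average_per_sensor() -> dict:
    return {s: sum(r) / len(r) for s, r in events_by_sensor.items()}

for sid, val in [("s1", 10.0), ("s1", 20.0), ("s2", 5.0)]:
    write_event(sid, val)
print(average_per_sensor())  # {'s1': 15.0, 's2': 5.0}
```

In the real stack, the Spark-Cassandra connector keeps the analytics tasks data-local to the partitions, which is the point of pairing the two.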
September 23, 2015: DataStax Announces Strategic Collaboration with Microsoft, company press release
- DataStax delivers a leading fully-distributed database for public and private cloud deployments
- DataStax Enterprise on Microsoft Azure enables developers to develop, deploy and monitor enterprise-ready IoT, Web and mobile applications spanning public and private clouds
- Scott Guthrie, EVP Cloud and Enterprise, Microsoft, to co-deliver Cassandra Summit 2015 keynote
SANTA CLARA, CA – September 23, 2015 – (Cassandra Summit 2015) DataStax, the company that delivers Apache Cassandra™ to the enterprise, today announced a strategic collaboration with Microsoft to deliver Internet of Things (IoT), Web and mobile applications in public, private or hybrid cloud environments. With DataStax Enterprise (DSE), a leading fully-distributed database platform, available on Azure, Microsoft’s cloud computing platform, enterprises can quickly build high-performance applications that can massively scale and remain operationally simple across public and private clouds, with ease and at lightning speed.
Click to Tweet: #DataStax Announces Strategic Collaboration with @Microsoft at #CassandraSummit bit.ly/1V8KY4D
PERSPECTIVES ON THE NEWS
“At Microsoft we’re focused on enabling customers to run their businesses more productively and successfully,” said Scott Guthrie, Executive Vice President, Cloud and Enterprise, Microsoft. “As more organizations build their critical business applications in the cloud, DataStax has proved to be a natural Azure partner through their ability to enable enterprises to build solutions that can scale across thousands of servers which is necessary in today’s hyper-scale cloud environment.”
“We are witnessing an increased adoption of DataStax Enterprise deployments in hybrid cloud environments, so closely aligning with Microsoft benefits any organization looking to quickly and easily build high-performance IoT, Web and mobile apps,” said Billy Bosworth, CEO, DataStax. “Working with a world-class organization like Microsoft has been an incredible experience and we look forward to continuing to work together to meet the needs of enterprises looking to successfully transition their business to the cloud.”
“As a leader in providing information and insight in critical areas that shape today’s business landscape, we knew it was critical to transform our back-end business processes to address scale and flexibility” said Graham Lammers, Director, IHS. “With DataStax Enterprise on Azure we are now able to create a next generation big data application to support the decision-making process of our customers across the globe.”
BUILD SIMPLE, SCALABLE AND ALWAYS-ON APPS IN RECORD SPEED
To address the ever-increasing demands of modern businesses transitioning from on-premise to hybrid cloud environments, the DataStax Enterprise on Azure on-demand cloud database solution provides enterprises with both development- and production-ready Bring Your Own License (BYOL) DSE clusters that can be launched in minutes on the Microsoft Azure Marketplace using Azure Resource Manager (ARM) templates. This enables the building of high-performance IoT, Web and mobile applications that can predictably scale across global Azure data centers with ease and at remarkable speed. Additional benefits include:
- Hybrid Deployment: Easily move DSE workloads between data centers, service providers and Azure, and build hybrid applications that leverage resources across all three.
- Simplicity: Easily manage, develop, deploy and monitor database clusters by eliminating data management complexities.
- Scalability: Quickly replicate online applications globally across multiple data centers into the cloud/hybrid cloud environment.
- Continuous Availability: DSE’s peer-to-peer architecture offers no single point of failure. DSE also provides maximum flexibility to distribute data where it’s needed most by replicating data across multiple data centers, the cloud and mixed cloud/on-premise environments.
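The "replicating data across multiple data centers" benefit works in the spirit of Cassandra's NetworkTopologyStrategy: you declare how many copies each datacenter keeps. The sketch below is a simplified illustration, with invented node and datacenter names, not the real placement algorithm.

```python
# Simplified per-datacenter replica placement, in the spirit of (but not
# identical to) Cassandra's NetworkTopologyStrategy: rf_by_dc declares how
# many copies each datacenter holds.

def place_replicas(nodes_by_dc: dict, rf_by_dc: dict) -> dict:
    placement = {}
    for dc, rf in rf_by_dc.items():
        # Take the first rf nodes in each datacenter (real Cassandra also
        # spreads replicas across racks; omitted here for brevity).
        placement[dc] = nodes_by_dc[dc][:rf]
    return placement

print(place_replicas(
    {"onprem": ["op1", "op2", "op3"], "azure-west": ["az1", "az2", "az3"]},
    {"onprem": 2, "azure-west": 2},
))
```

Declaring replication per datacenter is what lets the same live data be active on-premises and in Azure at once, rather than one side being a passive copy.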
MICROSOFT ENTERPRISE CLOUD ALLIANCE & FAST START PROGRAM
DataStax also announced it has joined Microsoft’s Enterprise Cloud Alliance, a collaboration that reinforces DataStax’s commitment to provide the best set of on-premise, hosted and public cloud database solutions in the industry. The goal of Microsoft’s Enterprise Cloud Alliance partner program is to create, nurture and grow a strong partner ecosystem across a broad set of Enterprise Cloud Products delivering the best on-premise, hosted and Public Cloud solutions in the industry. Through this alliance, DataStax and Microsoft are working together to create enhanced enterprise-grade offerings for the Azure Marketplace that reduce the complexities of deployment and provisioning through automated ARM scripting capabilities.
Additionally, as a member of Microsoft Azure’s Fast Start program, created to help users quickly deploy new cloud workloads, DataStax users receive immediate access to the DataStax Enterprise Sandbox on Azure for a hands-on experience testing out DSE on Azure capabilities. DataStax Enterprise Sandbox on Azure can be found here.
Cassandra Summit 2015, the world’s largest gathering of Cassandra users, is taking place this week and Microsoft Cloud and Enterprise Executive Vice President Scott Guthrie, DataStax CEO Billy Bosworth, and Apache Cassandra Project Chair and DataStax Co-founder and CTO Jonathan Ellis, will deliver the conference keynote at 10 a.m. PT on Wednesday, September 23. The keynote can be viewed at DataStax.com.
DataStax delivers Apache Cassandra™ in a database platform purpose-built for the performance and availability demands for IoT, Web and mobile applications. This gives enterprises a secure, always-on database technology that remains operationally simple when scaling in a single datacenter or across multiple datacenters and clouds.
With more than 500 customers in over 50 countries, DataStax is the database technology of choice for the world’s most innovative companies, such as Netflix, Safeway, ING, Adobe, Intuit and eBay. Based in Santa Clara, Calif., DataStax is backed by industry-leading investors including Comcast Ventures, Crosslink Capital, Lightspeed Venture Partners, Kleiner Perkins Caufield & Byers, Meritech Capital, Premji Invest and Scale Venture Partners. For more information, visit DataStax.com or follow us @DataStax.
September 30, 2014: Why Datastax’s increasing presence threatens Oracle’s database by Anne Shields at Market Realist
Must know: An in-depth review of Oracle’s 1Q15 earnings (Part 9 of 12)
Datastax databases are built on open-source technologies
Datastax is a California-based database management company. It offers an enterprise-grade NoSQL database that seamlessly and securely integrates real-time data with Apache Cassandra. Databases built on Apache Cassandra offer more flexibility than traditional databases. Even in case of calamities and uncertainties, like floods and earthquakes, data is available due to its replication at other data centers. NoSQL and Cassandra are open-source software.
Cassandra database was developed by Facebook (FB) to handle its enormous volumes of data. The technology behind Cassandra draws on earlier work at Amazon (AMZN) and Google (GOOGL). Oracle’s MySQL (ORCL), Microsoft’s SQL Server (MSFT), and IBM’s DB2 (IBM) are the traditional databases present in the market.
Huge amounts of funds raised in the open-source technology database space
Datastax raised $106 million in September 2014 to expand its database operations. MongoDB Inc. and Couchbase Inc.—both open-source NoSQL database developers—raised $231 million and $115 million, respectively, in 2014. According to Market Research Media, a consultancy firm, spending on NoSQL technology in 2013 was less than $1 billion. It’s expected to reach $3.4 billion by 2020. This explains why this segment is attracting such huge investments.
Oracle’s dominance in the database market is uncertain
Oracle claims it’s a market leader in the relational database market, with a revenue share of 48.3%. In 2013, it launched Oracle Database 12C. According to Oracle, “Oracle Database 12c introduces a new multitenant architecture that simplifies the process of consolidating databases onto the cloud; enabling customers to manage many databases as one — without changing their applications.” To know in detail about Database 12c, please click here.
In July 2013, DataStax announced that dozens of companies have migrated from Oracle databases to DataStax databases. Customers cited scalability, disaster avoidance, and cost savings as the reasons for shifting databases. Datastax databases’ rising popularity jeopardizes Oracle’s dominant position in the database market.
September 24, 2015: Building a better experience for Azure and DataStax customers by Matt Rollender, VP Cloud Strategy, DataStax, Inc. on the Microsoft Azure blog
Cassandra Summit is in high gear this week in Santa Clara, CA, representing the largest NoSQL event of its kind! This is the largest Cassandra Summit to date. With more than 7,000 attendees (both onsite and virtual), this is the first time the Summit is a three-day event with over 135 speaking sessions. This is also the first time DataStax will debut a formalized Apache Cassandra™ training and certification program in conjunction with O’Reilly Media. All incredibly exciting milestones!
We are excited to share another milestone. Yesterday, we announced our formal strategic collaboration with Microsoft. Dedicated DataStax and Microsoft teams have been collaborating closely behind the scenes for more than a year on product integration, QA testing, platform optimization, automated provisioning, and characterization of DataStax Enterprise (DSE) on Azure, and more to ensure product validation and a great customer experience for users of DataStax Enterprise on the Azure cloud. There is strong coordination across the two organizations – very close executive, field, and technical alignment – all critical components for a strong partnership.
This partnership is driven and shaped by our joint customers. Our customers oftentimes begin their journey with on-premise deployments of our database technology and then have a requirement to move to the cloud – Microsoft is a fantastic partner to help provide the flexibility of a true hybrid environment along with the ability to migrate to and scale applications in the cloud. Additionally, Microsoft has significant breadth regarding their data centers – customers can deploy in numerous Azure data centers around the globe, in order to be ‘closer’ to their end users. This is highly complementary to DataStax Enterprise software as we are a peer-to-peer distributed database and our customers need to be close to their end users with their always-on, always available enterprise applications.
To highlight a couple of joint customers and use cases we have First American Title and IHS, Inc. First American is a leading provider of title insurance and settlement services with revenue over $5B. They ingest and store the largest number (billions) of real estate property records in the industry. Accessing, searching and analyzing large data-sets to get relevant details quickly is the new way they want to do business – to provide better information back to their customers in real-time and allow end users to easily search through the property records on-line. They chose DSE and Azure because of the large data requirements and because of the need to continue to scale the application.
A second great customer and use case is IHS, Inc., a $2B revenue-company that provides information and analysis to support the decision-making process of businesses and governments. This is a transformational project for IHS as they are building out an ‘internet age’ parts catalog – it’s a next generation big data application, using NoSQL, non-relational technology and they want to deploy in the cloud to bring the application to market faster.
As you can see, we are enabling enterprises to engage their customer like never before with their always on, highly available and distributed applications. Stay tuned for more as we move forward together in the coming months!
For additional information, go to http://www.datastax.com/marketplace-microsoft-azure to try out the DataStax Enterprise Sandbox on Azure.
See also DataStax Enterprise Cluster Production on Microsoft Azure Marketplace
September 23, 2015: Making Cassandra Do Azure, But Not Windows by Timothy Prickett Morgan, Co-Editor and Co-Founder of The Next Platform
When Microsoft says that it is embracing Linux as a peer to Windows, it is not kidding. The company has created its own Linux distribution for switches used to build the Azure cloud, and it has embraced Spark in-memory processing and Cassandra as its data store for its first major open source big data project – in this case to help improve the quality of its Office365 user experience. And now, Microsoft is embracing Cassandra, the NoSQL data store originally created by Facebook when it could no longer scale the MySQL relational database to suit its needs, on the Azure public cloud.
Billy Bosworth, CEO at DataStax, the entity that took over steering development of and providing commercial support for Cassandra, tells The Next Platform that the deal with Microsoft has a number of facets, all of which should help boost the adoption of the enterprise-grade version of Cassandra. But the key one is that the Global 2000 customers that DataStax wants to sell support and services to are already quite familiar with both Windows Server in their datacenters and they are looking to burst out to the Azure cloud on a global scale.
“We are seeing a rapidly increasing number of our customers who need hybrid cloud, keeping pieces of our DataStax Enterprise on premise in their own datacenters and they also want to take pieces of that same live transactional data – not replication, but live data – and in the Azure cloud as well,” says Bosworth. “They have some unique capabilities, and one of the major requirements of customers is that even if they use cloud infrastructure, it still has to be distributed by the cloud provider. They can’t just run Cassandra in one availability zone in one region. They have to span data across the globe, and Microsoft has done a tremendous job of investing in its datacenters.”
With the Microsoft agreement, DataStax is now running its wares on the three big clouds, with Amazon Web Services and Google Compute Engine already certified able to run the production-grade Cassandra. And interestingly enough, Microsoft is supporting the DataStax implementation of Cassandra on top of Linux, not Windows. Bosworth says that while Cassandra can be run on Windows servers, DataStax does not recommend putting DataStax Enterprise (DSE), the commercial release, on Windows. (It does have a few customers who do, nonetheless, and it supports them.) Bosworth adds that DataStax and the Cassandra community have been “working diligently” for the past year to get a Windows port of DSE completed and that there has been “zero pressure” for the Microsoft Azure team to run DSE on anything other than Linux.
It is important to make the distinction between running Cassandra and other elements of DSE on Windows and having optimized drivers for Cassandra for the .NET programming environment for Windows.
“All we are really talking about is the ability to run the back-end Cassandra on Linux or Windows, and to the developer, it is irrelevant what that back end is running on,” explains Bosworth. “This takes away some of that friction, and what we find is that on the back end, we just don’t find religious conviction about whether it should run on Windows or Linux, and this is different from five years ago. We sell mostly to enterprises, and we have not had one customer raise their hand and say they can’t use DSE because it does not run on Windows.”
What is more important is the ability to seamlessly put Cassandra on public clouds and spread transactional data around for performance and resiliency reasons – the same reasons for which Facebook created Cassandra in the first place.
What Is In The Stack, Who Uses It, And How
The DataStax Enterprise distribution does not just include the Apache Cassandra data store, but has an integrated search engine that is API compatible with the open source Solr search engine and in-memory extensions that can speed up data accesses by anywhere from 30X to 100X compared to server clusters using flash SSDs or disk drives. The Cassandra data store can be used to underpin Hadoop, allowing it to be queried by MapReduce, Hive, Pig, and Mahout, and it can also underpin Spark and Spark Streaming as their data stores if customers decide to not go with the Hadoop Distributed File System that is commonly packaged with a Hadoop distribution.
It is hard to say for sure how many organizations are running Cassandra today, but Bosworth reckons that it is on the order of tens of thousands worldwide, based on a number of factors. DataStax does not do any tracking of its DataStax Community edition because it wants a “frictionless download” like many open source projects have. (Developers don’t want software companies to see what tools they are playing with, even though they might love open source code.) DataStax provides free training for Cassandra, however, where it does keep track, and developers are consuming over 10,000 units of this training per month, so that probably indicates that the Cassandra installed base (including tests, prototypes, and production) is in the five figures.
DataStax itself has over 500 paying customers – now including Microsoft after its partner tried to build its own Spark-Cassandra cluster using open source code and decided that the supported versions were better thanks to the extra goodies that DataStax puts into its distro. DataStax has 30 of the Fortune 100 using its distribution of Cassandra in one form or another, and it is always for transactional, rather than batch analytic, jobs and in most cases also for distributed data stores that make use of the “eventual consistency” features of Cassandra to replicate data across multiple clusters. The company has another 600 firms participating in its startup program, which gives young companies freebie support on the DSE distro until they hit a certain size and can afford to start kicking some cash into the kitty.
The largest installation of Cassandra is running at Apple, which as we previously reported has over 75,000 nodes, with clusters ranging in size from hundreds to over 1,000 nodes and with a total capacity in the petabytes range. Netflix, which used to employ the open source Cassandra, switched to DSE last May and had over 80 clusters with more than 2,500 nodes supporting various aspects of its video distribution business. In both cases, Cassandra is very likely housing user session state data as well as feeding product or play lists and recommendations or doing faceted search for their online customers.
We are always intrigued to learn how customers are actually deploying tools such as Cassandra in production and how they scale it. Bosworth says that it is not uncommon to run a prototype project on as few as ten nodes, and when the project goes into production, to see it grow to dozens to hundreds of nodes. The midrange DSE clusters range from maybe 500 to 1,000 nodes and there are some that get well over 1,000 nodes for large-scale workloads like those running at Apple.
In general, Cassandra does not, like Hadoop, run on disk-heavy nodes. Remember, the system was designed to support hot transactional data, not to become a lake with a mix of warm and cold data that would be sifted in batch mode as is still done with MapReduce running atop Hadoop.
The typical node configuration has changed as Cassandra has evolved and improved, says Robin Schumacher, vice president of products at DataStax. But before getting into feeds and speeds, Schumacher offered this advice. “There are two golden rules for Cassandra. First, get your data model right, and second, get your storage system right. If you get those two things right, you can do a lot wrong with your configuration or your hardware and Cassandra will still treat you right. Whenever we have to dive in and help someone out, it is because they have just moved over a relational data model or they have hooked their servers up to a NAS or a SAN or something like that, which is absolutely not recommended.”
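The "get your data model right" rule Schumacher states is usually explained as query-first, denormalized modeling: instead of normalized tables joined at read time, you keep a table keyed by the exact lookup the application performs. A minimal sketch (the table and field names are hypothetical, and a plain dict stands in for a Cassandra partition):

```python
# Query-first modeling sketch: rows are pre-joined at write time and
# grouped under the partition key the app will query by, so every read
# is a single-partition lookup with no join.

# One partition per user; each row already carries everything a read needs.
orders_by_user = {
    "alice": [
        {"order_id": 1, "item": "disk", "price": 99},
        {"order_id": 2, "item": "ssd", "price": 199},
    ],
}

def orders_for(user: str):
    # Single-partition read, the access pattern Cassandra is built around.
    return orders_by_user.get(user, [])

print(len(orders_for("alice")))  # 2
```

Moving a relational model over unchanged, the failure mode Schumacher mentions, would instead scatter one logical read across many partitions.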
Only four years ago, because of the limitations in Cassandra (which like Hadoop and many other analytics tools is coded in Java), the rule of thumb was to put no more than 512 GB of disk capacity onto a single node. (It is hard to imagine such small disk capacities these days, with 8 TB and 10 TB disks.) The typical Cassandra node has two processors, with somewhere between 12 and 24 cores, and has between 64 GB and 128 GB of main memory. Customers who want the best performance tend to go with flash SSDs, although you can do all-disk setups, too.
Fast forward to today, and Cassandra can make use of a server node with maybe 5 TB of capacity for a mix of reads and writes, and if you have a write intensive application, then you can push that up to 20 TB. (DataStax has done this in its labs, says Schumacher, without any performance degradation.) Pushing the capacity up is important because it helps reduce server node count for a given amount of storage, which cuts hardware and software licensing and support costs. Incidentally, only a quarter of DSE customers surveyed said they were using spinning disks, but disk drives are fine for certain kinds of log data. SSDs are used for most transactional data, but the bits that are most latency sensitive should use DSE to store data on PCI-Express flash cards, which have lower latency.
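The node-count arithmetic behind that capacity point is worth making explicit; the 100 TB data set below is an illustrative number of my choosing, while the per-node capacities come from the article.

```python
# Why denser nodes cut costs: fewer servers for the same data set means
# less hardware and fewer software licenses to pay for.
import math

def nodes_needed(total_tb: float, per_node_tb: float) -> int:
    return math.ceil(total_tb / per_node_tb)

# 512 GB/node (old rule of thumb) vs 5 TB (mixed) vs 20 TB (write-heavy):
for per_node in (0.5, 5, 20):
    print(per_node, "TB/node ->", nodes_needed(100, per_node), "nodes for 100 TB")
```

For a 100 TB store, the old 512 GB ceiling implies 200 nodes, while 5 TB nodes need 20 and 20 TB nodes only 5, before accounting for replication.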
Schumacher says that in most cases, the commercial-grade DSE Cassandra is used for a Web or mobile application, and a DSE cluster is not set up for hosting multiple applications; rather, companies have a different cluster for each use case. (As is the case with Apple and Netflix.) Most DSE shops make use of the eventual-consistency replication features of Cassandra to span multiple datacenters with their data stores, spanning anywhere from eight to twelve datacenters with their transactional data.
Here’s where it gets interesting, and why Microsoft is relevant to DataStax. Only about 30 percent of the DSE installations are running on premises. The remaining 70 percent are running on public clouds. About half of DSE customers are running on Amazon Web Services, with the remaining 20 percent split more or less evenly between Google Compute Engine and Microsoft Azure. If DataStax wants to grow its business, the easiest way to do that is to grow along with AWS, Compute Engine, and Azure.
So Microsoft and DataStax are sharing their roadmaps and coordinating development of their respective wares, and will be doing product validation, benchmarking, and optimization. The two will be working on demand generation and marketing together, too, and aligning their compensation to sell DSE on top of Azure and, eventually, on top of Windows Server for those who want to run it on premises.
In addition to announcing the Microsoft partnership at the Cassandra Summit this week, DataStax is also releasing its DSE 4.8 stack, which includes certification for Cassandra to be used as the back end for the new Spark 1.4 in-memory analytics tool. DSE Search gets a performance boost for live indexing, and running DSE instances inside of Docker containers has been improved. The stack also includes Titan 1.0, the graph database overlay for Cassandra, HBase, and BerkeleyDB that DataStax got through its acquisition of Aurelius back in February. DataStax is also previewing Cassandra 3.0, which will include support for JSON documents, role-based access control, and a lot of little tweaks that will make the storage more efficient, DataStax says. It is expected to ship later this year.
Justin Rosenstein of Asana: Be happy in a project-oriented teamwork environment made free of e-mail based communication hassle
Get Organized: Using Asana in Business [PCMag YouTube channel, Feb 24, 2014]
Steven Sinofsky, former head of Microsoft Office and (later) Windows at Microsoft:
We’ve all seen examples of the collaborative process playing out poorly by using email. There’s too much email and no ability to track and manage the overall work using the tool. Despite calls to ban the process, what is really needed is a new tool. So Asana is one of many companies working to build tools that are better suited to the work than the one we currently all collectively seem to complain about.
in Don’t ban email—change how you work! [Learning by Shipping, Jan 31, 2014]
Asana is a simple example of an easy-to-use and modern tool that decreases (to zero) email flow, allows for everyone to contribute and align on what needs to be done, and to have a global view of what is left to do.
in You’re doing it wrong [Learning by Shipping, April 10, 2014] and Shipping is a Feature: Some Guiding Principles for People That Build Things [Learning by Shipping, April 17, 2014]
Making e-mail communication easier [Fox Business Video]
May. 06, 2014 – 3:22 – Asana co-founder Justin Rosenstein weighs in on his new email business.
How To Collaborate Effectively With Asana [Forbes YouTube channel, Feb 26, 2013]
Dustin Moskovitz: How Asana Gets Work Done [Forbes YouTube channel, Feb 26, 2013]
Do Great Things: Keynote by Justin Rosenstein of Asana | Disrupt NY 2014 [TechCrunch YouTube channel, May 5, 2014]
Asana’s Justin Rosenstein: “I Flew Coach Here.” | Disrupt NY 2014 [TechCrunch YouTube channel, May 5, 2014]
How we use Asana [asana blog, Oct 9, 2013]
We love to push the boundaries of what Asana can do. From creating meeting agendas to tracking bugs to maintaining snacks in the refrigerator, the Asana product is (unsurprisingly) integral to everything we do at Asana. We find many customers are also pushing the boundaries of Asana to fit their teams’ needs and processes. Since Asana was created to be flexible and powerful enough for every team, nothing makes us more excited than hearing about these unique use cases.
Recently, we invited some of our Bay Area-based customers to our San Francisco HQ to share best practices with one another and hear from our cofounder Justin Rosenstein about the ways we use Asana at Asana. We’re excited to pass on this knowledge through some video highlights from the event. You can watch the entire video here: The Asana Way to Coordinate Ambitious Projects with Less Effort
Capture steps in a Project
“The first thing we always do is create a Project that names what we’re trying to accomplish. Then we’ll get together as a team and think of, ‘What is every single thing we need to accomplish between now and the completion of that Project?’ Over the course of the Project, all of the Tasks end up getting assigned.”
“Typically when I start my day, I’ll start by looking at all the things that are assigned to me. I’ll choose a few that I want to work on today. I try to be as realistic as possible, which means adding half as many things as I am tempted to add. After putting those into my ‘Today’ view, there are often a couple of other things I need to do. I just hit enter and add a few more tasks.”
Forward emails to Asana
“Because I want Asana to be the source of truth for everything I do, I want to put emails into my task list and prioritize them. I’ll just take the email and forward it to email@example.com. We chose ‘x’ so it wouldn’t conflict with anything else in your address book. Once I send that, it will show up in Asana with the attachments and everything right intact.”
Run great meetings
“We maintain one Project per meeting. If I’m looking at my Task list and see a Task I want to discuss at the meeting, I’ll just use Quick Add (tab + Q) to put the Task into the correct Project. Then when the meeting comes around, everything that everyone wants to talk about has already been constructed ahead of time.”
“Often a problem comes up and someone asks, ‘Who’s responsible for that?’ So instead, we’ve built out a list of areas of responsibility (AoRs), which is all the things that someone at the company has to be responsible for. By having AoRs, we distribute responsibility. We can allow managers to focus on things that are more specific to management and empower everyone at the company to be a leader in their own field.”
Background on https://asana.com/
How did it all start and progress?
asana demo & vision talk [Robert Marquardt YouTube channel, Feb 15, 2011]
The Asana Vision & Demo [asana blog, Feb 7, 2011]
We recently hosted an open house at our offices in San Francisco, where we showed the first public demo of Asana and deep-dived into the nuances of the product, the long-term mission that drives us, how the beta’s going, and more. We were really excited to be able to share what we’ve been working on and why we’re so passionate about it, and hope you enjoy the above video of the talk:
Asana will be available more broadly later this year. In the meantime, …
Introducing Asana: The Modern Way to Work Together [asana blog, Nov 2, 2011]
Asana is a modern web application that keeps teams in sync, a shared task list where everyone can capture, organize, track, and communicate what they are working on in service of their common goal. Rather than trying to stay organized through the tedious grind of emails and meetings, teams using Asana can move faster and do more — or even take on bigger and more interesting goals.
How Asana Works:
Asana re-imagines the way we work together by putting the fundamental unit of productivity – the task – at the center. Breaking down ambitious goals into small pieces, assigning ownership of those tasks, and tracking them to completion is how things get built, from software to skyscrapers. With Asana, you can:
- capture everything your team is planning and doing in one place. When tasks and the conversations about them are collected together, instead of spread around emails, documents, whiteboards, and notebooks, they become the shared, trusted, collective memory for your organization.
- keep your team in sync on the priorities, and what everyone is working on. When you have a single shared view of a project’s priorities, along with an accurate view into what each person is working on and when, everyone on the team knows exactly what matters, and what work remains between here and the goal.
- get the right information at the right time. Follow tasks, and you’ll receive emails as their status evolves. Search, and you’ll see the full activity feed of all the discussions and changes to a task over its history. Now, it’s easy to stay on top of the details — without asking people to forward you a bunch of email threads.
Building tools for teamwork [asana blog, Nov 22, 2013]
Our co-founder, Justin, recently wrote in Wired about why we need to rethink the tools we use to work together. The article generated a lot of interesting comments, from ideas on knowledge management to fatigue with the “meeting lifestyle,” to this protest on the typical office culture:
“Isn’t the root of this problem that, within our own organizations, we fiercely guard information and our decision-making processes? Email exchanges and invite-only meetings shut out others– forcing the need for follow-up conversations, summary reports, and a trail of other status/staff meetings to relay content already covered some place/some time before.”
To reach its goals, we think a team needs clarity of purpose, plan and responsibility. Technology and tools can help us reach that kind of clarity, but only if they target the right problem. From their roles at Facebook, Asana’s founders have extensive knowledge of social networks, and the social graph technology they rely on. But Asana isn’t a social network. Why? Because, as Justin outlines, the social graph doesn’t target the problem of work:
Our personal and professional lives, even if they overlap, have two distinct goals — and they require different “graphs.”
For our personal lives, the goal is love (authentic interpersonal connection), and that requires a social graph with people at the center. For our work lives, the goal is creation (working together to realize our collective potential), and that requires a work graph, with the work at the center.
Don’t get me wrong: Human connection is valuable within a business. But it should be in service to the organizational function of getting work done, and doesn’t need to be the center of the graph.
So, how does this change the experience for you and your teammates? A work graph means having all the information you need when you need it. Instead of blasting messages at the whole team, like “Hey, has anyone started working on this yet?”, you should be able to efficiently find out exactly who’s working on that task and how much progress they’ve made. That’s the target Asana is aiming for. Read Justin’s full Wired article.
Organizations in Asana [asana blog, May 1, 2013]
Today, we’re excited to be launching a collection of new features aimed at helping companies use and support Asana across their entire enterprise. We call it Organizations.
Since we began, Asana has been on a mission to help great teams achieve more ambitious goals. We started 18 months ago with our free service, targeted at smaller teams and even individuals – helping them get and stay organized.
When we launched our first premium tiers six months later, we enabled medium sized teams and companies – think 10s to 100s of people – to go further with Asana. In the year between then and now, we’ve been continuously amazed by all the places and ways Asana is being used to organize a team: in industries as diverse as education, healthcare, finance, technology, and manufacturing; in companies from two-person partnerships to Fortune 100 enterprises; and in dozens of countries representing every continent but the frozen one. There’s a lot of important work being organized in Asana.
But we’re still just getting started – there remain teams that we haven’t been ready to support: the largest teams, those that grow from 100s to 1,000s of people. While it would be remarkable if it only took a small number of coworkers to design and manufacture electric cars, synthesize DNA, or deliver healthcare to villages across the globe – these missions are complex, and require more people to be involved in them to succeed. Many of the teams using Asana today are inside these bigger organizations, and they’ve been asking for Asana to work at enterprise-scale. So for the past several months, we’ve been working on just that.
Stories from our first year [asana blog, Nov 12, 2012]
… When we launched a year ago, we had an ambitious mission: to create a shared task management platform that empowers teams of like-minded people to do great things. … In the course of our first year, tens of thousands of teams looking for a better way to work together have adopted Asana. …
… we collected three of these stories from three distinct kinds of teams:
– a tech startup [Foursquare],
– a fast-growing organic food company [Bare Fruit & Sundia] and
– a leading Pacific Coast aquarium [Aquarium of the Bay].
Foursquare Launches 5.0
Right around the time Foursquare passed 100 employees over the last year, we started building Foursquare 5.0. This update was a big deal: we were overhauling Foursquare’s core mechanics, evolving from check-ins towards the spontaneous discovery of local businesses. As we built the new app, we needed a way to gather feedback from the entire team.
We tried what felt like every collaboration tool around. Group emails were a mess. Google Docs was impossible to parse. We’d heard about Asana and decided to give it a shot.
Using Asana, we were easily able to collect product feedback and bugs from everyone in the company, then parse, discuss, distribute and prioritize the work. It became an indispensable group communication tool.
Foursquare 5.0 was a giant success, and we couldn’t have done it without Asana.
–Noah Weiss, Product Manager
Then, Of Course, There Is Us
It’s an understatement to say that we rely on Asana. We use our own product to manage every function of our business. Asana is where we plan, capture ideas, build meeting agendas, prioritize our product roadmap, document which bugs to fix and list the snacks to buy. It’s our CRM, our editorial calendar, our Applicant Tracking System, and our new-hire orientation system. Every team in the company – from product, design, and engineering to sales and marketing to recruiting and user operations – relies on the product we are building to stay in sync, connect our individual tasks to the bigger picture and accomplish our collective goals.
Q&A: Rising Realty Partners builds their business with Asana [asana blog, Feb 7, 2014]
As our business expanded, we found ourselves relying heavily on email, faxes, and even FedEx to communicate with each other and collaborate with outside parties. We needed a better way to organize, prioritize and communicate around our work, and we found the answer in Asana.
I can’t imagine how complex our communications would have been if we weren’t using Asana. We had dozens of people internally, and more than 50 people externally, all involved in making this deal happen. Having all of that communication in Asana significantly cut down on the craziness.
Because of Asana’s Dropbox integration, our workflow is now fast, intuitive and organized — something that was impossible to achieve over email. For the acquisition, we used Asana and Dropbox simultaneously to keep track of everything; from what each team member was doing, to the current status of each transaction, to keeping a history of all related documents. We had more than 18,000 items in Dropbox that we would link to in Asana instead of attaching them in email. We removed more than 30 gigabytes of information per recipient from our inboxes and everything was neatly organized around the work we were doing in Asana. This meant that the whole team always had the latest and most relevant information.
For this entire project, maybe one percent of our total internal communication was happening in email. With Asana, anyone in the company could look at any aspect of the project, see where it stood, and add their input. No one had to remember to ‘cc’ or ‘reply all’.
The success of this deal was largely due to Asana and we plan to use it in future acquisitions – Asana has become essential to our team’s success.
Our iPhone App Levels Up [asana blog, Sept 6, 2012]
Until recently, we’ve focused most of our energy on the browser-based version of Asana. But, in the last few months, even as we’ve launched major new features in our web application, we’ve been putting much more time into improving the mobile experience. In June, we made several meaningful architectural improvements to pave the way for bigger and better things and hinted that these changes were in the works.
Today, we’ve taken the next step in that direction: Version 2.0 of our iPhone app is in the App Store now. We are really proud of this effort – almost everyone at Asana played a part in this release. This new version is a top-to-bottom redesign that really puts the power of the desktop web version of Asana right in your pocket.
Asana comes to Android [asana blog, Feb 28, 2013]
Five months ago, we launched our first bona fide mobile app, for the iPhone, and we’ve been steadily improving it ever since. Focusing on a single platform at first allowed us to be meticulous about our mobile experience, adding new features and honing the design until we knew it was something people loved. After strong positive feedback from our customers and a solid rating in the iTunes App Store, we knew it was time.
Today, we are happy to announce that Asana for Android is here. You can get it right now in the Google Play store
As of today (May 8, 2014) there are 70 employees and 15 open positions. The company has 4 investors: Benchmark Capital, Andreessen Horowitz, Founders Fund and Peter Thiel. The first two put $9 million in November 2009. Then Founders Fund and Peter Thiel added to that $28 million in July 2012. Reuters reported on that in Facebook alumni line up $28 million for workplace app Asana [July 23, 2012]:
Asana, a Silicon Valley start-up, has lined up $28 million in a financing round led by PayPal co-founder Peter Thiel and his Founders Fund, the company said.
The funding round values the workplace-collaboration company at $280 million, a person familiar with the matter said.
“This investment allows us to attract the best and brightest designers and engineers,” said Asana co-founder Justin Rosenstein, who said that in turn would help the company build on its goal of making interaction among its client-companies’ employees easier.
Asana launched the free version last year of its company management software that makes it easier to collaborate on projects. It introduced a paid, premium service earlier this year. It declined to give revenue figures, but said “hundreds” of customers had upgraded to the premium version.
Although Rosenstein and co-founder Dustin Moskovitz are alumni of social-network Facebook– Moskovitz co-founded the service with his Harvard roommate Mark Zuckerberg – they were quick to distance Asana from social networking.
Instead, they say, they view the company as an alternative to email, in-person meetings, physical whiteboards, and spreadsheets.
“That’s what we see as our competition,” said Rosenstein. “Replacing those technologies.”
With its latest funding round, Asana has now raised a total of $38 million from investors including Benchmark Capital and Andreessen Horowitz.
Thiel, who got to know Moskovitz and Rosenstein thanks to his early backing of Facebook, had already invested in Asana when it raised its “angel” round in early 2009. Now, his high-profile Founders Fund is investing and Thiel is joining Asana’s board.
Facebook has 901 million monthly users and revenue last year of $3.7 billion. But its May initial public offering disappointed many investors after it priced at $38 per share and then quickly fell. It closed on Friday at $28.76.
Many investors speculate that start-ups will have to accept lower valuations in the wake of the Facebook IPO. The Asana co-founders said the terms of their latest funding round were set before Facebook debuted on public markets.
A few of Facebook’s longtime employees have gone on to work on their own ventures.
Bret Taylor, formerly chief technology officer, said last month he was leaving to start his own company.
Dave Morin, who joined Facebook in 2008 from Apple, left in 2010 to found social network Path. Facebook alumni Adam D’Angelo and Charlie Cheever left in 2009 to start Quora, their question-and-answer company, which is also backed by Thiel.
Another former roommate of Zuckerberg’s, Chris Hughes, also left a few years ago and coordinated online organizing for Barack Obama’s 2008 presidential campaign. Now, he is publisher of the New Republic magazine.
Matt Cohler, who joined Facebook from LinkedIn early in 2005, joined venture capital firm Benchmark Capital in 2008. His investments there include Asana and Quora.
Core technology used
Luna, our in-house framework for writing great web apps really quickly [asana blog, Feb 2, 2010]
At Asana, we’re building a Collaborative Information Manager that we believe will make it radically easier for groups of people to get work done. Writing a complex web application, we experienced pain all too familiar to authors of “Web 2.0″ software (and interactive software in general): there were all kinds of extremely difficult programming tasks that we were doing over and over again for every feature we wanted to write. So we’re developing Lunascript — an in-house programming language for writing rich web applications in about 10% of the time and code it takes today.
Check out the video we made » [rather an article about Luna as of Nov 2, 2011]
Release the Kraken! An open-source pub/sub server for the real-time web [asana blog, March 5, 2013]
Today, we are releasing Kraken, the distributed pub/sub server we wrote to handle the performance and scalability demands of real-time web apps like Asana.
Before building Kraken, we searched for an existing open-source pub/sub solution that would satisfy our needs. At the time, we discovered that most solutions in this space were designed to solve a much wider set of problems than we had, and yet none were particularly well-suited to solve the specific requirements of real-time apps like Asana. Our team had experience writing routing-based infrastructure and ultimately decided to build a custom service that did exactly what we needed – and nothing more.
The decision to build Kraken paid off. For the last three years, Kraken has been fearlessly routing messages between our servers to keep your team in sync. During this time, it has yet to crash even once. We’re excited to finally release Kraken to the community!
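Kraken itself is a distributed server, but the contract at its core, where clients subscribe to topics and receive whatever is published to them, fits in a few lines. The following is a toy in-process Python sketch of that pub/sub pattern, not Kraken’s actual code or API:

```python
from collections import defaultdict

class PubSub:
    """Minimal in-process publish/subscribe hub.

    A real pub/sub server like Kraken adds routing between machines,
    connection handling, and delivery performance on top of exactly
    this topic -> subscribers mapping.
    """

    def __init__(self):
        self._subscribers = defaultdict(list)  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        # Fan the message out to everyone subscribed to this topic.
        for callback in self._subscribers[topic]:
            callback(message)

# Usage: two clients watching the same (hypothetical) task topic stay in sync.
hub = PubSub()
seen = []
hub.subscribe("task:42", seen.append)
hub.subscribe("task:42", lambda m: seen.append(m.upper()))
hub.publish("task:42", "status changed")
print(seen)  # prints ['status changed', 'STATUS CHANGED']
```

The appeal of building only this narrow contract, as the Asana team describes, is that everything the wider solutions bolt on (persistence, brokers, wildcard routing) can be left out when real-time fan-out is all the application needs.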
Issues Moving to Amazon’s Elastic Load Balancer [asana blog, June 5, 2012]
Asana’s infrastructure runs almost entirely on top of Amazon Web Services (AWS). AWS provides us with the ability to launch managed production infrastructure in minutes with simple API calls. We use AWS for servers, databases, monitoring, and more. In general, we’ve been very happy with AWS. A month ago, we decided to use Amazon’s Elastic Load Balancer service to balance traffic between our own software load balancers.
Announcing the Asana API [asana blog, April 19, 2012]
Today we are excited to share that you can now add and access Asana data programmatically using our simple REST API.
The Asana API lets you build a variety of applications and scripts to integrate Asana with your business systems, show Asana data in other contexts, and create tasks from various locations.
Here are some examples of the things you can build:
- Source Control Integration to mark a Task as complete and add a link to the code submission as a comment when submitting code.
- A desktop app that shows the Tasks assigned to you
- A dashboard page that shows a visual representation of complete and incomplete Tasks in a project
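For a feel of the shape such scripts take, the sketch below constructs (without sending) an authenticated GET against the API’s REST root for the “tasks assigned to you” case. The token and workspace id are placeholders, and the bearer-style header is an assumption for illustration; consult the current Asana API documentation for the exact endpoints and authentication scheme:

```python
import json
import urllib.parse
import urllib.request

API_ROOT = "https://app.asana.com/api/1.0"  # the API's REST root
TOKEN = "0/123abc"  # placeholder -- substitute your own API credential

def build_request(path, params=None):
    """Construct (but do not send) an authenticated GET request."""
    url = API_ROOT + path
    if params:
        url += "?" + urllib.parse.urlencode(params)
    req = urllib.request.Request(url)
    req.add_header("Authorization", "Bearer " + TOKEN)
    return req

# A desktop app's "Tasks assigned to me" query might start like this;
# the workspace id is made up.
req = build_request("/tasks", {"assignee": "me", "workspace": "12345"})
print(req.full_url)
# To actually send it and unwrap the response envelope:
#   tasks = json.load(urllib.request.urlopen(req))["data"]
```

From there, the dashboard and source-control examples above are mostly a matter of choosing different endpoints and rendering or posting the returned JSON.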
Asana comes to Internet Explorer [asana blog, Oct 16, 2013]
Microsoft BUILD 2014 Day 2: “rebranding” to Microsoft Azure and moving toward a comprehensive set of fully-integrated backend services
- “Rebranding” into Microsoft Azure from the previous Windows Azure
- Microsoft Azure Momentum on the Market
- The new Azure Management Portal (preview)
- New Azure features: IaaS, web, mobile and data announcements
Microsoft Announces New Features for Cloud Computing Service [CCTV America YouTube channel, April 3, 2014]
Day two of the Microsoft Build developer conference in San Francisco wrapped up with the company announcing 44 new services. Most of those are based on Microsoft Azure – its cloud computing platform that manages applications across data centers. CCTV’s Mark Niu reports from San Francisco.
Watch the first 10 minutes of this presentation for a brief summary of the latest state of Microsoft Azure: #ChefConf 2014: Mark Russinovich, “Microsoft Azure Group” [Chef YouTube channel, April 16, 2014]
Then here is a fast talk and Q&A on Azure with Scott Guthrie after his keynote presentation at BUILD 2014:
Cloud Cover Live – Ask the Gu! [jlongo62 YouTube channel, published on April 21, 2014]
The original: Cloud Cover Live – Ask the Gu! [Channel 9, April 3, 2014]
[2:45:47] long video record of the Microsoft Build Conference 2014 Day 2 Keynote [MSFT Technology News YouTube channel, recorded on April 3, published on April 7, 2014]
1. “Rebranding” into Microsoft Azure from the previous Windows Azure
Yes, you’ve noticed right: the Windows prefix is gone, and the full name is now simply Microsoft Azure! The change happened on April 3, as evidenced by the change of the cover photo on the Facebook page, now also called Microsoft Azure:
And it happened without any announcement or explanation as even the last, April 1 Microsoft video carried the Windows prefix: Tuesdays with Corey //build Edition
as well as the last, March 14 video ad: Get Your Big Bad Wolf On (Extended)
2. Microsoft Azure Momentum on the Market
The day began with Scott Guthrie, Executive Vice President, Microsoft Cloud and Enterprise group, touting Microsoft’s progress with Azure over the 18 months since:
… we talked about our new strategy with Azure and our new approach, a strategy that enables me to use both infrastructure as a service and platform as a service capabilities together, a strategy that enables developers to use the best of the Windows ecosystem and the best of the Linux ecosystem together, and one that delivers unparalleled developer productivity and enables you to build great applications and services that work with every device …
- Last year … shipped more than 300 significant new features and releases
- … we’ve also been hard at work expanding the footprint of Azure around the world. The green circles you see on the slide here represent Azure regions, which are clusters of datacenters close together, and where you can go ahead and run your application code. Just last week, we opened two new regions, one in Shanghai and one in Beijing. Today, we’re the only global, major cloud provider that operates in mainland China. And by the end of the year, we’ll have more than 16 public regions available around the world, enabling you to run your applications closer to your customers than ever before.
- More than 57 percent of the Fortune 500 companies are now deployed on Azure.
- Customers run more than 250,000 public-facing websites on Azure, and we now host more than 1 million SQL databases on Azure.
- More than 20 trillion objects are now stored in the Azure storage system. We have more than 300 million users, many of them — most of them, actually, enterprise users, registered with Azure Active Directory, and we process now more than 13 billion authentications per week.
- We have now more than 1 million developers registered with our Visual Studio Online service, which is a new service we launched just last November.
Let’s go beyond the big numbers, though, and look at some of the great experiences that have recently launched and are using the full power of Azure and the cloud.
“Titanfall” was one of the most eagerly anticipated games of the year, and had a very successful launch a few weeks ago. “Titanfall” delivers an unparalleled multiplayer gaming experience, powered using Azure.
Let’s see a video of it in action, and hear what the developers who built it have to say.
[Titanfall and the Power of the Cloud [xbox YouTube channel, April 3, 2014]]
One of the key bets the developers of “Titanfall” made was for all game sessions on the cloud. In fact, you can’t play the game without the cloud, and that bet really paid off.
As you heard in the video, it enables much, much richer gaming experiences. Much richer AI experiences. And the ability to tune and adapt the game as more users use it.
To give you a taste of the scale, “Titanfall” had more than 100,000 virtual machines deployed and running on Azure on launch day. Which is sort of an unparalleled size in terms of a game launch experience, and the reviews of the game have been absolutely phenomenal.
Another amazing experience that recently launched and was powered using Azure was the Sochi Olympics delivered by NBC Sports.
NBC used Azure to stream all of the games both live and on demand to both Web and mobile devices. This was the first large-scale live event that was delivered entirely in the cloud with all of the streaming and encoding happening using Azure.
Traditionally, with live encoding, you typically run in an on-premises environment because it’s so latency dependent. With the Sochi Olympics, Azure enabled NBC to not only live encode in the cloud, but also do it across multiple Azure regions to deliver high-availability redundancy.
More than 100 million people watched the online experience, and more than 2.1 million viewers alone watched it concurrently during the U.S. versus Canada men’s hockey match, a new world record for online HD streaming.
RICK CORDELLA [Senior Vice President and General Manager of NBC Sports Digital]: The company bets about $1 billion on the Olympics each time it goes off. And we have 17 days to recoup that investment. Needless to say, there is no safety net when it comes to putting this content out there for America to enjoy. We need to make sure that content is out there, that it’s quality, that our advertisers and advertisements are being delivered to it. There really is no going back if something goes wrong.
The decision for that was taken more than a year ago: Windows Azure Teams Up With NBC Sports Group [Microsoft Azure YouTube channel, April 9, 2013]
3. The new Azure Management Portal (preview)
But in fact a new way of providing a comprehensive set of fully-integrated backend services had significantly bigger impact on the audience of developers. According to Microsoft announces new cloud experience and tools to deliver the cloud without complexity [The Official Microsoft Blog, April 3, 2014]
The following post is from Scott Guthrie, Executive Vice President, Cloud and Enterprise Group, Microsoft.
On Thursday at Build in San Francisco, we took an important step by unveiling a first-of-its kind cloud environment within Microsoft Azure that provides a fully integrated cloud experience – bringing together cross-platform technologies, services and tools that enable developers and businesses to innovate with enterprise-grade scalability at startup speed. Announced today, our new Microsoft Azure Preview [Management] Portal is an important step forward in delivering our promise of the cloud without complexity.
When cloud computing was born, it was hailed as the solution that developers and business had been waiting for – the promise of a quick and easy way to get more from your business-critical apps without the hassle and cost of infrastructure. But as the industry transitions toward mobile-first, cloud-first business models and scenarios, the promise of “quick and easy” is now at stake. There’s no question that developing for a world that is both mobile-first and cloud-first is complicated. Developers are managing thousands of virtual machines, cobbling together management and automation solutions, and working in unfamiliar environments just to make their apps work in the cloud – driving down productivity as a result.
Many cloud vendors tout the ease and cost savings of the cloud, but they leave customers without the tools or capabilities to navigate the complex realities of cloud computing. That’s why today we are continuing down a path of rapid innovation. In addition to our groundbreaking new Microsoft Azure Preview [Management] Portal, we announced several enhancements our customers need to fully tap into the power of the cloud. These include:
- Dozens of enhancements to our Azure services across Web, mobile, data and our infrastructure services
- Further commitment to building the most open and flexible cloud with Azure support for automation software from Puppet Labs and Chef.
- We’ve removed the throttle from our Application Insights preview, making it easier for all developers to build, manage and iterate on their apps in the cloud with seamless integration into the IDE
<For details see the separate section 4. New Azure features: IaaS, web, mobile and data announcements>
Here is a brief presentation by a Brazilian specialist: Microsoft Azure [Management] Portal First Touch [Bruno Vieira YouTube channel, April 3, 2014]
From Microsoft evolves the cloud experience for customers [press release, April 3, 2014]
… Thursday at Build 2014, Microsoft Corp. announced a first-of-its-kind cloud experience that brings together cross-platform technologies, services and tools, enabling developers and businesses to innovate at startup speed via a new Microsoft Azure Preview [Management] Portal.
In addition, the company announced several new milestones in Visual Studio Online and .NET that give developers access to the most complete platform and tools for building in the cloud. Thursday’s announcements are part of Microsoft’s broader vision to erase the boundaries of cloud development and operational management for customers.
“Developing for a mobile-first, cloud-first world is complicated, and Microsoft is working to simplify this world without sacrificing speed, choice, cost or quality,” said Scott Guthrie, executive vice president at Microsoft. “Imagine a world where infrastructure and platform services blend together in one seamless experience, so developers and IT professionals no longer have to work in disparate environments in the cloud. Microsoft has been rapidly innovating to solve this problem, and we have taken a big step toward that vision today.”
One simplified cloud experience
The new Microsoft Azure Preview [Management] Portal provides a fully integrated experience that will enable customers to develop and manage an application in one place, using the platform and tools of their choice. The new portal combines all the components of a cloud application into a single development and management experience. New components include the following:
Simplified Resource Management. Rather than managing standalone resources such as Microsoft Azure Web Sites, Visual Studio Projects or databases, customers can now create, manage and analyze their entire application as a single resource group in a unified, customized experience, greatly reducing complexity while enabling scale. Today, the new Azure Resource Manager is also being released through the latest Azure SDK for customers to automate their deployment and management from any client or device.
Integrated billing. A new integrated billing experience enables developers and IT pros to take control of their costs and optimize their resources for maximum business advantage.
Gallery. A rich gallery of applications and services from Microsoft and the open source community, this integrated marketplace of free and paid services enables customers to leverage the ecosystem to be more agile and productive.
Visual Studio Online. Microsoft announced key enhancements through the Microsoft Azure Preview [Management] Portal, available Thursday. This includes Team Projects supporting greater agility for application lifecycle management and the lightweight editor code-named “Monaco” for modifying and committing Web project code changes without leaving Azure. Also included is Application Insights, an analytics solution that collects telemetry data such as availability, performance and usage information to track an application’s health. Visual Studio integration enables developers to surface this data from new applications with a single click.
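The “single resource group” idea described in the press release above can be sketched in a few lines. This is an illustrative model only, not the actual Azure Resource Manager API; all class, resource, and parameter names here are hypothetical:

```python
# Illustrative sketch of the resource-group idea: an application's
# website, database, and other parts are created, managed, and
# deleted as one unit instead of as standalone resources.
class ResourceGroup:
    def __init__(self, name):
        self.name = name
        self.resources = {}

    def add(self, kind, name, **settings):
        self.resources[name] = {"kind": kind, **settings}
        return self  # allow chained declarations

    def deploy(self):
        # A real deployment would provision each resource; here we
        # just mark everything in the group as provisioned together.
        for res in self.resources.values():
            res["state"] = "provisioned"
        return list(self.resources)

    def delete(self):
        # Tearing down the group removes every resource it contains.
        self.resources.clear()

group = (ResourceGroup("my-app")
         .add("website", "my-app-site", runtime="ASP.NET")
         .add("sql-database", "my-app-db", size_gb=5))
deployed = group.deploy()
```

The point of the model is the lifecycle: one deploy provisions the whole application, and one delete cleans it all up, which is what reduces the management complexity the announcement describes.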
Building an open cloud ecosystem
Showcasing Microsoft’s commitment to choice and flexibility, the company announced new open source partnerships with Chef and Puppet Labs to run configuration management technologies in Azure Virtual Machines. Using these community-driven technologies, customers will now be able to more easily deploy and configure in the cloud. In addition, today Microsoft announced support for Java applications on Microsoft Azure Web Sites, giving Microsoft even broader support for Web applications.
Bill Staples then came on stage to show off the new Azure [management] portal design and features. Bill walked through a number of the new innovations in the portal, such as improved UX, app insights, “blade” views [the “blade” term is used for the dropdown that allows a drilldown], etc. A screen shot of the new portal is shown below.
Bill also walked through the comprehensive analytics (such as compute and billing) that are now available on the portal. He also walked through “Application Insights,” which is a great way to instrument your code in both the portal and in your code with easy-to-use, pre-defined code snippets. He completed his demo walkthrough by showing the Azure [management] portal as a “NOC” [Network Operations Center] view on a big-screen TV.
BILL STAPLES at [1:43:39]: Now, to conclude the operations part of this demo, I wanted to show you an experience for how the new Azure Portal works on a different device. You’ve seen it on the desktop, but it works equally well on a tablet device, that is really touch friendly. Check it out on your Surface or your iPad, it works great on both devices.
But we’re thinking as well if you’ve got a big-screen TV or a projector lying around your team room, you might want to think about putting the Microsoft Azure portal as your own personal NOC.
In this case, I’ve asked the Office developer team if we could have access to their live site log. So they made me promise, do not hit the stop button or the delete button, which I promised to do.
[1:44:24] This is actually the Office developer log site. And you can see it’s got almost 10 million hits already today running on Azure Websites. So very high traffic.
They’ve customized it to show off the browser usage on their website. Imagine we’re in a team Scrum with the Office developer guys and we check out, you know, how is the website doing? We’ve got some interesting trends here.
In fact, there was a spike of sessions it looks like going on about a week ago. And page views, that’s kind of a small part. It would be nice to know which page it was that spiked a week ago. Let’s go ahead and customize that.
This screen is kind of special because it has a touch screen. So I can go ahead and let’s make that automatically expand there. Now we see a bigger view. Wow, that was a really big spike last week. What page was that? We can click into it. We get the full navigation experience, same on the desktop, as well as, oh, look at that. There’s a really popular blog post that happened about a week ago. What was that? Something about announcing Office on the iPad you love. Makes sense, huh? So we can see the Azure Portal in action here as the Office developer team might imagine it. [1:45:44]
The last thing I want to show is the Azure Gallery.
We populated the gallery with all of the first-party Microsoft Azure services, as well as the [services from] great partners that we’ve worked with so far in creating this gallery.
And what you’re seeing right here is just the beginning. We’ve got the core set of DevOps experiences built out, as well as websites, SQL, and MySQL support. But over the coming months, we’ll be integrating all of the developer and IT services in Microsoft as well as the partner services into this experience.
Let me just conclude by reminding us what we’ve seen. We’ve seen a first-of-its-kind experience from Microsoft that fuses our world-class developer services together with Azure to provide an amazing dev-ops experience where you can enjoy the entire lifecycle from development, deployment, operations, gathering analytics, and iterating right here in one experience.
We’ve seen an application-centric experience that brings together all the dev platform and infrastructure services you know and love into one common shell. And we’ve seen a new application model that you can describe declaratively. And through the command line or programmatically, build out services in the cloud with tremendous ease. [1:47:12]
More information on the new Azure [Management] Portal:
- From Visual Studio Online Integration in the Azure [management] portal [by Brian Harry (MSFT) on MSDN Blogs, April 3, 2014]
Today, at Build, we unveiled a new Azure [Management] Portal experience we are building. I want to give you some insights into the work that the VS Online team is doing to help with it. I’m not on the Azure team and am no expert on how they’d like to describe it to the world, so please take any comments I make here about the new Azure portal as my perspective on it and not necessarily an official one.
Bill Staples first presented to me, almost a year ago, an idea of creating a new portal experience for Azure designed to be an optimal experience for DevOps. It would provide everything a DevOps team needs to do modern cloud-based development: capabilities to provision dev and test resources; development and collaboration capabilities; build, release and deployment capabilities; application telemetry and management capabilities; and more. Pretty quickly it became clear to me that if we could do it, it would be awesome: an incredibly productive and easy way for devs to do soup-to-nuts app development.
What we demoed today (and made available via http://portal.azure.com) is the first incarnation of that. My team (the VS Online Team) has worked very hard over the past many months with the Azure team to build the beginnings of the experience we hope to bring to you. It’s very early and it’s nowhere near done, but it’s definitely something we’d love to start getting some feedback on.
For now, it’s limited to Azure websites, SQL databases and a subset of the VS Online capabilities. If you are a VS Online/TFS user, think of this as a companion to Visual Studio, Visual Studio Online and all of the tools you are used to. When you create a team project in the Azure portal, it’s a VS Online Team Project like any other and is accessible from the Azure portal, the VS Online web UI, Visual Studio, Eclipse and all the other ways your Visual Studio Online assets are available. For now, though, there are a few limitations – which we are working hard to address. We are in the middle of adding Azure Active Directory support to Visual Studio Online and, for a variety of reasons, chose to limit the new portal to only work with VS Online accounts linked to Azure Active Directory.
The best way to ensure this is just to create a new Team Project and a new VS Online account from within the new Azure portal. You will need to be logged in to the Azure portal with an identity known to your Azure Active Directory tenant, and to add new users, rather than adding them directly in Visual Studio Online, you will add them through Azure Active Directory. One of the ramifications of this, for now, is that you can’t use an existing VS Online account in the new portal – you must create a new one. Clearly that’s a big limitation and one we are working hard to remove. We will enable you to link existing VS Online accounts to Active Directory; we just don’t have it yet – stay tuned.
I’ll do a very simple tour. You can also watch Brian Keller’s Channel9 video.
- Enabling DevOps with Azure and Visual Studio Online [jlongo62 YouTube channel, published on April 21, 2014]
- Building your Dream DevOps Dashboard with the new Azure Preview Portal [by Brian Keller [MSFT] on MSDN Blogs, April 10, 2014]
- Azure [Management] Portal Preview and Visual Studio Online: Adding a user [by Buck Hodges (MSFT) on MSDN Blogs, April 3, 2014]
4. New Azure features: IaaS, web, mobile and data announcements
According to Scott Guthrie, Executive Vice President, Microsoft Cloud and Enterprise group:
[IaaS] First up, let’s look at some of the improvements we’re making with our infrastructure features and some of the great things we’re enabling with virtual machines.
Azure enables you to run both Windows and Linux virtual machines in the cloud. You can run them as stand-alone servers, or join them together to a virtual network, including one that you can optionally bridge to an on-premises networking environment.
This week, we’re making it even easier for developers to create and manage virtual machines in Visual Studio without having to leave the VS IDE: You can now create, destroy, manage and debug any number of VMs in the cloud. (Applause.)
Prior to today, it was possible to create reusable VM image templates, but you had to write scripts and manually attach things like storage drives to them. Today, we’re releasing support that makes it super-easy to capture images that can contain any number of storage drives. Once you have this image, you can then very easily take it and create any number of VM instances from it, really fast, and really easy. (Applause.)
Starting today, you can also now easily configure VM images using popular frameworks like Puppet, Chef, and our own PowerShell Desired State Configuration (DSC) tools. These tools enable you to avoid having to create and manage lots of separate VM images. Instead, you can define common settings and functionality using modules that can cut across every type of VM you use.
You can also create modules that define role-specific behavior, and all these modules can be checked into source control and they can also then be deployed to a Puppet Master or Chef server.
And one of the things we’re doing this week is making it incredibly easy within Azure to basically spin up a server farm and be able to automatically deploy, provision and manage all of these machines using these popular tools.
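The declarative, desired-state idea behind tools like Puppet, Chef, and PowerShell DSC that Guthrie describes can be sketched as follows. This is a toy illustration of the convergence model, not any real tool’s API; the package and service names are arbitrary:

```python
# Minimal sketch of desired-state configuration: you declare what a
# machine should look like, and the tool applies only the changes
# needed to get there. Running it again on a compliant machine is a
# no-op, which is what makes the approach idempotent.
desired_state = {
    "packages": {"nginx", "git"},
    "services_running": {"nginx"},
}

def converge(machine, desired):
    """Apply only the changes needed to reach the desired state."""
    actions = []
    for pkg in sorted(desired["packages"] - machine["packages"]):
        machine["packages"].add(pkg)
        actions.append(f"install {pkg}")
    for svc in sorted(desired["services_running"] - machine["services_running"]):
        machine["services_running"].add(svc)
        actions.append(f"start {svc}")
    return actions  # an empty list means the machine was already compliant

machine = {"packages": {"git"}, "services_running": set()}
first_run = converge(machine, desired_state)   # installs and starts nginx
second_run = converge(machine, desired_state)  # no-op: already converged
```

Because the same declaration can be applied to every machine in a farm, this is also what makes the “spin up a server farm and automatically provision all of these machines” scenario above manageable.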
We’re also excited to announce the general availability of our auto-scale service, as well as a bunch of great virtual networking capabilities including point-to-site VPN support going GA, new dynamic routing, subnet migration, as well as static internal IP addresses. And we think the combination of this really gives you a very flexible environment, as you saw, a very open environment, and lets you run pretty much any Windows or Linux workload in the cloud.
So we think infrastructure as a service is super-flexible, and it really kind of enables you to manage your environments however you want.
We also, though, provide prebuilt services and runtime environments that you can use to assemble your applications as well, and we call these platform as a service [PaaS] capabilities.
One of the benefits of these prebuilt services is that they enable you to focus on your application and not have to worry about the infrastructure underneath it.
We handle patching, load balancing, high availability and auto scale for you. And this enables you to work faster and do more.
What I want to do is just spend a little bit of time talking through some of these platform as a service capabilities, so we’re going to start talking about our Web functionality here today.
[Web] One of the most popular PaaS services that we now have on Windows Azure is something we call the Azure Website Service. This enables you to very easily deploy Web applications written in a variety of different languages and host them in the cloud. We support .NET, Node.js, PHP, Python, and we’re excited this week to also announce that we’re adding Java language support as well.
This enables you as a developer to basically push any type of application into Azure into our runtime environment, and basically host it to any number of users in the cloud.
Couple of the great features we have with Azure include auto-scale capability. What this means is you can start off running your application, for example, in a single VM. As more load increases to it, we can then automatically scale up multiple VMs for you without you having to write any script or take any action yourself. And if you get a lot of load, we can scale up even more.
You can basically configure how many VMs you maximally want to use, as well as what the burn-down rate is. And this is great because it enables you to not only handle large traffic spikes and make sure that your apps are always responsive, but the nice thing about auto scale is that when the traffic drops off, or maybe during the night when it’s a little bit less, we can automatically scale down the number of machines that you need, which means that you end up saving money and not having to pay as much.
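The auto-scale behavior described above, scaling out under load up to a configured maximum and scaling back in when traffic drops, can be sketched as a simple policy. This is illustrative only; the real service is configured in the portal, and the per-VM capacity figure here is an arbitrary assumption:

```python
# Toy autoscale policy: pick the instance count needed to keep each VM
# below an assumed capacity, capped at the configured maximum and
# floored at one instance.
def autoscale(requests_per_sec, max_vms, per_vm_capacity=100):
    # Ceiling division: how many VMs keep each under capacity?
    needed = max(1, -(-requests_per_sec // per_vm_capacity))
    return min(max_vms, needed)

daytime = autoscale(requests_per_sec=350, max_vms=5)   # scales out to 4
night = autoscale(requests_per_sec=40, max_vms=5)      # scales in to 1
spike = autoscale(requests_per_sec=2000, max_vms=5)    # capped at the max, 5
```

The cap is what the keynote calls configuring “how many VMs you maximally want to use”; the scale-in branch is what saves money overnight.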
One of the really cool features that we’ve recently introduced with websites is something we call our staging support. This solves kind of a pretty common problem with any Web app today, which is there’s always someone hitting it. And how do you stage the deployments of new code that you roll out so that you don’t ever have a site in an intermediate state and that you can actually deploy with confidence at any point in the day?
And what staging support enables inside of Azure is for you to create a new staging version of your Web app with a private URL that you can access and use to test. And this allows you to basically deploy your application to the staging environment, get it ready, test it out before you finally send users to it, and then basically you can push one button or send a single command called swap where we’ll basically rotate the incoming traffic from the old production site to the new staged version.
What’s nice is we still keep your old version around. So if you discover once you go live you still have a bug that you missed, you can always swap back to the previous state. Again, this allows you to deploy with a lot of confidence and make sure that your users are always seeing a consistent experience when they hit your app.
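The staging/swap pattern just described can be captured in a small sketch (an illustrative model of the concept, not the actual Azure Websites API):

```python
# Sketch of the staging-slot pattern: deploy to a private staging slot,
# test it there, then atomically swap it into production. The old
# production version stays in the staging slot, so swapping back is an
# instant rollback.
class Site:
    def __init__(self, version):
        self.production = version
        self.staging = None

    def deploy_to_staging(self, version):
        self.staging = version  # reachable only via the private URL

    def swap(self):
        # Atomic exchange: users never see an intermediate state.
        self.production, self.staging = self.staging, self.production

site = Site("v1")
site.deploy_to_staging("v2")
site.swap()   # v2 now serves live traffic; v1 is kept in staging
site.swap()   # found a bug? one more swap and v1 serves again
```

Because the swap only redirects traffic between two already-deployed versions, both the cut-over and the rollback are effectively instantaneous.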
Another cool feature that we’ve recently introduced is a feature we call Web Jobs. This enables you to run background tasks that aren’t tied to an HTTP request and response. So if a piece of work takes a while to run, this is a great way to offload it so that you’re not stalling your actual request-response thread pool.
Basically, you know, common scenario we see for a lot of people is if they want to process something in the background, when someone submits something, for example, to the website, they can go ahead and simply drop an item into a queue or into the storage account, respond back down to the user, and then with one of these Web jobs, you can very easily run background code that can pull that queue message and actually process it in an offline way.
And what’s nice about Web jobs is you can run them now in the same virtual machines that host your websites. What that means is you don’t have to spin up your own separate set of virtual machines, and again, enables you to save money and provides a really nice management experience for it.
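The queue-plus-background-worker pattern Guthrie describes can be sketched like this (a self-contained illustration of the flow, not the WebJobs SDK itself):

```python
# Sketch of the WebJobs pattern: the web request handler drops a
# message on a queue and responds to the user immediately; a
# background job drains the queue later, off the request/response path.
from collections import deque

queue = deque()    # stand-in for an Azure storage queue
processed = []

def handle_request(order_id):
    queue.append(order_id)   # cheap: just enqueue the work item
    return "202 Accepted"    # respond to the user right away

def background_job():
    # In the WebJobs model this runs in the same VMs that host the
    # website, so no separate worker machines are needed.
    while queue:
        processed.append(queue.popleft())

responses = [handle_request(i) for i in (1, 2, 3)]
background_job()
```

The user-facing latency is just the enqueue; the slow processing happens when the background job runs, which is exactly the thread-pool relief described above.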
The last cool feature that we’ve recently introduced is something we call traffic manager support. With Traffic Manager, you can take advantage of the fact that Azure runs around the world, and you can spin up multiple instances of your website in multiple different regions around the world with Azure.
What you can then do is use Traffic Manager so you can have a single DNS entry that you then map to the different instances around the world. And what Traffic Manager does is gives you a really nice way that you can actually automatically, for example, route all your North America users to one of the North American versions of your app, while people in Europe will get routed to the European version of your app. That gives you better performance, response and latency.
Traffic Manager is also smart enough so that if you ever have an issue with one of the instances of your app, it can automatically remove it from those rotations and send users to one of the other active apps within the system. So this gives you also a nice way you can fail over in the event of an outage.
And the great thing about Traffic Manager, now, is you can use it not just for virtual machines and cloud services, but we’ve also now enabled it to work fully with websites.
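The two Traffic Manager behaviors described above, routing users to a nearby deployment and pulling unhealthy endpoints out of rotation, can be sketched together (an illustrative model; endpoint and region names are made up):

```python
# Sketch of Traffic Manager-style routing: one DNS name fronts several
# regional deployments. Users are sent to an endpoint in their region
# when one is healthy; an unhealthy endpoint drops out of rotation and
# traffic fails over to any remaining healthy endpoint.
endpoints = {
    "us-east": {"region": "NA", "healthy": True},
    "eu-west": {"region": "EU", "healthy": True},
}

def resolve(user_region):
    healthy = {n: e for n, e in endpoints.items() if e["healthy"]}
    for name, ep in healthy.items():
        if ep["region"] == user_region:
            return name            # prefer the user's own region
    return next(iter(healthy), None)  # otherwise fail over anywhere

us_user = resolve("NA")                     # routed to the NA deployment
endpoints["us-east"]["healthy"] = False     # outage detected by probes
failover = resolve("NA")                    # automatically fails over to EU
```

In the real service the health check is an HTTP probe and the routing decision happens at DNS resolution time, but the rotation logic is essentially this.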
[From BUILD Day 2: Keynote Summary [by Steve Fox [MSFT] on MSDN Blogs, April 3, 2014]]
Scott then invited Mads Kristensen on stage to walk through a few of the features that Scott discussed at a higher level. Specifically, he walked through the new ASP.NET templates emphasizing the creation of the DB layer and then showing PowerShell integration to manage your web site. He then showed Angular integration with Azure Web sites, emphasizing easy and dynamic ways to update your site, showing deep browser and Visual Studio integration (Browser Link), where updates made in the browser show up in the code in Visual Studio. Very cool!!
He also showed how you can manage staging and production sites by using the “swap” functionality built into the Azure Web sites service. He also showed Web Jobs to show how you can also run background jobs and Traffic Manager functionality to ensure your customers have the best performing web site in their regions.
So as Mads showed, there are a lot of great features that we’re kind of unveiling this week. A lot of great announcements that go with it.
These include the general availability release of auto-scale support for websites, as well as the general availability release of our new Traffic Manager support for websites as well. As you saw there, we also have Web Job support, and one of the things that we didn’t get to demo which is also very cool is backup support so that automatically we can have both your content as well as your databases backed up when you run them in our Websites environment as well.
Lots of great improvements also coming in terms of from an offer perspective. One thing a lot of people have asked us for with Websites is the ability not only to use SSL, but to use SSL without having to pay for it. So one of the cool things that we’re adding with Websites and it goes live today is we’re including one IP address-based SSL certificate and five SNI-based SSL certificates at no additional cost to every Website instance. (Applause.)
Throughout the event here, you’re also going to hear a bunch of great sessions on some of the improvements we’re making to ASP.NET. In terms of from a Web framework perspective, we’ve got general availability release of ASP.NET MVC 5.1, Web API 2.1, Identity 2.0, as well as Web Pages 3.1. So a lot of great, new features to take advantage of.
As you saw Mads demo, a lot of great features inside Visual Studio including the ability every time you create an ASP.NET project now to automatically create an Azure Website as part of that flow. Remember, every Azure customer gets 10 free Azure Websites that you can use forever. So even if you’re not an MSDN customer, you can take advantage of that feature in order to set up a Web environment literally every time you create a new project. So pretty exciting stuff.
So that was one example of some of the PaaS capabilities that we have inside Azure.
[Mobile] I’m going to move now into the mobile space and talk about some of the great improvements that we’re making there as well.
One of the great things about Azure is the fact that it makes it really easy for you to build back ends for your mobile applications and devices. And one of the cool things you can do now is you can develop those back ends with both .NET as well as Node.js, and you can use Visual Studio or any other text editor on any other operating system to actually deploy those applications into Azure.
And once they’re deployed, we make it really easy for you to go ahead and connect them to any type of device out there in the world.
Now, some of the great things you can do with this is take advantage of some of the features that we have, which provide very flexible data handling. So we have built-in support for Azure storage, as well as our SQL database, which is our PaaS database offering for relational databases, and you can also take advantage of things like MongoDB and other popular NoSQL solutions.
We support the ability not only to reply to messages that come to us, but also to push messages to devices as well. One of the cool features that Mobile Services can take advantage of — and it’s also available as a stand-alone feature — is something we call notification hubs. And this basically allows you to send a single message to a notification hub and then broadcast it to, in some cases, millions of devices that might be registered to it.
We also support with Mobile Services a variety of flexible authentication options. So when we first launched mobile services, we added support for things like Facebook login, Google ID, Twitter ID, as well as Microsoft Accounts.
One of the things we’re excited to demo here today is Active Directory support as well. So this enables you to build new applications that you can target, for example, your employees or partners, to enable them to sign in using the same enterprise credentials that they use in an on-premises Active Directory environment.
What’s great is we’re using standard OAuth tokens as part of that. So once you authenticate, you can take that token, you can use it to also provide authorization access to your own custom back-end logic or data stores that you host inside Azure.
We’re also making it really easy so that you can also take that same token and you can use it to access Office 365 APIs and be able to integrate that user’s data as well as functionality inside your application as well.
The beauty about all of this is it works with any device. So whether it’s a Windows device or an iOS device or an Android device, you can go ahead and take advantage of this capability.
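The authenticate-once, use-the-token-everywhere flow described above can be sketched in miniature. This is purely illustrative: the functions and token strings are hypothetical stand-ins, not the Azure AD or Office 365 client libraries:

```python
# Sketch of bearer-token reuse: a device signs in once against the
# directory, then presents the same token both to the app's own
# backend and to another service that trusts the same token issuer.
VALID_TOKENS = {"token-for-alice"}   # stand-in for directory-issued tokens

def sign_in(user):
    # Stand-in for the standard OAuth sign-in flow against Azure AD.
    return f"token-for-{user}"

def my_backend(token):
    # The app's own custom backend authorizes against the same token.
    return "backend data" if token in VALID_TOKENS else "401"

def office_api(token):
    # A second service (e.g. Office 365 APIs) trusting the same issuer.
    return "calendar data" if token in VALID_TOKENS else "401"

token = sign_in("alice")
from_backend = my_backend(token)
from_office = office_api(token)
```

The design point is the single trust anchor: because both services validate tokens from the same directory, one sign-in covers them all, on any device.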
[From BUILD Day 2: Keynote Summary [by Steve Fox [MSFT] on MSDN Blogs, April 3, 2014]]
Yavor Georgiev then came on stage to walk through a Mobile Services demo. He showed off a new Mobile Services Visual Studio template, test pages with API docs, local and remote debugging capabilities, and a LOB app that enables Facilities departments to manage service requests—this showed off a lot of the core ASP.NET/MVC features along with a quick publish service to your Mobile Services service in Azure. Through this app, he showed how to use Active Directory to build the app—which prompts you to log into the app with your corp/AD credentials to use the app. He then showed how the app integrates with SharePoint/O365 such that the request leverages the SharePoint REST APIs to publish a doc to a Facilities doc repository. He also showed how you can re-use the core code through Xamarin to repurpose the code for iOS.
The app is shown here native in Visual Studio.
This app view is the cross-platform build using Xamarin.
Kudos to Yavor! This was an awesome demo that showcases how far Mobile Services has come in a short period of time—love the extensibility and the cross-platform capabilities. Very nice!
One of the things that kind of Yavor showed there is just sort of how easy it is now to build enterprise-grade mobile applications using Azure and Visual Studio.
And one of the key kind of linchpins in terms of from a technology standpoint that really makes this possible is our Azure Active Directory Service. This basically provides an Active Directory in the cloud that you can use to authenticate any device. What makes it powerful is the fact that you can synchronize it with your existing on-premises Active Directory. And we support both sync options, including back to Windows Server 2003 instances, so it doesn’t even require a relatively new Windows Server, it works with anything you’ve got.
We also support a federated option as well if you want to use ADFS. Once you set that environment up, then all your users are available to be authenticated in the cloud, and what’s great is we ship SDKs that work with all different types of devices and enable you to integrate authentication into those applications. And so you don’t even have to have your back end hosted on Azure; you can take advantage of this capability to enable single sign-on with any enterprise credential.
And what’s great is once you get that token, that same token can then be used to program against Office 365 APIs as well as the other services across Microsoft. So this provides a really great opportunity not only for building enterprise line-of-business apps, but also for ISVs that want to be able to build SaaS solutions as well as mobile device apps that integrate and target enterprise customers as well.
[From BUILD Day 2: Keynote Summary [by Steve Fox [MSFT] on MSDN Blogs, April 3, 2014]]
Scott then invited Grant Peterson from DocuSign on stage to discuss how they are using Azure, who demoed AD integration with DocuSign’s iOS app. Nice!
This is really huge for those of you building apps that are cross-platform but have big investments in AD and also provides you as developers a way to reach enterprise audiences.
So I think one of the things that’s pretty cool about that scenario is both the opportunity it offers every developer that wants to reach an enterprise audience. The great thing is all of those 300 million users that are in Azure Active Directory today and the millions of enterprises that have already federated with it are now available for you to build both mobile and Web applications against and be able to offer to them an enterprise-grade solution to all of your ISV-based applications.
That really turns one of the biggest concerns that people end up having with SaaS-delivered enterprise apps into a real asset, where you can make it super-easy for them to go ahead and integrate, and be able to do it from any device.
And one of the things you might have noticed there in the code that Grant showed was that it was actually all done on the client using Objective-C, and that’s because we have a new Azure Active Directory iOS SDK as well as an Android SDK in addition to our Windows SDK. And so you can use and integrate with Azure Active Directory from any device, any language, any tool.
Here’s a quick summary of some of the great mobile announcements that we’re making today. Yavor showed we now have .NET backend support, single sign-on with Active Directory.
One of the features we didn’t get a chance to show, but you can learn more about in the breakout talk is offline data sync. So we also now have built into Mobile Services the ability to sync and handle disconnected states with data. And then, obviously, the Visual Studio and remote debugging capabilities as well.
We’ve got not only the Azure SDKs for Azure Active Directory, but we also now have Office 365 API integration. We’re also really excited to announce the general availability of our Azure AD Premium release. This provides enterprises with management capabilities that they can also use and integrate with your applications, and enables IT to feel like they can trust the applications and the SaaS solutions that their users are using.
And then we have a bunch of great improvements with notification hubs including Kindle support as well as Visual Studio integration.
So a lot of great features. You can learn about all of them in the breakout talks this week.
So we’ve talked about Web, and we’ve talked about mobile, as part of our PaaS story.
[Data] I want to switch gears now and talk a little bit about data, which is pretty fundamental and integral to building any type of application.
And with Azure, we support a variety of rich ways to handle data, ranging from unstructured and semi-structured to relational. One of the most popular services you heard me talk about at the beginning of the talk is our SQL Database story. We’ve got over a million SQL databases now hosted on Azure. It’s a really easy way for you to spin up a database, and better yet, it’s one that we then manage for you, so we handle things like high availability and patching.
You don’t have to worry about that. Instead, you can focus on your application and really be productive.
We’ve got a whole bunch of great SQL improvements that we’re excited to announce this week. I’m going to walk through a couple of them real quickly.
One of them is we’re increasing the database size that we support with SQL databases. Previously, we only supported up to 150 gigs. We’re excited to announce that we’re increasing that to support 500 gigabytes going forward. And we’re also delivering a new 99.95 percent SLA as part of that. So this now enables you to run even bigger applications and be able to do it with high confidence in the cloud. (Applause.)
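As a rough sanity check on what that SLA figure means in practice, a 99.95 percent monthly uptime commitment works out to about 21.6 minutes of allowed downtime in a 30-day month. A back-of-the-envelope sketch (illustrative arithmetic only; the SLA's own terms define the exact measurement window):

```python
def allowed_downtime_minutes(sla_percent: float, days: int = 30) -> float:
    """Minutes of downtime permitted in a `days`-day month at `sla_percent` uptime."""
    total_minutes = days * 24 * 60  # 43,200 minutes in a 30-day month
    return total_minutes * (1 - sla_percent / 100)

print(round(allowed_downtime_minutes(99.95), 1))  # about 21.6 minutes per month
```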
Another cool feature we’re adding is something we call Self-Service Restore. I don’t know if you’ve ever worked on a database application where you’ve written code like this, hit go, and then suddenly had a very bad feeling because you realized you omitted the WHERE clause and just deleted your entire table. (Laughter.)
At that point you go looking for backups, and hopefully you have them. This is usually the point when you discover that you don’t have backups.
And one of the things that we built in as part of the Self-Service Restore feature is automatic backups for you. And we actually let you literally roll back the clock, and you can choose what time of the day you want to roll it back to. We save up to I think 31 days of backups. And you can basically rehydrate a new database based on whatever time of the day you wanted to actually restore from. And then, hopefully, your life ends up being a lot better than it started out.
This is just a built-in feature. You don’t have to turn it on. It’s just sort of built in, something you can take advantage of. (Applause.)
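The restore flow described above can be sketched as a toy in-memory store that keeps timestamped snapshots within a retention window and "rehydrates" a fresh copy from any point in time. All names here are hypothetical; the real feature operates on full databases, not Python dicts:

```python
import bisect
import copy
import time

class PointInTimeStore:
    """Toy sketch of the self-service restore idea: automatic timestamped
    snapshots, kept for a retention window, restorable into a new store."""

    def __init__(self, retention_days=31):
        self.data = {}
        self.snapshots = []  # list of (timestamp, deep copy of data), oldest first
        self.retention = retention_days * 86400

    def write(self, key, value, now=None):
        now = time.time() if now is None else now
        self.data[key] = value
        self.snapshots.append((now, copy.deepcopy(self.data)))
        # Drop snapshots that have aged out of the retention window.
        cutoff = now - self.retention
        while self.snapshots and self.snapshots[0][0] < cutoff:
            self.snapshots.pop(0)

    def restore_to(self, when):
        """Rehydrate a new store from the latest snapshot at or before `when`."""
        times = [t for t, _ in self.snapshots]
        idx = bisect.bisect_right(times, when) - 1
        if idx < 0:
            raise ValueError("no snapshot at or before that time")
        restored = PointInTimeStore()
        restored.data = copy.deepcopy(self.snapshots[idx][1])
        return restored

store = PointInTimeStore()
store.write("users", ["ann", "bob"], now=0)
store.write("users", [], now=3600)   # oops: the missing-WHERE-clause moment
restored = store.restore_to(1800)    # roll back the clock to before the mistake
```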
Another great feature that we’re building in is something we call active geo-replication. What this lets you do now is you can actually go ahead and run SQL databases in multiple Azure regions around the world. And you can set it up to automatically replicate your databases for you.
And this is asynchronous replication. You have your primary in read-write mode, and then you can have one or more secondaries in read-only mode. So you can still access the data in read-only mode elsewhere.
In the event that you have a catastrophic issue in, say, one region, say a natural disaster hits, you can go ahead and you can initiate the failover automatically to one of your secondary regions. This basically allows you to continue moving on without having to worry about data loss and gives you kind of a really nice, high-availability solution that you can take advantage of.
One of the things that’s nice about Azure’s regions is we try to make sure we have multiple regions in each geography. So, for example, we have two regions that are at least 500 miles apart in Europe, and the same in North America, and similarly with Australia, Japan and China. What that means is that if you do need to fail over, your data never leaves the geopolitical area it’s based in. If you’re hosted in Europe, you don’t have to worry about your data ever leaving Europe, and similarly for the other geopolitical entities out there.
So this gives you a way now with high confidence that you can store your data and know that you can fail over at any point in time.
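The topology described above, one read-write primary with read-only secondaries and a promote-on-failover step, can be sketched roughly as follows. This is illustrative only, not the actual SQL Database API, and replication here runs inline for clarity where the real service replicates asynchronously:

```python
class Replica:
    def __init__(self, region, read_only=True):
        self.region = region
        self.read_only = read_only
        self.rows = {}

class GeoReplicatedDB:
    """Sketch of active geo-replication: a read-write primary, read-only
    secondaries in other regions, and a failover that promotes one."""

    def __init__(self, primary_region, secondary_regions):
        self.primary = Replica(primary_region, read_only=False)
        self.secondaries = [Replica(r) for r in secondary_regions]

    def write(self, key, value):
        self.primary.rows[key] = value
        # Asynchronous in the real service; copied inline here for clarity.
        for s in self.secondaries:
            s.rows[key] = value

    def read(self, key, region=None):
        # Reads can be served from any replica, including read-only secondaries.
        for r in [self.primary] + self.secondaries:
            if region is None or r.region == region:
                return r.rows.get(key)

    def failover(self, to_region):
        """Promote a secondary to primary, e.g. after a regional outage."""
        for i, s in enumerate(self.secondaries):
            if s.region == to_region:
                s.read_only, self.primary.read_only = False, True
                self.primary, self.secondaries[i] = s, self.primary
                return
        raise ValueError("no secondary in " + to_region)

db = GeoReplicatedDB("north-europe", ["west-europe"])
db.write("orders", 42)
db.failover("west-europe")  # initiate failover to the secondary region
```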
In addition to some of these improvements with SQL databases, we also have a host of great improvements coming with HDInsight, which is our big data analytics engine. This runs a standard Hadoop distribution as a managed service, so we do all the patching and management for you.
We’re excited to announce the GA of Hadoop 2.2 support. We also have now .NET 4.5 installed and APIs available so you can now write your MapReduce jobs using .NET 4.5.
We’re also adding audit and operation history support, a bunch of great improvements with Hive, and we’re now YARN-enabling the cluster so you can actually run more software on it as well.
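The announcement above is about writing MapReduce jobs with the .NET 4.5 APIs, but the map/reduce shape itself is the same in any language. A minimal word-count sketch, mimicking a Hadoop-streaming-style mapper and reducer, shown here in Python for brevity:

```python
from collections import Counter

def map_phase(lines):
    """Mapper: emit (word, 1) pairs, as a streaming mapper would to stdout."""
    for line in lines:
        for word in line.split():
            yield word.lower(), 1

def reduce_phase(pairs):
    """Reducer: sum the counts per key, as a streaming reducer would."""
    totals = Counter()
    for word, count in pairs:
        totals[word] += count
    return dict(totals)

counts = reduce_phase(map_phase(["the cat sat", "the cat"]))
```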
And we’re also excited to announce a bunch of improvements in the storage space, including the general availability of our read-access geo-redundant storage option.
So we’ve done deep dives into a whole bunch of the Azure features.
- Announcing release of Visual Studio 2013 Update 2 RC and Azure SDK 2.3 [Windows Azure Blog, April 4, 2014]
- Deep dive: Visual Studio 2013 Update 2 RC and Azure SDK 2.3 [Windows Azure Blog, April 9, 2014]
- Azure Updates: Web Sites, VMs, Mobile Services, Notification Hubs, Storage, VNets, Scheduler, AutoScale and More [ScottGu’s Blog, April 14, 2014] [Data]
It has been a really busy last 10 days for the Azure team. This blog post quickly recaps a few of the significant enhancements we’ve made. These include:
- [Web] Web Sites: SSL included, Traffic Manager, Java Support, Basic Tier
- [IaaS] Virtual Machines: Support for Chef and Puppet extensions, Basic Pricing tier for Compute Instances
- [IaaS] Virtual Network: General Availability of DynamicRouting VPN Gateways and Point-to-Site VPN
- [Mobile] Mobile Services: Preview of Visual Studio support for .NET, Azure Active Directory integration and Offline support;
- [Mobile] Notification Hubs: Support for Kindle Fire devices and Visual Studio Server Explorer integration
- [IaaS] [Web] Autoscale: General Availability release
- [Data] Storage: General Availability release of Read Access Geo Redundant Storage
- [Mobile] Active Directory Premium: General Availability release
- Scheduler service: General Availability release
- Automation: Preview release of new Azure Automation service
All of these improvements are now available to use immediately (note that some features are still in preview). Below are more details about them:
- [Web] Azure Web Sites New Basic Pricing Tier [Windows Azure Blog, April 21, 2014]
… With the April updates to Microsoft Azure, Azure Web Sites offers a new pricing tier called Basic. The Basic pricing tier is designated for production sites, supporting smaller sites, as well as development and testing scenarios. … Which pricing tier is right for me? … The new pricing tier is a great benefit to many customers, offering some high-end features at a reasonable cost. We hope this new offering will enable a better deployment for all of you.
- [Web] Java on Azure Web Sites [Windows Azure Blog, April 4, 2014]
Microsoft is launching support for Java-based web sites on Azure Web Sites. This capability is intended to satisfy many common Java scenarios combined with the manageability and easy scaling options from Azure Web Sites.
The addition of Java is available immediately on all tiers for no additional cost. It offers new possibilities to host your pre-existing Java web applications. New Java web site development on Azure is easy using the Java Azure SDK which provides integration with Azure services.
- [Web] Introducing Web Hosting Plans for Azure Web Sites [Windows Azure Blog, April 4, 2014]
With the latest release of Azure Web Sites and the new Azure Portal Preview we are introducing a new concept: Web Hosting Plans. A Web Hosting Plan (WHP) allows you to group and scale sites independently within a subscription.
- Microsoft Azure Load Balancing Services [Windows Azure Blog, April 8, 2014]
Microsoft Azure offers load balancing services for [IaaS] virtual machines (IaaS) and [Web] cloud services (PaaS) hosted in the Microsoft Azure cloud. Load balancing allows your application to scale and provides resiliency to application failures among other benefits.
The load balancing services can be accessed by specifying input endpoints on your services, either via the Microsoft Azure Portal or via the service model of your application. Once a hosted service with one or more input endpoints is deployed in Microsoft Azure, it automatically configures the load balancing services offered by the Microsoft Azure platform. To get the resiliency and redundancy benefits of these services, you need at least two virtual machines serving the same endpoint.
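The "at least two instances behind one endpoint" guidance above comes down to spreading requests across backends so one failure doesn't take the endpoint down. A minimal round-robin sketch (illustrative only; Azure's load balancer is configured via the portal or service model, not in application code):

```python
import itertools

class RoundRobinBalancer:
    """Toy round-robin distribution across backend instances that all
    serve the same input endpoint."""

    def __init__(self, backends):
        if len(backends) < 2:
            raise ValueError("need at least two instances for resiliency")
        self._cycle = itertools.cycle(backends)

    def route(self, request):
        # Pick the next backend in rotation and hand it the request.
        backend = next(self._cycle)
        return backend, request

lb = RoundRobinBalancer(["vm-a", "vm-b"])
```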
- [Web] What’s New for ASP.NET and Web in Visual Studio 2013 Update 2 and Beyond [jlongo62 YouTube channel, published on April 22, 2014]
- [Mobile] Push Notifications Using Notification Hub and .NET Backend [Azure Mobile Services Team Blog, April 8, 2014]
When creating an Azure Mobile Service, a Notification Hub is automatically created as well, enabling large-scale push notifications to devices across any mobile platform (Android, iOS, Windows Store apps, and Windows Phone). For background on Notification Hubs, see this overview as well as these tutorials and guides, and Scott Guthrie’s blog post Broadcast push notifications to millions of mobile devices using Windows Azure Notification Hubs.
Let’s look at how devices register for notification and how to send notifications to registered devices using the .NET backend.
- [Mobile] How to create Universal applications with Azure Mobile Services that leverage push notifications and database insertion and data retrieval [by Bruno Terkalay on MSDN Blogs, April 13, 2014]
- Azure SQL Database introduces new service tiers [Windows Azure Blog, April 24, 2014]
New tiers improve customer experience and provide more business continuity options
To better serve your needs for more flexibility, Microsoft Azure SQL Database is adding new service tiers, Basic and Standard, to work alongside its Premium tier, which is currently in preview. Together these service tiers will help you more easily support the needs of database workloads and application patterns built on Microsoft Azure. … Previews for all three tiers are available today.
The Basic, Standard, and Premium tiers are designed to deliver more predictable performance for lightweight to heavyweight transactional application demands. Additionally, the new tiers offer a spectrum of business continuity features, a [Data] stronger uptime SLA at 99.95%, and larger database sizes up to 500 GB for less cost. The new tiers will also help remove costly workarounds and offer an improved billing experience for you.
- SQL Database updates coming soon to the Premium preview [Windows Azure Blog, April 4, 2014]
… [Data] Active Geo-Replication: …
… [Data] Self-service Restore: …
Stay tuned to the Azure blog for more details on SQL Database later this month!
Also, if you haven’t tried Azure SQL Database yet, it’s a great time to start and try the Premium tier! Learn more today!
- What’s new in the cluster versions provided by HDInsight? [Azure article, April 3, 2014]
Azure HDInsight now supports [Data] Hadoop 2.2 with HDInsight cluster version 3.0 and takes full advantage of this platform to provide a range of significant benefits to customers. These include, most notably:
- Microsoft Avro Library: …
[Data] YARN: A new, general-purpose, distributed, application management framework that has replaced the classic Apache Hadoop MapReduce framework for processing data in Hadoop clusters. It effectively serves as the Hadoop operating system, and takes Hadoop from a single-use data platform for batch processing to a multi-use platform that enables batch, interactive, online and stream processing. This new management framework improves scalability and cluster utilization according to criteria such as capacity guarantees, fairness, and service-level agreements.
High Availability: …
[Data] Hive performance: Order of magnitude improvements to Hive query response times (up to 40x) and to data compression (up to 80%) using the Optimized Row Columnar (ORC) format.
Pig, Sqoop, Oozie, Ambari: …