Tag Archives: Amazon Web Services
For information on OpenStack provided earlier on this blog see:
– Disaggregation in the next-generation datacenter and HP’s Moonshot approach for the upcoming HP CloudSystem “private cloud in-a-box,” with the promised HP Cloud OS based on the four-year-old OpenStack effort with others, ‘Experiencing the Cloud’, Dec 10, 2013
– Red Hat Enterprise Linux OpenStack Platform 4 delivery, and Dell as the first company to OEM it, co-engineered with Red Hat on Dell infrastructure, ‘Experiencing the Cloud’, Feb 19, 2014
To understand the OpenStack V4-level state of technology development as of June 25, 2015:
– go to my homepage: https://lazure2.wordpress.com/
– or to the OpenStack related part of Microsoft Cloud state-of-the-art: Hyper-scale Azure with host SDN — IaaS 2.0 — Hybrid flexibility and freedom, ‘Experiencing the Cloud’, July 11, 2015
May 19, 2016:
With OpenStack in tow you’ll go far — be it your house, your bank, your city or your car.
Just look at all of the exciting places we’re going:
From the phone in your pocket
The telecom industry is undergoing a massive shift, away from hundreds of proprietary devices in thousands of central offices accumulated over decades, to a much more efficient and flexible software plus commodity hardware approach. While some carriers like AT&T have already begun routing traffic from the 4G networks over OpenStack powered clouds to millions of cellphone users, the major wave of adoption is coming with the move to 5G, including plans from AT&T, Telefonica, SK Telekom, and Verizon.
We are on the cusp of a revolution that will completely re-imagine what it means to provide services in the trillion dollar telecom industry, with billions of connected devices riding on OpenStack-powered infrastructure in just a few years.
To the living room socket
The titans of TV like Comcast, DirecTV, and Time Warner Cable all rely on OpenStack to bring the latest entertainment to our homes efficiently, and innovators like DigitalFilm Tree are producing that content faster than ever thanks to cloud-based production workflows.
Your car, too, will get smart
Speaking of going places, back here on earth many of the world’s top automakers, such as BMW and the Volkswagen group, which includes Audi, Lamborghini, and even Bentley, are designing the future of transportation using OpenStack and big data. The hottest trends to watch in the auto world are electric zero-emissions cars and self-driving cars. Like the “smart city” described below, a proliferation of sensors plus connectivity calls for distributed systems to bring it all together, creating a huge opportunity for OpenStack.
And your bank will take part
Money moves faster than ever, with digital payments from startups and established players alike competing for consumer attention. Against this backdrop of enormous market change, banks must meet an increasingly rigid set of regulatory rules, not to mention growing security threats. To empower their developers to innovate while staying diligent on regs and security, financial leaders like PayPal, FICO, TD Bank, American Express, and Visa are adopting OpenStack.
Your city must keep the pace
Powering the world’s cities is a complex task and here OpenStack is again driving automation, this time in the energy sector. State Grid Corporation, the world’s largest electric utility, serves over 120 million customers in China while relying on OpenStack in production.
Looking to the future, cities will be transformed by the proliferation of fast networks combined with cheap sensors. Unlocking the power of this mix are distributed systems, including OpenStack, to process, store, and move data. Case in point: tcpcloud in Prague is helping introduce “smart city” technology by utilizing inexpensive Raspberry Pis embedded in street poles, backed by a distributed system based on Kubernetes and OpenStack. These systems give city planners insight into traffic flows of both pedestrians and cars, and even measure weather quality. By routing not just packets but people, cities are literally load balancing their way to lower congestion and pollution.
From inner to outer space
The greatest medical breakthroughs of the next decade will come from analyzing massive data sets, thanks to the proliferation of distributed systems that put supercomputer power into the hands of every scientist. And OpenStack has a huge role to play empowering researchers all over the globe: from Melbourne to Madrid, Chicago to Chennai, or Berkeley to Beijing, everywhere you look you’ll find OpenStack.
To explore this world, I recently visited the Texas Advanced Computing Center (TACC) at the University of Texas at Austin, where I toured a facility that houses one of the top 10 supercomputers in the world, code-named “Stampede.”
But what really got me excited about the future was the sight of two large OpenStack clusters: one called Chameleon, and the newest addition, Jetstream, which together put the power of more than 1,000 nodes and more than 15,000 cores into the hands of scientists at 350 universities. In fact, the Chameleon cloud was recently used in a class at the University of Arizona by students looking to discover exoplanets. Perhaps the next Neil deGrasse Tyson is out there using OpenStack to find a planet to explore for NASA’s Jet Propulsion Laboratory.
Where should we go next?
Mark Collier is OpenStack co-founder, and currently the OpenStack Foundation COO. This article was first published in Superuser Magazine, distributed at the Austin Summit.
May 9, 2016:
From OpenStack Summit Austin, Part 1: Vendors digging in for long haul by Al Sadowski, 451 Research, LLC: This report provides highlights from the most recent OpenStack Summit.
THE 451 TAKE OpenStack mindshare continues to grow for enterprises interested in deploying cloud-native applications in greenfield private cloud environments. However, its appeal is limited for legacy applications and enterprises sold on hyperscale multi-tenant cloud providers like AWS and Azure. There are several marquee enterprises with OpenStack as the central component of cloud transformations, but many are still leery of the perceived complexity of configuring, deploying and maintaining OpenStack-based architectures. Over the last few releases, processes for installation and upgrades, tooling, and API standardization across projects have improved as operators have become more vocal during the requirements phase. Community membership continues to grow on a global basis, and the supporting organization also depicts a similar geographic trend.
… Horizontal scaling of Nova is much improved, based on input from CERN and Rackspace. CERN, an early OpenStack adopter, demonstrated the ability for the open source platform to scale – it now has 165,000 cores running OpenStack. However, Walmart, PayPal and eBay are operating larger OpenStack environments.
May 18, 2015:
Walmart’s Cloud Journey by Amandeep Singh Juneja
May 19, 2015:
OpenStack Update from eBay and PayPal by Subbu Allamaraju
May 18, 2015:
Architecting Organizational Change at TD Bank by Graeme Peacock, VP Engineering, TD Bank Group
TD Bank uses cloud as catalyst for cultural change in IT
May 9, 2016: From OpenStack Summit Austin, Part 1: Vendors digging in for long haul continued:
While OpenStack may have been conceived as an open source multi-tenant IaaS, its future success will mainly come from hosted and on-premises private cloud deployments. Yes, there are many pockets of success with regional or vertical-focused public clouds based on OpenStack, but none with the scale of AWS or the growth of Microsoft Azure. Hewlett Packard Enterprise shuttered its OpenStack Helion-based public cloud, and Rackspace shifted engineering resources away from its own public cloud. Rackspace, the service provider with the largest share of OpenStack-related revenue, says its private cloud is growing in the ‘high double digits.’ Currently, 56% of OpenStack’s service-provider revenue total is public cloud-based, but we expect private cloud will account for a larger portion over the next few years.
October 21, 2015:
A new model to deliver public cloud by Bill Hill, SVP and GM, HP Cloud
December 1, 2015:
May 9, 2016: From OpenStack Summit Austin, Part 1: Vendors digging in for long haul continued:
As of the Mitaka release, two new gold members were added: UnitedStack and EasyStack, both from China. Other service providers and vendors shared their customer momentum and product updates with 451 Research during the summit. Among the highlights are:
- AT&T has cobbled together a DevOps team from 67 different organizations in order to transform itself into a software company.
- All of GoDaddy’s new servers are going into its OpenStack environment. It is also using the Ironic (bare metal) project and exploring containers on OpenStack.
- SwiftStack built a commercial product with an AWS-like consumption model using the Swift (object storage) project. It now has over 60 customers, including eBay, PayPal, Burton Snowboards and Ancestry.com.
- OVH is based in France and operates a predominantly pan-European public cloud. It added Nova compute in 2014, and currently has 75PB on Swift storage.
- Unitas Global says OpenStack-related enterprise engagements are a large part of its 100% Y/Y growth. While it does not contribute code, it is helping to develop operational efficiencies and working with Canonical to deploy ‘vanilla’ OpenStack using Juju charms. Tableau Software is a client.
- DreamHost is operating an OpenStack public cloud, DreamCompute, and is a supporter of the Astara (network orchestration) project. It claims 2,000 customers for DreamCompute and 10,000 customers for its object storage product.
- Platform9 is a unique startup delivering OpenStack as SaaS, with 20 paying customers. Clients bring their own hardware, and the software provides the management functions and takes care of patching and upgrades.
- AppFormix is a software startup focused on cloud operators and application developers that has formed a licensing agreement with Rackspace. Its analytics and capacity-planning dashboard software will now be deployed on Rackspace’s OpenStack private cloud. The software also works with Azure and AWS.
- Tesora is leveraging the Trove project to offer DBaaS. The vendor built a plug-in for Mirantis’ Fuel installer. The collaboration claims to make commercial, open source relational and NoSQL databases easier for administrators to deploy.
April 25, 2016:
AT&T’s Cloud Journey with OpenStack by Sorabh Saxena SVP, Software Development & Engineering, AT&T
OpenStack + AT&T Innovation = AT&T Integrated Cloud.
AT&T’s network has experienced enormous growth in traffic in the last several years and the trend continues unabated. Our software defined network initiative addresses the escalating traffic demands and brings greater agility and velocity to delivering features to end customers. The underlying fabric of this software defined network is AT&T Integrated Cloud (AIC).
Sorabh Saxena, AT&T’s SVP of Software Development & Engineering, will share several use cases that will highlight a multi-dimensional strategy for delivering an enterprise & service provider scale cloud. The use cases will illustrate OpenStack as the foundational element of AIC, AT&T solutions that complement it, and how it’s integrated with the larger AT&T ecosystem.
As the Senior Vice President of Software Development and Engineering at AT&T, Sorabh Saxena is leading AT&T’s transformation to a software-based company. Towards that goal, he is leading the development of platforms that include AT&T’s Integrated Cloud (AIC), API, Data, and Business Functions. Additionally, he manages delivery and production support of AT&T’s software defined network.
Sorabh and his organization are also responsible for technology solutions and architecture for all IT projects, AT&T Operation Support Systems and software driven business transformation programs that are positioning AT&T to be a digital first, integrated communications company with a best in class cost structure. Sorabh is also championing a cultural shift with a focus on workforce development and software & technology skills development.
Through Sorabh and his team’s efforts associated with AIC, AT&T is implementing an industry leading, highly complex and massively scaled OpenStack cloud. He is an advocate of OpenStack and his organization contributes content to the community that represents the needs of large enterprises and communication services providers.
April 25, 2016: And the Superuser Award goes to… AT&T takes the fourth annual Superuser Award.
AUSTIN, Texas — The OpenStack Austin Summit kicked off day one by awarding the Superuser Award to AT&T.
NTT, winners of the Tokyo edition, passed the baton onstage to the crew from AT&T.
AT&T is a legacy telco which is transforming itself by adopting virtual infrastructure and a software defined networking focus in order to compete in the market and create value for customers in the next five years and beyond. They have almost too many OpenStack accomplishments to list – read their full application here. The OpenStack Foundation launched the Superuser Awards to recognize, support and celebrate teams of end-users and operators that use OpenStack to meaningfully improve their businesses while contributing back to the community.
April 1, 2016: Austin Superuser Awards Finalist: AT&T
The legacy telecom is in the top 20 percent for upstream contributions with plans to increase this significantly in 2016.
It’s time for the community to determine the winner of the Superuser Award to be presented at the OpenStack Austin Summit. Based on the nominations received, the Superuser Editorial Advisory Board conducted the first round of judging and narrowed the pool to four finalists.
Now, it’s your turn.
The team from AT&T is one of the four finalists. Review the nomination criteria below, check out the other nominees and cast your vote before the deadline, Friday, April 8 at 11:59 p.m. Pacific Daylight Time. Voting is limited to one ballot per person.
How has OpenStack transformed your business?
AT&T is a legacy telco which is transforming itself by adopting virtual infrastructure and a software defined networking focus in order to compete in the market and create value for customers in the next five years and beyond.
- Virtualization and virtual network functions (VNFs) are of critical importance to the Telecom industry to address growth and agility. AT&T’s Domain 2.0 Industry Whitepaper released in 2013 outlines the need as well as direction.
- AT&T chose OpenStack as the core foundation of their cloud and virtualization strategy
- OpenStack has reinforced AT&T’s open source strategy and strengthened our dedication to the community as we actively promote and invest resources in OpenStack
- AT&T is committing staff and resources to drive the vision and innovation in the OpenStack and OPNFV communities to help drive OpenStack as the default cloud orchestrator for the Telecom industry
- AT&T, as a founding member of the ETSI network functions virtualization (NFV) ISG, helped drive OpenStack as the cloud orchestrator in the NFV platform framework. OpenStack was positioned as the VIM (Virtual Infrastructure Manager). This accelerated the convergence of the Telco industry onto OpenStack.
OpenStack serves as a critical foundation for AT&T’s software-defined networking (SDN) and NFV future and we take pride in the following:
- AT&T has deployed 70+ OpenStack (Juno & Kilo based) clouds globally, which are currently operational. Of the 70+ clouds, 57 are production application and network clouds.
- AT&T plans 90% growth, going to 100+ production application and network clouds by the end of 2016.
- AT&T connects more than 14 million wireless customers via virtualized networks, with significant subscriber cut-over planned again in 2016
- AT&T controls 5.7% of our network resources (29 Telco production grade VNFs) with OpenStack, with plans to reach 30% by the end of 2016 and 75% by 2020.
- AT&T trained more than 100 staff in OpenStack in 2015
AT&T plans to expand its community team of 50+ employees in 2016. As the chosen cloud platform, OpenStack enabled AT&T in the following SDN and NFV related initiatives:
- Our recently announced 5G field trials in Austin
- Re-launch of unlimited data to mobility customers
- Launch of AT&T Collaborate, a next-generation communication tool for enterprises
- Provisioning of a Network on Demand platform to more than 500 enterprise customers
- Connected Car and MVNO (Mobile Virtual Network Operator)
- Mobile Call Recording
- Internally, we are virtualizing our control services, such as DNS, NAT, NTP, DHCP, RADIUS, firewalls, load balancers, and probes for fault and performance management.
Since 2012, AT&T has developed all of our significant new applications in a cloud native fashion hosted on OpenStack. We also architected OpenStack to support legacy apps.
- AT&T’s SilverLining Cloud (predecessor to AIC) leveraged the OpenStack Diablo release, dating as far back as 2011
- OpenStack currently hosts over 15,000 VMs worldwide, with further significant growth expected in 2016-17
- AT&T’s OpenStack integrated Orchestration framework has resulted in a 75% reduction in turnaround time for requests for virtual resources
- AT&T plans to move 80% of our legacy IT into the OpenStack-based virtualized cloud environment in coming years
- A uniform set of APIs exposed by OpenStack allows AT&T business units to leverage a “develop-once-run-everywhere” set of tools
OpenStack supports AT&T’s strategy of beginning to adopt best-of-breed solutions at five nines of reliability for:
- Internet-scale storage service
- Putting all AT&T’s workloads on one common platform
Deployment automation: OpenStack modules have enabled AT&T to cost-effectively manage the OpenStack configuration in an automated, holistic fashion.
- Using OpenStack Heat, AT&T pushed rolling updates and incremental changes across 70+ OpenStack clouds. Doing it manually would take many more people and a much longer schedule.
- Using OpenStack Fuel as a pivotal component in its cloud deployments, AT&T accelerates the otherwise time-consuming, complex, and error-prone process of deploying, testing, and maintaining various configuration flavors of OpenStack at scale. AT&T was a major contributor to the Fuel 7.0 and Fuel 8.0 requirements.
OpenStack has been a pivotal driver of AT&T’s overall culture shift. AT&T as an organization is in the midst of a massive culture shift from a legacy telco to a company where new skills, techniques and solutions are embraced.
OpenStack has been a key driver of this transformation in the following ways:
- AT&T is now building 50 percent of all software on open source technologies
- Allowing for the adoption of a DevOps model that creates a more unified team working towards a better end product
- Development transitioned from waterfall to cloud-native CI/CD methodologies
- Developers continue to support OpenStack and make their applications cloud-native whenever possible.
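The Heat-driven rolling updates mentioned under deployment automation above can be sketched as a minimal template. This is an illustrative sketch only: the group sizes, image, flavor, and network names are hypothetical, not AT&T’s actual configuration.

```yaml
heat_template_version: 2014-10-16

resources:
  web_group:
    type: OS::Heat::AutoScalingGroup
    # On a stack update, Heat replaces members in small batches instead
    # of all at once, keeping the service available throughout.
    update_policy:
      rolling_update:
        min_in_service: 2     # keep at least two members serving traffic
        max_batch_size: 1     # replace one member per batch
        pause_time: 60        # seconds to wait between batches
    properties:
      min_size: 3
      desired_capacity: 3
      max_size: 5
      resource:
        type: OS::Nova::Server
        properties:
          image: web-image-v2        # hypothetical image name
          flavor: m1.small
          networks:
            - network: private-net   # hypothetical tenant network
```

Changing `image` and running `heat stack-update` (or `openstack stack update`) would then roll the new image through the group batch by batch.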
How has the organization participated in or contributed to the OpenStack community?
AT&T was the first U.S. telecom service provider to sign up for and adopt the then early stage NASA-spawned OpenStack cloud initiative, back in 2011.
- AT&T has been an active OpenStack contributor since the Bexar release.
- AT&T has been a Platinum Member of the OpenStack Foundation since its origins in 2012 after helping to create its bylaws.
- Toby Ford, AVP AT&T Cloud Technology, has provided vision, technology leadership, and innovation to the OpenStack ecosystem as an OpenStack Foundation board member since late 2012.
- AT&T is a founding member of the ETSI NFV ISG and OPNFV.
- AT&T has invested in building an OpenStack upstream contribution team with 25 current employees and a target for 50+ employees by the end of 2016.
- During the early years of OpenStack, AT&T brought many important use-cases to the community. AT&T worked towards solving those use-cases by leveraging various OpenStack modules, in turn encouraging other enterprises to have confidence in the young ecosystem.
- AT&T drove the following Telco-grade blueprint contributions to past releases of OpenStack:
- VLAN-aware VMs (i.e., trunked vNICs)
- Support for BGP VPN, and shared volumes between guest VMs
- Complex query support for statistics in Ceilometer
- Spell checker gate job
- Metering support for PCI/PCIe per VM tenant
- PCI passthrough measurement in Ceilometer
- Coverage measurement gate job
- Nova using ephemeral storage with Cinder
- Climate subscription mechanism
- Access switch port discovery for bare metal nodes
- SLA enforcement per vNIC
- MPLS VPNaaS
- NIC-state aware scheduling
- Toby Ford has regularly been invited to present keynotes, sessions, and panel talks at a number of OpenStack summits. For instance:
- Role of OpenStack in a Telco: User Case Study – Atlanta Summit, May 2014
- Leveraging OpenStack to Solve Telco Needs: Intro to SDN/NFV – Atlanta Summit, May 2014
- Telco OpenStack Roadmap Panel Talk – Tokyo Summit, October 2015
- OpenStack Roadmap Software Trajectory – Atlanta Summit, May 2014
- Cloud Control to Major Telco – Paris Summit, November 2014
- Greg Stiegler, assistant vice president – AT&T cloud tools & development organization represented the AT&T technology development organization at the Tokyo Summit.
- AT&T Cloud and D2 Architecture team members were invited to present various keynote sessions, summit sessions and panel talks, including:
- Participation at the Women of OpenStack Event – Tokyo Summit, October 2015
- Empower Your Cloud Through Neutron Service Function Chaining – Tokyo Summit, October 2015
- OPNFV Panel – Vancouver Summit, May 2015
- OpenStack as a Platform for Innovation – keynote at OpenStack Silicon Valley, August 2015
- Taking OpenStack From Zero to Production in a Fortune 500 – Tokyo Summit, October 2015
- Operating at Web-scale: Containers and OpenStack Panel Talk – Tokyo Summit, October 2015
- AT&T strives to collaborate with other leading industry partners in the OpenStack ecosystem. This has led to the entire community benefiting from AT&T’s innovation.
- Margaret Chiosi gives talks worldwide on AT&T’s D2.0 vision at many Telco conferences ranging from Optics (OFC) to SDN/NFV conferences advocating OpenStack as the de-facto cloud orchestrator.
- AT&T Entertainment Group (DirecTV) architected a multi-hypervisor hybrid OpenStack cloud by designing a Neutron ML2 plugin. This innovation helped achieve integration between legacy virtualization and OpenStack.
- AT&T is proud to drive OpenStack adoption by sharing knowledge back to the OpenStack community in the form of these summit sessions at the upcoming Austin summit:
- Telco Cloud Requirements: What VNFs Are Asking For
- Using a Service VM as an IPv6 vRouter
- Service Function Chaining
- Technology Analysis Perspective
- Deploying Lots of Teeny Tiny Telco Clouds
- Everything You Ever Wanted to Know about OpenStack At Scale
- Valet: Holistic Data Center Optimization for OpenStack
- Gluon: An Enabler for NFV
- Among the Cloud: Open Source NFV + SDN Deployment
- AT&T: Driving Enterprise Workloads on KVM and vCenter using OpenStack as the Unified Control Plane
- Striving for High-Performance NFV Grid on OpenStack. Why you, and every OpenStack community member should be excited about it
- OpenStack at Carrier Scale
- AT&T is the “first to market” with deployment of OpenStack supported carrier-grade Virtual Network Functions. We provide the community with integral data, information, and first-hand knowledge on the trials and tribulations experienced deploying NFV technology.
- AT&T ranks in the top 20 percent of all companies in terms of upstream contribution (code, documentation, blueprints), with plans to increase this significantly in 2016.
- Commits: 1200+
- Lines of Code: 116,566
- Change Requests: 618
- Patch Sets: 1490
- Draft Blueprints: 76
- Completed Blueprints: 30
- Filed Bugs: 350
- Resolved Bugs: 250
What is the scale of the OpenStack deployment?
- AT&T’s OpenStack-based AIC is deployed at 70+ sites across the world. Of the 70+, 57 are production app and network clouds.
- AT&T plans 90% growth, going to 100+ production app and network clouds by end of 2016.
- AT&T connects more than 14 million of its 134.5 million wireless customers via virtualized networks, with significant subscriber cutover planned again in 2016
- AT&T controls 5.7% of its network resources (29 Telco production-grade VNFs) on OpenStack, with a goal of reaching the high 80s by the end of 2016.
- Production workloads also include AT&T’s Connected Car, Network on Demand, and AT&T Collaborate among many more.
How is this team innovating with OpenStack?
- AT&T and AT&T Labs are leveraging OpenStack to innovate with Containers and NFV technology.
- Containers are a key part of AT&T’s Cloud Native Architecture. AT&T chairs the Open Container Initiative (OCI) to drive the standardization around container formats.
- AT&T is leading the effort to improve Nova and Neutron’s interface to SDN controllers.
- Margaret Chiosi, an early design collaborator on Neutron and ETSI NFV, now serves as president of OPNFV. AT&T is utilizing its position within OPNFV to help shape the future of OpenStack/NFV. OpenStack has enabled AT&T to innovate extensively.
The following recent unique workloads would not be possible without the SDN and NFV capabilities which OpenStack enables:
- Our recent announcement of 5G field trials in Austin
- Re-launch of unlimited data to mobility customers
- Launch of AT&T Collaborate
- Network on Demand platform to more than 500 enterprise customers
- Connected Car and MVNO (Mobile Virtual Network Operator)
- Mobile Call Recording
New services by AT&T Entertainment Group (DirecTV) that would use OpenStack-based cloud infrastructure in coming years:
- NFL Sunday Ticket with up to eight simultaneous games
- DirecTV streaming service without the need for a satellite dish
In summary: the innovation with OpenStack is not just our unique workloads, but also supporting them together under the same framework, management systems, development/test, CI/CD pipelines, and deployment automation toolset(s).
Who are the team members?
- AT&T Cloud and D2 architecture team
- AT&T Integrated Cloud (AIC) members: Margaret Chiosi, distinguished member of technical staff, president of OPNFV; Toby Ford, AVP, AT&T cloud technology & D2 architecture – strategy, architecture & planning, and OpenStack Foundation board member; Sunil Jethwani, director, cloud & SDN architecture, AT&T Entertainment Group; Andrew Leasck, director, AT&T Integrated Cloud development; Janet Morris, director, AT&T Integrated Cloud development; Sorabh Saxena, senior vice president, AT&T software development & engineering organization; Praful Shanghavi, director, AT&T Integrated Cloud development; Bryan Sullivan, director member of technical staff; Ryan Van Wyk, executive director, AT&T Integrated Cloud development.
- AT&T’s project teams top contributors: Paul Carver, Steve Wilkerson, John Tran, Joe D’andrea, Darren Shaw.
April 30, 2016: Swisscom in Production with OpenStack and Cloud Foundry
Swisscom has one of the largest in-production industry standard Platform as a Service built on OpenStack. Their offering is focused on providing an enterprise-grade PaaS environment to customers worldwide and with various delivery models based on Cloud Foundry and OpenStack. Swisscom embarked early on the OpenStack journey to deploy their app cloud partnering with Red Hat, Cloud Foundry, and PLUMgrid. With services such as MongoDB, MariaDB, RabbitMQ, ELK, and an object storage, the PaaS cloud offers what developers need to get started right away. Join this panel for take-away lessons on Swisscom’s journey, the technologies, partnerships, and developers who are building apps everyday on Swisscom’s OpenStack cloud.
May 23, 2016: How OpenStack public cloud + Cloud Foundry = a winning platform for telecoms interview on ‘OpenStack Superuser’ with Marcel Härry, chief architect, PaaS at Swisscom
Swisscom has one of the largest in-production industry standard platform-as-a-service built on OpenStack.
Their offering focuses on providing an enterprise-grade PaaS environment to customers worldwide and with various delivery models based on Cloud Foundry and OpenStack. Swisscom, Switzerland’s leading telecom provider, embarked early on the OpenStack journey to deploy their app cloud partnering with Red Hat, Cloud Foundry and PLUMgrid.
Superuser interviewed Marcel Härry, chief architect, PaaS at Swisscom and member of the Technical Advisory Board of the Cloud Foundry Foundation, to find out more.
How are you using OpenStack?
OpenStack has allowed us to rapidly develop and deploy our Cloud Foundry-based PaaS offering, as well as to rapidly develop new features within SDN and containers. OpenStack is the true enabler for rapid development and delivery.
An example: after half a year from the initial design and setup, we already delivered two production instances of our PaaS offering built on multiple OpenStack installations on different sites. Today we are already running multiple production deployments for high-profile customers, who further develop their SaaS offerings using our platform. Additionally, we are providing the infrastructure for numerous lab and development instances. These environments allow us to harden and stabilize new features while maintaining a rapid pace of innovation, while still ensuring a solid environment.
We are running numerous OpenStack stacks, all limited – by design – to a single region, and single availability zone. Their size ranges from a handful of compute nodes, to multiple dozens of compute nodes, scaled based on the needs of the specific workloads. Our intention is not to build overly large deployments, but rather to build multiple smaller stacks, hosting workloads that can be migrated between environments. These stacks are hosting thousands of VMs, which in turn are hosting tens of thousands of containers to run production applications or service instances for our customers.
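Because every one of these small stacks exposes the same OpenStack REST APIs, the same tooling can target any of them interchangeably, which is what makes migrating workloads between environments practical. A minimal, stdlib-only sketch of that idea follows; the endpoint URL and token are placeholders, not real Swisscom credentials:

```python
import urllib.request


def nova_list_servers_request(endpoint: str, token: str) -> urllib.request.Request:
    """Build a GET /servers request for any OpenStack Compute endpoint.

    The helper is identical for every stack, large or small, because the
    Compute API contract is the same across deployments; only the endpoint
    and the auth token change.
    """
    req = urllib.request.Request(endpoint.rstrip("/") + "/servers")
    req.add_header("X-Auth-Token", token)
    req.add_header("Accept", "application/json")
    return req


# Hypothetical regional endpoints; the same code path serves both.
for endpoint in ("https://cloud-zrh.example.com:8774/v2.1",
                 "https://cloud-gva.example.com:8774/v2.1"):
    req = nova_list_servers_request(endpoint, "demo-token")
    print(req.get_full_url())
```

Sending the request (e.g. with `urllib.request.urlopen`) would of course require a reachable cloud and a valid Keystone token; the sketch only shows how uniform the per-stack plumbing is.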
What kinds of applications or workloads are you currently running on OpenStack?
We’ve been using OpenStack for almost three years now as our infrastructure orchestrator. Swisscom built its Elastic Cloud on top of OpenStack. On top of this we run Swisscom’s Application Cloud, or PaaS, built on Cloud Foundry with PLUMgrid as the SDN layer. Together, the company’s clouds deliver IaaS to IT architects, SaaS to end users, and PaaS to app developers, among other services and applications. We mainly run our PaaS/Cloud Foundry environment on OpenStack, as well as the correlated managed services (i.e., a kind of DBaaS, message-service-as-a-service, etc.), which themselves run in Docker containers.
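On a Cloud Foundry-based PaaS of this kind, a tenant application is typically described by a manifest that binds it to managed service instances such as the MongoDB or RabbitMQ offerings mentioned above. The sketch below uses hypothetical application and service-instance names, not Swisscom's actual catalog:

```yaml
# manifest.yml -- all names here are illustrative placeholders
applications:
- name: orders-api
  memory: 512M
  instances: 3
  services:
  - orders-mongodb     # service instance created beforehand, e.g.
  - orders-rabbitmq    # with `cf create-service mongodb small orders-mongodb`
```

A developer would then deploy with `cf push`, and the platform injects the bound services' credentials into the app's environment (`VCAP_SERVICES`), so the app never hard-codes connection details.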
What challenges have you faced in your organization regarding OpenStack, and how did you overcome them?
The learning curve for OpenStack is pretty steep. When we started three years ago almost no reference architectures were available, especially none with enterprise-grade requirements such as dual-site, high availability (HA) capabilities on various levels and so forth. In addition, we went directly into the SDN, SDS levels of implementation which was a big, but very successful step at the end of the day.
What were your major milestones?
Swisscom’s go-live for its first beta environment was in spring of 2014, go live for an internal development (at Swisscom) was spring of 2015, and the go-live for its public Cloud Foundry environment fully hosted on OpenStack was in the fall of 2015. The go-live date for enterprise-grade and business-critical workloads on top of our stack from various multinational companies in verticals like finance or industry is spring, 2016, and Swisscom recently announced Swiss Re as one of its first large enterprise cloud customers.
What have been the biggest benefits to your organization as a result of using OpenStack?
Pluggability and multi-vendor interoperability (for instance with SDN like PLUMgrid or SDS like ScaleIO) to avoid vendor lock-in and create a seamless system. OpenStack enabled Swisscom to experiment with deployments using a DevOps model and environment to develop and deploy applications faster. It simplified the move from PoC to production environments and let us easily scale out services using a distributed, cluster-based architecture.
What advice do you have for companies considering a move to OpenStack?
It’s hard in the beginning, but it’s really worth it. Choose your partners and vendors wisely; this will help you get online in a very short amount of time. Think about driving your internal organization toward a DevOps model to be ready for the first deployments, and about enabling your firm to change deployment models (e.g. going cloud-native) for your workloads when needed.
How do you participate in the community?
This year’s Austin event was our second OpenStack Summit where we provided insights into our deployment and architecture, contributing back to the community in terms of best practices as well as real-world production use cases. Furthermore, we directly contribute patches and improvements to various OpenStack projects. Some of these patches have already been accepted, while a few are in the pipeline to be polished for publishing. Additionally, we work very closely with our vendors – Red Hat, EMC, ClusterHQ/Flocker, PLUMgrid, as well as the Cloud Foundry Foundation – to further improve their integration and stability within the OpenStack project. For example, we worked closely with Flocker on their Cinder-based driver to orchestrate persistence among containers. We have also filed many bug reports through our vendors and worked with them on fixes, which have then made their way back into the OpenStack community.
We have a perfect solution for non-persistent container workloads for our customers. We are constantly evolving this product and are working especially hard to meet the requirements of the enterprise and finance verticals when it comes to infrastructure orchestration with OpenStack.
Härry spoke about OpenStack in production at the recent Austin Summit, along with Pere Monclus of PLUMgrid, Chip Childers of the Cloud Foundry Foundation, Chris Wright of Red Hat and analyst Rosalyn Roseboro.
May 10, 2016: Lenovo‘s Highly-Available OpenStack Enterprise Cloud Platform Practice with EasyStack press release by EasyStack
Microsoft chairman: The transition to a subscription-based cloud business isn’t fast enough. Revamp the sales force for cloud-based selling.
See also my earlier posts:
– John W. Thompson, Chairman of the Board of Microsoft: the least recognized person in the radical two-men shakeup of the uppermost leadership, ‘Experiencing the Cloud’, February 6, 2014
– Satya Nadella on “Digital Work and Life Experiences” supported by “Cloud OS” and “Device OS and Hardware” platforms–all from Microsoft, ‘Experiencing the Cloud’, July 23, 2014
May 17, 2016: John Thompson: Microsoft Should Move Faster on Cloud Plan in an interview with Bloomberg’s Emily Chang on “Bloomberg West”
The focus is very, very good right now. We’re focused on cloud, on the hybrid model of the cloud. We’re focused on the application services we can deliver not just in the cloud but on multiple devices. If I would like to see anything change, it’s more about pace. From my days at IBM [Thompson spent 28 years at IBM before becoming chief executive at Symantec] I can remember we never seemed to be running or moving fast enough. That is always the case in the established enterprise. While you believe that you’re moving fast, in fact you’re not moving as fast as a startup.
June 2, 2016: Microsoft Ramps Up Its Cloud Efforts, Bloomberg Intelligence’s Mandeep Singh reporting on “Bloomberg Markets”
If you look at their segment revenue, 43% is from Windows and hardware devices. That is the part where it is hard to come up with a cloud strategy to migrate that segment to the cloud quickly. The infrastructure side is 30%, and that is taken care of; Office is the other 30%, where they have a good mix. It is really that 43% of revenue where they have to figure out how to accelerate the transition to the cloud.
Then Bloomberg’s June 2, 2016 article (written by Dina Bass) came out with the following verdict:
Board members at Microsoft Corp. are grappling with a growing concern: that the company’s traditional software business, which makes up the majority of its sales, could evaporate in a matter of years — and Chairman John Thompson is pushing for a more aggressive shift into newer cloud-based products.
Thompson said he and the board are pleased with a push by Chief Executive Officer Satya Nadella to make more money from software and services delivered over the internet, but want it to move much faster. They’re considering ideas like increasing spending, overhauling the sales force and managing partnerships differently to step up the pace.
The cloud growth isn’t merely nice to have — it’s critical against the backdrop of declining demand for what’s known as on-premise software programs, the more traditional approach that involves installing software on a company’s own computers and networks. No one knows exactly how quickly sales of those legacy offerings will drop off, Thompson said, but it’s “inevitable that part of our business will be under continued pressure.”
The board members’ concern was born from experience. Thompson recounts how fellow director Chuck Noski, a former chief financial officer of AT&T, watched the telecom carrier’s traditional wireline business evaporate in just three years as the world shifted to mobile. Now, Noski and Thompson are asking whether something similar could happen to Microsoft.
“What’s the likelihood that could happen with on-prem versus cloud? That in three years, we look up and it’s gone?” Thompson said in an interview, snapping his fingers to make the point.
Small, but Growing
Nadella has said the company is on track to make its forecast for $20 billion in annualized sales from commercial cloud products in fiscal 2018. Still, Thompson said, the cloud business could be even further along, and the software maker should have started its push much earlier. Commercial cloud services revenue has posted impressive growth rates — with Azure product sales rising more than 100 percent quarterly — but the total business contributed just $5.8 billion of Microsoft’s $93.6 billion in sales in the latest fiscal year.
Thompson praised the technology behind smaller cloud products, such as Power BI tools for business analysis and data visualization and the enterprise mobile management service, which delivers apps and data to various corporate devices. But the latter, for example, brings in $300 million a year — just a sliver of overall annual revenue, which will soon top $100 billion, Thompson said.
The board is examining whether Microsoft has invested enough in its complete cloud lineup, Thompson said. It’s not just about developing better cloud technology — it’s a question of how the company sells those products and its strategy for recruiting partners to resell Microsoft’s services and build their own offerings on top of them. Persuading partners to develop compatible applications is a strong point for cloud market leader Amazon.com Inc., he said.
Thompson declined to be specific about what the company might change in sales and partnerships, but he said the company may need to “re-imagine” those organizations. “The question is, should it be more?” he said. “If you believe we need to run harder, run faster, be less risk-averse as a mantra, the question is how much more do you do.”
Analysts say Microsoft should seek to develop a deeper bench of partners making software for Azure and consultants to install and manage those services for customers who need the help. Microsoft is working on this, but is behind Amazon Web Services, said Lydia Leong, an analyst at Gartner Inc.
“They are nowhere near at the same level of sophistication, and the Microsoft partners are mostly new to the Azure ecosystem, so they don’t know it as well,” she said. “If you’re a customer and you want to migrate to AWS, you have this massive army that can help you.”
In the sales force, Microsoft’s representatives need more experience in cloud deals — which are generally subscription-based rather than one-time purchases — and how they differ from traditional software contracts, said Matt McIlwain, managing director at Seattle’s Madrona Venture Partners. “They haven’t made enough of a transition to a cloud-based selling motion,” he said. “It’s still a work in progress.”
Microsoft declined to comment on the company’s cloud strategy or any changes to sales and partnerships for this story, and director Noski couldn’t be reached for comment.
The company’s dependence on demand for traditional software was painfully apparent in its most recent quarterly report, when revenue was weighed down by weakness in its transactional business, or one-time purchases of software that customers store and run on their own PCs and networks. Chief Financial Officer Amy Hood in April said that lackluster transactional sales were likely to continue.
Microsoft’s two biggest cloud businesses are the Azure web-based service, which trails top provider Amazon but leads Google and International Business Machines Corp., and the Office 365 cloud versions of e-mail, collaboration software, word-processing and spreadsheet software. Microsoft’s key on-premise products include Windows Server and traditional versions of Office and the SQL database server.
Slumps like last quarter’s hurt even more amid the company’s shift to the cloud, which has brought a lot of changes to its financial reporting. For cloud deals, revenue is recognized over the term of the deal rather than providing an up-front boost. They’re also lower-margin businesses, squeezed by the cost of building and maintaining data centers to deliver the services. Microsoft’s gross margin dropped from 80 percent in fiscal 2010 to 65 percent in the year that ended June 30, 2015.
“This business is growing incredibly well, but the gross margin of that is substantially lower than their core products of the olden days,” said Anurag Rana, an analyst at Bloomberg Intelligence. “How low do they go?”
‘Different Model’ [of doing business for subscription-based software]
It’s jarring for some investors, but the other option is worse, said Thompson.
“That’s a very different model for Microsoft and one our investors are going to have to suck it up and embrace, because the alternative is don’t embrace the cloud and you wake up one day and you look just like — guess who?” Thompson doesn’t finish the sentence, but makes it clear he’s referring to IBM, the company where he spent more than 27 years, which he says is “not relevant anymore.” IBM declined to comment.
The pressure is good for Microsoft, Thompson said — pressure tends to result in change.
“You can re-imagine things when you’re stressed. It’s a lot easier to do it when you’re stressed because you feel compelled to do something,” Thompson said. “I see a lot of stress at Microsoft.”
When an open-source database written in Java, running primarily in production on Linux, becomes THE solution on Microsoft’s cloud platform (i.e. Azure) for the fully distributed, highly secure and “always on” transactional database space, we should take special note. This is the case with DataStax:
July 15, 2015: Building the intelligent cloud, Scott Guthrie’s keynote at the Microsoft Worldwide Partner Conference 2015; the DataStax-related segment is only 7 minutes
SCOTT GUTHRIE, EVP of Microsoft Cloud and Enterprise: What I’d like to do is invite three different partners now on stage, one an ISV, one an SI, and one a managed service provider to talk about how they’re taking advantage of our cloud offerings to accelerate their businesses and make their customers even more successful.
First, and I think, you know, being able to take advantage of all of these different capabilities that we now offer.
Now, the first partner I want to bring on stage is DataStax. DataStax delivers an enterprise-grade NoSQL offering based on Apache Cassandra. And they enable customers to build solutions that can scale across literally thousands of servers, which is perfect for a hyper-scale cloud environment.
And one of the customers that they’re working with is First American, who are deploying a solution on Microsoft Azure to provide richer insurance and settlement services to their customers.
What I’d like to do is invite Billy Bosworth, the CEO of DataStax, on stage to join me to talk about our partnership and some of the great solutions that we’re building together. Here’s Billy. (Applause.)
Well, thanks for joining me, Billy. And it’s great to have you here.
BILLY BOSWORTH, CEO of DataStax: Thank you. It’s a real privilege to be here today.
SCOTT GUTHRIE: So tell us a little bit about DataStax and the technology you guys build.
BILLY BOSWORTH: Sure. At DataStax, we deliver Apache Cassandra in a database platform that is really purpose-built for the new performance and availability demands that are being generated by today’s Web, mobile and IoT applications.
With DataStax Enterprise, we give our customers a fully distributed and highly secure transactional database platform.
Now, that probably sounds like a lot of other database vendors out there as well. But, Scott, we have something that’s really different and really important to us and our customers, and that’s the notion of being always on. And when you talk about “always on” and transactional databases, things can get pretty complicated pretty fast, as you well know.
The reason for that is in an always-on world, the datacenter itself becomes a single point of failure. And that means you have to build an architecture that is going to be comprehensive and include multiple datacenters. That’s tough enough with almost any other piece of the software stack. But for transactional databases, that is really problematic.
Fortunately, we have a masterless architecture in Apache Cassandra that allows us to have DataStax enterprise scale in a single datacenter or across multiple datacenters, and yet at the same time remain operationally simple. So that’s really the core of what we do.
SCOTT GUTHRIE: Is the always-on angle the key differentiator in terms of the customer fit with Azure?
BILLY BOSWORTH: So if you think about deployment to multiple datacenters, especially and including Azure, it creates an immediate benefit. Going back to your hybrid clouds comment, we see a lot of our customers that begin their journey on premises. So they take their local datacenter, they install DataStax Enterprise, it’s an active database up and running. And then they extend that database into Azure.
Now, when I say that, I don’t mean they do so for disaster recovery or failover, it is active everywhere. So it is taking full read-write requests on premises and in Azure at the same time.
So if you lose connectivity to your physical datacenter, then the Azure active nodes simply take over. And that’s great, and that solves the always-on problem.
But that’s not the only thing that Azure helps to solve. Our applications, because of their nature, tend to drive incredibly high throughput. So for us, hundreds of millions or even tens and hundreds of billions of transactions a day is actually quite common.
You guys are pretty good, Scott, but I don’t think you’ve changed the laws of physics yet. And so the way that you get that kind of throughput with unbelievable performance demands, because our customers demand millisecond and microsecond response times, is you push the data closer to the end points. You geographically distribute it.
Now, what our customers are realizing is they can try and build 19 datacenters across the world, which I’m sure was really cheap and easy to do, or they can just look at what you’ve already done and turn to a partnership like ours to say, “Help us understand how we do this with Azure.”
So not only do you get the always-on benefit, which is critical, but there’s also a very important performance element to this type of architecture as well.
SCOTT GUTHRIE: Can you tell us a little bit about the work you did with First American on Azure?
BILLY BOSWORTH: Yeah. First American is a leading name in the title insurance and settlement services businesses. In fact, they manage more titles on more properties than anybody in the world.
Every title comes with an associated set of metadata. And that metadata becomes very important in the new way that they want to do business because each element of that needs to be transacted, searched, and done in real-time analysis to provide better information back to the customer in real time.
And so for that on the database side, because of the type of data and because of the scale, they needed something like DataStax Enterprise, which we’ve delivered. But they didn’t want to fight all those battles of the architecture that we discussed on their own, and that’s where they turned to our partnership to incorporate Microsoft Azure as the infrastructure with DataStax Enterprise running on top.
And this is one of many engagements that you know we have going on in the field that are really, really exciting and indicative of the way customers are thinking about transforming their business.
SCOTT GUTHRIE: So what’s it like working with Microsoft as a partner?
BILLY BOSWORTH: I tell you, it’s unbelievable. Or, maybe put differently, highly improbable that you and I are on stage together. I want you guys to think about this. Here’s the type of company we are. We’re an open-source database written in Java that runs primarily in production on Linux.
Now, Scott, Microsoft has a couple of pretty good databases, of which I’m very familiar from my past, and open source and Java and Linux haven’t always been synonymous with Microsoft, right?
So I would say the odds of us being on stage were almost none. But over the past year or two, the way that you guys have opened up your aperture to include technologies like ours — and I don’t just say “include.” His team has embraced us in a way that is truly incredible. For a company the size of Microsoft to make us feel the way we do is just remarkable given the fact that none of our technologies have been something that Microsoft has traditionally said is part of their family.
So I want to thank you and your team for all the work you’ve done. It’s been a great experience, but we are architecting systems that are going to drive businesses for the coming decades. And that is super exciting to have a partner like you engaged with us.
SCOTT GUTHRIE: Fantastic. Well, thank you so much for joining us on stage.
BILLY BOSWORTH: Thanks, Scott. (Applause.)
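As an aside: the “active everywhere” multi-datacenter setup Bosworth describes is configured in Cassandra at the keyspace level with NetworkTopologyStrategy, which tells the cluster how many replicas of each row to keep in each datacenter. A minimal CQL sketch – the keyspace and datacenter names here are hypothetical, standing in for whatever names the cluster’s snitch reports for the on-premises and Azure sides:

```sql
-- Keep three replicas on-premises and three in Azure; both datacenters
-- accept reads and writes, so either side can take over if the other is lost.
CREATE KEYSPACE titles
  WITH replication = {
    'class': 'NetworkTopologyStrategy',
    'onprem_dc': 3,
    'azure_westus': 3
  };
```

With a setup like this, an application typically reads and writes at a consistency level such as LOCAL_QUORUM, so each side serves its local traffic while replication keeps the other datacenter current.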
DataStax’s data platform capabilities are best understood via the following webinar, which presents Apache Spark as well as its part in the complete data platform solution:
– Apache Cassandra is the leading distributed database in use at thousands of sites with the world’s most demanding scalability and availability requirements.
– Apache Spark is a distributed data analytics computing framework that has gained a lot of traction in processing large amounts of data in an efficient and user-friendly manner.
– The joining of both provides a powerful combination of real-time data collection with analytics.
After a brief overview of Cassandra and Spark (Cassandra until 16:39, Spark until 19:25), the class dives into various aspects of the integration (from 19:26).
August 19, 2015: Big Data Analytics with Cassandra and Spark by Brian Hess, Senior Product Manager of Analytics, DataStax
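The Cassandra-plus-Spark combination the webinar covers is exposed through the open-source spark-cassandra-connector, which lets a Spark job read Cassandra tables as RDDs and write results back. A rough sketch in Scala, assuming a running Spark and Cassandra cluster – the keyspace, table, and column names are hypothetical:

```scala
import com.datastax.spark.connector._
import org.apache.spark.{SparkConf, SparkContext}

object EventCounts {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("cassandra-spark-sketch")
      .set("spark.cassandra.connection.host", "10.0.0.10") // a Cassandra seed node

    val sc = new SparkContext(conf)

    // Read a Cassandra table as an RDD and count events per user.
    val counts = sc.cassandraTable("analytics", "events")
      .map(row => (row.getString("user_id"), 1))
      .reduceByKey(_ + _)

    // Write the aggregates back into another Cassandra table.
    counts.saveToCassandra("analytics", "events_by_user",
      SomeColumns("user_id", "event_count"))
  }
}
```

Because Spark executors can be co-located with Cassandra nodes, the connector can push work to where the data lives, which is what makes analytics over a live transactional store practical.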
September 23, 2015: DataStax Announces Strategic Collaboration with Microsoft, company press release
- DataStax delivers a leading fully-distributed database for public and private cloud deployments
- DataStax Enterprise on Microsoft Azure enables developers to develop, deploy and monitor enterprise-ready IoT, Web and mobile applications spanning public and private clouds
- Scott Guthrie, EVP Cloud and Enterprise, Microsoft, to co-deliver Cassandra Summit 2015 keynote
SANTA CLARA, CA – September 23, 2015 – (Cassandra Summit 2015) DataStax, the company that delivers Apache Cassandra™ to the enterprise, today announced a strategic collaboration with Microsoft to deliver Internet of Things (IoT), Web and mobile applications in public, private or hybrid cloud environments. With DataStax Enterprise (DSE), a leading fully-distributed database platform, available on Azure, Microsoft’s cloud computing platform, enterprises can quickly build high-performance applications that can massively scale and remain operationally simple across public and private clouds, with ease and at lightning speed.
PERSPECTIVES ON THE NEWS
“At Microsoft we’re focused on enabling customers to run their businesses more productively and successfully,” said Scott Guthrie, Executive Vice President, Cloud and Enterprise, Microsoft. “As more organizations build their critical business applications in the cloud, DataStax has proved to be a natural Azure partner through their ability to enable enterprises to build solutions that can scale across thousands of servers which is necessary in today’s hyper-scale cloud environment.”
“We are witnessing an increased adoption of DataStax Enterprise deployments in hybrid cloud environments, so closely aligning with Microsoft benefits any organization looking to quickly and easily build high-performance IoT, Web and mobile apps,” said Billy Bosworth, CEO, DataStax. “Working with a world-class organization like Microsoft has been an incredible experience and we look forward to continuing to work together to meet the needs of enterprises looking to successfully transition their business to the cloud.”
“As a leader in providing information and insight in critical areas that shape today’s business landscape, we knew it was critical to transform our back-end business processes to address scale and flexibility,” said Graham Lammers, Director, IHS. “With DataStax Enterprise on Azure we are now able to create a next generation big data application to support the decision-making process of our customers across the globe.”
BUILD SIMPLE, SCALABLE AND ALWAYS-ON APPS AT RECORD SPEED
To address the ever-increasing demands of modern businesses transitioning from on-premise to hybrid cloud environments, the DataStax Enterprise on Azure on-demand cloud database solution provides enterprises with development- and production-ready Bring Your Own License (BYOL) DSE clusters that can be launched in minutes on the Microsoft Azure Marketplace using Azure Resource Manager (ARM) templates. This enables the building of high-performance IoT, Web and mobile applications that can predictably scale across global Azure data centers with ease and at remarkable speed. Additional benefits include:
- Hybrid Deployment: Easily move DSE workloads between data centers, service providers and Azure, and build hybrid applications that leverage resources across all three.
- Simplicity: Easily manage, develop, deploy and monitor database clusters by eliminating data management complexities.
- Scalability: Quickly replicate online applications globally across multiple data centers into the cloud/hybrid cloud environment.
- Continuous Availability: DSE’s peer-to-peer architecture offers no single point of failure. DSE also provides maximum flexibility to distribute data where it’s needed most by replicating data across multiple data centers, the cloud and mixed cloud/on-premise environments.
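The ARM templates mentioned above amount to a declarative JSON description of the cluster that the Marketplace hands to Azure for deployment. A simplified, illustrative fragment – the parameter names are hypothetical, not DataStax’s published template:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "nodeCount":     { "type": "int",    "defaultValue": 4,
                       "metadata": { "description": "Number of DSE nodes" } },
    "vmSize":        { "type": "string", "defaultValue": "Standard_D4" },
    "adminUsername": { "type": "string" }
  },
  "resources": [ ]
}
```

Azure Resource Manager expands such a template into the VMs, networking and storage for the cluster in one repeatable operation, which is what makes the “launched in minutes” claim possible.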
MICROSOFT ENTERPRISE CLOUD ALLIANCE & FAST START PROGRAM
DataStax also announced it has joined Microsoft’s Enterprise Cloud Alliance, a collaboration that reinforces DataStax’s commitment to provide the best set of on-premise, hosted and public cloud database solutions in the industry. The goal of Microsoft’s Enterprise Cloud Alliance partner program is to create, nurture and grow a strong partner ecosystem across a broad set of Enterprise Cloud Products delivering the best on-premise, hosted and Public Cloud solutions in the industry. Through this alliance, DataStax and Microsoft are working together to create enhanced enterprise-grade offerings for the Azure Marketplace that reduce the complexities of deployment and provisioning through automated ARM scripting capabilities.
Additionally, as a member of Microsoft Azure’s Fast Start program, created to help users quickly deploy new cloud workloads, DataStax users receive immediate access to the DataStax Enterprise Sandbox on Azure for a hands-on experience testing out DSE on Azure capabilities. DataStax Enterprise Sandbox on Azure can be found here.
Cassandra Summit 2015, the world’s largest gathering of Cassandra users, is taking place this week and Microsoft Cloud and Enterprise Executive Vice President Scott Guthrie, DataStax CEO Billy Bosworth, and Apache Cassandra Project Chair and DataStax Co-founder and CTO Jonathan Ellis, will deliver the conference keynote at 10 a.m. PT on Wednesday, September 23. The keynote can be viewed at DataStax.com.
DataStax delivers Apache Cassandra™ in a database platform purpose-built for the performance and availability demands for IoT, Web and mobile applications. This gives enterprises a secure, always-on database technology that remains operationally simple when scaling in a single datacenter or across multiple datacenters and clouds.
With more than 500 customers in over 50 countries, DataStax is the database technology of choice for the world’s most innovative companies, such as Netflix, Safeway, ING, Adobe, Intuit and eBay. Based in Santa Clara, Calif., DataStax is backed by industry-leading investors including Comcast Ventures, Crosslink Capital, Lightspeed Venture Partners, Kleiner Perkins Caufield & Byers, Meritech Capital, Premji Invest and Scale Venture Partners. For more information, visit DataStax.com or follow us @DataStax.
September 30, 2014: Why Datastax’s increasing presence threatens Oracle’s database by Anne Shields at Market Realist
DataStax databases are built on open-source technologies
DataStax is a California-based database management company. It offers an enterprise-grade NoSQL database, built on Apache Cassandra, that seamlessly and securely integrates real-time data. Databases built on Apache Cassandra offer more flexibility than traditional databases: even in the case of calamities such as floods and earthquakes, data remains available because it is replicated across other data centers. Cassandra, like many NoSQL databases, is open-source software.
The Cassandra database was originally developed at Facebook (FB) to handle its enormous volumes of data; the technology behind it builds on designs from Amazon (AMZN) and Google (GOOGL). Oracle’s MySQL (ORCL), Microsoft’s SQL Server (MSFT), and IBM’s DB2 (IBM) are the traditional databases on the market.
Huge amounts of funds raised in the open-source technology database space
DataStax raised $106 million in September 2014 to expand its database operations. MongoDB Inc. and Couchbase Inc.—both open-source NoSQL database developers—raised $231 million and $115 million, respectively, in 2014. According to Market Research Media, a consultancy firm, spending on NoSQL technology in 2013 was less than $1 billion. It’s expected to reach $3.4 billion by 2020. This explains why this segment is attracting such huge investments.
Oracle’s dominance in the database market is uncertain
Oracle claims it’s the market leader in the relational database market, with a revenue share of 48.3%. In 2013, it launched Oracle Database 12c. According to Oracle, “Oracle Database 12c introduces a new multitenant architecture that simplifies the process of consolidating databases onto the cloud; enabling customers to manage many databases as one — without changing their applications.” To learn more about Database 12c, please click here.
In July 2013, DataStax announced that dozens of companies had migrated from Oracle databases to DataStax databases. Customers cited scalability, disaster avoidance, and cost savings as their reasons for switching. DataStax databases’ rising popularity jeopardizes Oracle’s dominant position in the database market.
September 24, 2015: Building a better experience for Azure and DataStax customers by Matt Rollender, VP Cloud Strategy, DataStax, Inc. on the Microsoft Azure blog
Cassandra Summit is in high gear this week in Santa Clara, CA – the largest Cassandra Summit to date and the largest NoSQL event of its kind! With more than 7,000 attendees (both onsite and virtual), this is the first time the Summit is a three-day event, with over 135 speaking sessions. It is also the first time DataStax will debut a formalized Apache Cassandra™ training and certification program, in conjunction with O’Reilly Media. All incredibly exciting milestones!
We are excited to share another milestone. Yesterday, we announced our formal strategic collaboration with Microsoft. Dedicated DataStax and Microsoft teams have been collaborating closely behind the scenes for more than a year on product integration, QA testing, platform optimization, automated provisioning, characterization of DataStax Enterprise (DSE) on Azure, and more, all to ensure product validation and a great customer experience for users of DataStax Enterprise on the Azure cloud. There is strong coordination across the two organizations – very close executive, field, and technical alignment – all critical components of a strong partnership.
This partnership is driven and shaped by our joint customers. Our customers oftentimes begin their journey with on-premise deployments of our database technology and then have a requirement to move to the cloud – Microsoft is a fantastic partner to help provide the flexibility of a true hybrid environment along with the ability to migrate to and scale applications in the cloud. Additionally, Microsoft has significant breadth regarding their data centers – customers can deploy in numerous Azure data centers around the globe, in order to be ‘closer’ to their end users. This is highly complementary to DataStax Enterprise software as we are a peer-to-peer distributed database and our customers need to be close to their end users with their always-on, always available enterprise applications.
To highlight a couple of joint customers and use cases we have First American Title and IHS, Inc. First American is a leading provider of title insurance and settlement services with revenue over $5B. They ingest and store the largest number (billions) of real estate property records in the industry. Accessing, searching and analyzing large data-sets to get relevant details quickly is the new way they want to do business – to provide better information back to their customers in real-time and allow end users to easily search through the property records on-line. They chose DSE and Azure because of the large data requirements and because of the need to continue to scale the application.
A second great customer and use case is IHS, Inc., a $2B revenue-company that provides information and analysis to support the decision-making process of businesses and governments. This is a transformational project for IHS as they are building out an ‘internet age’ parts catalog – it’s a next generation big data application, using NoSQL, non-relational technology and they want to deploy in the cloud to bring the application to market faster.
As you can see, we are enabling enterprises to engage their customers like never before with their always-on, highly available and distributed applications. Stay tuned for more as we move forward together in the coming months!
For additional information, go to http://www.datastax.com/marketplace-microsoft-azure to try out the DataStax Enterprise Sandbox on Azure.
See also DataStax Enterprise Cluster Production on Microsoft Azure Marketplace
September 23, 2015: Making Cassandra Do Azure, But Not Windows by Timothy Prickett Morgan, co-editor and co-founder of The Next Platform
When Microsoft says that it is embracing Linux as a peer to Windows, it is not kidding. The company has created its own Linux distribution for switches used to build the Azure cloud, and it has embraced Spark in-memory processing and Cassandra as its data store for its first major open source big data project – in this case to help improve the quality of its Office365 user experience. And now, Microsoft is embracing Cassandra, the NoSQL data store originally created by Facebook when it could no longer scale the MySQL relational database to suit its needs, on the Azure public cloud.
Billy Bosworth, CEO at DataStax, the entity that took over steering development of and providing commercial support for Cassandra, tells The Next Platform that the deal with Microsoft has a number of facets, all of which should help boost adoption of the enterprise-grade version of Cassandra. But the key one is that the Global 2000 customers that DataStax wants to sell support and services to are already quite familiar with Windows Server in their datacenters and are looking to burst out to the Azure cloud on a global scale.
“We are seeing a rapidly increasing number of our customers who need hybrid cloud, keeping pieces of DataStax Enterprise on premises in their own datacenters while also wanting to take pieces of that same live transactional data – not replication, but live data – and put it in the Azure cloud as well,” says Bosworth. “They have some unique capabilities, and one of the major requirements of customers is that even if they use cloud infrastructure, it still has to be distributed by the cloud provider. They can’t just run Cassandra in one availability zone in one region. They have to span data across the globe, and Microsoft has done a tremendous job of investing in its datacenters.”
With the Microsoft agreement, DataStax is now running its wares on the three big clouds, with Amazon Web Services and Google Compute Engine already certified to run the production-grade Cassandra. And interestingly enough, Microsoft is supporting the DataStax implementation of Cassandra on top of Linux, not Windows. Bosworth says that while Cassandra can be run on Windows servers, DataStax does not recommend putting DataStax Enterprise (DSE), the commercial release, on Windows. (It does have a few customers who do, nonetheless, and it supports them.) Bosworth adds that DataStax and the Cassandra community have been “working diligently” for the past year to get a Windows port of DSE completed and that there has been “zero pressure” for the Microsoft Azure team to run DSE on anything other than Linux.
It is important to make the distinction between running Cassandra and other elements of DSE on Windows and having optimized drivers for Cassandra for the .NET programming environment for Windows.
“All we are really talking about is the ability to run the back-end Cassandra on Linux or Windows, and to the developer, it is irrelevant what that back end is running on,” explains Bosworth. “This takes away some of that friction, and what we find is that on the back end we just don’t find religious conviction about whether it should run on Windows or Linux, and this is different from five years ago. We sell mostly to enterprises, and we have not had one customer raise their hand and say they can’t use DSE because it does not run on Windows.”
What is more important is the ability to seamlessly put Cassandra on public clouds and spread transactional data around for performance and resiliency reasons – the same reasons Facebook created Cassandra in the first place.
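As a rough illustration of why spreading data around comes naturally to Cassandra: each row is hashed onto a token ring, and replicas are placed on the next distinct nodes walking clockwise around that ring. The sketch below is a toy Python version under simplifying assumptions – Cassandra itself uses the Murmur3 partitioner and many virtual nodes per machine, and the node names and MD5 hash here are stand-ins for illustration only.

```python
import bisect
import hashlib

def token(key: str) -> int:
    """Hash a key onto a large numeric ring (Cassandra uses Murmur3;
    MD5 stands in here purely for illustration)."""
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class Ring:
    """Toy token ring: a key's replicas are the first RF distinct
    nodes found walking clockwise from the key's token."""
    def __init__(self, nodes, replication_factor=3):
        self.rf = replication_factor
        # One token per node; real clusters assign many vnode tokens each.
        self.tokens = sorted((token(n), n) for n in nodes)

    def replicas(self, key: str):
        # Find the first node token at or after the key's token.
        start = bisect.bisect(self.tokens, (token(key), chr(0x10FFFF)))
        owners = []
        for i in range(len(self.tokens)):
            node = self.tokens[(start + i) % len(self.tokens)][1]
            if node not in owners:
                owners.append(node)
            if len(owners) == self.rf:
                break
        return owners

ring = Ring(["node-a", "node-b", "node-c", "node-d"], replication_factor=3)
print(ring.replicas("user:42"))  # three distinct replica nodes for this key
```

Because placement is a pure function of the key's hash, any node can compute where a row lives without a central coordinator, which is what lets the same mechanism stretch replicas across availability zones and datacenters.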
What Is In The Stack, Who Uses It, And How
The DataStax Enterprise distribution does not just include the Apache Cassandra data store, but has an integrated search engine that is API compatible with the open source Solr search engine and in-memory extensions that can speed up data accesses by anywhere from 30X to 100X compared to server clusters using flash SSDs or disk drives. The Cassandra data store can be used to underpin Hadoop, allowing it to be queried by MapReduce, Hive, Pig, and Mahout, and it can also underpin Spark and Spark Streaming as their data stores if customers decide to not go with the Hadoop Distributed File System that is commonly packaged with a Hadoop distribution.
It is hard to say for sure how many organizations are running Cassandra today, but Bosworth reckons that it is on the order of tens of thousands worldwide, based on a number of factors. DataStax does not do any tracking of its DataStax Community edition because it wants a “frictionless download” like many open source projects have. (Developers don’t want software companies to see what tools they are playing with, even though they might love open source code.) DataStax provides free training for Cassandra, however, where it does keep track, and developers are consuming over 10,000 units of this training per month, so that probably indicates that the Cassandra installed base (including tests, prototypes, and production) is in the five figures.
DataStax itself has over 500 paying customers – now including Microsoft, after it tried to build its own Spark-Cassandra cluster using open source code and decided that the supported versions were better thanks to the extra goodies that DataStax puts into its distro. DataStax has 30 of the Fortune 100 using its distribution of Cassandra in one form or another, always for transactional, rather than batch analytic, jobs and in most cases also for distributed data stores that make use of the “eventual consistency” features of Cassandra to replicate data across multiple clusters. The company has another 600 firms participating in its startup program, which gives young companies freebie support on the DSE distro until they hit a certain size and can afford to start kicking some cash into the kitty.
The largest installation of Cassandra is running at Apple, which as we previously reported has over 75,000 nodes, with clusters ranging in size from hundreds to over 1,000 nodes and a total capacity in the petabytes range. Netflix, which used to employ the open source Cassandra, switched to DSE last May and has over 80 clusters with more than 2,500 nodes supporting various aspects of its video distribution business. In both cases, Cassandra is very likely housing user session state data as well as feeding product or play lists and recommendations or doing faceted search for their online customers.
We are always intrigued to learn how customers are actually deploying tools such as Cassandra in production and how they scale it. Bosworth says that it is not uncommon to run a prototype project on as few as ten nodes, and when the project goes into production, to see it grow to dozens to hundreds of nodes. The midrange DSE clusters range from maybe 500 to 1,000 nodes and there are some that get well over 1,000 nodes for large-scale workloads like those running at Apple.
Unlike Hadoop, Cassandra does not generally run on disk-heavy nodes. Remember, the system was designed to support hot transactional data, not to become a lake with a mix of warm and cold data to be sifted in batch mode, as is still done with MapReduce running atop Hadoop.
The typical node configuration has changed as Cassandra has evolved and improved, says Robin Schumacher, vice president of products at DataStax. But before getting into feeds and speeds, Schumacher offered this advice. “There are two golden rules for Cassandra. First, get your data model right, and second, get your storage system right. If you get those two things right, you can do a lot wrong with your configuration or your hardware and Cassandra will still treat you right. Whenever we have to dive in and help someone out, it is because they have just moved over a relational data model or they have hooked their servers up to a NAS or a SAN or something like that, which is absolutely not recommended.”
Only four years ago, because of limitations in Cassandra (which, like Hadoop and many other analytics tools, is coded in Java), the rule of thumb was to put no more than 512 GB of disk capacity onto a single node. (It is hard to imagine such small disk capacities these days, with 8 TB and 10 TB disks.) The typical Cassandra node has two processors, with somewhere between 12 and 24 cores, and has between 64 GB and 128 GB of main memory. Customers who want the best performance tend to go with flash SSDs, although you can do all-disk setups, too.
Fast forward to today, and Cassandra can make use of a server node with maybe 5 TB of capacity for a mix of reads and writes, and if you have a write-intensive application, you can push that up to 20 TB. (DataStax has done this in its labs, says Schumacher, without any performance degradation.) Pushing the capacity up is important because it helps reduce server node count for a given amount of storage, which cuts hardware and software licensing and support costs. Incidentally, only a quarter of DSE customers surveyed said they were using spinning disks, but disk drives are fine for certain kinds of log data. SSDs are used for most transactional data, and the most latency-sensitive bits can be stored via DSE on PCI-Express flash cards, which have lower latency.
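The node-count arithmetic behind that point is simple; a minimal sketch, using the 5 TB and 20 TB per-node densities Schumacher cites above and a hypothetical 100 TB dataset with the common replication factor of three:

```python
import math

def nodes_needed(dataset_tb: float, per_node_tb: float, rf: int = 3) -> int:
    """Minimum node count to hold `dataset_tb` of unique data,
    replicated `rf` times, at `per_node_tb` usable capacity per node."""
    return math.ceil(dataset_tb * rf / per_node_tb)

# Hypothetical 100 TB of unique data at replication factor 3:
print(nodes_needed(100, 5))    # mixed read/write density: 60 nodes
print(nodes_needed(100, 20))   # write-heavy density: 15 nodes
```

Quadrupling per-node density cuts the cluster from 60 nodes to 15 for the same dataset, which is exactly the hardware and licensing saving the article describes.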
Schumacher says that in most cases the commercial-grade DSE Cassandra is used for a Web or mobile application, and a DSE cluster is not set up to host multiple applications; rather, companies have a different cluster for each use case. (As you can see is the case with Apple and Netflix.) Most of the DSE shops make use of the eventual-consistency replication features of Cassandra to span multiple datacenters with their data stores, spanning anywhere from eight to twelve datacenters with their transactional data.
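It is worth noting that Cassandra's "eventual consistency" is tunable per operation: a read touching R replicas and a write touching W replicas are guaranteed to overlap on at least one up-to-date replica whenever R + W > RF. A minimal sketch of that rule (this is standard Cassandra semantics, not DataStax-specific code):

```python
def quorum(replication_factor: int) -> int:
    """Cassandra's QUORUM level: a strict majority of replicas, RF // 2 + 1."""
    return replication_factor // 2 + 1

def is_strongly_consistent(read_cl: int, write_cl: int, rf: int) -> bool:
    """Reads see the latest write when read and write replica sets
    must overlap: R + W > RF."""
    return read_cl + write_cl > rf

rf = 3
r = w = quorum(rf)                            # 2 of 3 replicas
print(is_strongly_consistent(r, w, rf))       # True: QUORUM/QUORUM overlaps
print(is_strongly_consistent(1, 1, rf))       # False: ONE/ONE is eventual
```

Multi-datacenter deployments typically write at a local quorum and let replication to the other datacenters catch up asynchronously, which is the trade-off the article alludes to.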
Here’s where it gets interesting, and why Microsoft is relevant to DataStax. Only about 30 percent of the DSE installations are running on premises. The remaining 70 percent are running on public clouds. About half of DSE customers are running on Amazon Web Services, with the remaining 20 percent split more or less evenly between Google Compute Engine and Microsoft Azure. If DataStax wants to grow its business, the easiest way to do that is to grow along with AWS, Compute Engine, and Azure.
So Microsoft and DataStax are sharing their roadmaps and coordinating development of their respective wares, and will be doing product validation, benchmarking, and optimization. The two will be working on demand generation and marketing together, too, and aligning their compensation to sell DSE on top of Azure and, eventually, on top of Windows Server for those who want to run it on premises.
In addition to announcing the Microsoft partnership at the Cassandra Summit this week, DataStax is also releasing its DSE 4.8 stack, which includes certification for Cassandra to be used as the back end for the new Spark 1.4 in-memory analytics tool. DSE Search gets a performance boost for live indexing, and running DSE instances inside of Docker containers has been improved. The stack also includes Titan 1.0, the graph database overlay for Cassandra, HBase, and BerkeleyDB that DataStax got through its acquisition of Aurelius back in February. DataStax is also previewing Cassandra 3.0, which will include support for JSON documents, role-based access control, and a lot of little tweaks that will make the storage more efficient, DataStax says. It is expected to ship later this year.
Scott Guthrie about changes under Nadella, the competition with Amazon, and what differentiates Microsoft’s cloud products
From The cloud, not Windows 10, is key to Microsoft’s growth [Fortune, Oct 1, 2014]
- about changes under Nadella:
Well, I don’t know if I’d say there’s been a big change from that perspective. I mean, I think obviously we’ve been saying for a while this mobile-first, cloud-first…”devices and services” is maybe another way to put it. That’s been our focus as a company even before Satya became CEO. From a strategic perspective, I think we very much have been focused on cloud now for a couple of years. I wouldn’t say this now means, “Oh, now we’re serious about cloud.” I think we’ve been serious about cloud for quite a while.
More information: Satya Nadella on “Digital Work and Life Experiences” supported by “Cloud OS” and “Device OS and Hardware” platforms–all from Microsoft [‘Experiencing the Cloud’, July 23, 2014]
- about the competition with Amazon:
… I think there’s certainly a first mover advantage that they’ve been able to benefit from. … In terms of where we’re at today, we’ve got about 57% of the Fortune 500 that are now deployed on Microsoft Azure. … Ultimately the way we think we do that [gain on the current leader] is by having a unique set of offerings and a unique point of view that is differentiated.
- about uniqueness of Microsoft offering:
One is, we’re focused on and delivering a hyper-scale cloud platform with our Azure service that’s deployed around the world. …
… that geographic footprint, as well as the economies of scale that you get when you install and have that much capacity, puts you in a unique position from an economic and from a customer capability perspective …
Where I think we differentiate then, versus the other two, is around two characteristics. One is enterprise grade and the focus on delivering something that’s not only hyper-scale from an economic and from a geographic reach perspective but really enterprise-grade from a capability, support, and overall services perspective. …
The other thing that we have that’s fairly unique is a very large on-premises footprint with our existing server software and with our private cloud capabilities. …
… the technological alternative to what is given in the Wearables Trend and Supply Chain, Samsung Gear Fit as the state-of-the-art wristband wearable, i.e. the hybrid of a smartwatch and a fitness band, as a demonstration [‘Experiencing the Cloud’, May 17, 2014] post
Wearable and IOT [designreuse YouTube channel, May 2, 2014]
Featuring hands-on demonstrations of technologies and end products
EE Live! Conference & Expo, San Jose, CA – 1st April, 2014 – Imagination Technologies (IMG.L) will highlight its expertise and momentum in IoT and wearables at the EE Live! Conference and Expo, being held March 31st – April 3rd at the McEnery Convention Center in San Jose, CA.
Imagination is working closely with partners to enable creation of SoCs for IoT and wearable devices that feature extended battery life and enhanced security, as well as device and infrastructure ecosystems, all driven by the right IP solutions.
Says Kevin Kitagawa, director of strategic marketing at Imagination: “Imagination has all of the IP needed to create complete, class-leading IoT and wearable solutions, and our technologies are already powering numerous SoCs designed for these applications. Through industry initiatives such as the AllSeen Alliance, and key partners including Google, Ineda, Ingenic, Microchip Technology and others, we are building the ecosystems and technologies needed for a new generation of IoT and wearable SoCs.”
In its booth number 816 at EE Live!, Imagination will feature hands-on demonstrations and highlight many of its technologies for IoT and wearables including:
- MIPS Warrior CPUs: a highly scalable family of CPUs including the new MIPS M-class M51xx cores, which have features that make them ideal for IoT and wearables including DSP engine, small code size, hardware virtualization support and ultra-secure processing
- PowerVR GPUs: the de facto standard for mobile and embedded graphics including the new PowerVR Rogue 6XE G6050, one of the industry’s smallest OpenGL ES 3.0-compliant GPUs delivering high fillrate and exceptional efficiency—perfect for a range of high-end IoT devices
- Ensigma Series4 Explorer radio communications processors (RPUs): a unique universal and highly scalable solution for integrating global connectivity and broadcast communications capabilities into SoCs, including solutions for Wi-Fi and Bluetooth LE (Low Energy)
- FlowCloud: an application-independent technology platform for emerging IoT and cloud-connected devices, enabling rapid construction and management of device-to-device and device-to-cloud applications.
- PowerVR Series5 video processors (VPUs): the most efficient multi-standard and multi-stream video decoders and encoders, which offer a range of solutions for video intensive IoT applications such as security cameras or wearable devices such as smart glasses
- PowerVR Raptor imaging processor cores: scalable and highly-configurable solutions which join other PowerVR multimedia cores to form a complete, integrated vision platform that saves power and bandwidth for today’s camera applications and other smart sensors
- Caskeid: unique, patented technology that delivers exceptionally accurate synchronized wireless multiroom connected audio streaming for audiophile-quality stereo playback with less than 25µs synchronization accuracy
- Codescape: a complete, proven and powerful debug solution that supports the full range of MIPS CPUs, offers Linux and RTOS awareness features, and provides heterogeneous debug of SoCs using one or more MIPS and Ensigma processors
Imagination will also feature IoT and wearable related products and technologies including:
- New MIPS-based IoT development platform “Newton” from Ingenic Semiconductor, which integrates CPU, Flash, LPDDR, Wi-Fi, Bluetooth, NFC, PMU and various sensors on a single board around the size of an SD card
- Development boards for MIPS including those for Microchip Technology’s 32-bit PIC32MZ MCUs and a new complete low-cost MIPS-based Android and Linux platform for system developers
- Comprehensive development tools for all MIPS CPUs, including the latest GNU tools for Linux and bare-metal embedded systems from Mentor Graphics’ Sourcery CodeBench, and Imperas’ high-speed instruction-accurate OVP models and QuantumLeap parallel simulation acceleration technology
- Smartwatches that are shipping today based on the MIPS architecture, including the SpeedUp Smartwatch as well as those from Tomoon, HiWatch, SmartQ, Geak and others
- Toumaz’ solutions for the SensiumVitals® System, an ultra-low power wireless patch remotely managed via Imagination’s FlowCloud technology
- FlowTalk and FlowAudio – Imagination’s solutions for connected audio and cross-platform V.VoIP/VoLTE, leveraging the FlowCloud
Imagination’s vice president of strategic marketing, Amit Rohatgi, will participate in a Technology Workshop during EE Live!, “The Role of Embedded Systems in the Internet of Everything,” sponsored by the Chinese American Semiconductor Professionals Association (CASPA). The event will be held on Wednesday, April 2nd, from 5:00 p.m. – 8:00 p.m. For more information and to register, visit http://www.caspa.com/node/6349.
About Imagination Technologies
Imagination is a global technology leader whose products touch the lives of billions of people throughout the world. The company’s broad range of silicon IP (intellectual property) includes the key multimedia, communications and general purpose processors needed to create the SoCs (Systems on Chips) that power all mobile, consumer, automotive, enterprise, infrastructure, IoT and embedded electronics. These are complemented by its unique software and cloud IP and system solution focus, enabling its licensees and partners to get to market quickly by creating and leveraging highly differentiated SoC platforms. Imagination’s licensees include many of the world’s leading semiconductor manufacturers, network operators and OEMs/ODMs who are creating some of the world’s most iconic and disruptive products. See: www.imgtec.com.
Creating next-generation chips from the ground-up for wearables and IoT [Imagination Blog, April 1, 2014]
There has been a lot of momentum lately around Imagination’s initiatives and technologies focused on creating a new generation of chips built specifically for IoT and wearable use cases. We thought we’d take a moment to fill you in.
Today, low-end IoT devices and wearables typically use multiple general purpose chips to achieve microcontroller, sensor and radio functionality, leading to expensive, compromised solutions. At the high end, devices such as smartwatches use existing smartphone chips, leading to overpowered, expensive devices.
The solution from Imagination
To reach the incredible volumes predicted by analysts, SoCs for wearable devices and IoT must be designed from the ground-up. Working with our partners, Imagination is enabling the design of new chips that extend battery life, enhance data and device security and feature the right CPU, graphics, video and multi-standard connectivity solutions. We’re also focused on building the needed standards, operating environments, and other ecosystem technologies to support these chips.
Imagination is proud to already have our IP in such SoCs, and our customers are giving us great feedback on our wearables roadmap. Together with industry initiatives such as the AllSeen Alliance or the cool new Android Wear from Google, and key partners including Ineda Systems, Ingenic Semiconductor, Microchip Technology and others, we are taking a leading role in building the ecosystems and technologies needed for a new generation of SoCs.
Extending battery life
With the always-on requirement for sensors in most wearables and IoT devices, together with their tiny form factors, battery life is a more critical concern for designers than ever before. Using power and area efficient silicon IP is therefore a must.
In wearable and IoT applications that require a CPU, an intelligent hierarchy of CPUs optimized for specific tasks can lead to extremely low power consumption. For example, an SoC can use a MIPS CPU such as a new Warrior M-class core, which achieves the highest CoreMark/MHz scores for MCU-class processors, to perform the function of monitoring sensors and also to manage the connectivity peripherals. When the SoC needs to process or analyze data, the system can wake up other CPUs in the system to perform their dedicated tasks. Such an implementation offers key benefits for extending battery life in wearables and IoT devices.
Ineda, a developer of low-power SoCs, is uniting various Imagination IP cores in its ultra-low power Wearable Processing Units (WPUs) designed to reduce power consumption in a variety of devices, including fitness bands, smartwatches and IoT. With unique combinations of Imagination’s MIPS CPUs and highly efficient PowerVR GPUs, the new Ineda WPUs represent one of the first SoC architectures built specifically for this new generation of devices.
Ineda Systems’ WPUs will address wearable platforms from the ground up
Enhancing data and device security
As more and more devices are connected to the cloud and each other, security becomes an ever-growing concern. Imagination has the right IP for public key infrastructure and crypto functions needed to provide trusted execution environments, secure boot, secure code updates, key protection, device authentication and IP/transport layer data security to transmit data to the cloud. Virtualization and security features across the range of MIPS Series5 Warrior CPU cores make them ideal for meeting next-generation security needs.
In space-constrained, low-power systems such as IoT or wearable devices, a virtualization based approach could be used to implement a multiple-guest environment where one guest running a real-time kernel manages the secure transmission of sensor data, while another guest, under RTOS control, can provide the multimedia capabilities of the system. For applications that demand an even higher level of security, the new MIPS Warrior M-class cores include tamper resistant features that provide countermeasures to unwanted access to the processor operating state. A secure debug feature increases the benefit by preventing external debug probes from accessing and interrogating the core internals.
MIPS M51xx CPUs support multiple guest operating systems
Driving new ecosystems and standardization efforts
Due to the small device size, as well as the new and different functionality required in emerging IoT and wearable devices, much of the device and infrastructure ecosystems will be different from what’s needed for smartphones and other connected products. This includes standards in the areas of APIs, device-to-device communications, data analytics, device authentication, low-power connectivity and protocols, and even operating environments, all of which are critical to driving consumer and industry adoption.
At Imagination we are partnering with Google and other industry players on Android Wear, a project that extends Android to wearables, beginning with smartwatches. Already a strong player in the Android ecosystem, MIPS is one of the three CPU architectures fully supported by Google in each Android release, including the latest Android 4.4 KitKat.
Images from the Android Wear Developer Preview site
To drive ecosystem development for IoT, we’ve also recently joined the AllSeen Alliance, which has been formed to create an open, universal development framework to drive the widespread adoption of products, systems and services that support IoT. The goal is to enable companies and individuals to create interoperable products that can discover, connect and interact directly with other nearby devices, systems and services regardless of transport layer, device type, platform, operating system or brand.
Imagination’s own application-independent FlowCloud technology platform enables rapid construction and management of M2M connected services. Designed to address the needs of emerging IoT and cloud-connected devices, FlowCloud enables easy product registration and updates as well as access to partner-enabled services including FlowAudio, a cloud-based music and radio service that includes hundreds of thousands of radio stations, on-demand programs, podcasts and more. Imagination intends for FlowCloud to be easily integrated with products using the AllSeen Alliance framework.
Imagination’s FlowCloud enables device-centric services including registration, security, storage, notifications, updates and remote control
Flexible, multi-standard connectivity
Wearables and IoT devices today use existing connectivity standards, such as Wi-Fi or Bluetooth LE (Low Energy), but new standards, such as ultra-low power Wi-Fi extensions, are still in development. This means that choosing future-proofed, flexible solutions is a must for companies who want to create a product today that will still be viable when new standards are ratified.
Imagination’s programmable, multi-standard Ensigma radio processors (RPUs) can accommodate such emerging standards with a powerful and uniquely optimized balance of programmability and hardware configurability, delivering impressive functionality in compact silicon area.
The right IP for the application
Imagination’s IP is already integrated into wearable and IoT products that are shipping today. This includes a number of smartwatches that leverage the MIPS architecture and smart glasses with PowerVR graphics and video.
Imagination’s IP is already integrated into wearable products such as the SpeedUp Smartwatch, the world’s first Android 4.4 KitKat smartwatch
For example, Ingenic Semiconductor is offering a new MIPS-based IoT development platform called Newton. The Ingenic Newton platform integrates a MIPS-based XBurst CPU, multimedia (2D graphics, multi-standard VPU), low-power memory (mobile DDR3/DDR2/LPDDR and flash), 4-in-1 connectivity (Wi-Fi, Bluetooth, NFC, FM) and various sensors on a single board around the size of an SD card (find out more about Ingenic Newton here).
In addition, MIPS-based 32-bit PIC32MZ MCUs from Microchip Technology [all details are given here in the 2nd half of this post] are ideal for a number of wearable and IoT applications.
For designers of next-generation SoCs, Imagination’s broad IP portfolio offers scalable solutions for their specific application. This includes our MIPS Series5 Warrior CPUs including the new MIPS M-class M51xx cores, PowerVR Rogue GPUs including the PowerVR G6050, Ensigma Series4 Explorer RPUs with solutions for Wi-Fi, Bluetooth LE and more, PowerVR Series5 video processors (VPUs), PowerVR Raptor imaging processor cores, our unique Caskeid audio synchronization technology, and of course FlowCloud.
MIPS Powered Wearables from Imagination Technologies [RCR Wireless News YouTube channel, Jan 15, 2014]
Smart watches: The first wave of wearable and connected devices integrating Imagination IP [Imagination Blog, Jan 27, 2014]
Over the past few months, we’ve seen a new wave of announcements related to Internet of Things (IoT) and other ultra-portable devices integrating Imagination IP. One of the biggest buzz words right now is wearable devices; there were several wearable concepts introduced at CES 2014, covering any and every use case, from augmented and virtual reality or entertainment to fitness, health, and many more.
At Imagination, we are well prepared to deliver innovative hardware and software IP that has been specifically designed to address the rapid growth in demand for these applications. Imagination is the only IP company that can deliver a full suite of low-power, feature-rich technologies encompassing CPU, graphics, video, vision, connectivity, cloud services and beyond. Our market-leading PowerVR GPUs and VPUs, efficient MIPS CPUs, innovative Ensigma RPUs and other IP solutions create the perfect foundation for developing new processors for ultra low-power wearables that will soon find their way into a myriad of devices such as smart watches, health and fitness devices and more.
MIPS and smart watches
One of the companies that have been at the forefront of innovation in the mobile and wearable market is Ingenic. Their MIPS-based XBurst SoC is an innovative MIPS32-based apps processor which redefines the performance and power consumption criteria for modern embedded SoCs.
Among the recent design wins, one interesting use case for the MIPS architecture is the smart watch. There were several smart watch designs on display on our booth at CES 2014; this article is a quick summary of what we and our partners were showcasing on the show floor.
- The GEAK smart watch runs stock Android 4.1 out of the box, can be used to monitor your heartbeat and blood pressure, and acts as a pedometer or smartphone remote to snap pictures. The GEAK smart watch is a water-resistant (IP3X) device and comes with a 1.55″ color IPS screen.
- The NextONE smart watch from YiFang Digital uses the Android 4.1 OS to create an open architecture system that can run any verified third party applications. The smart watch is customizable to every aspect of a user’s life, from communicating with work and friends to health and fitness. The NextONE smartwatch improves the smartphone experience by making the information a user wants accessible at any time.
- Tomoon T-Fire is another exciting smart watch design coming out of China. It has an innovative curved E-ink screen measuring 1.73″, it runs Android 4.3 and is expected to ship soon. It currently comes in three colors and promises to deliver on the fitness front, with a trio of sensors (gyroscope, g-sensor, compass).
- SmartQ Z Watch promises to deliver an incredible standby time, can record motion data and even analyzes the quality of your sleep. It provides good water resistance, can pair up with your smartphone and tablet and doubles as an MP3 player too.
The smart wearables of the future
Wearable electronics cannot accommodate the larger batteries of their bigger counterparts (smartphones, tablets) so ultra-portable devices must use SoCs that have low power consumption. Because our technologies have been built around efficiency, we can help our partners design highly competitive solutions that enable them to achieve design wins in multiple markets. Companies looking for proven, low power multimedia and connectivity IP can rely on Imagination to provide the building blocks for IoT-ready chips.
A recent example is Ineda, which has licensed PowerVR GPU and MIPS CPU IP to design system-on-chip solutions for portable consumer electronics like wearable devices. Ineda CEO Dasaradha Gude says that Imagination's IP cores not only provide the power efficiency required for wearable devices to succeed but also accelerate time to market, since Imagination supplied everything Ineda needed, simplifying the integration work.
Smart glasses: The first wave of wearable and connected devices integrating Imagination IP [Imagination Blog, Jan 23 2014]
Over the past few months, we’ve seen a new wave of announcements related to Internet of Things (IoT) and other ultra-portable devices integrating Imagination IP. One of the biggest buzzwords right now is wearable devices; there were several wearable concepts introduced at CES 2014, covering use cases from augmented and virtual reality and entertainment to fitness, health, and many more.
At Imagination, we are well prepared to deliver innovative hardware and software IP that has been specifically designed to address the rapid growth in demand for these applications. Imagination is the only IP company that can deliver a full suite of low-power, feature-rich technologies encompassing CPU, graphics, video, vision, connectivity, cloud services and beyond. Our market-leading PowerVR GPUs and VPUs, efficient MIPS CPUs, innovative Ensigma RPUs and other IP solutions create the perfect foundation for developing new processors for ultra low-power wearables that will be soon find their way into a myriad of devices such as smart watches, health and fitness devices and more.
PowerVR and smart glasses
An example of a type of wearable device that has benefited from Imagination’s IP is smart glasses. Google Glass was the first: featuring a Texas Instruments OMAP4430 processor with a PowerVR SGX540 GPU, Glass is able to take pictures, record videos, search the internet, and navigate maps.
But in the hands of ingenious developers, it can do so much more. For example, a recent article in the MIT Technology Review highlights an app that can recognize objects in front of a person wearing a Google Glass device.
And Google Glass was not alone: there were multiple PowerVR-based smart glasses introduced at CES 2014:
- Recon Instruments introduced Snow2, an iPhone-connected HUD (Heads-Up Display) for winter sports. The Recon Snow2 project is a collaboration between Recon and Oakley and can be found as a complete kit called Oakley Airwave 1.5. Recon, however, is working with multiple companies to build several products that are tuned to their requirements. Recon Snow2 features an integrated GPS and can display your speed, altitude, and location, and act as a navigation instrument. For example, there is an iOS app that allows you to share your position on a map and locate your friends or family on the slopes.
- XOne is the first product from startup XOEye Technologies and took five years to design. XOne is a pair of safety glasses designed to improve efficiency and enhance safety for skilled labor jobs. The glasses rely entirely on audio and LEDs to communicate messages to the wearer. XOne integrates two 5MPx cameras (one inside each lens), speakers and a microphone, a gyroscope, and an accelerometer; the system is powered by a TI OMAP 4460 processor, running a custom version of Linux designed for enterprise use.
- The Vuzix M100 is one of the first commercially available smart glasses. It is an Android-based wearable computer, featuring a monocular display, recording features and wireless connectivity capabilities. The Vuzix M100 has been designed to cover a range of applications; powerful, small and light, the M100 is well suited to a variety of industrial, medical, retail and prosumer users.
- The Epson Moverio BT-200 smart glasses are designed for users who like to enjoy their multimedia and do their gaming on a pair of glasses. Epson have put a lot of effort into integrating the technology (an OMAP processor) with the physical design. Even better, the smart glasses run Android 4.0.4 and apps from the Epson store; another unique feature is how users interact with the device, which is mainly done via a hand-held touchpad controller wired to the glasses. Epson has been named a 2014 CES Innovations Awards honoree in wearable tech for its Moverio BT-200 smart glasses.
- Lumus generated a lot of attention around its DK-40 wearable smart glasses at CES. They were very eager to show off the new developer unit in public focusing on how the monocular headset overlays a full VGA digital image over the right eye instead of using a small window for notifications. Lumus DK-40 runs Android, includes an OMAP processor and comes in multiple colors.
I hope you’ve enjoyed our recap of some of the most interesting smart glass designs revealed at CES 2014. If you are interested in this category of devices and want to know more about the wearable gadgets that use our IP, make sure you follow us on Twitter (@ImaginationPR) and keep coming back to our blog.
Imagination and Google partner up for Android Wear and the wearable revolution [Imagination Blog, March 24, 2014]
Earlier this week Google announced a developer preview of Android Wear, a mobile operating system designed to extend the Android experience to wearable devices. This initiative will help jumpstart developers building innovative applications specifically targeting the next generation of wearables. The initial focus is on the smartwatch space and leverages the rich notification APIs already defined in Android. Android Wear extends the Android platform to wearables, starting with a familiar form factor — watches. Download the developer preview at: developer.android.com/wear
Google is using this developer preview to give app developers the chance to experiment with enhanced notifications (e.g. weather, sports scores, navigation, etc.) for their applications to display on the smaller screen of smartwatches. For example, Android Wear supports notifications on a watch similar to how Google Now displays notifications on the smartphone. The next step for Google is to publish a full SDK that allows app developers to create complete, smartwatch-centric applications.
Delivering the ultimate wearable experience with MIPS processor IP
Imagination has been a pioneer in delivering ultra-low power technologies across its entire IP portfolio. Following the acquisition of MIPS, one of the first things we did was to scrutinize all the CPUs from low end to high end to ensure we applied our leadership in low power design to MIPS CPUs. As a result, we believe MIPS is the ideal CPU for wearables, enabling our partners to build some of the most innovative solutions around for this growing market.
This year at MWC, wearables-focused startup Ineda demonstrated its ultra-low power Wearable Processor Unit (WPU) SoCs which deliver exceptional low power consumption. Ineda’s SoC devices integrate multiple IP processors from Imagination, including MIPS CPUs and PowerVR GPUs. Also, SpeedUp Technology announced its first wearable technology product, the SpeedUp SmartWatch, a revolutionary wearable device which incorporates an ultra-low power MIPS-based CPU from Ingenic.
Imagination is a Google launch partner for Android Wear – something we’re pretty proud of. Already a strong player in the Android ecosystem, Imagination’s MIPS architecture is one of the three CPU architectures fully supported by Google in every Android release including the latest Android 4.4 KitKat.
All MIPS CPUs are optimized to offer the best Android experience on smartphones, tablets, wearables and other mobile devices
Low power, high performance MIPS CPUs already power billions of products around the globe. Thanks to a flexible architecture that scales from entry-level 32-bit embedded processors to some of the industry’s highest performing 64-bit CPUs, MIPS CPUs pave the way for next-generation embedded designs, including a growing presence in wearables. The Series5 Warrior generation includes two new processors (MIPS M5100 and M5150) that provide key features ideal for wearables such as a high-performance DSP engine, small code size, virtualization, and ultra-secure processing. All Series5 Warrior CPUs deliver industry-leading CoreMark performance in a very efficient area and power envelope.
Look for a MIPS-based smartwatch in a store near you
Several of our licensees are working very hard to deliver MIPS-based, Android Wear-compliant devices that will be available in the market once the operating system is officially released.
By being a launch partner, we will work very closely to ensure that Android Wear will be optimized for MIPS CPUs as well as our other IP technologies such as PowerVR graphics, video and vision, and Ensigma RPUs.
The list of members in the Android Wear alliance includes several leading consumer electronics manufacturers (Asus, HTC, LG, Motorola and Samsung), chip makers (Broadcom, Intel, Mediatek and Qualcomm) and fashion brands (the Fossil Group), all keen to bring you watches powered by the new operating system later this year.
The list of official Android Wear partners
For more info about Android Wear and what was announced, visit:
- Google and Android Developers blog posts on the Android Wear Announcement http://officialandroid.blogspot.com/
- New Google Developers website: http://developer.android.com/wear
- Android Wear on the commercial website: www.android.com/wear
I. Microchip Technology
From: IoT Era excites Semiconductor Players [Electronics Maker, May 6, 2014]
(companies other than Microchip Technology are covered in the Wearables Trend and Supply Chain, Samsung Gear Fit as the state-of-the-art wristband wearable, i.e. the hybrid of a smartwatch and a fitness band, as a demonstration [‘Experiencing the Cloud’, May 17, 2014] post)
… (see in Wearables Trend and Supply Chain, Samsung Gear Fit as the state-of-the-art wristband wearable, i.e. the hybrid of a smartwatch and a fitness band, as a demonstration [‘Experiencing the Cloud’, May 17, 2014])
Microchip Technology [https://www.facebook.com/microchiptechnology]
Mike Ballard, Senior Manager, Home Appliance Solutions Group, Microchip Technology Inc.
Microchip has many devices that are well suited to enable IoT functionality, such as 8-, 16- and 32-bit PIC® microcontrollers, analog, mixed-signal, memory, and embedded Wi-Fi® and Bluetooth® modules. In addition, IoT designers can take advantage of Microchip’s flexible development environment, broad connectivity solutions and product longevity.
Microchip is so broad based, with 80,000+ global customers, that we do not see any singular market or application that will drive our growth in IoT. Our customer value proposition is that we provide a very broad embedded portfolio, including both the hardware and software solutions to help companies create their IoT products.
Microchip has a significant number of products that fit well into the IoT markets. We have close relationships with our customers and have been incorporating these technologies into our products, based on their feedback. Technologies such as XLP in our MCUs (which enables low-power designs), Wi-Fi Modules (Microchip offers two approaches, giving customers flexibility), and power-measurement devices, all enable our customers to meet their design and cost goals. In addition, we have been acquiring companies and technologies to ensure that we continue to meet these markets’ needs today and in the future.
What is Deep Sleep [MicrochipTechnology YouTube channel, April 22, 2009], a mode in which power consumption can be as low as 20 nA, allowing years of operation on a single battery:
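That headline figure can be sanity-checked with a back-of-the-envelope calculation. The coin-cell capacity, run-mode current and duty cycle below are my own illustrative assumptions, not Microchip specifications:

```python
# Sanity-check the "years on a single battery" claim for a 20 nA sleep current.
# Assumed figures (hypothetical, for illustration): a CR2032 coin cell with
# ~225 mAh capacity, and an MCU that wakes for 10 ms of work each minute.

CR2032_MAH = 225.0          # typical coin-cell capacity, mAh
SLEEP_NA = 20.0             # Deep Sleep current from the video, nA
ACTIVE_MA = 5.0             # assumed run-mode current, mA
ACTIVE_MS_PER_MIN = 10.0    # assumed 10 ms of activity per minute

# Average current in mA, weighted by the time spent in each state.
active_frac = ACTIVE_MS_PER_MIN / 60000.0
avg_ma = ACTIVE_MA * active_frac + (SLEEP_NA * 1e-6) * (1 - active_frac)

years = CR2032_MAH / avg_ma / (24 * 365)
print(f"average current: {avg_ma * 1000:.2f} uA, battery life: {years:.1f} years")
```

Under these assumptions the average draw is under 1 µA and the battery lasts roughly three decades on paper, so even with self-discharge and less favorable duty cycles, multi-year operation is plausible.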
Our Home Appliance Solutions Group can help you implement the new features and functionality needed for your next design. This short video introduces you to our Induction Cooktop Reference Design, which can significantly shorten your design cycle: http://mchp.us/1hI8kip
Induction Cooktop Reference Design [MicrochipTechnology YouTube channel, Dec 5, 2013]
microchip.com/appliance: Home Appliance
Appliance manufacturers face numerous challenges in today’s ever-changing global market. Government regulations, customer expectations, competitive forces and application innovations are fueling the integration of new technologies into many appliances. Bringing these technology advancements to market can be even more challenging with shorter deadlines, the pressure to maintain and grow market share and the constant need to innovate. In addition, finding partners with technical solutions to enable these goals can be daunting and drain your resources.
Microchip Technology can help you implement the new features and functionality required for your next appliance design. By providing Microchip’s solutions for user interface, motor control, sensing, connectivity and more, your design teams can focus on implementing the application.
Microchip’s cost-effective tools enable your design to reach the market faster. Our free, award-winning MPLAB® X Integrated Development Environment (IDE) provides a single development platform for all of our 8-, 16- and 32-bit microcontrollers and 16-bit Digital Signal Controllers (DSCs). Microchip makes it easy to develop your code and migrate to higher-performance solutions as needed. Learning curves are minimized even when changing cores to gain additional features, more code space or more computing power.
MIPS MCUs Outrun ARM [Processor Watch from The Linley Group, Feb 18, 2014]
Author: Tom R. Halfhill
Microchip’s newest 32-bit microcontrollers not only match the features of their Cortex-M4 competitors but also achieve higher EEMBC CoreMark scores. The new PIC32MZ EC family is powered by a MIPS microAptiv CPU core running at 200MHz—a speed demon by MCU standards.
These MCUs have more memory than comparable chips (up to 2MB of flash and 512KB of SRAM) plus Ethernet, Hi-Speed USB 2.0, an LCD interface, and a cryptography accelerator. An early sample scored 654 CoreMarks—the highest EEMBC-certified score for any 32-bit MCU executing from internal flash memory.
Microchip’s earlier PIC32MX family uses the smaller MIPS32 M4K core running at a maximum clock speed of 100MHz. The microAptiv CPU in the new family not only runs twice as fast but also supports the microMIPS 32-bit instruction-set architecture. MicroMIPS combines 16- and 32-bit instructions to achieve better code density than previous MIPS32 cores or even Cortex-M cores using 16/32-bit Thumb-2 instructions. Microchip claims the PIC32MZ family has 30% better code density than similar ARM-based MCUs. Also, microAptiv adds 159 new signal-processing instructions.
The PIC32MZ family is designed for high-end controller applications, such as vehicle dashboard systems, building environmental controls, and consumer-appliance control modules. Some PIC32MZ chips will begin volume production in March, and the remainder by mid-year. Prices for 10,000-unit volumes will range from $6.68 to about $10—relatively expensive for MCUs but reasonable for the performance and features.
Leading performance and superior code density for new microAptiv-based PIC32MZ 32-bit MCU family from Microchip [Imagination Blog, Nov 25, 2013]
Although mainly known for our leadership position in CPU IP for digital home and networking, the MIPS architecture has recently seen rapid growth in the 32-bit microcontroller space thanks to the expanding list of silicon partners that are offering high-performance, feature-rich and low-power solutions at affordable price points.
The most recent example of our expansion into MCUs is the 200MHz 32-bit PIC32MZ family from Microchip. PIC32MZ MCUs integrate our microAptiv UP CPU IP core which enables Microchip to offer industry-leading performance at 330 DMIPS and 3.28 CoreMark™/MHz.
The PIC32MZ comes fully loaded with up to 2MB of Dual-Panel Flash with Live Update, 512KB SRAM and 16KB Instruction cache and 4KB data cache memories. This newest family in the PIC32 portfolio also offers a full suite of embedded connectivity options and peripherals, including 10/100 Ethernet MAC, Hi-Speed USB MAC/PHY (a first for PIC® MCUs), audio, graphics, crypto engine (supporting AES, 3DES, SHA) and dual CAN ports, all vital in supporting today’s complex applications.
By transitioning to the new MIPS microAptiv core, the PIC32MZ family offers a more than 3x increase in performance and better signal processing capabilities over the previous M4K-based PIC32MX families. In addition, the microAptiv core includes an Instruction Set Architecture (ISA) called microMIPS that reduces code size by up to 30% compared to executing 32-bit only code. This enables the PIC32MZ to load and execute application software in less memory.
The MIPS microAptiv family is available in two versions: microAptiv UC and microAptiv UP. microAptiv UC includes an SRAM controller interface and Memory Protection Unit designed for use in real-time, high-performance, low-power microcontroller applications that are controlled by a Real-Time OS (RTOS) or application-specific kernel. microAptiv UP contains a high-performance cache controller and Memory Management Unit, which enables it to be designed into Linux-based systems.
Why choose MIPS32-based CPU IP for your MCUs?
MIPS-based MCUs are used in a wide and very diverse set of applications including industrial, office automation, automotive, consumer electronic systems and leading-edge technologies such as wireless communications. Furthermore, we’ve recently seen growing demand from the wearable and ultra-portable market; companies targeting these markets are looking to silicon IP providers like Imagination to deliver performance and power efficient solutions that can be easily integrated in fully-featured products.
CPU IP cores for microcontrollers need to be all-round flexible designs that are able to deliver higher levels of performance efficiency, improved real-time response, lower power and a broad tools and developer ecosystem. And the requirements continue to grow, especially with the new challenges presented by designing for the Internet of Things: better security, the ability to create more complex RTOS-controlled software and the ability to support a growing number of interfaces.
The microAptiv and future MIPS Series5 ‘Warrior’ M-class cores are perfectly positioned to provide an ideal 32-bit MCU solution for these next-generation applications. We understand that picking the right processor architecture is a key decision criterion to achieving performance, cost and time-to-market objectives in a MCU product. This is why we’ve made sure that the MIPS32 architecture enables our partners to design higher performance, lower power solutions with more advanced features and superior development support.
In the words of Jim Turley from his “Micro-Super-Computer-Chip” article in EE Journal: “With sub-$10 chips and sub-$150 computer boards, it looks like MIPS took over the world after all.”
We will be demonstrating the PIC32MZ on a Microchip multimedia board at the Embedded World 2014 event (February 25th – 27th) in Nürnberg, Germany, so make sure you drop by our booth if you are attending the conference. In the meantime, follow us on Twitter (@ImaginationPR and @MIPSGuru) for the latest news and announcements from Imagination and its partners.
Microchip’s PIC32MZ 32-bit MCUs Have Class-Leading Performance of 330 DMIPS and 3.28 CoreMarks™/MHz; 30% Better Code Density [Microchip press release, Nov 18, 2013]
New 24-Member Family Integrates 2 MB Flash, 512 KB RAM,
28 Msps ADC, Crypto Engine, Hi-Speed USB,
10/100 Ethernet, CAN and Many Serial Channels
Microchip Technology Inc., a leading provider of microcontroller, mixed-signal, analog and Flash-IP solutions, today announced the new 24-member PIC32MZ Embedded Connectivity (EC) family of 32-bit MCUs. It provides class-leading performance of 330 DMIPS and 3.28 CoreMarks™/MHz, along with dual-panel, live-update Flash (up to 2 MB), large RAM (512 KB) and the connectivity peripherals—including a 10/100 Ethernet MAC, Hi-Speed USB MAC/PHY (a first for PIC® MCUs) and dual CAN ports—needed to support today’s demanding applications. The PIC32MZ also has class-leading code density that is 30% better than competitors, along with a 28 Msps ADC that offers one of the best throughput rates for 32-bit MCUs. Rounding out this family’s high level of integration is a full-featured hardware crypto engine with a random number generator for high-throughput data encryption/decryption and authentication (e.g., AES, 3DES, SHA, MD5 and HMAC), as well as the first SQI interface on a Microchip MCU and the PIC32’s highest number of serial channels.
View a brief presentation: http://www.microchip.com/get/1WEC
Embedded designers are faced with ever-increasing demands for additional features that require more MCU performance and memory. At the same time, they are looking to lower cost and complexity by utilizing fewer MCUs. The PIC32MZ family provides 3x the performance and 4x the memory over the previous-generation PIC32MX families, along with a high level of advanced peripheral integration. For applications requiring embedded connectivity, the family includes Hi-Speed USB, Ethernet and CAN, along with a broad set of wired and wireless protocol stacks. Many embedded applications are adding better graphics displays, and the PIC32MZ can support up to a WQVGA [400×240] display without any external graphics chips. Streaming/digital audio applications can take advantage of this family’s 159 DSP instructions, large memory, peripherals such as I2S, and available software.
Field updates are another growing challenge for design engineers and managers. The PIC32MZ’s 2 MB of internal Flash enables live updates via dual independent panels that provide a fail-safe way to conduct field updates while operating at full speed.
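The fail-safe pattern behind dual-panel live updates is worth spelling out: the new image is written to the inactive panel while the device keeps running, it is verified, and only then does the boot selector switch. A toy model (class and method names are mine, not Microchip's API):

```python
# Toy model of a fail-safe dual-panel flash update: new firmware is written to
# the inactive panel, verified against its expected digest, and only then does
# the active-panel selector flip. A bad image never becomes bootable.

import hashlib

class DualPanelFlash:
    def __init__(self, firmware: bytes):
        self.panels = [firmware, None]   # panel 0 active, panel 1 spare
        self.active = 0

    def update(self, image: bytes, expected_digest: str) -> bool:
        spare = 1 - self.active
        self.panels[spare] = image                       # write while running
        ok = hashlib.sha256(image).hexdigest() == expected_digest
        if ok:
            self.active = spare                          # switch only on success
        return ok                                        # on failure, old panel still boots

fw_v2 = b"firmware v2"
flash = DualPanelFlash(b"firmware v1")
assert flash.update(fw_v2, hashlib.sha256(fw_v2).hexdigest())        # good update
assert not flash.update(b"corrupted", hashlib.sha256(fw_v2).hexdigest())  # rejected
print(flash.panels[flash.active])   # prints b'firmware v2': corrupt image never activated
```

The point of the two independent panels is exactly this property: a power loss or corrupted transfer mid-update leaves the previously active panel untouched and bootable.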
“Our new PIC32MZ family was designed for high-end and next-generation embedded applications that require high levels of performance, memory and advanced-peripheral integration,” said Rod Drake, director of Microchip’s MCU32 Division. “The PIC32MZ enables designers to add features such as improved graphics displays, faster real-time performance and increased security with a single MCU, lowering both cost and complexity.”
The PIC32MZ is Microchip’s first MCU to employ Imagination’s MIPS microAptiv™ core, which adds 159 new DSP instructions that enable the execution of DSP algorithms in up to 75% fewer cycles than the PIC32MX families. This core also provides the microMIPS® instruction-set architecture, which improves code density while operating at near full rate, along with instruction and data caches; at 200 MHz/330 DMIPS it offers 3x the performance of the PIC32MX.
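The workload those DSP instructions target is the multiply-accumulate (MAC) inner loop at the heart of audio filtering. As a plain-Python illustration (not MCU code, and not Microchip's API), a FIR filter reduces to exactly that loop; single-cycle MAC and saturation instructions are what make it cheap in hardware:

```python
# Plain-Python sketch of the FIR multiply-accumulate loop that DSP instruction
# sets are built to accelerate. The 4-tap moving-average coefficients are
# chosen purely for illustration.

def fir(samples, coeffs):
    """Convolve samples with coeffs, emitting one output per input sample
    once enough history has accumulated."""
    out = []
    for n in range(len(coeffs) - 1, len(samples)):
        acc = 0.0
        for k, c in enumerate(coeffs):
            acc += c * samples[n - k]   # the MAC step: one multiply, one add
        out.append(acc)
    return out

taps = [0.25, 0.25, 0.25, 0.25]         # 4-tap moving average
signal = [0, 0, 4, 4, 4, 4, 0, 0]
print(fir(signal, taps))                # → [2.0, 3.0, 4.0, 3.0, 2.0]
```

On a DSP-extended core the inner loop collapses to one MAC instruction per tap, which is where the "up to 75% fewer cycles" figure for DSP algorithms comes from.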
“Microchip is a flag-bearer for the MIPS architecture in microcontrollers, having created its performance-leading PIC32 line around MIPS. Additionally, Microchip was a valued partner in defining the feature set for the new MIPS microAptiv CPU, which is designed to fulfill next-generation application demands for increased performance and functionality,” said Tony King-Smith, EVP Marketing, Imagination Technologies. “With its new microAptiv-based PIC32MZ family, Microchip is again taking MCU performance and feature innovation to new levels. Imagination is delighted with this latest achievement of our strategic relationship with Microchip to address ever-evolving market needs.”
Microchip is making four new PIC32MZ development tools available today. The complete, turn-key PIC32MZ EC Starter Kit costs $119, and comes in two flavors to support family members with the integrated crypto engine (Part # DM320006-C) and those without (Part # DM320006). The Multimedia Expansion Board II (Part # DM320005-2) is available at the introductory rate of $299 for the first six months, and can be used with either Starter Kit to develop graphics HMI, connectivity and audio applications. The 168-pin to 132-pin Starter Kit Adapter (Part # AC320006, $59) enables development with Microchip’s extensive portfolio of application-specific daughter boards. The PIC32MZ2048EC Plug-in Module (Part # MA320012, $25) is available for existing users of the Explorer 16 Modular Development Board. For more information and to purchase these tools, visit http://www.microchip.com/get/JDVB.
Pricing & Availability
The first 12 members of the PIC32MZ family are expected starting in December for sampling and volume production, while the remaining 12, along with additional package options, are expected to become available at various dates through May 2014. The crypto engine is integrated into eight of the PIC32MZ MCUs, and there is an even split of 12 MCUs with 1 MB of Flash and 12 MCUs with 2 MB of Flash. Pricing starts at $6.68 each in 10,000-unit quantities. The superset family members and their package options are the 64-pin QFN (9×9 mm) and TQFP (9×9 mm) for the PIC32MZ2048ECH064; 100-pin TQFP (12×12 and 14×14 mm) for the PIC32MZ2048ECH100; 124-pin VTLA (9×9 mm) for the PIC32MZ2048ECH124; and 144-pin TQFP (16×16 mm) and LQFP (20×20 mm) for the PIC32MZ2048ECH144. The superset versions with an integrated crypto engine are the PIC32MZ2048ECM064, PIC32MZ2048ECM100, PIC32MZ2048ECM124 and PIC32MZ2048ECM144.
For more information, contact any Microchip sales representative or authorized worldwide distributor, or visit Microchip’s Web site at http://www.microchip.com/get/ESJG. To purchase products mentioned in this press release, go to microchipDIRECT or contact one of Microchip’s authorized distributors.
RSS Feed for Microchip Product News: http://www.microchip.com/get/E09A
Microchip’s New Cloud-Based Development Platform Now Available on Amazon Web Services Marketplace [Microchip press release, Oct 22, 2013]
Allows Embedded Engineers to Easily Connect Designs
to Amazon EC2 Instances;
Bridges Cloud and Embedded Worlds, Enabling Internet of Things
Microchip Technology Inc., a leading provider of microcontroller, mixed-signal, analog and Flash-IP solutions, today announced a simple Cloud Development Platform that is available on the Amazon Web Services (AWS) Marketplace and enables embedded engineers to quickly learn cloud based communication. Microchip’s platform provides designers with the ability to easily create a working demo that connects an embedded application with the Amazon Elastic Compute Cloud (EC2) service. At the heart of this platform is Microchip’s Wi-Fi® Client Module Development Kit (Part # DM182020), which offers developers a simple way to bridge the embedded world and the cloud, to create applications encompassing the Internet of Things.
A rapidly growing number of embedded engineers need to add cloud connectivity to their designs, but have limited experience in this area. Microchip’s new Cloud Development Platform builds designer confidence by making it quick and easy for them to get up and running on the proven Amazon EC2 cloud infrastructure.
Amazon EC2 is a Web service that provides scalable, pay-as-you-go compute capacity in the cloud. It is designed to make Web-scale computing easier for developers.
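What "connecting an embedded application to the cloud" amounts to on the wire is small: the Wi-Fi module opens a TCP connection and sends an HTTP request to the EC2 instance. The sketch below only builds such a request; the host name, path and JSON payload shape are invented for illustration and are not part of Microchip's kit:

```python
# Sketch of the device-to-cloud step: a small HTTP POST carrying a sensor
# reading, as an embedded Wi-Fi module might send it to a server running on
# an EC2 instance. Host, path and payload fields are hypothetical.

import json

def build_telemetry_request(host: str, temperature_c: float) -> str:
    body = json.dumps({"sensor": "temp", "value": temperature_c})
    return (
        "POST /telemetry HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        "Content-Type: application/json\r\n"
        f"Content-Length: {len(body)}\r\n"
        "Connection: close\r\n"
        "\r\n"
        f"{body}"
    )

req = build_telemetry_request("ec2-demo.example.com", 22.5)
print(req.splitlines()[0])   # prints: POST /telemetry HTTP/1.1
```

The value of a kit like Microchip's is that the TCP/IP stack, Wi-Fi provisioning and TLS details around this request are already handled, so the developer only supplies the application payload.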
“I view this as a huge step forward for corporations who produce embedded products, to quickly develop infrastructure and connect their devices to the cloud,” said Mike Ballard, senior manager of Microchip’s Home Appliance Solutions Group and leader of its Cloud Enablement Team. “With the vast amount of expertise and scalability provided by AWS, developers can easily customize their connectivity instances and the user’s experience.”
“With Microchip’s Wi-Fi Client Module Development Kit available via our AWS Marketplace, customers can easily learn to connect embedded products to AWS,” said Sajai Krishnan, GM, AWS Marketplace. “This is an effective step to help bridge the embedded world and the cloud.”
Pricing & Availability
Microchip’s Cloud Development Platform is available today at http://www.microchip.com/get/R837. As part of this platform, its Wi-Fi Client Module Development Kit (Part # DM182020) is available for purchase today for $99, at http://www.microchip.com/get/0D84. For additional information, contact any Microchip sales representative or authorized worldwide distributor, or visit Microchip’s Web site at http://www.microchip.com/get/ST1C. To purchase products mentioned in this press release, go to microchipDIRECT or contact one of Microchip’s authorized distribution partners.
Smart Move [Business Today [India], May 11, 2014]
Why venture funds are rushing to back Ineda, maker of chips for wearable devices.
Ineda Systems is just the sort of company you’d expect from Dasaradha R. Gude, who has spent a large part of his career in the world of processors. “We are processors” is how he describes himself and his team of nearly 200 people.
Gude, or GD as he is known to many of his colleagues and business associates, is clearly excited about the power of wearable chips. Ineda – the name is derived from ‘integrated electronics designs for advanced systems’ – designs chips for use in wearable devices.
From 2007 to 2010, Gude was Corporate Vice President at Advanced Micro Devices (AMD) Inc, and later Managing Director at AMD India. He founded Ineda in 2011, and members of his team have previously worked in global companies such as AMD and Intel. He says: “They are people with courage to leave big companies and step out to do something innovative.”
To his customers, he plans to offer chips in sizes of five, seven, nine and 12 square millimetres, which can fit into wearable devices such as smart watches, health and fitness trackers, and pretty much anything that needs to be connected to the emerging ‘Internet of things’ which allows users to monitor connected devices from a long distance.
He promises chips that not only go easy on battery life, but also versions that can provide a range of features, almost like a smartphone. He says his potential customers are leaders in wearable technology, who would need tens of millions of chips a year, and this would bring his costs down.
The going has been good so far for Ineda. The company has just received funding from the US-based Walden Riverwood Ventures, from the venture capital arms of Samsung and Qualcomm, and a UK-based research and development company called Imagination Technologies. The total funding is to the tune of $17 million or Rs 103 crore, and Gude intends to use the money to ensure that the chips attain stability for mass production. In April 2013, Ineda raised $10 million (more than Rs 60 crore), with Imagination Technologies as the lead investor.
The chips will be manufactured in Taiwan, and Gude is in talks with about two dozen potential customers, big names in the wearable technology market such as Nike and Fitbit. “Because we have a unique proposition and will need huge volumes, we are talking to the really big guys,” he says.
Clearly, wearable technology is a growing market. Gude says it is already worth a couple of billion dollars globally, and is expected to be a $10-billion industry by 2016. Everyone, from Google to Intel to fitness companies, has its eye on this market. For instance, Theatro, a US-based company, is developing voice-controlled wearable computers for the retail and hospitality segments of the enterprise market. It emerged from stealth mode in December 2013 when it announced its product’s commercial availability and relationship with its first customer, The Container Store. Its tiny 35-gm WiFi-based wearable device enables voice-controlled human-to-human interaction (one-to-one, group and store-to-store) and replaces two-way radios. It also enables voice-controlled human-to-machine interaction with, say, in-store systems for inventory, pricing and loyalty programmes. Another potential use is in-store employee location-based services and analytics.
There is so much excitement about wearable technology that some companies are even crowdsourcing ideas. For instance, Intel has launched its ‘Make It Wearable’ challenge, which offers prize money to the best real-world applications submitted by designers, scientists and innovators.
So Ineda’s chips could be used in devices such as Google Glass, smart watches, and Nike’s FuelBand. And when does Ineda expect its chips to become commercially available? “End of this year or by the first quarter of 2015,” says Gude.
He says that at the moment, he has no direct competitor with whom he can do an apples-to-apples comparison. His rivals are either too big and expensive, or too small with few functionality options. He positions Ineda somewhere in between in terms of functionality and price. How the market will respond remains to be seen, but investors are clearly interested.
Ineda Systems Delivers Breakthrough Power Consumption for Wearable Devices and the Internet of Things [press release, April 8, 2014]
Extends Battery Life for Wearable Devices Up to a Month
Ineda Systems, a leader in low-power SoCs (systems on a chip) for use in both consumer and enterprise applications, today announced its Dhanush family of Wearable Processing Units (WPU™). The Dhanush WPU family supports a large range of wearable devices including fitness bands, smart watches, glasses, athletic video recorders and the Internet of Things. The Dhanush WPUs will enable a new industry milestone for always-on battery life of up to one month.
The Dhanush WPU is powered by Ineda’s patent pending Hierarchical Computing architecture. Dhanush is sampling to tier-one customers now, and will be available in volume production in the second half of 2014.
The Hierarchical Computing architecture, along with low power, high-performance MIPS-based microprocessor cores and PowerVR mobile graphics and video processors, enable the Dhanush WPU to offer leading performance with unprecedented low power consumption. The Dhanush family of SoCs also supports a scalable range of connectivity from Bluetooth LE through Bluetooth and Wi-Fi to address a range of applications.
“The Ineda engineering team in India has developed an innovative, low-power architecture designed specifically for wearable devices,” said Dasaradha Gude, CEO of Ineda Systems.
“The Dhanush family of WPUs offers better power consumption by an order of magnitude than smart phone processors that are currently being retrofitted for wearable devices.”
“The smart phone market grew substantially with the advent of smartphone-specific dedicated application processors. Dhanush WPU SoCs will enable a similar transformation in the wearable market segment,” Gude added.
The Dhanush WPU is an industry-first wearable SoC that addresses all the needs of the wearable device market. It features Hierarchical Computing architecture that allows applications and tasks to run at the right power-optimized performance and memory footprint and has an always-on sensor hub optimized for wearable devices. The Dhanush WPU family consists of four products – Nano, Micro, Optima and Advanced – which are designed for specific applications and product segments. Each of these products will aim to provide 30-day always-on battery life, up to 10x power consumption reduction compared to the current generation of application processors and be available at consumer price points.
“Ineda Systems is bringing the first wearable-specific chipset design to market,” said Chris Jones, VP and principal analyst at Canalys. “Strict power constraints are the greatest technological challenge for smart wearables, and Ineda is the first company taking this challenge truly seriously at the SoC level with Dhanush. Always-on sensor functionality is also critical and inherent to its design.”
The Dhanush family of SoCs comes in four different tiers that are designed for specific implementations:
- Dhanush Advanced: Designed to include all the features required in a high-end wearable device – rich graphic and user interface – along with the capability to run a mobile class operating system such as Android™.
- Dhanush Optima: This is a subset of the Dhanush Advanced and retains all the same features except the capability of running a mobile class operating system. It offers enough compute and memory footprint required to run mid-range wearable devices.
- Dhanush Micro: Designed for use in low-end smartwatches that have increased compute and memory footprint. This contains a sensor hub CPU subsystem that takes care of the always-on functionality of wearable devices.
- Dhanush Nano: Designed for simple wearable devices that require microcontroller-class compute and memory footprint.
Hierarchical Computing Architecture
Hierarchical Computing is a tiered multi-CPU architecture with shared peripherals and memory. This architecture allows multiple CPUs to run independently and together to create a unified application experience for the user – allowing optimal use of CPUs per use-case for power efficient performance.
With Hierarchical Computing, all the CPUs can be individually or simultaneously active, working in sync while handling specific tasks assigned to them independently. Based on the mode of operation and the applications being used, the corresponding CPU is enabled to provide optimal performance at optimal power consumption. Resource sharing further enables Hierarchical Computing to work on the same hardware resources at different performance and power levels.
Ineda’s reference design, SDK and APIs enable OEMs and third-party application developers to seamlessly realize the benefits of the Hierarchical Computing architecture and provide a better user experience for their end products.
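Ineda has not published the Dhanush scheduler, so the tiers, power figures, and selection policy in the sketch below are invented purely to illustrate the idea described above: route each workload to the lowest-power CPU tier that can handle it, so the high-power cores stay asleep most of the time.

```python
# Illustrative sketch only: tier names echo the Dhanush product line, but
# the demand limits and wattages are made-up numbers for explanation.

# CPU tiers ordered from lowest to highest power draw, each with the
# maximum workload "demand" it can serve (arbitrary units).
TIERS = [
    ("sensor-hub", 1, 0.001),   # (name, max_demand, watts)
    ("micro",      4, 0.010),
    ("optima",     8, 0.050),
    ("advanced",  16, 0.250),
]

def pick_tier(demand):
    """Return the lowest-power tier able to serve the workload."""
    for name, max_demand, watts in TIERS:
        if demand <= max_demand:
            return name, watts
    raise ValueError("workload exceeds all tiers")

# Always-on sensor polling stays on the tiny sensor-hub core; a
# rich-UI task wakes the high-power core only for as long as needed.
print(pick_tier(1))
print(pick_tier(10))
```

The power win comes from the asymmetry: if ninety-odd percent of wake-ups are low-demand sensor events, they never touch the application-class core at all.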
Ineda Systems plans to begin producing its WPU this year and will offer multiple SoC variations that will correspond with a specific class of wearable device. Ineda’s development kits are available for evaluation to select customers today.
About Ineda Systems
Ineda Systems, Inc. (pronounced “E-ne-da”) is a startup company founded by industry veterans from the United States and India with an ultimate goal of becoming a leader in developing low power SoCs for use in both consumer and enterprise applications. The advisory and management team has world-class experience working in both blue chip companies as well as fast-paced technology startups. Ineda’s expertise is in the area of SoC/IP development, architecture and software that is necessary to design silicon and systems for the next generation of low-power consumer and enterprise applications.
The company has offices in Santa Clara, California, USA and Hyderabad, India.
Ineda Systems, Inc. has applied for the trademark of WPU. Android is a trademark of Google Inc. All other trademarks used herein are the property of their respective owners.
Justin Rosenstein of Asana: Be happy in a project-oriented teamwork environment made free of e-mail based communication hassle
Get Organized: Using Asana in Business [PCMag YouTube channel, Feb 24, 2014]
Steven Sinofsky, former head of Microsoft Office and (later) Windows at Microsoft:
We’ve all seen examples of the collaborative process playing out poorly by using email. There’s too much email and no ability to track and manage the overall work using the tool. Despite calls to ban the process, what is really needed is a new tool. So Asana is one of many companies working to build tools that are better suited to the work than one we currently all collectively seem to complain about.
in Don’t ban email—change how you work! [Learning by Shipping, Jan 31, 2014]
Asana is a simple example of an easy-to-use and modern tool that decreases (to zero) email flow, allows for everyone to contribute and align on what needs to be done, and to have a global view of what is left to do.
in You’re doing it wrong [Learning by Shipping, April 10, 2014] and Shipping is a Feature: Some Guiding Principles for People That Build Things [Learning by Shipping, April 17, 2014]
Making e-mail communication easier [Fox Business Video]
May. 06, 2014 – 3:22 – Asana co-founder Justin Rosenstein weighs in on his new email business.
How To Collaborate Effectively With Asana [Forbes YouTube channel, Feb 26, 2013]
Dustin Moskovitz: How Asana Gets Work Done [Forbes YouTube channel, Feb 26, 2013]
Do Great Things: Keynote by Justin Rosenstein of Asana | Disrupt NY 2014 [TechCrunch YouTube channel, May 5, 2014]
Asana’s Justin Rosenstein: “I Flew Coach Here.” | Disrupt NY 2014 [TechCrunch YouTube channel, May 5, 2014]
How we use Asana [asana blog, Oct 9, 2013]
We love to push the boundaries of what Asana can do. From creating meeting agendas to tracking bugs to maintaining snacks in the refrigerator, the Asana product is (unsurprisingly) integral to everything we do at Asana. We find many customers are also pushing the boundaries of Asana to fit their teams’ needs and processes. Since Asana was created to be flexible and powerful enough for every team, nothing makes us more excited than hearing about these unique use cases.
Recently, we invited some of our Bay Area-based customers to our San Francisco HQ to share best practices with one another and hear from our cofounder Justin Rosenstein about the ways we use Asana at Asana. We’re excited to pass on this knowledge through some video highlights from the event. You can watch the entire video here: The Asana Way to Coordinate Ambitious Projects with Less Effort
Capture steps in a Project
“The first thing we always do is create a Project that names what we’re trying to accomplish. Then we’ll get together as a team and think of, ‘What is every single thing we need to accomplish between now and the completion of that Project?’ Over the course of the Project, all of the Tasks end up getting assigned.”
“Typically when I start my day, I’ll start by looking at all the things that are assigned to me. I’ll choose a few that I want to work on today. I try to be as realistic as possible, which means adding half as many things as I am tempted to add. After putting those into my ‘Today’ view, there are often a couple of other things I need to do. I just hit enter and add a few more tasks.”
Forward emails to Asana
“Because I want Asana to be the source of truth for everything I do, I want to put emails into my task list and prioritize them. I’ll just take the email and forward it to x@mail.asana.com. We chose ‘x’ so it wouldn’t conflict with anything else in your address book. Once I send that, it will show up in Asana with the attachments and everything right intact.”
Run great meetings
“We maintain one Project per meeting. If I’m looking at my Task list and see a Task I want to discuss at the meeting, I’ll just use Quick Add (tab + Q) to put the Task into the correct Project. Then when the meeting comes around, everything that everyone wants to talk about has already been constructed ahead of time.”
“Often a problem comes up and someone asks, ‘Who’s responsible for that?’ So instead, we’ve built out a list of areas of responsibility (AoRs), which is all the things that someone at the company has to be responsible for. By having AoRs, we distribute responsibility. We can allow managers to focus on things that are more specific to management and empower everyone at the company to be a leader in their own field.”
Background on https://asana.com/
How it all started and progressed?
asana demo & vision talk [Robert Marquardt YouTube channel, Feb 15, 2011]
The Asana Vision & Demo [asana blog, Feb 7, 2011]
We recently hosted an open house at our offices in San Francisco, where we showed the first public demo of Asana and deep-dived into the nuances of the product, the long-term mission that drives us, how the beta’s going, and more. We were really excited to be able to share what we’ve been working on and why we’re so passionate about it, and hope you enjoy
the above video of the talk:
Asana will be available more broadly later this year. In the meantime,
Introducing Asana: The Modern Way to Work Together [asana blog, Nov 2, 2011]
Asana is a modern web application that keeps teams in sync, a shared task list where everyone can capture, organize, track, and communicate what they are working on in service of their common goal. Rather than trying to stay organized through the tedious grind of emails and meetings, teams using Asana can move faster and do more — or even take on bigger and more interesting goals.
How Asana Works:
Asana re-imagines the way we work together by putting the fundamental unit of productivity – the task – at the center. Breaking down ambitious goals into small pieces, assigning ownership of those tasks, and tracking them to completion is how things get built, from software to skyscrapers. With Asana, you can:
- capture everything your team is planning and doing in one place. When tasks and the conversations about them are collected together, instead of spread around emails, documents, whiteboards, and notebooks, they become the shared, trusted, collective memory for your organization.
- keep your team in sync on the priorities, and what everyone is working on. When you have a single shared view of a project’s priorities, along with an accurate view into what each person is working on and when, everyone on the team knows exactly what matters, and what work remains between here and the goal.
- get the right information at the right time. Follow tasks, and you’ll receive emails as their status evolves. Search, and you’ll see the full activity feed of all the discussions and changes to a task over its history. Now, it’s easy to stay on top of the details — without asking people to forward you a bunch of email threads.
Building tools for teamwork [asana blog, Nov 22, 2013]
Our co-founder, Justin, recently wrote in Wired about why we need to rethink the tools we use to work together. The article generated a lot of interesting comments, from ideas on knowledge management to fatigue with the “meeting lifestyle,” to this protest on the typical office culture:
“Isn’t the root of this problem that, within our own organizations, we fiercely guard information and our decision-making processes? Email exchanges and invite-only meetings shut out others– forcing the need for follow-up conversations, summary reports, and a trail of other status/staff meetings to relay content already covered some place/some time before.”
To reach its goals, we think a team needs clarity of purpose, plan and responsibility. Technology and tools can help us reach that kind of clarity, but only if they target the right problem. From their roles at Facebook, Asana’s founders have extensive knowledge of social networks, and the social graph technology they rely on. But Asana isn’t a social network. Why? Because, as Justin outlines, the social graph doesn’t target the problem of work:
Our personal and professional lives, even if they overlap, have two distinct goals — and they require different “graphs.”
For our personal lives, the goal is love (authentic interpersonal connection), and that requires a social graph with people at the center. For our work lives, the goal is creation (working together to realize our collective potential), and that requires a work graph, with the work at the center.
Don’t get me wrong: Human connection is valuable within a business. But it should be in service to the organizational function of getting work done, and doesn’t need to be the center of the graph.
So, how does this change the experience for you and your teammates? A work graph means having all the information you need when you need it. Instead of blasting messages at the whole team, like “Hey, has anyone started working on this yet?”, you should be able to efficiently find out exactly who’s working on that task and how much progress they’ve made. That’s the target Asana is aiming for. Read Justin’s full Wired article.
Organizations in Asana [asana blog, May 1, 2013]
Today, we’re excited to be launching a collection of new features aimed at helping companies use and support Asana across their entire enterprise. We call it Organizations.
Since we began, Asana has been on a mission to help great teams achieve more ambitious goals. We started 18 months ago with our free service, targeted at smaller teams and even individuals – helping them get and stay organized.
When we launched our first premium tiers six months later, we enabled medium sized teams and companies – think 10s to 100s of people – to go further with Asana. In the year between then and now, we’ve been continuously amazed by all the places and ways Asana is being used to organize a team: in industries as diverse as education, healthcare, finance, technology, and manufacturing; in companies from two-person partnerships to Fortune 100 enterprises; and in dozens of countries representing every continent but the frozen one. There’s a lot of important work being organized in Asana.
But we’re still just getting started – there remain teams that we haven’t been ready to support: the largest teams, those that grow from 100s to 1,000s of people. While it would be remarkable if it only took a small number of coworkers to design and manufacture electric cars, synthesize DNA, or deliver healthcare to villages across the globe – these missions are complex, and require more people to be involved in them to succeed. Many of the teams using Asana today are inside these bigger organizations, and they’ve been asking for Asana to work at enterprise-scale. So for the past several months, we’ve been working on just that.
Stories from our first year [asana blog, Nov 12, 2012]
… When we launched a year ago, we had an ambitious mission: to create a shared task management platform that empowers teams of like-minded people to do great things. … In the course of our first year, tens of thousands of teams looking for a better way to work together have adopted Asana. …
… we collected three of these stories from three distinct kinds of teams:
– a tech startup [Foursquare],
– a fast-growing organic food company [Bare Fruit & Sundia] and
– a leading Pacific Coast aquarium [Aquarium of the Bay].
Foursquare Launches 5.0
Right around the time Foursquare passed 100 employees over the last year, we started building Foursquare 5.0. This update was a big deal: we were overhauling Foursquare’s core mechanics, evolving from check-ins towards the spontaneous discovery of local businesses. As we built the new app, we needed a way to gather feedback from the entire team.
We tried what felt like every collaboration tool around. Group emails were a mess. Google Docs was impossible to parse. We’d heard about Asana and decided to give it a shot.
Using Asana, we were easily able to collect product feedback and bugs from everyone in the company, then parse, discuss, distribute and prioritize the work. It became an indispensable group communication tool.
Foursquare 5.0 was a giant success, and we couldn’t have done it without Asana.
–Noah Weiss, Product Manager
Then, Of Course, There Is Us
It’s an understatement to say that we rely on Asana. We use our own product to manage every function of our business. Asana is where we plan, capture ideas, build meeting agendas, prioritize our product roadmap, document which bugs to fix and list the snacks to buy. It’s our CRM, our editorial calendar, our Applicant Tracking System, and our new-hire orientation system. Every team in the company – from product, design, and engineering to sales and marketing to recruiting and user operations – relies on the product we are building to stay in sync, connect our individual tasks to the bigger picture and accomplish our collective goals.
Q&A: Rising Realty Partners builds their business with Asana [asana blog, Feb 7, 2014]
As our business expanded, we found ourselves relying heavily on email, faxes, and even FedEx to communicate with each other and collaborate with outside parties. We needed a better way to organize, prioritize and communicate around our work, and we found the answer in Asana.
I can’t imagine how complex our communications would have been if we weren’t using Asana. We had dozens of people internally, and more than 50 people externally, all involved in making this deal happen. Having all of that communication in Asana significantly cut down on the craziness.
Because of Asana’s Dropbox integration, our workflow is now fast, intuitive and organized — something that was impossible to achieve over email. For the acquisition, we used Asana and Dropbox simultaneously to keep track of everything; from what each team member was doing, to the current status of each transaction, to keeping a history of all related documents. We had more than 18,000 items in Dropbox that we would link to in Asana instead of attaching them in email. We removed more than 30 gigabytes of information per recipient from our inboxes and everything was neatly organized around the work we were doing in Asana. This meant that the whole team always had the latest and most relevant information.
For this entire project, maybe one percent of our total internal communication was happening in email. With Asana, anyone in the company could look at any aspect of the project, see where it stood, and add their input. No one had to remember to ‘cc’ or ‘reply all’.
The success of this deal was largely due to Asana and we plan to use it in future acquisitions –Asana has become essential to our team’s success.
Our iPhone App Levels Up [asana blog, Sept 6, 2012]
Until recently, we’ve focused most of our energy on the browser-based version of Asana. But, in the last few months, even as we’ve launched major new features in our web application, we’ve been putting much more time into improving the mobile experience. In June, we made several meaningful architectural improvements to pave the way for bigger and better things and hinted that these changes were in the works.
Today, we’ve taken the next step in that direction: Version 2.0 of our iPhone app is in the App Store now. We are really proud of this effort – almost everyone at Asana played a part in this release. This new version is a top-to-bottom redesign that really puts the power of the desktop web version of Asana right in your pocket.
Asana comes to Android [asana blog, Feb 28, 2013]
Five months ago, we launched our first bonafide mobile app, for the iPhone, and we’ve been steadily improving it ever since. Focusing on a single platform at first allowed us to be meticulous about our mobile experience, adding new features and honing the design until we knew it was something people loved. After strong positive feedback from our customers and a solid rating in the iTunes App Store, we knew it was time.
Today, we are happy to announce that Asana for Android is here. You can get it right now in the Google Play store
As of today (May 8, 2014) there are 70 employees and 15 open positions. The company has 4 investors: Benchmark Capital, Andreessen Horowitz, Founders Fund and Peter Thiel. The first two put $9 million in November 2009. Then Founders Fund and Peter Thiel added to that $28 million in July 2012. Reuters reported that with Facebook alumni line up $28 million for workplace app Asana [July 23, 2012]:
Asana, a Silicon Valley start-up, has lined up $28 million in a financing round led by PayPal co-founder Peter Thiel and his Founders Fund, the company said.
The funding round values the workplace-collaboration company at $280 million, a person familiar with the matter said.
“This investment allows us to attract the best and brightest designers and engineers,” said Asana co-founder Justin Rosenstein, who said that in turn would help the company build on its goal of making interaction among its client-companies’ employees easier.
Asana launched the free version last year of its company management software that makes it easier to collaborate on projects. It introduced a paid, premium service earlier this year. It declined to give revenue figures, but said “hundreds” of customers had upgraded to the premium version.
Although Rosenstein and co-founder Dustin Moskovitz are alumni of the social network Facebook – Moskovitz co-founded the service with his Harvard roommate Mark Zuckerberg – they were quick to distance Asana from social networking.
Instead, they say, they view the company as an alternative to email, in-person meetings, physical whiteboards, and spreadsheets.
“That’s what we see as our competition,” said Rosenstein. “Replacing those technologies.”
With its latest funding round, Asana has now raised a total of $38 million from investors including Benchmark Capital and Andreessen Horowitz.
Thiel, who got to know Moskovitz and Rosenstein thanks to his early backing of Facebook, had already invested in Asana when it raised its “angel” round in early 2009. Now, his high-profile Founders Fund is investing and Thiel is joining Asana’s board.
Facebook has 901 million monthly users and revenue last year of $3.7 billion. But its May initial public offering disappointed many investors after it priced at $38 per share and then quickly fell. It closed on Friday at $28.76.
Many investors speculate that start-ups will have to accept lower valuations in the wake of the Facebook IPO. The Asana co-founders said the terms of their latest funding round were set before Facebook debuted on public markets.
A few of Facebook’s longtime employees have gone on to work on their own ventures.
Bret Taylor, formerly chief technology officer, said last month he was leaving to start his own company.
Dave Morin, who joined Facebook in 2008 from Apple, left in 2010 to found social network Path. Facebook alumni Adam D’Angelo and Charlie Cheever left in 2009 to start Quora, their question-and-answer company, which is also backed by Thiel.
Another former roommate of Zuckerberg’s, Chris Hughes, also left a few years ago and coordinated online organizing for Barack Obama’s 2008 presidential campaign. Now, he is publisher of the New Republic magazine.
Matt Cohler, who joined Facebook from LinkedIn early in 2005, joined venture capital firm Benchmark Capital in 2008. His investments there include Asana and Quora.
Core technology used
Luna, our in-house framework for writing great web apps really quickly [asana blog, Feb 2, 2010]
At Asana, we’re building a Collaborative Information Manager that we believe will make it radically easier for groups of people to get work done. Writing a complex web application, we experienced pain all too familiar to authors of “Web 2.0” software (and interactive software in general): there were all kinds of extremely difficult programming tasks that we were doing over and over again for every feature we wanted to write. So we’re developing Lunascript — an in-house programming language for writing rich web applications in about 10% of the time and code it takes today.
Check out the video we made » [rather an article about Luna as of Nov 2, 2011]
Release the Kraken! An open-source pub/sub server for the real-time web [asana blog, March 5, 2013]
Today, we are releasing Kraken, the distributed pub/sub server we wrote to handle the performance and scalability demands of real-time web apps like Asana.
Before building Kraken, we searched for an existing open-source pub/sub solution that would satisfy our needs. At the time, we discovered that most solutions in this space were designed to solve a much wider set of problems than we had, and yet none were particularly well-suited to solve the specific requirements of real-time apps like Asana. Our team had experience writing routing-based infrastructure and ultimately decided to build a custom service that did exactly what we needed – and nothing more.
The decision to build Kraken paid off. For the last three years, Kraken has been fearlessly routing messages between our servers to keep your team in sync. During this time, it has yet to crash even once. We’re excited to finally release Kraken to the community!
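Kraken itself is a distributed server, but the routing idea at its core is the classic publish/subscribe pattern. The sketch below is not Kraken's code or API, just a minimal in-process illustration of the pattern, with hypothetical channel names:

```python
# Minimal in-process publish/subscribe sketch. A real pub/sub server
# like Kraken does this routing across machines over the network; the
# data structure and delivery logic here only illustrate the idea.
from collections import defaultdict

class PubSub:
    def __init__(self):
        self._subscribers = defaultdict(list)  # channel -> list of callbacks

    def subscribe(self, channel, callback):
        self._subscribers[channel].append(callback)

    def publish(self, channel, message):
        # Route the message only to parties subscribed to this channel,
        # so uninterested clients never see the traffic.
        for callback in self._subscribers[channel]:
            callback(message)

bus = PubSub()
received = []
bus.subscribe("task:42", received.append)
bus.publish("task:42", {"event": "completed"})   # delivered
bus.publish("task:99", {"event": "renamed"})     # no subscriber, dropped
print(received)
```

The appeal for a real-time app is exactly what the post describes: servers only push changes to the clients that care about them, rather than broadcasting everything to everyone.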
Issues Moving to Amazon’s Elastic Load Balancer [asana blog, June 5, 2012]
Asana’s infrastructure runs almost entirely on top of Amazon Web Services (AWS). AWS provides us with the ability to launch managed production infrastructure in minutes with simple API calls. We use AWS for servers, databases, monitoring, and more. In general, we’ve been very happy with AWS. A month ago, we decided to use Amazon’s Elastic Load Balancer service to balance traffic between our own software load balancers.
Announcing the Asana API [asana blog, April 19, 2012]
Today we are excited to share that you can now add and access Asana data programmatically using our simple REST API.
The Asana API lets you build a variety of applications and scripts to integrate Asana with your business systems, show Asana data in other contexts, and create tasks from various locations.
Here are some examples of the things you can build:
- Source Control Integration to mark a Task as complete and add a link to the code submission as a comment when submitting code.
- A desktop app that shows the Tasks assigned to you
- A dashboard page that shows a visual representation of complete and incomplete Tasks in a project
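A sketch of what calling such a REST API looks like from Python, using only the standard library. The base URL (app.asana.com/api/1.0) and the tasks endpoint match Asana's documented API of the time; the API key and workspace ID are placeholders, and the basic-auth scheme (API key as username, empty password) reflects how the early API authenticated:

```python
# Sketch of listing the tasks assigned to you via the Asana REST API.
# Placeholder credentials; run against a real workspace at your own risk.
import base64
import json
import urllib.parse
import urllib.request

BASE_URL = "https://app.asana.com/api/1.0"

def build_task_query(workspace_id, assignee="me"):
    """Assemble the request URL for listing tasks assigned to you."""
    params = urllib.parse.urlencode({"workspace": workspace_id,
                                     "assignee": assignee})
    return f"{BASE_URL}/tasks?{params}"

def fetch_my_tasks(api_key, workspace_id):
    # Early API versions used HTTP basic auth with the API key as the
    # username and an empty password.
    token = base64.b64encode(f"{api_key}:".encode()).decode()
    req = urllib.request.Request(
        build_task_query(workspace_id),
        headers={"Authorization": f"Basic {token}"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["data"]  # list of task objects
```

A source-control integration like the first bullet above would use the same pattern with a PUT to mark the task complete and a POST to add the comment.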
Asana comes to Internet Explorer [asana blog, Oct 16, 2013]
Amazon Web Services not only achieved the clear and far dominant leader status in the Cloud Infrastructure as a Service (Cloud IaaS) market, but “the balance of new projects are going to AWS, not the other providers” – according to Gartner
According to the latest analysis by Gartner, Amazon Web Services (AWS) is:
- “overwhelmingly the dominant vendor” of the Cloud Infrastructure as a Service (Cloud IaaS) market
- a clear leader, with more than five times the compute capacity in use than the aggregate total of the other fourteen providers included in the so-called Magic Quadrant (MQ)
- appreciated for being “innovative, exceptionally agile and very responsive to the market,” with “the richest IaaS product portfolio,” which puts AWS far ahead even of CSC, the only other vendor currently in the Leaders quadrant
In addition, Amazon Web Services announced in July price cuts of up to 80% on its EC2 cloud computing platform.
Note that Gartner’s ranking is a complex evaluation, based on criteria deemed most important from a vendor-supplier point of view (see the third-party explanation of Gartner’s Magic Quadrant included in the Details part). It is not based on any kind of benchmarking, not even benchmarks run by customers against their specific application requirements. It is therefore a well-known fact that from a pure cloud engineering point of view, especially in focused benchmarks, Amazon EC2 is far from being a leader. The latest example of that:
About the Test
UnixBench runs a set of individual benchmark tests, aggregates the scores, and creates a final, indexed score to gauge the performance of UNIX-like systems, which include Linux and its distributions (Ubuntu, CentOS, and Red Hat). From the UnixBench homepage:
The UnixBench suite used for these tests ran tests that include: Dhrystone 2, Double-precision Whetstone, numerous File Copy tests, Pipe Throughput, Process Creation, Shell Scripts, System Call Overhead, and Pipe-based Context Switching.
Price-Performance Value: The CloudSpecs Score
The CloudSpecs score calculates the relationship between the cost of a virtual server per hour and the performance average seen from each provider. The scores are relational to each other; e.g., if Provider A scores 50 and Provider B scores 100, then Provider B delivers 2x the performance value in terms of cost. The highest value provider will always receive a score of 100, and every additional provider is pegged in relation to that score. The calculation is:
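The formula itself did not survive extraction here, but it can be reconstructed from the prose description: each provider's performance-per-dollar, scaled so the best-value provider scores 100. The provider names and numbers below are made up for illustration.

```python
# Reconstruction of the CloudSpecs score as described above: the score
# is each provider's performance per unit cost, normalized so that the
# highest-value provider always receives 100.

def cloudspecs_scores(providers):
    """providers: {name: (performance_index, price_per_hour)}."""
    value = {name: perf / price
             for name, (perf, price) in providers.items()}
    best = max(value.values())
    return {name: round(100 * v / best, 1) for name, v in value.items()}

scores = cloudspecs_scores({
    "Provider A": (1000, 0.10),   # 10000 performance per dollar
    "Provider B": (1500, 0.30),   #  5000 performance per dollar
})
print(scores)  # A scores 100; B delivers half the value per dollar, so 50
```

This matches the worked example in the text: if Provider B's value is half of Provider A's, it scores 50 against A's 100.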
Source: IaaS Price Performance Analysis: Top 14 Cloud Providers – A study of performance among the Top 14 public cloud infrastructure providers [Cloud Spectator and the Cloud Advisory Council, Oct 15, 2013], where, in addition to UnixBench, more focused benchmark results are also reported from the Phoronix Test Suite (i.e. one of the benchmark suites in PTS):
THE DETAILS BEHIND
The 2013 Cloud IaaS Magic Quadrant [by Lydia Leong on Gartner blog, Aug 21, 2013]
Gartner’s Magic Quadrant for Cloud Infrastructure as a Service, 2013, has just been released (see the client-only interactive version, or the free reprint). Gartner clients can also consult the related charts, which summarize the offerings, features, and data center locations.
The best image obtained from the web:
In particular, market momentum has strongly favored Amazon Web Services. Many organizations have now had projects on AWS for several years, even if they hadn’t considered themselves to have “done anything serious” on AWS. Thus, as those organizations get serious about cloud computing, AWS is their incumbent provider — there are relatively few truly greenfield opportunities in cloud IaaS now. Many Gartner clients now actually have multiple incumbent providers (the most common combination is AWS and Terremark), but nearly all such customers tell us that the balance of new projects are going to AWS, not the other providers.
Little by little, AWS has systematically addressed the barriers to “mainstream”, enterprise adoption. While it’s still far from everything that it could be, and it has some specific and significant weaknesses, that steady improvement over the last couple of years has brought it to the “good enough” point. While we saw much stronger momentum for AWS than other providers in 2012, 2013 has really been a tipping point. We still hear plenty of interest in competitors, but AWS is overwhelmingly the dominant vendor.
At the same time, many vendors have developed relatively solid core offerings. That means that the number of differentiators in the market has decreased, as many features become common “table stakes” features that everyone has. It means that most offerings from major vendors are now fairly decent, but only a few really stand out for their capabilities.
That leads to an unusual Magic Quadrant, in which the relative strength of AWS in both Vision and Execution essentially forces the whole quadrant graphic to rescale. (To build an MQ, analysts score providers relative to each other, on all of the formal evaluation criteria, and the MQ tool automatically plots the graphic; there is no manual adjustment of placements.) That leaves you with centralized compression of all of the other vendors, with AWS hanging out in the upper right-hand corner.
Note that a Magic Quadrant is an evaluation of a vendor in the market; the actual offering itself is only a portion of the overall score. I’ll be publishing a Critical Capabilities research note in the near future that evaluates one specific public cloud IaaS offering from each of these vendors, against its suitability for a set of specific use cases. My colleagues Kyle Hilgendorf and Chris Gaun have also been publishing extremely detailed technical evaluations of individual offerings — AWS, Rackspace, and Azure, so far.
A Magic Quadrant is a tremendous amount of work — for the vendors as well as for the analyst team (and our extended community of peers within Gartner, who review and comment on our findings). Thanks to everyone involved. I know this year’s placements came as disappointments to many vendors, despite the tremendous hard work that they put into their offerings and business in this past year, but I think the new MQ iteration reflects the cold reality of a market that is highly competitive and is becoming even more so.
A third-party explanation of the GARTNER IaaS MAGIC QUADRANT 2013 [cloud☁mania, Aug 29, 2013]
Gartner has just released the 2013 update of its traditional Magic Quadrant for Cloud Infrastructure-as-a-Service. Here are some considerations about the evaluation methodology and the MQ players.
In the context of this Magic Quadrant, IaaS is defined by Gartner as “a standardized, highly automated offering, where compute resources, complemented by storage and networking capabilities, are owned by a service provider and offered to the customer on demand. The resources are scalable and elastic in near-real-time, and metered by use. Self-service interfaces are exposed directly to the customer, including a Web-based UI and API optionally. The resources may be single-tenant or multitenant, and hosted by the service provider or on-premises in the customer’s datacentre.”
To be included in the Magic Quadrant, IaaS providers should target enterprise and midmarket customers, offering high-quality services with excellent availability, good performance, high security and good customer support. For each IaaS provider included in the MQ, Gartner offers a detailed description of the service offering: datacentre locations, computing issues, storage & network features, special notes, and recommended users. Detailed comments about Strengths & Cautions in cloud adoption are also offered for each IaaS provider, regardless of the MQ positioning.
The Gartner Magic Quadrant for IaaS is a more than eloquent picture of the current status of the major IaaS players. IaaS market momentum is strongly dominated by Amazon Web Services in both the Vision and Execution dimensions. According to Gartner analysts, AWS is a clear leader, with more than five times the compute capacity in use than the aggregate total of the other fourteen providers included in the MQ. AWS is appreciated for being “innovative, exceptionally agile and very responsive to the market and the richest IaaS product portfolio”.
The Leaders Quadrant positions CSC as the second player: a traditional IT outsourcer with a broad range of datacentre outsourcing capabilities. CSC is appreciated for its commitment to embrace the highly standardized cloud model, and for a solid platform attractive to traditional IT operations organizations that still want to retain control but need to offer greater agility to the business.
The Challengers Quadrant includes Verizon Terremark, the market-share leader in VMware-virtualized public cloud IaaS; Dimension Data, a large SI and VAR that entered the cloud IaaS market through the 2011 acquisition of OpSource; and Savvis, a CenturyLink company with a long track record of leadership in the hosting market.
The big surprise of the Visionaries Quadrant is the comfortable positioning of Microsoft with its Windows Azure platform. Previously strictly PaaS, Azure became IaaS as well in April 2013, when Microsoft launched Windows Azure Infrastructure Services, which includes Virtual Machines and Virtual Networks. Gartner motivates Microsoft’s place in the Visionaries Quadrant by its global vision of infrastructure and platform services “that are not only leading stand-alone offerings, but also seamlessly extend and interoperate with on-premises Microsoft infrastructure (rooted in Hyper-V, Windows Server, Active Directory and System Center) and applications, as well as Microsoft’s SaaS offerings.”
Among the IaaS providers in the Niche Players Quadrant, we should note the presence of a heavyweight triad: IBM, HP, and Fujitsu. Gartner appreciates IBM for its wide range of cloud-related products and services, although the IaaS MQ analysis includes only the SmartCloud Enterprise (SCE) cloud offering and the cloud-enabled infrastructure service IBM SmartCloud Enterprise+. In the same way, from HP’s range of cloud-related products and services, Gartner considered only HP Public Cloud and some cloud-enabled infrastructure services, such as HP Enterprise Services Virtual Private Cloud. Fujitsu is one of the few non-American cloud providers, appreciated by Gartner for its large cloud IaaS offerings, including the Fujitsu Cloud IaaS Trusted Public S5 (formerly the Fujitsu Global Cloud Platform), multiple regional offerings based on a global reference architecture (Fujitsu Cloud IaaS Private Hosted, formerly known as Fujitsu Local Cloud Platform), and multiple private cloud offerings, especially in the Asia-Pacific area and Europe.
Speaking of non-American regions, we should observe that significant European-based providers like CloudSigma, Colt, Gigas, Orange Business Services, OVH and Skyscape Cloud Services were not included in this Magic Quadrant. The same goes for the Asia/Pacific region, with major players like Datapipe, NTT and Tata Communications.
Gartner also considered two offerings that are currently in beta stage, and therefore could not be included in this evaluation, but could be considered prospective players in the next MQ edition: Google Compute Engine (GCE), a model similar to Amazon EC2’s, and VMware vCloud Hybrid Service (vCHS), a full-featured offering with more functionality than vCloud Datacenter Service.
Additional Gartner blog posts related to that:
Cloud IaaS market share and the developer-centric world [by Lydia Leong on Gartner blog, Sept 4, 2013]
Bernard Golden recently wrote a CIO.com blog post in response to my announcement of Gartner’s 2013 Magic Quadrant for Cloud IaaS. He raised a number of good questions that I thought it would be useful to address. This is part 1 of my response. (See part 2 for more.)
(Broadly, as a matter of Gartner policy, analysts do not debate Magic Quadrant results in public, and so I will note here that I’m talking about the market, and not the MQ itself.)
Bernard: “Why is there such a distance between AWS’s offering and everyone else’s?”
In the Magic Quadrant, we rate not only the offering itself in its current state, but also a whole host of other criteria — the roadmap, the vendor’s track record, marketing, sales, etc. (You can go check out the MQ document itself for those details.) You should read the AWS dot positioning as not just indicating a good offering, but also that AWS has generally built itself into a market juggernaut. (Of course, AWS is still far from perfect, and depending on your needs, other providers might be a better fit.)
But Bernard’s question can be rephrased as, “Why does AWS have so much greater market share than everyone else?”
Two years ago, I wrote two blog posts that are particularly relevant here:
- Common Service Provider Myths About Cloud Infrastructure
- In Cloud IaaS, Developers are the Face of Business Buyers
These posts were followed up with two research notes (links are Gartner clients only):
- New Entrants to the Cloud IaaS Market Face Tough Competitive Challenges
- How Buyers Purchase Cloud IaaS
I have been beating the “please don’t have contempt for developers” drum for a while now. (I phrase it as “contempt” because it was often very clear that developers were seen as lesser, not real buyers doing real things — merely ignoring developers would have been one thing, but contempt is another.) But it’s taken until this past year before most of the “enterprise class” vendors acknowledged the legitimacy of the power that developers now hold.
Many service providers held tight to the view espoused by their traditional IT operations clientele: AWS was too dangerous, it didn’t have sufficient infrastructure availability, it didn’t perform sufficiently well or with sufficient consistency, it didn’t have enough security, it didn’t have enough manageability, it didn’t have enough governance, it wasn’t based on VMware — and it didn’t look very much like an enterprise’s data center architecture. The viewpoint was that IT operations would continue to control purchases, implementations would be relatively small-scale and would be built on traditional enterprise technologies, and that AWS would never get to the point that they’d satisfy traditional IT operations folks.
What they didn’t count on was the fact that developers, and the business management that they ultimately serve, were going to forge on ahead without them. Or that AWS would steadily improve its service and the way it did business, in order to meet the needs of the traditional enterprise. (My colleagues in GTP — the Gartner division that was Burton Group — do a yearly evaluation of AWS’s suitability for the enterprise, and each year, AWS gets steadily, materially better. Clients: see the latest.)
Today, AWS’s sheer market share speaks for itself. And it is definitely not just single developers with a VM or two, start-ups, or non-mission-critical stuff. Through the incredible amount of inquiry we take at Gartner, we know how cloud IaaS buyers think, source, succeed, and sometimes suffer. And every day at Gartner, we talk to multiple AWS customers (or prospects considering their options, though many have already bought something on the click-through agreement). Most are traditional enterprises of the G2000 variety (including some of the largest companies in the world), but over the last year, AWS has finally cracked the mid-market by working with systems integrator partners. The projected spend levels are clearly increasing dramatically, the use cases are extremely broad, the workloads increasingly have sensitive data and regulatory compliance concerns, and customers are increasingly thinking of AWS as a strategic vendor.
(Now, as my colleagues who cover the traditional data center like to point out, the spend levels are still trivial compared to what these customers are spending on the rest of their data center IT, but I think what’s critical here is the shift in thinking about where they’ll put their money in the future, and their desire to pick a strategic vendor despite how relatively early-stage the market is.)
But put another way — it is not just that AWS advanced its offering, but it convinced the market that this is what they wanted to buy (or at least that it was a better option than the other offerings), despite the sometimes strange offering constructs. They essentially created demand in a new type of buyer — and they effectively defined the category. And because they’re almost always first to market with a feature — or the first to make the market broadly aware of that capability — they force nearly all of their competitors into playing catch-up and me-too.
That doesn’t mean that the IT operations buyer isn’t important, or that there aren’t an array of needs that AWS does not address well. But the vast majority of the dollars spent on cloud IaaS are much more heavily influenced by developer desires than by IT operations concerns — and that means that market share currently favors the providers who appeal to development organizations. That’s an ongoing secular trend — business leaders are currently heavily growth-focused, and therefore demanding lots of applications delivered as quickly as possible, and are willing to spend money and take greater risks in order to obtain greater agility.
This also doesn’t mean that the non-developer-centric service providers aren’t important. Most of them have woken up to the new sourcing pattern, and are trying to respond. But many of them are also older, established organizations, and they can only move so quickly. They also have the comfort of their existing revenue streams, which allow them the luxury of not needing to move so quickly. Many have been able to treat cloud IaaS as an extension of their managed services business. But they’re now facing the threat of systems integrators like Cognizant and Capgemini entering this space, combining application development and application management with managed services on a strategic cloud IaaS provider’s platform — at the moment, normally AWS. Nothing is safe from the broader market shift towards cloud computing.
As always, every individual customer’s situation is different from another’s, and the right thing to do (or the safe, mainstream thing to do) evolves through the years. Gartner is appropriately cautionary when it discusses such things with clients. This is a good time to mention that Magic Quadrant placement is NEVER a good reason to include or exclude a vendor from a short list. You need to choose the vendor that’s right for your use case, and that might be a Niche Player, or even a vendor that’s not on the MQ at all — and even though AWS has the highest overall placement, they might be completely unsuited to your use case.
Where are the challengers to AWS? [by Lydia Leong on Gartner blog, Sept 4, 2013]
Bernard: “What skill or insight has allowed AWS to create an offering so superior to others in the market?”
AWS takes a comprehensive view of “what does the customer need”, looks at what customers (whether current customers or future target customers) are struggling with, and tries to address those things. AWS not only takes customer feedback seriously, but it also iterates at shocking speed. And it has been willing to invest massively in engineering. AWS’s engineering organization and the structure of the services themselves allows multiple, parallel teams to work on different aspects of AWS with minimal dependencies on the other teams. AWS had a head start, and with every passing year their engineering lead has grown larger. (Even though they have a significant burden of technical debt from having been first, they’ve also solved problems that competitors haven’t had to yet, due to their sheer scale.)
Many competitors haven’t had the willingness to invest the resources to compete, especially if they think of this business as one that’s primarily about getting a VM fast and that’s all. They’ve failed to understand that this is a software business, where feature velocity matters. You can sometimes manage to put together brilliant, hyper-productive small teams, but this is usually going to get you something that’s wonderful in the scope of what they’ve been able to build, but simply missing the additional capabilities that better-resourced competitors can manage (especially if a competitor can muster both resources and hyper-productivity). There are some awesome smaller companies in this space, though.
Bernard: “Plainly stated, why hasn’t a credible competitor emerged to challenge AWS?”
I think there’s a critical shift happening in the market right now. Three very dangerous competitors are just now entering the market — Microsoft, Google, and VMware. I think the real war for market share is just beginning.
For instance, consider the following, off the cuff, thoughts on those vendors. These are by no means anything more than quick thoughts and not a complete or balanced analysis. I have a forthcoming research note called “Rise of the Cloud IaaS Mega-Vendors” that focuses on this shift in the competitive landscape, and which will profile these four vendors in particular, so stay tuned for more. So, that said:
Microsoft has brand, deep customer relationships, deep technology entrenchment, and a useful story about how all of those pieces are going to fit together, along with a huge army of engineers, and a ton of money and the willingness to spend wherever it gains them a competitive advantage; its weakness is Microsoft’s broader issues as well as the Microsoft-centricity of its story (which is also its strength, of course). Microsoft is likely to expand the market, attracting new customers and use cases to IaaS — including blended PaaS models.
Google has brand, an outstanding engineering team, and unrivaled expertise at operating at scale; its weakness is Google’s usual challenges with traditional businesses (whatever you can say about AWS’s historical struggle with the enterprise, you can say about Google many times over, and it will probably take them at least as long as AWS did to work through that). Google’s share gain will mostly come at the expense of AWS’s base of HPC customers and young start-ups, but it will worm its way into the enterprise via interactive agencies that use its cloud platform; it should have a strong blended PaaS model.
VMware has brand, a strong relationship with IT operations folks, technology it can build on, and a hybrid cloud story to tell; whether or not its enterprise-class technology can scale to global-class clouds remains to be seen, though, along with whether or not it can get its traditional customer base to drive sufficient volume of cloud IaaS. It might expand the market, but it’s likely that much of its share gain will come at the expense of VMware-based “enterprise-class” service providers.
Obviously, it will take these providers some time to build share, and there are other market players who will be involved, including the other providers that are in the market today (and for all of you wondering “what about OpenStack”, I would classify that under the fates of the individual providers who use it). However, if I were to place my bets, it would be on those four at the top of market share, five years from now. They know that this is a software business. They know that innovative capabilities are vitally necessary. And they know that this has turned into a market fixated on developer productivity and business benefits. At least for now, that view is dominating the actual spending in this market.
You can certainly argue that another market outcome should have happened, that users should have chosen differently, or even that users are making poor decisions now that they’ll regret later. That’s an interesting intellectual debate, but at this point, Sisyphus’s rock is rolling rapidly downhill, so anyone who wants to push it back up is going to have an awfully difficult time not getting crushed.
Verizon Cloud is technically innovative, but is it enough? [by Lydia Leong on Gartner blog, Oct 4, 2013]
Verizon already owns a cloud IaaS offering — in fact, it owns several. Terremark was an early AWS competitor with the Terremark Enterprise Cloud, a VMware-based offering that got strong enterprise traction during the early years of this market (and remains the second-most-common cloud provider amongst Gartner’s clients, with many companies using both AWS and Terremark), as well as a vCloud Express offering. Verizon entered the game later with Verizon Compute as a Service (now called Enterprise Cloud Managed Edition), also VMware-based. Since Verizon’s acquisition of Terremark, the company has continued to operate all the existing platforms, and intends to continue to do so for some time to come.
However, Verizon has had the ambition to be a bigger player in cloud; like many other carriers, it believes that network services are a commodity and a carrier needs to have stickier, value-added, higher-up-the-stack services in order to succeed in the future. However, Verizon also understood that it would have to build technology, not depend on other people’s technology, if it wanted to be a truly competitive global-class cloud player versus Amazon (and Microsoft, Google, etc.).
With that in mind, in 2011, Verizon went and made a manquisition — acquiring CloudSwitch not so much for its product (essentially hypervisor-within-a-hypervisor that allows workloads to be ported across cloud infrastructures using different technologies), as for its team. It gave them a directive to go build a cloud infrastructure platform with a global-class architecture that could run enterprise-class workloads, at global-class scale and at fully competitive price points.
Back in 2011, I conceived what I called the on-demand infrastructure fabric (see my blog post No World of Two Clouds, or, for Gartner clients, the research note, Market Trends: Public and Private Cloud Infrastructure Converge into On-Demand Infrastructure Fabrics) — essentially, a global-class infrastructure fabric with self-service selectable levels of availability, performance, and isolation. Verizon is the first company to have really built what I envisioned (though their project predates my note, and my vision was developed independently of any knowledge of what they were doing).
The Verizon Cloud architecture is actually very interesting, and, as far as I know, unique amongst cloud IaaS providers. It is almost purely a software-defined data center. Components are designed at a very low level — a custom hypervisor, SDN augmented with the use of NPUs, virtualized distributed storage. Verizon has generally tried to avoid using components for which they do not have source code. There are very few hardware components — there’s x86 servers, Arista switches, and commodity Flash storage (the platform is all-SSD). The network is flat, and high bandwidth is an expectation (Verizon is a carrier, after all). Oh, and there’s object-based storage, too (which I won’t discuss here).
The Verizon Cloud has a geographically distributed control plane designed for continuous availability, and it, along with the components, are supposed to be updatable without downtime (i.e., maintenance should not impact anything). It’s intended to provide fine-grained performance controls for the compute, network, and storage resource elements. It is also built to allow the user to select fault domains, allowing strong control of resource placement (such as “these two VMs cannot sit on the same compute hardware”); within a fault domain, workloads can be rebalanced in case of hardware failure, thus offering the kind of high availability that’s often touted in VMware-based clouds (including Terremark’s previous offerings). It is also intended to allow dynamic isolation of compute, storage, and networking components, allowing the creation of private clouds within a shared pool of hardware capacity.
The Verizon Cloud is intended to be as neutral as possible — the theory is that all VM hypervisors can run natively on Verizon’s hypervisor, many APIs can be supported (including its own API, the existing Terremark API, and the AWS, CloudStack, and OpenStack APIs), and there’ll be support for the various VM image formats. Initially, the supported hypervisor is a modified Xen. In other words, Verizon wants to take your workloads, wherever you’re running them now, and in whatever form you can export them.
It’s an enormously ambitious undertaking. It is, assuming it all works as promised, a technical triumph — it’s the kind of engineering you expect out of an organization like AWS or Google, or a software company like Microsoft or VMware, not a staid, slow-moving carrier (the mere fact that Verizon managed to launch this is a minor miracle unto itself). It is actually, in a way, what OpenStack might have aspired to be; the delta between this and the OpenStack architecture is, to me, full of sad might-have-beens of what OpenStack had the potential to be, but is not and is unlikely to become. (Then again, service providers have the advantage of engineering to a precisely-controlled environment. OpenStack, and for that matter, VMware, need to run on whatever junk the customer decides to use, instantly making the problem more complex.)
Unfortunately, the question at this stage is: Will anybody care?
Yes, I think this is an important development in the market, and the fact that Verizon is already a credible cloud player in the enterprise, with an entrenched base in the Terremark Enterprise Cloud, will help it. But in a world where developers control most IaaS purchasing, the bare-bones nature of the new Verizon offering means that it falls short of fulfilling the developer desire for greater productivity. In order to find a broader audience, Verizon will need to commit to developing all the richness of value-added capabilities that the market leaders will need — which likely means going after the PaaS market with the same degree of ambition, innovation, and investment, but certainly means committing to rapidly introducing complementing capabilities and bringing a rich ecosystem in the form of a software marketplace and other partnerships. Verizon needs to take advantage of its shiny new IaaS building blocks to rapidly introduce additional capabilities — much like Microsoft is now rapidly introducing new capabilities into Azure.
With that, assuming that this platform performs as designed, and Verizon can continue to treat Terremark’s cloud folks like they belong to a fast-moving start-up and not an ossified pipe provider, Verizon may have a shot at being one of the leaders in this market. Without that, the Verizon Cloud is likely to be relegated to a niche, just like every other provider whose capabilities stop at the level of offering infrastructure resources.
From: Amazon.com Announces Third Quarter Sales up 24% to $17.09 Billion [press release, Oct 24, 2013]
- Amazon Web Services (AWS) introduced more than 15 new features and enhancements to its fully managed relational and NoSQL database services. Amazon Relational Database Service (RDS) now supports Oracle Statspack performance diagnostics and has expanded MySQL support, including capabilities for zero downtime data migration. Enhancements to Amazon DynamoDB include new cross-region support, a local test tool, and location-based query capabilities.
- AWS continued to bolster its management services, making it easier to provision and manage more AWS resources with AWS CloudFormation and AWS OpsWorks, which both added support for Amazon Virtual Private Cloud (VPC). AWS also enhanced the AWS Console mobile app and introduced a new Command Line Interface.
- AWS continued to gain momentum in the public sector and now has more than 2,400 education institutions and 600 government agencies as customers, including recent new projects with customers such as the U.S. Food and Drug Administration.
THE JULY PRICE CUT
From Amazon.com Announces Second Quarter Sales up 22% to $15.70 Billion [press release, July 25, 2013]
- AWS announced it had lowered prices by up to 80% on Amazon EC2 Dedicated Instances, instances that run on single-tenant hardware dedicated to a single customer account. In addition, AWS lowered prices on Amazon RDS instances with On-Demand price reductions of up to 28% and Reserved Instance (RI) price reductions of up to 27%.
- Amazon Web Services (AWS) became the first major cloud provider to achieve FedRAMP Compliance which recognizes the ability of AWS to meet extensive security requirements and compliance mandates for running sensitive US government applications and protecting data. FedRAMP certification simplifies and speeds the ability for government agencies to evaluate and adopt AWS for a wide range of applications and workloads.
- AWS announced the launch of the AWS Certification Program, which recognizes IT professionals that possess the skills and technical knowledge necessary for building and maintaining applications and services on the AWS Cloud. AWS Certifications help organizations identify candidates and consultants who are proficient at architecting and developing for the cloud.
- AWS further enhanced its security and identity management capabilities across several services – introducing resource-level permissions for Amazon Elastic Compute Cloud (EC2) and Amazon Relational Database Service (RDS), adding identity federation to AWS Identity and Access Management (IAM), extending Amazon Simple Storage Service (S3) Server Side Encryption support to Amazon Elastic Map Reduce (EMR), and adding custom SSL certificate support for CloudFront. These enhancements give customers more granular security controls over their AWS deployments, applications and sensitive data.
- Et cetera (you can find the AWS highlights in every quarterly release about financials)
- All AWS related press releases
Some directly related and general/major previous press releases from that overall list:
- December, 2012: Amazon Web Services Introduces New Amazon EC2 High Storage Instance Family
- July, 2012: Amazon Web Services Introduces New Amazon EC2 High I/O Instance Type
- October, 2008: Amazon Web Services Launches Amazon EC2 for Windows
- August, 2008: Amazon Web Services Launches Amazon Elastic Block Store for Amazon EC2