
Tag Archives: AWS

OpenStack adoption (by Q1 2016)

OpenStack Promise as per Moogsoft – June 3, 2015

For information on OpenStack provided earlier on this blog see:
– Disaggregation in the next-generation datacenter and HP’s Moonshot approach for the upcoming HP CloudSystem “private cloud in-a-box” with the promised HP Cloud OS based on the 4 years old OpenStack effort with others, ‘Experiencing the Cloud’, Dec 10, 2013
– Red Hat Enterprise Linux OpenStack Platform 4 delivery and Dell as the first company to OEM it co-engineered on Dell infrastructure with Red Hat, ‘Experiencing the Cloud’, Feb 19, 2014
To understand the state of OpenStack technology development (the V4 level) as of June 25, 2015:
– go to my homepage: https://lazure2.wordpress.com/
– or to the OpenStack related part of Microsoft Cloud state-of-the-art: Hyper-scale Azure with host SDN — IaaS 2.0 — Hybrid flexibility and freedom, ‘Experiencing the Cloud’, July 11, 2015

May 19, 2016:

Oh, the places you’ll go with OpenStack! by Mark Collier, OpenStack Foundation COO on ‘OpenStack Superuser’:

With OpenStack in tow you’ll go far — be it your house, your bank, your city or your car.

Just look at all of the exciting places we’re going:

From the phone in your pocket

The telecom industry is undergoing a massive shift, away from hundreds of proprietary devices in thousands of central offices accumulated over decades, to a much more efficient and flexible software plus commodity hardware approach. While some carriers like AT&T have already begun routing traffic from the 4G networks over OpenStack powered clouds to millions of cellphone users, the major wave of adoption is coming with the move to 5G, including plans from AT&T, Telefonica, SK Telekom, and Verizon.

We are on the cusp of a revolution that will completely re-imagine what it means to provide services in the trillion dollar telecom industry, with billions of connected devices riding on OpenStack-powered infrastructure in just a few years.

To the living room socket

The titans of TV like Comcast, DirecTV, and Time Warner Cable all rely on OpenStack to bring the latest entertainment to our homes efficiently, and innovators like DigitalFilm Tree are producing that content faster than ever thanks to cloud-based production workflows.

Your car, too, will get smart

Speaking of going places, back here on earth many of the world’s top automakers, such as BMW and the Volkswagen group, which includes Audi, Lamborghini, and even Bentley, are designing the future of transportation using OpenStack and big data. The hottest trends to watch in the auto world are electric zero emissions cars and self-driving cars. Like the “smart city” mentioned above, a proliferation of sensors plus connectivity call for distributed systems to bring it all together, creating a huge opportunity for OpenStack.

And your bank will take part

Money moves faster than ever, with digital payments from startups and established players alike competing for consumer attention. Against this backdrop of enormous market change, banks must meet an increasingly rigid set of regulatory rules, not to mention growing security threats. To empower their developers to innovate while staying diligent on regs and security, financial leaders like PayPal, FICO, TD Bank, American Express, and Visa are adopting OpenStack.

Your city must keep the pace

Powering the world’s cities is a complex task and here OpenStack is again driving automation, this time in the energy sector. State Grid Corporation, the world’s largest electric utility, serves over 120 million customers in China while relying on OpenStack in production.

Looking to the future, cities will be transformed by the proliferation of fast networks combined with cheap sensors. Unlocking the power of this mix are distributed systems, including OpenStack, to process, store, and move data. Case in point: tcpcloud in Prague is helping introduce “smart city” technology by utilizing inexpensive Raspberry Pis embedded in street poles, backed by a distributed system based on Kubernetes and OpenStack. These systems give city planners insight into traffic flows of both pedestrians and cars, and even measure weather quality. By routing not just packets but people, cities are literally load balancing their way to lower congestion and pollution.

From inner to outer space

The greatest medical breakthroughs of the next decade will come from analyzing massive data sets, thanks to the proliferation of distributed systems that put supercomputer power into the hands of every scientist. And OpenStack has a huge role to play empowering researchers all over the globe: from Melbourne to Madrid, Chicago to Chennai, or Berkeley to Beijing, everywhere you look you’ll find OpenStack.

To explore this world, I recently visited the Texas Advanced Computing Center (TACC) at the University of Texas at Austin, where I toured a facility that houses one of the top 10 supercomputers in the world, code-named "Stampede."

But what really got me excited about the future was the sight of two large OpenStack clusters: one called Chameleon, and the newest addition, Jetstream, which put the power of more than 1,000 nodes and more than 15,000 cores into the hands of scientists at 350 universities. In fact, the Chameleon cloud was recently used in a class at the University of Arizona by students looking to discover exoplanets. Perhaps the next Neil deGrasse Tyson is out there using OpenStack to find a planet to explore for NASA’s Jet Propulsion Laboratories.

Where should we go next?

Mark Collier is OpenStack co-founder, and currently the OpenStack Foundation COO. This article was first published in Superuser Magazine, distributed at the Austin Summit.

May 9, 2016:

From OpenStack Summit Austin, Part 1: Vendors digging in for long haul by Al Sadowski, 451 Research, LLC: This report provides highlights from the most recent OpenStack Summit.

THE 451 TAKE

OpenStack mindshare continues to grow for enterprises interested in deploying cloud-native applications in greenfield private cloud environments. However, its appeal is limited for legacy applications and enterprises sold on hyperscale multi-tenant cloud providers like AWS and Azure. There are several marquee enterprises with OpenStack as the central component of cloud transformations, but many are still leery of the perceived complexity of configuring, deploying and maintaining OpenStack-based architectures. Over the last few releases, processes for installation and upgrades, tooling, and API standardization across projects have improved as operators have become more vocal during the requirements phase. Community membership continues to grow on a global basis, and the supporting organization also depicts a similar geographic trend.

…  Horizontal scaling of Nova is much improved, based on input from CERN and Rackspace. CERN, an early OpenStack adopter, demonstrated the ability for the open source platform to scale – it now has 165,000 cores running OpenStack. However, Walmart, PayPal and eBay are operating larger OpenStack environments.

May 18, 2015:

Walmart‘s Cloud Journey by Amandeep Singh Juneja
Sr. Director, Cloud Engineering and Operations, WalmartLabs: An introduction to the world's largest retailer and its journey to build a large private cloud.

Amandeep Singh Juneja is Senior Director for Cloud Operations and Engineering at WalmartLabs. In his current role, Amandeep is responsible for the build-out of the elastic cloud used by various Walmart eCommerce properties. Prior to his current role at Walmart Labs, Amandeep held various leadership roles at HP, WebOS (Palm) and eBay.

May 19, 2015:

OpenStack Update from eBay and PayPal by Subbu Allamaraju
Chief Engineer, Cloud, eBay Inc: The journey and future of OpenStack at eBay and PayPal

Subbu is the Chief Engineer of cloud at eBay Inc. His team builds and operates a multi-tenant geographically distributed OpenStack based private cloud. This cloud now serves 100% of PayPal web and mid tier workloads, significant parts of eBay front end and services, and thousands of users for their dev/test activities.

May 18, 2015:

Architecting Organizational Change at TD Bank by Graeme Peacock, VP Engineering, TD Bank Group

Graeme cut his teeth in the financial services consulting industry by designing and developing real-time Trading, Risk and Clearing applications. He then joined NatWest Markets and J.P. Morgan in executive level roles within the Equity Derivatives business lines.
Graeme then moved to a Silicon Valley Startup to expand his skillset as V.P. of Engineering at Application Networks. His responsibility extended to Strategy, Innovation, Product Development, Release Management and Support to some of the biggest names in the Financial Services Sector.
For the last 10 years, he has held Divisional CIO roles at Citigroup and Deutsche Bank, both of which saw him responsible for Credit, Securitized and Emerging Market businesses.
Graeme moved back to a V.P. of Engineering role at TD Bank Group several years ago. He currently oversees all Infrastructure Innovation — everything from Mobile and Desktop to Database, Middleware and Cloud. His focus is on the transformational: software development techniques, infrastructure design patterns, and DevOps processes.

TD Bank uses cloud as catalyst for cultural change in IT
May 18, 2015 Written by Jonathan Brandon for Business Cloud News

North American retail banking outfit TD Bank is using OpenStack among a range of other open source cloud technologies to help catalyse cultural change as it looks to reduce costs and technology redundancy, explained TD Bank group vice president of engineering Graeme Peacock.

TD Bank is one of Canada’s largest retail banks, having divested many of its investment banking divisions over the past ten years while buying up smaller American retail banks in a bid to offer cross-border banking services.
Peacock, who was speaking at the OpenStack Summit in Vancouver this week, said TD Bank is in the midst of a massive transition in how it procures, deploys and consumes technology. The bank aims to have about 80 per cent of its 4,000 application estate moved over to the cloud over the next five years.
“If they can’t build it on cloud they need to get my permission to obtain a physical server. Which is pretty hard to get,” he said.
But the company’s legacy of acquisition over the past decade has shaped the evolution of both the technology and systems in place at the bank as well as the IT culture and the way those systems and technologies are managed.
“Growing from acquisition means we’ve developed a very project-based culture, and you’re making a lot of transactional decisions within those projects. There are consequences to growing through acquisition – TD is very vendor-centric,” he explained.
"There are a lot of vendors here and I'm fairly certain we've bought at least one of everything you've ever made. That's led to the landscape that we've had, which has lots of customisation. It's very expensive and there is little reuse."
Peacock said much of what the bank wants to do is fairly straightforward: moving off highly customised, expensive equipment and services and onto more open, standardised commodity platforms; OpenStack is but one infrastructure-centric tool helping the bank deliver on that goal (it is using it to stand up an internal private cloud). But to reach its goals the company also has to deal with what a recent string of acquisitions has left at the bank, including the fact that its development teams are still quite siloed.
In order to standardise and reduce the number of services the firm's developers use, the bank created an engineering centre in Manhattan and selected a team of engineers and developers (currently numbering 30, expected to reach roughly 50 by the end of the year) spread between Toronto and New York City, all focused on helping it embrace a cloud-first, slimmed-down application landscape.
The centre and the central engineering team work with other development teams and infrastructure specialists across the bank, collecting feedback through fortnightly Q&As and feeding that back into the solutions being developed and the platforms being procured. Solving developer team fragmentation will ultimately help the bank move forward on this new path sustainably, he explained.
“When your developer community is so siloed you don’t end up adopting standards… you end up with 27 versions of Softcat. Which we have, by the way,” he said.
“This is a big undertaking, and one that has to be continuous. Business lines also have to move with us to decompose those applications and help deliver against those commitments,” he added.

May 9, 2016: From OpenStack Summit Austin, Part 1: Vendors digging in for long haul continued:

While OpenStack may have been conceived as an open source multi-tenant IaaS, its future success will mainly come from hosted and on-premises private cloud deployments. Yes, there are many pockets of success with regional or vertical-focused public clouds based on OpenStack, but none with the scale of AWS or the growth of Microsoft Azure. Hewlett Packard Enterprise shuttered its OpenStack Helion-based public cloud, and Rackspace shifted engineering resources away from its own public cloud. Rackspace, the service provider with the largest share of OpenStack-related revenue, says its private cloud is growing in the ‘high double digits.’ Currently, 56% of OpenStack’s service-provider revenue total is public cloud-based, but we expect private cloud will account for a larger portion over the next few years.

October 21, 2015:

A new model to deliver public cloud by Bill Hill, SVP and GM, HP Cloud

Over the past several years, HP has built its strategy on the idea that a hybrid infrastructure is the future of enterprise IT. In doing so, we have committed to helping our customers seamlessly manage their business across traditional IT and private, managed or public cloud environments, allowing them to optimize their infrastructure for each application’s unique requirements.
The market for hybrid infrastructure is evolving quickly. Today, our customers are consistently telling us that in order to meet their full spectrum of needs, they want a hybrid combination of efficiently managed traditional IT and private cloud, as well as access to SaaS applications and public cloud capabilities for certain workloads. In addition, they are pushing for delivery of these solutions faster than ever before.
With these customer needs in mind, we have made the decision to double-down on our private and managed cloud capabilities. For cloud-enabling software and solutions, we will continue to innovate and invest in our HP Helion OpenStack® platform. HP Helion OpenStack® has seen strong customer adoption and now runs our industry leading private cloud solution, HP Helion CloudSystem, which continues to deliver strong double-digit revenue growth and win enterprise customers. On the cloud services side, we will focus our resources on our Managed and Virtual Private Cloud offerings. These offerings will continue to expand, and we will have some very exciting announcements on these fronts in the coming weeks.

Public cloud is also an important part of our customers’ hybrid cloud strategy, and our customers are telling us that the lines between all the different cloud manifestations are blurring. Customers tell us that they want the ability to bring together multiple cloud environments under a flexible and enterprise-grade hybrid cloud model. In order to deliver on this demand with best-of-breed public cloud offerings, we will move to a strategic, multiple partner-based model for public cloud capabilities, as a component of how we deliver these hybrid cloud solutions to enterprise customers.

Therefore, we will sunset our HP Helion Public Cloud offering on January 31, 2016. As we have before, we will help our customers design, build and run the best cloud environments suited to their needs – based on their workloads and their business and industry requirements.

To support this new model, we will continue to aggressively grow our partner ecosystem and integrate different public cloud environments. To enable this flexibility, we are helping customers build cloud-portable applications based on HP Helion OpenStack® and the HP Helion Development Platform. In Europe, we are leading the Cloud28+ initiative that is bringing together commercial and public sector IT vendors and EU regulators to develop common cloud service offerings across 28 different countries.
For customers who want access to existing large-scale public cloud providers, we have already added greater support for Amazon Web Services as part of our hybrid delivery with HP Helion Eucalyptus, and we have worked with Microsoft to support Office 365 and Azure. We also support our PaaS customers wherever they want to run our Cloud Foundry platform – in their own private clouds, in our managed cloud, or in a large-scale public cloud such as AWS or Azure.
All of these are key elements in helping our customers transform into a hybrid, multi-cloud IT world. We will continue to innovate and grow in our areas of strength, we will continue to help our partners and to help develop the broader open cloud ecosystem, and we will continue to listen to our customers to understand how we can help them with their entire end-to-end IT strategies.

 December 1, 2015:

Hewlett Packard Enterprise and Microsoft announce plans to deliver integrated hybrid IT infrastructure press release

London, U.K. – December 1, 2015 – Today at Hewlett Packard Enterprise Discover, HPE and Microsoft Corp. announced new innovation in Hybrid Cloud computing through Microsoft Azure, HPE infrastructure and services, and new program offerings. The extended partnership appoints Microsoft Azure as a preferred public cloud partner for HPE customers while HPE will serve as a preferred partner in providing infrastructure and services for Microsoft’s hybrid cloud offerings.

“Hewlett Packard Enterprise is committed to helping businesses transform to hybrid cloud environments in order to drive growth and value,” said Meg Whitman, President and CEO, Hewlett Packard Enterprise. “Public cloud services, like those Azure provides, are an important aspect of a hybrid cloud strategy and Microsoft Azure blends perfectly with HPE solutions to deliver what our customers need most.”
The partnering companies will collaborate across engineering and services to integrate innovative compute platforms that help customers optimize their IT environment, leverage new consumption models and accelerate their business further, faster.
“Our mission to empower every organization on the planet is a driving force behind our broad partnership with Hewlett Packard Enterprise that spans Microsoft Azure, Office 365 and Windows 10,” said Satya Nadella, CEO, Microsoft. “We are now extending our longstanding partnership by blending the power of Azure with HPE’s leading infrastructure, support and services to make the cloud more accessible to enterprises around the globe.”
Product Integration and Collaboration

HPE and Microsoft are introducing the first hyper-converged system with true hybrid cloud capabilities, the HPE Hyper-Converged 250 for Microsoft Cloud Platform System Standard. Bringing together industry leading HPE ProLiant technology and Microsoft Azure innovation, the jointly engineered solution brings Azure services to customers' datacenters, empowering users to choose where and how they want to leverage the cloud. An Azure management portal enables business users to self-deploy Windows and Linux workloads, while ensuring IT has central oversight. Azure services provide reliable backup and disaster recovery, and with HPE OneView for Microsoft System Center, customers get an integrated management experience across all system components. HPE offers hardware and software support, installation and startup services to customers to speed deployment to just a matter of hours, lower risk and decrease total cost of ownership. The CS 250 is available to order today.
As part of the expanded partnership, HPE will enable Azure consumption and services on every HPE server, which allows customers to rapidly realize the benefits of hybrid cloud.
Extended Support and Services to Simplify Cloud
HPE and Microsoft will create HPE Azure Centers of Excellence in Palo Alto, Calif. and Houston, Texas, to ensure customers have a seamless hybrid cloud experience when leveraging Azure across HPE infrastructure, software and services. Through the work at these centers, both companies will invest in continuing advancements in Hybrid IT and Composable Infrastructure.
Because Azure is a preferred provider of public cloud for HPE customers, HPE also plans to certify an additional 5,000 Azure Cloud Architects through its Global Services Practice. This will extend its Enterprise Services offerings to bring customers an open, agile hybrid cloud with improved security that integrates with Azure.
Partner Program Collaboration
Microsoft will join the HPE Composable Infrastructure Partner Program to accelerate innovation for the next-generation infrastructure and advance the automation and integration of Microsoft System Center and HPE OneView orchestration tools with today’s infrastructure.
Likewise, HPE joined two Microsoft programs that help customers accelerate their hybrid cloud journey through end-to-end cloud, mobility, identity and productivity solutions. As a participant in Microsoft’s Cloud Solution Provider program, HPE will sell Microsoft cloud solutions across Azure, the Microsoft Enterprise Mobility Suite and Office 365.

May 9, 2016: From OpenStack Summit Austin, Part 1: Vendors digging in for long haul continued:

VENDOR DEVELOPMENTS

As of the Mitaka release, two new gold members were added: UnitedStack and EasyStack, both from China. Other service providers and vendors shared their customer momentum and product updates with 451 Research during the summit. Among the highlights are:

  • AT&T has cobbled together a DevOps team from 67 different organizations, in order to transform into a software company.
  • All of GoDaddy's new servers are going into its OpenStack environment. It is also using the Ironic (bare metal) project and exploring containers on OpenStack.
  • SwiftStack built a commercial product with an AWS-like consumption model using the Swift (object storage) project. It now has over 60 customers, including eBay, PayPal, Burton Snowboards and Ancestry.com.
  • OVH is based in France and operates a predominantly pan-European public cloud. It added Nova compute in 2014, and currently has 75PB on Swift storage.
  • Unitas Global says OpenStack-related enterprise engagements are a large part of its 100% Y/Y growth. While it does not contribute code, it is helping to develop operational efficiencies and working with Canonical to deploy 'vanilla' OpenStack using Juju charms. Tableau Software is a client.
  • DreamHost is operating an OpenStack public cloud, DreamCompute, and is a supporter of the Astara (network orchestration) project. It claims 2,000 customers for DreamCompute and 10,000 customers for its object storage product.
  • Platform9 is a unique OpenStack-in-SaaS startup with 20 paying customers. Clients bring their own hardware, and the software provides the management functions and takes care of patching and upgrades.
  • AppFormix is a software startup focused on cloud operators and application developers that has formed a licensing agreement with Rackspace. Its analytics and capacity-planning dashboard software will now be deployed on Rackspace's OpenStack private cloud. The software also works with Azure and AWS.
  • Tesora is leveraging the Trove project to offer DBaaS. The vendor built a plug-in for Mirantis' Fuel installer. The collaboration claims to make commercial, open source relational and NoSQL databases easier for administrators to deploy.

April 25, 2016:

AT&T’s Cloud Journey with OpenStack by Sorabh Saxena SVP, Software Development & Engineering, AT&T

OpenStack + AT&T Innovation = AT&T Integrated Cloud.

AT&T’s network has experienced enormous growth in traffic in the last several years and the trend continues unabated. Our software defined network initiative addresses the escalating traffic demands and brings greater agility and velocity to delivering features to end customers. The underlying fabric of this software defined network is AT&T Integrated Cloud (AIC).

Sorabh Saxena, AT&T’s SVP of Software Development & Engineering, will share several use cases that will highlight a multi-dimensional strategy for delivering an enterprise & service provider scale cloud. The use cases will illustrate OpenStack as the foundational element of AIC, AT&T solutions that complement it, and how it’s integrated with the larger AT&T ecosystem.

http://att.com/ecomp


As the Senior Vice President of Software Development and Engineering at AT&T, Sorabh Saxena is leading AT&T’s transformation to a software-based company.  Towards that goal, he is leading the development of platforms that include AT&T’s Integrated Cloud (AIC), API, Data, and Business Functions. Additionally, he manages delivery and production support of AT&T’s software defined network.

Sorabh and his organization are also responsible for technology solutions and architecture for all IT projects, AT&T Operation Support Systems and software driven business transformation programs that are positioning AT&T to be a digital first, integrated communications company with a best in class cost structure. Sorabh is also championing a cultural shift with a focus on workforce development and software & technology skills development.

Through Sorabh and his team’s efforts associated with AIC, AT&T is implementing an industry leading, highly complex and massively scaled OpenStack cloud.  He is an advocate of OpenStack and his organization contributes content to the community that represents the needs of large enterprises and communication services providers.

April 25, 2016: And the Superuser Award goes to… AT&T takes the fourth annual Superuser Award.

AUSTIN, Texas — The OpenStack Austin Summit kicked off day one by awarding the Superuser Award to AT&T.

NTT, winners of the Tokyo edition, passed the baton onstage to the crew from AT&T.

AT&T is a legacy telco which is transforming itself by adopting virtual infrastructure and a software defined networking focus in order to compete in the market and create value for customers in the next five years and beyond. They have almost too many OpenStack accomplishments to list–read their full application here.


Sorabh Saxena gives a snapshot of AT&T's OpenStack projects during the keynote.

The OpenStack Foundation launched the Superuser Awards to recognize, support and celebrate teams of end-users and operators that use OpenStack to meaningfully improve their businesses while contributing back to the community.

The legacy telecom is in the top 20 percent for upstream contributions with plans to increase this significantly in 2016.

It’s time for the community to determine the winner of the Superuser Award to be presented at the OpenStack Austin Summit. Based on the nominations received, the Superuser Editorial Advisory Board conducted the first round of judging and narrowed the pool to four finalists.

Now, it’s your turn.

The team from AT&T is one of the four finalists. Review the nomination criteria below, check out the other nominees and cast your vote before the deadline, Friday, April 8 at 11:59 p.m. Pacific Daylight Time. Voting is limited to one ballot per person.

How has OpenStack transformed your business?

AT&T is a legacy telco which is transforming itself by adopting virtual infrastructure and a software defined networking focus in order to compete in the market and create value for customers in the next five years and beyond.

  1. Virtualization and virtual network functions (VNFs) are of critical importance to the Telecom industry to address growth and agility. AT&T’s Domain 2.0 Industry Whitepaper released in 2013 outlines the need as well as direction.
  2. AT&T chose OpenStack as the core foundation of their cloud and virtualization strategy
  3. OpenStack has reinforced AT&T’s open source strategy and strengthened our dedication to the community as we actively promote and invest resources in OpenStack
  4. AT&T is committing staff and resources to drive the vision and innovation in the OpenStack and OPNFV communities to help drive OpenStack as the default cloud orchestrator for the Telecom industry
  5. AT&T as a founding member of the ETSI ISG network functions virtualization (NFV) helped drive OpenStack as the cloud orchestrator in the NFV platform framework. OpenStack was positioned as the VIM – Virtual Infrastructure Manager. This accelerated the convergence of the Telco industry onto OpenStack.

OpenStack serves as a critical foundation for AT&T’s software-defined networking (SDN) and NFV future and we take pride in the following:

  • AT&T has deployed 70+ OpenStack (Juno & Kilo based) clouds globally, which are currently operational. Of the 70+ clouds 57 are production application and network clouds.
  • AT&T plans 90% growth, going to 100+ production application and network clouds by the end of 2016.
  • AT&T connects more than 14 million wireless customers via virtualized networks, with significant subscriber cut-over planned again in 2016
  • AT&T controls 5.7% of our network resources (29 Telco production grade VNFs) with OpenStack, with plans to reach 30% by the end of 2016 and 75% by 2020.
  • AT&T trained more than 100 staff in OpenStack in 2015

AT&T plans to expand its community team of 50+ employees in 2016. As the chosen cloud platform, OpenStack enabled AT&T in the following SDN- and NFV-related initiatives:

  • Our recently announced 5G field trials in Austin
  • Re-launch of unlimited data to mobility customers
  • Launch of AT&T Collaborate, a next-generation communication tool for the enterprise
  • Provisioning of a Network on Demand platform to more than 500 enterprise customers
  • Connected Car and MVNO (Mobile Virtual Network Operator)
  • Mobile Call Recording
  • Internally we are virtualizing our control services like DNS, NAT, NTP, DHCP, RADIUS, firewalls, load balancers and probes for fault and performance management.

Since 2012, AT&T has developed all of our significant new applications in a cloud native fashion hosted on OpenStack. We also architected OpenStack to support legacy apps.

  • AT&T’s SilverLining Cloud (predecessor to AIC) leveraged the OpenStack Diablo release, dating as far back as 2011
  • OpenStack currently resides on over 15,000 VMs worldwide, with the expectation of further, significant growth coming in 2016-17
  • AT&T’s OpenStack integrated Orchestration framework has resulted in a 75% reduction in turnaround time for requests for virtual resources
  • AT&T Plans to move 80% of our Legacy IT into the OpenStack based virtualized cloud environment within coming years
  • The uniform set of APIs exposed by OpenStack allows AT&T business units to leverage a "develop-once-run-everywhere" set of tools. OpenStack supports AT&T's strategy of adopting best-of-breed solutions at five 9's of reliability for:
    • NFV
    • Internet-scale storage service
    • SDN
  • Putting all AT&T's workloads on one common platform.

Deployment automation: OpenStack modules have enabled AT&T to cost-effectively manage the OpenStack configuration in an automated, holistic fashion.

  • Using OpenStack Heat, AT&T pushed rolling updates and incremental changes across 70+ OpenStack clouds. Doing it manually would take many more people and a much longer schedule (see the sketch below).
  • Using OpenStack Fuel as a pivotal component in its cloud deployments, AT&T accelerates the otherwise time-consuming, complex, and error-prone process of deploying, testing, and maintaining various configuration flavors of OpenStack at scale. AT&T was a major contributor towards the Fuel 7.0 and Fuel 8.0 requirements.

OpenStack has been a pivotal driver of AT&T's overall culture shift. AT&T as an organization is in the midst of a massive shift from a legacy telco to a company where new skills, techniques and solutions are embraced.
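A minimal sketch of that multi-site rollout pattern, assuming the official Python openstacksdk and one clouds.yaml entry per site; the site names, stack name and template file below are hypothetical placeholders, not AT&T's actual configuration:

    # Hypothetical sketch: apply one Heat template change across many sites.
    # Site names, stack name and template file are made-up placeholders.
    import openstack

    SITES = ["aic-site-01", "aic-site-02", "aic-site-03"]   # clouds.yaml entries
    STACK = "platform-baseline"

    for site in SITES:
        cloud = openstack.connect(cloud=site)
        if cloud.get_stack(STACK) is None:
            # First rollout on this site: create the stack.
            cloud.create_stack(STACK, template_file="baseline.yaml", wait=True)
        else:
            # Incremental change: update the existing stack in place.
            cloud.update_stack(STACK, template_file="baseline.yaml", wait=True)
        print(f"{site}: rollout applied")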

OpenStack has been a key driver of this transformation in the following ways:

  • AT&T is now building 50 percent of all software on open source technologies
  • Allowing for the adoption of a DevOps model that creates a more unified team working towards a better end product
  • Development transitioned from waterfall to cloud-native CI/CD methodologies
  • Developers continue to support OpenStack and make their applications cloud-native whenever possible.

How has the organization participated in or contributed to the OpenStack community?

AT&T was the first U.S. telecom service provider to sign up for and adopt the then early stage NASA-spawned OpenStack cloud initiative, back in 2011.

  • AT&T has been an active OpenStack contributor since the Bexar release.
  • AT&T has been a Platinum Member of the OpenStack Foundation since its origins in 2012 after helping to create its bylaws.
  • Toby Ford, AVP of AT&T Cloud Technology, has provided vision, technology leadership, and innovation to the OpenStack ecosystem as an OpenStack Foundation board member since late 2012.
  • AT&T is a founding member of the ETSI NFV ISG and OPNFV.
  • AT&T has invested in building an OpenStack upstream contribution team with 25 current employees and a target for 50+ employees by the end of 2016.
  • During the early years of OpenStack, AT&T brought many important use-cases to the community. AT&T worked towards solving those use-cases by leveraging various OpenStack modules, in turn encouraging other enterprises to have confidence in the young ecosystem.
  • AT&T drove these following Telco-grade blueprint contributions to past releases of OpenStack:
    • VLAN aware VMs (i.e. Trunked vNICs)
    • Support for BGP VPN, and shared volumes between guest VMs
    • Complex query support for statistics in Ceilometer
    • Spell checker gate job
    • Metering support for PCI/PCIe per VM tenant
    • PCI passthrough measurement in Ceilometer
    • Coverage measurement gate job
    • Nova using ephemeral storage with Cinder
    • Climate subscription mechanism
    • Access switch port discovery for bare metal nodes
    • SLA enforcement per vNIC
    • MPLS VPNaaS
    • NIC-state aware scheduling
  • Toby Ford has regularly been invited to present keynotes, sessions, and panel talks at a number of OpenStack summits. For instance:
    • Role of OpenStack in a Telco: User case study – Atlanta Summit, May 2014
    • Leveraging OpenStack to Solve Telco needs: Intro to SDN/NFV – Atlanta Summit, May 2014
    • Telco OpenStack Roadmap Panel Talk – Tokyo Summit, October 2015
    • OpenStack Roadmap Software Trajectory – Atlanta Summit, May 2014
    • Cloud Control to Major Telco – Paris Summit, November 2014
  • Greg Stiegler, assistant vice president – AT&T cloud tools & development organization represented the AT&T technology development organization at the Tokyo Summit.
  • AT&T Cloud and D2 Architecture team members were invited to present various keynote sessions, summit sessions and panel talks, including:
    • Participation at the Women of OpenStack Event – Tokyo Summit, 2015
    • Empower Your Cloud Through Neutron Service Function Chaining – Tokyo Summit, October 2015
    • OPNFV Panel – Vancouver Summit, May 2015
    • OpenStack as a Platform for Innovation – Keynote at OpenStack Silicon Valley, August 2015
    • Taking OpenStack From Zero to Production in a Fortune-500 – Tokyo Summit, October 2015
    • Operating at Web-scale: Containers and OpenStack Panel Talk – Tokyo Summit, October 2015
  • AT&T strives to collaborate with other leading industry partners in the OpenStack ecosystem. This has led to the entire community benefiting from AT&T's innovation.
  • Margaret Chiosi gives talks worldwide on AT&T’s D2.0 vision at many Telco conferences ranging from Optics (OFC) to SDN/NFV conferences advocating OpenStack as the de-facto cloud orchestrator.
  • AT&T Entertainment Group (DirecTV) architected a multi-hypervisor hybrid OpenStack cloud by designing a Neutron ML2 plugin. This innovation helped achieve integration between legacy virtualization and OpenStack.
  • AT&T is proud to drive OpenStack adoption by sharing knowledge back to the OpenStack community in the form of these summit sessions at the upcoming Austin summit:
    • Telco Cloud Requirements: What VNFs Are Asking For
    • Using a Service VM as an IPv6 vRouter
    • Service Function Chaining
    • Technology Analysis Perspective
    • Deploying Lots of Teeny Tiny Telco Clouds
    • Everything You Ever Wanted to Know about OpenStack At Scale
    • Valet: Holistic Data Center Optimization for OpenStack
    • Gluon: An Enabler for NFV
    • Among the Cloud: Open Source NFV + SDN Deployment
    • AT&T: Driving Enterprise Workloads on KVM and vCenter using OpenStack as the Unified Control Plane
    • Striving for High-Performance NFV Grid on OpenStack. Why you, and every OpenStack community member should be excited about it
    • OpenStack at Carrier Scale
  • AT&T is the first to market with deployment of OpenStack-supported carrier-grade Virtual Network Functions. We provide the community with integral data, information, and first-hand knowledge on the trials and tribulations experienced deploying NFV technology.
  • AT&T ranks in the top 20 percent of all companies in terms of upstream contribution (code, documentation, blueprints), with plans to increase this significantly in 2016.
    • Commits: 1200+
    • Lines of Code: 116,566
    • Change Requests: 618
    • Patch Sets: 1490
    • Draft Blueprints: 76
    • Completed Blueprints: 30
    • Filed Bugs: 350
    • Resolved Bugs: 250

What is the scale of the OpenStack deployment?

  • AT&T's OpenStack-based AIC is deployed at 70+ sites across the world. Of the 70+, 57 are production app and network clouds.
  • AT&T plans 90% growth, going to 100+ production app and network clouds by end of 2016.
  • AT&T connects more than 14 million of the 134.5 million wireless customers via virtualized networks with significant subscriber cutover planned again in 2016
  • AT&T controls 5.7% of our network resources (29 Telco production-grade VNFs, with a goal of reaching the high 80s by the end of 2016) on OpenStack.
  • Production workloads also include AT&T’s Connected Car, Network on Demand, and AT&T Collaborate among many more.

How is this team innovating with OpenStack?

  • AT&T and AT&T Labs are leveraging OpenStack to innovate with Containers and NFV technology.
  • Containers are a key part of AT&T's Cloud Native Architecture. AT&T chairs the Open Container Initiative (OCI) to drive the standardization around container formats.
  • AT&T is leading the effort to improve Nova and Neutron’s interface to SDN controllers.
  • Margaret Chiosi, an early design collaborator to Neutron, ETSI NFV, now serves as President of OPNFV. AT&T is utilizing its position with OPNFV to help shape the future of OpenStack / NFV. OpenStack has enabled AT&T to innovate extensively.

The following recent unique workloads would not be possible without the SDN and NFV capabilities that OpenStack enables:

  • Our recent announcement of 5G field trials in Austin
  • Re-launch of unlimited data to mobility customers
  • Launch of AT&T Collaborate
  • Network on Demand platform to more than 500 enterprise customers
  • Connected Car and MVNO (Mobile Virtual Network Operator)
  • Mobile Call Recording

New services by AT&T Entertainment Group (DirecTV) that will use OpenStack-based cloud infrastructure in coming years:

  • NFL Sunday Ticket with up to 8 simultaneous games
  • DirecTV streaming service without the need for a satellite dish

In summary – the innovation with OpenStack is not just our unique workloads, but also to support them together under the same framework, management systems, development/test, CI/CD pipelines, and deployment automation toolset(s).

Who are the team members?

  • AT&T Cloud and D2 architecture team
  • AT&T Integrated Cloud (AIC) Members: Margaret Chiosi, distinguished member of technical staff, president of OPNFV; Toby Ford, AVP – AT&T cloud technology & D2 architecture – strategy, architecture & planning, and OpenStack Foundation Board Member; Sunil Jethwani – director, cloud & SDN architecture, AT&T Entertainment Group; Andrew Leasck – director – AT&T Integrated cloud development; Janet Morris – director – AT&T integrated cloud development; Sorabh Saxena, senior vice president – AT&T software development & engineering organization; Praful Shanghavi – director – AT&T integrated cloud development; Bryan Sullivan – director member of technical staff; Ryan Van Wyk – executive director – AT&T integrated cloud development.
  • AT&T’s project teams top contributors: Paul Carver, Steve Wilkerson, John Tran, Joe D’andrea, Darren Shaw.

April 30, 2016: Swisscom in Production with OpenStack and Cloud Foundry

Swisscom has one of the largest in-production, industry-standard Platform as a Service offerings built on OpenStack. Their offering is focused on providing an enterprise-grade PaaS environment to customers worldwide, with various delivery models based on Cloud Foundry and OpenStack. Swisscom embarked early on the OpenStack journey to deploy their app cloud, partnering with Red Hat, Cloud Foundry, and PLUMgrid. With services such as MongoDB, MariaDB, RabbitMQ, ELK, and object storage, the PaaS cloud offers what developers need to get started right away. Join this panel for take-away lessons on Swisscom's journey, the technologies, partnerships, and developers who are building apps every day on Swisscom's OpenStack cloud.

May 23, 2016: How OpenStack public cloud + Cloud Foundry = a winning platform for telecoms, an interview on 'OpenStack Superuser' with Marcel Härry, chief architect, PaaS at Swisscom

Swisscom has one of the largest in-production industry standard platform-as-a-service built on OpenStack.

Their offering focuses on providing an enterprise-grade PaaS environment to customers worldwide and with various delivery models based on Cloud Foundry and OpenStack. Swisscom, Switzerland’s leading telecom provider, embarked early on the OpenStack journey to deploy their app cloud partnering with Red Hat, Cloud Foundry and PLUMgrid.

Superuser interviewed Marcel Härry, chief architect, PaaS at Swisscom and member of the Technical Advisory Board of the Cloud Foundry Foundation, to find out more.

How are you using OpenStack?

OpenStack has allowed us to rapidly develop and deploy our Cloud Foundry-based PaaS offering, as well as to rapidly develop new features within SDN and containers. OpenStack is the true enabler for rapid development and delivery.

An example: within half a year of the initial design and setup, we had already delivered two production instances of our PaaS offering built on multiple OpenStack installations at different sites. Today we are running multiple production deployments for high-profile customers, who further develop their SaaS offerings using our platform. Additionally, we provide the infrastructure for numerous lab and development instances. These environments allow us to harden and stabilize new features at a rapid pace of innovation while still ensuring a solid environment.

We are running numerous OpenStack stacks, all limited – by design – to a single region, and single availability zone. Their size ranges from a handful of compute nodes, to multiple dozens of compute nodes, scaled based on the needs of the specific workloads. Our intention is not to build overly large deployments, but rather to build multiple smaller stacks, hosting workloads that can be migrated between environments. These stacks are hosting thousands of VMs, which in turn are hosting tens of thousands of containers to run production applications or service instances for our customers.
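To illustrate the "many small, interchangeable stacks" idea (a minimal sketch only; the cloud names, image, flavor and network below are hypothetical placeholders, not Swisscom's configuration), the same workload definition can be pointed at whichever stack has capacity using the openstacksdk and a clouds.yaml file:

    # Minimal sketch: boot the same workload on any one of several small,
    # single-region OpenStack stacks. All names are made-up placeholders.
    import openstack

    STACKS = ["paas-zrh-1", "paas-zrh-2", "paas-gva-1"]   # clouds.yaml entries

    def boot(cloud_name, server_name):
        conn = openstack.connect(cloud=cloud_name)
        image = conn.compute.find_image("ubuntu-14.04")
        flavor = conn.compute.find_flavor("m1.medium")
        network = conn.network.find_network("tenant-net")
        server = conn.compute.create_server(
            name=server_name, image_id=image.id, flavor_id=flavor.id,
            networks=[{"uuid": network.id}])
        return conn.compute.wait_for_server(server)

    # The same definition runs unchanged on whichever stack has free capacity.
    boot(STACKS[0], "cf-runner-01")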

What kinds of applications or workloads are you currently running on OpenStack?

We’ve been using OpenStack for almost three years now as our infrastructure orchestrator. Swisscom built its Elastic Cloud on top of OpenStack. On top of this we run Swisscom’s Application Cloud, or PaaS, built on Cloud Foundry with PLUMgrid as the SDN layer. Together, the company’s clouds deliver IaaS to IT architects, SaaS to end users and PaaS to app developers among other services and applications. We mainly run our PaaS/Cloud Foundry environment on OpenStack as well as the correlated managed services (i.e. a kind of DBaaS, Message Service aaS etc.) which are running themselves in Docker containers.

What challenges have you faced in your organization regarding OpenStack, and how did you overcome them?

The learning curve for OpenStack is pretty steep. When we started three years ago almost no reference architectures were available, especially none with enterprise-grade requirements such as dual-site, high availability (HA) capabilities on various levels and so forth. In addition, we went directly into the SDN, SDS levels of implementation which was a big, but very successful step at the end of the day.

What were your major milestones?

Swisscom’s go-live for its first beta environment was in spring of 2014, go live for an internal development (at Swisscom) was spring of 2015, and the go-live for its public Cloud Foundry environment fully hosted on OpenStack was in the fall of 2015. The go-live date for enterprise-grade and business-critical workloads on top of our stack from various multinational companies in verticals like finance or industry is spring, 2016, and Swisscom recently announced Swiss Re as one of its first large enterprise cloud customers.

What have been the biggest benefits to your organization as a result of using OpenStack?

Pluggability and multi-vendor interoperability (for instance with SDN like PLUMgrid or SDS like ScaleIO) to avoid vendor lock-in and create a seamless system. OpenStack enabled Swisscom to experiment with deployments utilizing a DevOps model and environment to deploy and develop applications faster. It simplified the move from PoC to production environments and enabled us to easily scale out services utilizing a distributed cluster-based architecture.

What advice do you have for companies considering a move to OpenStack?

It’s hard in the beginning but it’s really worth it. Be wise when you select your partners and vendors, this will help you to be online in a very short amount of time. Think about driving your internal organization towards a dev-ops model to be ready for the first deployments, as well as enabling your firm to change deployment models (e.g. going cloud-native) for your workloads when needed.

How do you participate in the community?

This year's Austin event was our second OpenStack Summit where we provided insights into our deployment and architecture, contributing back to the community in terms of best practices, as well as providing real-world production use-cases. Furthermore, we directly contribute patches and improvements to various OpenStack projects. Some of these patches have already been accepted, while a few are in the pipeline to be further polished for publishing. Additionally, we are working very closely with our vendors – Red Hat, EMC, ClusterHQ/Flocker, PLUMgrid as well as the Cloud Foundry Foundation – to further improve their integration and stability within the OpenStack project. For example, we worked closely with Flocker on their Cinder-based driver to orchestrate persistency among containers. Furthermore, we have provided many bug reports through our vendors and have worked together with them on fixes which then have made their way back into the OpenStack community.

What’s next?

We have a perfect solution for non-persistent container workloads for our customers. We are constantly evolving this product and are working especially hard to meet the requirements of the enterprise and finance verticals when it comes to the infrastructure orchestration of OpenStack.

Härry spoke about OpenStack in production at the recent Austin Summit, along with Pere Monclus of PLUMgrid, Chip Childers of the Cloud Foundry Foundation, Chris Wright of Red Hat and analyst Rosalyn Roseboro. 

May 10, 2016: Lenovo‘s Highly-Available OpenStack Enterprise Cloud Platform Practice with EasyStack press release by EasyStack

BEIJING, May 10, 2016 /PRNewswire/ — In 2015, the Chinese IT superpower Lenovo chose EasyStack to build an OpenStack-based enterprise cloud platform to carry out their “Internet Strategy”. In six months, this platform has evolved into an enterprise-level OpenStack production environment of over 3000 cores with data growth peaking at 10TB/day. It is expected that by the end of 2016, 20% of the IT system will be migrated onto the Cloud.

OpenStack is the foundation for cloud and has arguably matured in overseas markets. In China, however, noteworthy OpenStack practices often come from the relatively new category of Internet companies. Though it has long been marketed as "enterprise-ready", traditional industries still tend to hold back from OpenStack. This article aims to turn that perception around by presenting an OpenStack practice from the Chinese IT superpower Lenovo, detailing its journey of transformation, in both the technology and business realms, to a private cloud built upon OpenStack. Although OpenStack will still largely be a carrier for Internet businesses, Lenovo plans to migrate 20% of its IT systems onto the cloud before the end of 2016 – a much-applauded step forward.

Be it the traditional PC or the cellphone, technology is evolving fast amidst the move towards mobile and social networking, and the competition is fierce. In response to rapidly changing market dynamics, the Lenovo Group moved from a product-oriented to a user-oriented strategy, one that can only be supported by an agile, flexible and scalable enterprise-level cloud platform capable of rapid iteration. After thorough consideration and careful evaluation, Lenovo chose OpenStack as the basis for the enterprise cloud platform to carry out this "Internet Strategy". After six months in operation, this platform has evolved into an enterprise-level OpenStack production environment of over 3,000 cores with data growth peaking at 10TB/day. It is expected that 20% of the IT system will be migrated onto the cloud by the end of 2016.

Transformation and Picking the Right Cloud

In the past, internal IT at Lenovo was always channel- and key-client-oriented, with a traditional architecture consisting of IBM Power, AIX, PowerVM, DB2 and, more recently, VMware virtualization. In the move towards becoming an Internet company, such a traditional architecture was far from able to support the user and business volume brought by the B2C model. Cost-wise, Lenovo's large-scale deployments of commercial solutions were reliable but complex to scale and extremely expensive.

This traditional IT architecture was also inadequate in terms of operational efficiency, security and compliance, and was unable to support Lenovo's transition towards eCommerce and mobile business. In 2015, Lenovo's IT entered a stage of infrastructure revamp, needing a cloud computing platform to support new businesses.

To find the right makeup for the cloud platform, Lenovo performed meticulous analyses and comparisons of mainstream x86 virtualization technologies, private cloud platforms, and public cloud platforms. After evaluating stability, usability, openness, and ecosystem vitality and comprehensiveness, Lenovo deemed the OpenStack cloud platform technology able to fulfill its enterprise needs and decided to use OpenStack as the infrastructure cloud platform supporting its continuous business innovation.

Disaster recovery plans for virtual machines, cloud disks and databases were considered early in the OpenStack architectural design to ensure prompt switchover when needed to maintain business availability.

A Highly Available Architectural Design

Architecturally, Lenovo's enterprise cloud platform manages its infrastructure through a software-defined environment, using x86 servers and 10Gb networking at the base layer alongside Internet-style monitoring and maintenance solutions, while employing the OpenStack platform to perform overall resource management.

To ensure high availability and improve the cloud platform's efficiency, Lenovo designed a hyper-converged physical architecture: capable, highly configured servers combine compute, storage and network in a single box, compute and storage services are placed on the same physical node, and OpenStack integrates everything into a single resource pool.

At the hardware layer, two-way X3650 servers and four-way ThinkServer RQ940 servers serve as the backbone. Every node has five SSDs and 12 SAS hard drives making up the storage module. The SSDs act not only as the storage cache but also as a high-performance storage resource pool; virtual machines access the distributed storage to achieve high availability.

Lenovo had to resolve a number of problems and overcome numerous hurdles to elevate OpenStack to the enterprise-level.

Compute

Here, Lenovo utilized high-density virtual machine deployment. At the base is KVM virtualization technology, optimized in multiple ways to maximize physical server performance, isolating CPU, memory and other hardware resources under the converged compute-storage architecture. The outcome is the ability to run over 50 VMs smoothly and efficiently on every two-way (dual-CPU) compute node.

In a cloud environment, high availability is ideally achieved through software solutions rather than hardware. Still, some traditional applications have requirements tied to a single host server. For such applications that cannot achieve high availability on their own, Lenovo used compute HA technology on the compute nodes: faults are detected through various methods, and virtual machines on a failed physical machine are migrated to other available physical machines when needed. The entire process is automated, minimizing business disruption caused by physical machine breakdowns.
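The press release does not describe Lenovo's implementation, but the general pattern, detecting a dead hypervisor and moving its instances elsewhere, can be sketched with the openstacksdk; the cloud name is a placeholder and the exact proxy calls and query filters are assumptions to verify against the SDK version in use:

    # Hypothetical compute-HA loop: find hypervisors that stopped reporting
    # and evacuate their instances to healthy hosts. Not Lenovo's actual code.
    import openstack

    conn = openstack.connect(cloud="lenovo-cloud")        # made-up cloud name

    for hv in conn.compute.hypervisors(details=True):
        if hv.state == "down":                            # simple API-level fault detection
            # Instances that were running on the failed host...
            for server in conn.compute.servers(all_projects=True, host=hv.name):
                # ...are rebuilt on another available host; shared Ceph storage
                # means their disks remain reachable. (Assumed call signature.)
                conn.compute.evacuate_server(server)
                print(f"evacuating {server.name} from {hv.name}")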

Network

Network Isolation

Different NICs, switches or VLANs are used to isolate the various networks – the stand-alone OpenStack management network, virtual production networks, storage networks, public networks, and PXE networks – so that interference is avoided, overall bandwidth is increased and the network can be controlled more precisely.

Multi-Public Network

Network agility is achieved through multiple public networks, which also makes security policies easier to manage. The public networks from Unicom and Telecom and the office network are some examples.

Network and Optimization

The VLAN network model integrates well with the traditional data center network; its packet processing is then optimized to improve throughput, bringing virtual machine bandwidth closer to that of the physical network.
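As an illustration only (the physical network name, VLAN ID and addresses are made up), mapping such a VLAN design onto Neutron typically means creating provider networks tied to VLANs that already exist in the data center fabric, for example with the openstacksdk:

    # Hypothetical sketch: expose an existing data-center VLAN to OpenStack as
    # a Neutron provider network. All names and numbers are placeholders.
    import openstack

    conn = openstack.connect(cloud="lenovo-cloud")        # made-up cloud name

    net = conn.network.create_network(
        name="prod-vlan-101",
        provider_network_type="vlan",
        provider_physical_network="physnet1",             # must match the ML2 bridge mapping
        provider_segmentation_id=101,                     # VLAN already trunked in the fabric
    )
    conn.network.create_subnet(
        network_id=net.id,
        ip_version=4,
        cidr="10.10.101.0/24",
        gateway_ip="10.10.101.1",                         # routed by the physical network
    )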

Dual Network Protocol Bundling and Multi Switch

High availability of the physical network is achieved by bonding dual NICs and connecting them to different switches.

Network Node HA

Public network load balancing, high availability and high performance are achieved through multiple network nodes. At the router level, an active/standby methodology is used to achieve HA, which is ensured through independent monitoring services for the network routers.

Storage

The Lenovo OpenStack cloud platform uses Ceph as the unified storage backend: data storage for Glance images, Nova virtual machine system disks, and Cinder volumes is provided by Ceph RBD. By using Ceph's copy-on-write function (with modifications to the OpenStack code), virtual machines can be deployed within seconds.
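For illustration, the copy-on-write flow behind those second-scale deployments looks roughly like the following, a sketch driving the standard rbd command line; the pool and image names are hypothetical and this is not Lenovo's actual tooling:

    # Sketch of Ceph copy-on-write cloning: snapshot a Glance image stored in
    # RBD, protect the snapshot, then clone it as a new instance disk. Cloning
    # is instant because no data is copied up front. Names are placeholders.
    import subprocess

    def rbd(*args):
        subprocess.run(["rbd", *args], check=True)

    image = "images/ubuntu-14.04"         # Glance image living in the 'images' pool
    snap = image + "@base"                # snapshot used as the clone parent
    disk = "vms/instance-0001_disk"       # new instance's system disk

    rbd("snap", "create", snap)           # one-time: snapshot the image
    rbd("snap", "protect", snap)          # one-time: protect it so clones may reference it
    rbd("clone", snap, disk)              # per-VM: instant copy-on-write clone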

With Ceph as the unified storage backend, its performance is a key factor in whether an enterprise's critical applications can be virtualized and made cloud-ready. In a hyper-converged deployment architecture where compute and storage run alongside each other, storage optimization not only has to maximize storage performance but also has to ensure isolation between storage and compute resources to maintain system stability. Lenovo optimized the IO stack below bottom-up, layer by layer:

On the Networks

Enable jumbo frames to improve data transfer efficiency, and use 10Gb Ethernet to carry the Ceph cluster network traffic, improving the efficiency of Ceph data replication.

On Functionality

Leverage solid-state disks as the Ceph OSD journal to improve overall cluster IO performance, to fulfill the performance demands of critical businesses (for example, the eCommerce system's databases) and to achieve a balance of performance and cost. SSDs are known for low power consumption, prompt response, high IOPS, and high throughput. The Ceph journal workload is well suited to multithreaded access, so replacing mechanical hard drives with SSDs fully exploits the SSD's strengths of fast random access, rapid response and high IO throughput. The IO scheduling strategy is also tuned for SSDs to lower overall IO latency.

Purposeful Planning

Plan the number of Ceph OSDs on each hyper-converged node according to the virtual machine density on that server, and reserve CPU and memory resources in advance. Tools such as cgroups and taskset can be used to isolate the resources of QEMU-KVM and the Ceph OSDs from each other.
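
The following is an illustrative sketch of the taskset part of that isolation (the core list is an assumption): running ceph-osd processes are pinned to a reserved set of cores so QEMU-KVM guests keep the rest.

```python
# Illustrative sketch, not Lenovo's tooling: pin ceph-osd processes to a set of
# cores reserved for storage on a hyper-converged node. The CPU list is an
# assumption; a fuller setup would also use cgroups for memory and libvirt
# vCPU pinning on the QEMU-KVM side.
import subprocess

OSD_CPUS = "0-3"   # cores reserved for Ceph OSDs (assumed)

def ceph_osd_pids():
    out = subprocess.run(["pgrep", "-x", "ceph-osd"],
                         capture_output=True, text=True)
    return out.stdout.split()

def pin_to_cpus(pid, cpus):
    # taskset -cp <cpu-list> <pid> changes the affinity of a running process.
    subprocess.check_call(["taskset", "-cp", cpus, pid])

if __name__ == "__main__":
    for pid in ceph_osd_pids():
        pin_to_cpus(pid, OSD_CPUS)
```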

Parameter Tuning

For Ceph parameter tuning, performance can be improved noticeably by adjusting parameters such as the FileStore queue settings and the OSD op threads. Further tuning can be done through iterative testing to find the values best suited to the current hardware environment.
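
To make the kind of tuning concrete, here is a sketch that emits a ceph.conf [osd] fragment with FileStore queue and OSD op-thread settings of the type mentioned above; the numbers are placeholders to be refined by the iterative testing the text recommends.

```python
# The values below are placeholders, not recommendations; the text advises
# finding final settings by iterative benchmarking on the actual hardware.
OSD_TUNING = {
    "osd op threads": 8,
    "filestore op threads": 8,
    "filestore queue max ops": 500,
    "filestore queue max bytes": 1 << 30,   # 1 GiB
    "filestore max sync interval": 10,      # seconds
}

def osd_section(params: dict) -> str:
    lines = ["[osd]"]
    lines += [f"{key} = {value}" for key, value in params.items()]
    return "\n".join(lines) + "\n"

if __name__ == "__main__":
    # Paste the output into ceph.conf and restart the OSDs, or apply settings
    # at runtime with 'ceph tell osd.* injectargs'.
    print(osd_section(OSD_TUNING))
```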

Data HA

For data HA, in addition to the existing OpenStack data protection measures, Lenovo has planned a comprehensive disaster recovery scheme with three data centers across two sites:

Over dedicated low-latency fiber-optic links, data is stored synchronously in the local backup center and replicated asynchronously to the remote center, maximizing data safety.

AD Integration

In addition, Lenovo has built its own business requirements into the OpenStack enterprise cloud platform. As a mega company with tens of thousands of employees, it relies on Active Directory (AD) for authentication so that staff do not have to be given individually created user accounts. Through customized development by its partner, Lenovo has successfully integrated AD into its OpenStack Enterprise Cloud Platform.
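
Lenovo's AD integration was custom work done with its partner, so the sketch below is only illustrative: it prints the kind of keystone.conf fragment that points Keystone's identity backend at an Active Directory domain, with every host name and DN a placeholder.

```python
# Illustrative only; not Lenovo's customized integration. Host names, DNs and
# the service account are placeholders for an Active Directory domain.
AD_SETTINGS = {
    "url": "ldap://ad.example.com",
    "user": "CN=svc-openstack,OU=Service Accounts,DC=example,DC=com",
    "password": "***",
    "suffix": "DC=example,DC=com",
    "user_tree_dn": "OU=Employees,DC=example,DC=com",
    "user_objectclass": "person",
    "user_id_attribute": "cn",
    "user_name_attribute": "sAMAccountName",
}

def keystone_ldap_fragment(settings: dict) -> str:
    lines = ["[identity]", "driver = ldap", "", "[ldap]"]
    lines += [f"{key} = {value}" for key, value in settings.items()]
    return "\n".join(lines) + "\n"

if __name__ == "__main__":
    print(keystone_ldap_fragment(AD_SETTINGS))
```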

Overall Outcomes

Lenovo’s transformation towards being “internet-driven” could begin once this OpenStack Enterprise Cloud Platform was built. eCommerce, Big Data and Analytics, IM, online mobile phone support and other internet-based businesses are all supported by this cloud platform. Judging from the team’s feedback, the Lenovo OpenStack Enterprise Cloud Platform is performing as expected.

In building this OpenStack-based enterprise cloud platform, Lenovo chose EasyStack, the leading Chinese OpenStack company, to provide professional implementation and consulting services, helping to build the initial platform and training a number of OpenStack experts. For Lenovo, community compatibility and continuous upgrades, as well as experience in delivering services at the enterprise level, were the main factors when choosing an OpenStack business partner.

Microsoft chairman: The transition to a subscription-based cloud business isn’t fast enough. Revamp the sales force for cloud-based selling.

See also my earlier posts:
– John W. Thompson, Chairman of the Board of Microsoft: the least recognized person in the radical two-man shakeup of the uppermost leadership, ‘Experiencing the Cloud’
– Satya Nadella on “Digital Work and Life Experiences” supported by “Cloud OS” and “Device OS and Hardware” platforms–all from Microsoft, ‘Experiencing the Cloud’, July 23, 2014

May 17, 2016: John Thompson: Microsoft Should Move Faster on Cloud Plan, in an interview with Bloomberg’s Emily Chang on “Bloomberg West”

The focus is very-very good right now. We’re focused on cloud, on the hybrid model of the cloud. We’re focused on the application services we can deliver not just in the cloud but on multiple devices. If ever I would like to see something change, it’s more about pace. From my days at IBM [Thompson spent 28 years at IBM before becoming chief executive at Symantec] I can remember we never seemed to be running or moving fast enough. That is always the case in the established enterprise. While you believe that you’re moving fast, in fact you’re not moving as fast as a startup.

June 2, 2016: Microsoft Ramps Up Its Cloud Efforts, as Bloomberg Intelligence’s Mandeep Singh reports on “Bloomberg Markets”

If you look at their segment revenue, 43% is from Windows and hardware devices. That part is the one where it is hard to come up with a cloud strategy to really migrate that segment to the cloud very quickly. The infrastructure side is 30%, that is taken care of, and Office is the other 30% where they have a good mix. It is really that other 43% of revenue where they have to figure out how to accelerate the transition to the cloud.

Then Bloomberg’s June 2, 2016 article (written by Dina Bass) came out with the following verdict:

Microsoft Board Mulls Sales Force Revamp to Speed Shift to Cloud 

Board members at Microsoft Corp. are grappling with a growing concern: that the company’s traditional software business, which makes up the majority of its sales, could evaporate in a matter of years — and Chairman John Thompson is pushing for a more aggressive shift into newer cloud-based products.

Thompson said he and the board are pleased with a push by Chief Executive Officer Satya Nadella to make more money from software and services delivered over the internet, but want it to move much faster. They’re considering ideas like increasing spending, overhauling the sales force and managing partnerships differently to step up the pace.

The cloud growth isn’t merely nice to have — it’s critical against the backdrop of declining demand for what’s known as on-premise software programs, the more traditional approach that involves installing software on a company’s own computers and networks. No one knows exactly how quickly sales of those legacy offerings will drop off, Thompson said, but it’s “inevitable that part of our business will be under continued pressure.”

The board members’ concern was born from experience. Thompson recounts how fellow director Chuck Noski, a former chief financial officer of AT&T, watched the telecom carrier’s traditional wireline business evaporate in just three years as the world shifted to mobile. Now, Noski and Thompson are asking whether something similar could happen to Microsoft.

“What’s the likelihood that could happen with on-prem versus cloud? That in three years, we look up and it’s gone?” Thompson said in an interview, snapping his fingers to make the point.

Small, but Growing

Nadella has said the company is on track to make its forecast for $20 billion in annualized sales from commercial cloud products in fiscal 2018. Still, Thompson said, the cloud business could be even further along, and the software maker should have started its push much earlier. Commercial cloud services revenue has posted impressive growth rates — with Azure product sales rising more than 100 percent quarterly — but the total business contributed just $5.8 billion of Microsoft’s $93.6 billion in sales in the latest fiscal year.

Thompson praised the technology behind smaller cloud products, such as Power BI tools for business analysis and data visualization and the enterprise mobile management service, which delivers apps and data to various corporate devices. But the latter, for example, brings in $300 million a year — just a sliver of overall annual revenue, which will soon top $100 billion, Thompson said.

The board is examining whether Microsoft has invested enough in its complete cloud lineup, Thompson said. It’s not just about developing better cloud technology — it’s a question of how the company sells those products and its strategy for recruiting partners to resell Microsoft’s services and build their own offerings on top of them. Persuading partners to develop compatible applications is a strong point for cloud market leader Amazon.com Inc., he said.

Thompson declined to be specific about what the company might change in sales and partnerships, but he said the company may need to “re-imagine” those organizations. “The question is, should it be more?” he said. “If you believe we need to run harder, run faster, be less risk-averse as a mantra, the question is how much more do you do.”

Cloud Partnerships

Analysts say Microsoft should seek to develop a deeper bench of partners making software for Azure and consultants to install and manage those services for customers who need the help. Microsoft is working on this, but is behind Amazon Web Services, said Lydia Leong, an analyst at Gartner Inc.

“They are nowhere near at the same level of sophistication, and the Microsoft partners are mostly new to the Azure ecosystem, so they don’t know it as well,” she said. “If you’re a customer and you want to migrate to AWS, you have this massive army that can help you.”

In the sales force, Microsoft’s representatives need more experience in cloud deals — which are generally subscription-based rather than one-time purchases — and how they differ from traditional software contracts, said Matt McIlwain, managing director at Seattle’s Madrona Venture Partners. “They haven’t made enough of a transition to a cloud-based selling motion,” he said. “It’s still a work in progress.”

Microsoft declined to comment on the company’s cloud strategy or any changes to sales and partnerships for this story, and director Noski couldn’t be reached for comment.

One-Time Purchases

The company’s dependence on demand for traditional software was painfully apparent in its most recent quarterly report, when revenue was weighed down by weakness in its transactional business, or one-time purchases of software that customers store and run on their own PCs and networks. Chief Financial Officer Amy Hood in April said that lackluster transactional sales were likely to continue.

Microsoft’s two biggest cloud businesses are the Azure web-based service, which trails top provider Amazon but leads Google and International Business Machines Corp., and the Office 365 cloud versions of e-mail, collaboration software, word-processing and spreadsheet software. Microsoft’s key on-premise products include Windows Server and traditional versions of Office and the SQL database server.

Slumps like last quarter’s hurt even more amid the company’s shift to the cloud, which has brought a lot of changes to its financial reporting. For cloud deals, revenue is recognized over the term of the deal rather than providing an up-front boost. They’re also lower-margin businesses, squeezed by the cost of building and maintaining data centers to deliver the services. Microsoft’s gross margin dropped from 80 percent in fiscal 2010 to 65 percent in the year that ended June 30, 2015.

“This business is growing incredibly well, but the gross margin of that is substantially lower than their core products of the olden days,” said Anurag Rana, an analyst at Bloomberg Intelligence. “How low do they go?”

‘Different Model’ [of doing business for subscription-based software]

It’s jarring for some investors, but the other option is worse, said Thompson.

“That’s a very different model for Microsoft and one our investors are going to have to suck it up and embrace, because the alternative is don’t embrace the cloud and you wake up one day and you look just like — guess who?” Thompson doesn’t finish the sentence, but makes it clear he’s referring to IBM, the company where he spent more than 27 years, which he says is “not relevant anymore.” IBM declined to comment.

The pressure is good for Microsoft, Thompson said — pressure tends to result in change.

“You can re-imagine things when you’re stressed. It’s a lot easier to do it when you’re stressed because you feel compelled to do something,” Thompson said. “I see a lot of stress at Microsoft.”

Scott Guthrie about changes under Nadella, the competition with Amazon, and what differentiates Microsoft’s cloud products


Scott Guthrie, Executive Vice President, Microsoft Cloud and Enterprise group. As executive vice president of the Microsoft Cloud and Enterprise group, Scott Guthrie is responsible for the company’s cloud infrastructure, server, database, management and development tools businesses. His engineering team builds Microsoft Azure, Windows Server, SQL Server, Active Directory, System Center, Visual Studio and .NET. Prior to leading the Cloud and Enterprise group, Guthrie helped lead Microsoft Azure, Microsoft’s public cloud platform. Since joining the company in 1997, he has made critical contributions to many of Microsoft’s key cloud, server and development technologies and was one of the original founders of the .NET project. Guthrie graduated with a bachelor’s degree in computer science from Duke University. He lives in Seattle with his wife and two children. Source: Microsoft

From The cloud, not Windows 10, is key to Microsoft’s growth [Fortune, Oct 1, 2014]

  • about changes under Nadella:

Well, I don’t know if I’d say there’s been a big change from that perspective. I mean, I think obviously we’ve been saying for a while this mobile-first, cloud-first…”devices and services” is maybe another way to put it. That’s been our focus as a company even before Satya became CEO. From a strategic perspective, I think we very much have been focused on cloud now for a couple of years. I wouldn’t say this now means, “Oh, now we’re serious about cloud.” I think we’ve been serious about cloud for quite a while.

More information: Satya Nadella on “Digital Work and Life Experiences” supported by “Cloud OS” and “Device OS and Hardware” platforms–all from Microsoft [‘Experiencing the Cloud’, July 23, 2014]

  • about the competition with Amazon:

… I think there’s certainly a first mover advantage that they’ve been able to benefit from. … In terms of where we’re at today, we’ve got about 57% of the Fortune 500 that are now deployed on Microsoft Azure. … Ultimately the way we think we do that [gain on the current leader] is by having a unique set of offerings and a unique point of view that is differentiated.

  • about uniqueness of Microsoft offering:

One is, we’re focused on and delivering a hyper-scale cloud platform with our Azure service that’s deployed around the world. …

… that geographic footprint, as well as the economies of scale that you get when you install and have that much capacity, puts you in a unique position from an economic and from a customer capability perspective …

Where I think we differentiate then, versus the other two, is around two characteristics. One is enterprise grade and the focus on delivering something that’s not only hyper-scale from an economic and from a geographic reach perspective but really enterprise-grade from a capability, support, and overall services perspective. …

The other thing that we have that’s fairly unique is a very large on-premises footprint with our existing server software and with our private cloud capabilities. …

Imagination’s MIPS based wearable and IoT ecosystem is the alternative

… the technological alternative relative to what is given in the Wearables Trend and Supply Chain, Samsung Gear Fit as the state-of-the-art wristband wearable, i.e. the hybrid of a smartwatch and a fitness band, as a demonstration [‘Experiencing the Cloud’, May 17, 2014] post

Wearable and IOT [designreuse YouTube channel, May 2, 2014]

By Mike Hopkins, Senior Technology Marketing Specialist, Imagination Technologies, at ChipEx 2014, Tel Aviv, Israel

Imagination highlights solutions for IoT and wearables at EE Live!

Featuring hands-on demonstrations of technologies
and end products

EE Live! Conference & Expo, San Jose, CA – 1st April, 2014 – Imagination Technologies (IMG.L) will highlight its expertise and momentum in IoT and wearables at the EE Live! Conference and Expo, being held March 31st – April 3rd at the McEnery Convention Center in San Jose, CA.

Imagination is working closely with partners to enable creation of SoCs for IoT and wearable devices that feature extended battery life and enhanced security, as well as device and infrastructure ecosystems, all driven by the right IP solutions.

Says Kevin Kitagawa, director of strategic marketing at Imagination: “Imagination has all of the IP needed to create complete, class-leading IoT and wearable solutions, and our technologies are already powering numerous SoCs designed for these applications. Through industry initiatives such as the AllSeen Alliance, and key partners including Google, Ineda, Ingenic, Microchip Technology and others, we are building the ecosystems and technologies needed for a new generation of IoT and wearable SoCs.”

In its booth number 816 at EE Live!, Imagination will feature hands-on demonstrations and highlight many of its technologies for IoT and wearables including:

  • MIPS Warrior CPUs: a highly scalable family of CPUs including the new MIPS M-class M51xx cores, which have features that make them ideal for IoT and wearables including DSP engine, small code size, hardware virtualization support and ultra-secure processing
  • PowerVR GPUs: the de facto standard for mobile and embedded graphics including the new PowerVR Rogue 6XE G6050, one of the industry’s smallest OpenGL ES 3.0-compliant GPUs delivering high fillrate and exceptional efficiency—perfect for a range of high-end IoT devices
  • Ensigma Series4 Explorer radio communications processors (RPUs): a unique universal and highly scalable solution for integrating global connectivity and broadcast communications capabilities into SoCs, including solutions for Wi-Fi and Bluetooth LE (Low Energy)
  • FlowCloud: an application-independent technology platform for emerging IoT and cloud-connected devices, enabling rapid construction and management of device-to-device and device-to-cloud applications.
  • PowerVR Series5 video processors (VPUs): the most efficient multi-standard and multi-stream video decoders and encoders, which offer a range of solutions for video intensive IoT applications such as security cameras or wearable devices such as smart glasses
  • PowerVR Raptor imaging processor cores: scalable and highly-configurable solutions which join other PowerVR multimedia cores to form a complete, integrated vision platform that saves power and bandwidth for today’s camera applications and other smart sensors
  • Caskeid: unique, patented technology that delivers exceptionally accurate synchronized wireless multiroom connected audio streaming for audiophile-quality stereo playback with less than 25µs synchronization accuracy
  • Codescape: a complete, proven and powerful debug solution that supports the full range of MIPS CPUs, offers Linux and RTOS awareness features, and provides heterogeneous debug of SoCs using one or more MIPS and Ensigma processors

Imagination will also feature IoT and wearable related products and technologies including:

  • New MIPS-based IoT development platform “Newton” from Ingenic Semiconductor, which integrates CPU, Flash, LPDDR, Wi-Fi, Bluetooth, NFC, PMU and various sensors on a single board around the size of an SD card
  • Development boards for MIPS including those for Microchip Technology’s 32-bit PIC32MZ MCUs and a new complete low-cost MIPS-based Android and Linux platform for system developers
  • Comprehensive development tools for all MIPS CPUs, including the latest GNU tools for Linux and bare-metal embedded systems from Mentor Graphics’ Sourcery CodeBench, and Imperas’ high-speed instruction-accurate OVP models and QuantumLeap parallel simulation acceleration technology
  • Smartwatches that are shipping today based on the MIPS architecture, including the SpeedUp Smartwatch as well as those from Tomoon, HiWatch, SmartQ, Geak and others
  • Toumaz’ solutions for the SensiumVitals® System, an ultra-low power wireless patch remotely managed via Imagination’s FlowCloud technology
  • FlowTalk and FlowAudio – Imagination’s solutions for connected audio and cross-platform V.VoIP/VoLTE, leveraging the FlowCloud

Imagination’s vice president of strategic marketing, Amit Rohatgi, will participate in a Technology Workshop during EE Live!, “The Role of Embedded Systems in the Internet of Everything,” sponsored by the Chinese American Semiconductor Professionals Association (CASPA). The event will be held on Wednesday, April 2nd, from 5:00 p.m. – 8:00 p.m. For more information and to register, visit http://www.caspa.com/node/6349.

About Imagination Technologies
Imagination is a global technology leader whose products touch the lives of billions of people throughout the world. The company’s broad range of silicon IP (intellectual property) includes the key multimedia, communications and general purpose processors needed to create the SoCs (Systems on Chips) that power all mobile, consumer, automotive, enterprise, infrastructure, IoT and embedded electronics. These are complemented by its unique software and cloud IP and system solution focus, enabling its licensees and partners to get to market quickly by creating and leveraging highly differentiated SoC platforms. Imagination’s licensees include many of the world’s leading semiconductor manufacturers, network operators and OEMs/ODMs who are creating some of the world’s most iconic and disruptive products. See: www.imgtec.com.

Creating next-generation chips from the ground-up for wearables and IoT [Imagination Blog, April 1, 2014]

There has been a lot of momentum lately around Imagination’s initiatives and technologies focused on creating a new generation of chips built specifically for IoT and wearable use cases. We thought we’d take a moment to fill you in.

The problem

Today, low-end IoT devices and wearables typically use multiple general purpose chips to achieve microcontroller, sensor and radio functionality, leading to expensive, compromised solutions. At the high end, devices such as smartwatches use existing smartphone chips, leading to overpowered, expensive devices.

The solution from Imagination

To reach the incredible volumes predicted by analysts, SoCs for wearable devices and IoT must be designed from the ground-up. Working with our partners, Imagination is enabling the design of new chips that extend battery life, enhance data and device security and feature the right CPU, graphics, video and multi-standard connectivity solutions. We’re also focused on building the needed standards, operating environments, and other ecosystem technologies to support these chips.

Imagination is proud to already have our IP in such SoCs, and our customers are giving us great feedback on our wearables roadmap. Together with industry initiatives such as the AllSeen Alliance or the cool new Android Wear from Google, and key partners including Ineda Systems, Ingenic Semiconductor, Microchip Technology and others, we are taking a leading role in building the ecosystems and technologies needed for a new generation of SoCs.

Extending battery life

With the always-on requirement for sensors in most wearables and IoT devices, together with their tiny form factors, battery life is a more critical concern for designers than ever before. Using power and area efficient silicon IP is therefore a must.

In wearable and IoT applications that require a CPU, an intelligent hierarchy of CPUs optimized for specific tasks can lead to extremely low power consumption. For example, an SoC can use a MIPS CPU such as a new Warrior M-class core, which achieves the highest CoreMark/MHz scores for MCU-class processors, to perform the function of monitoring sensors and also to manage the connectivity peripherals. When the SoC needs to process or analyze data, the system can wake up other CPUs in the system to perform their dedicated tasks. Such an implementation offers key benefits for extending battery life in wearables and IoT devices.

Ineda, a developer of low-power SoCs, is uniting various Imagination IP cores in its ultra-low power Wearable Processing Units (WPUs) designed to reduce power consumption in a variety of devices, including fitness bands, smartwatches and IoT. With unique combinations of Imagination’s MIPS CPUs and highly efficient PowerVR GPUs, the new Ineda WPUs represent one of the first SoC architectures built specifically for this new generation of devices.


Ineda Systems’ WPUs will address wearable platforms from the ground up

Enhancing security

As more and more devices are connected to the cloud and each other, security becomes an ever-growing concern. Imagination has the right IP for public key infrastructure and crypto functions needed to provide trusted execution environments, secure boot, secure code updates, key protection, device authentication and IP/transport layer data security to transmit data to the cloud. Virtualization and security features across the range of MIPS Series5 Warrior CPU cores make them ideal for meeting next-generation security needs.

In space-constrained, low-power systems such as IoT or wearable devices, a virtualization based approach could be used to implement a multiple-guest environment where one guest running a real-time kernel manages the secure transmission of sensor data, while another guest, under RTOS control, can provide the multimedia capabilities of the system. For applications that demand an even higher level of security, the new MIPS Warrior M-class cores include tamper resistant features that provide countermeasures to unwanted access to the processor operating state. A secure debug feature increases the benefit by preventing external debug probes from accessing and interrogating the core internals.


MIPS M51xx CPUs support multiple guest operating systems

Driving new ecosystems and standardization efforts

Due to small device size, as well as a new and different functionality required in emerging IoT and wearable devices, much of the device and infrastructure ecosystems will be different than what’s needed for smartphones and other connected products. This includes standards in the areas of APIs, device-to-device communications, data analytics, device authentication, low-power connectivity and protocols, and even operating environments, which are critical to driving consumer and industry adoption.

At Imagination we are partnering with Google and other industry players on Android Wear, a project that extends Android to wearables, beginning with smartwatches. Already a strong player in the Android ecosystem, MIPS is one of the three CPU architectures fully supported by Google in each Android release, including the latest Android 4.4 KitKat.


Images from the Android Wear Developer Preview site

To drive ecosystem development for IoT, we’ve also recently joined the AllSeen Alliance, which has been formed to create an open, universal development framework to drive the widespread adoption of products, systems and services that support IoT. The goal is to enable companies and individuals to create interoperable products that can discover, connect and interact directly with other nearby devices, systems and services regardless of transport layer, device type, platform, operating system or brand.

Imagination’s own application-independent FlowCloud technology platform enables rapid construction and management of M2M connected services. Designed to address the needs of emerging IoT and cloud-connected devices, FlowCloud enables easy product registration and updates as well as access to partner-enabled services including FlowAudio, a cloud-based music and radio service that includes hundreds of thousands of radio stations, on-demand programs, podcasts and more. Imagination intends for FlowCloud to be easily integrated with products using the AllSeen Alliance framework.


Imagination’s FlowCloud enables device-centric services including registration, security, storage, notifications, updates and remote control

Flexible, multi-standard connectivity

Wearables and IoT devices today use existing connectivity standards, such as Wi-Fi or Bluetooth LE (Low Energy), but new standards, such as ultra-low power Wi-Fi extensions, are still in development. This means that choosing future-proofed, flexible solutions is a must for companies who want to create a product today that will still be viable when new standards are ratified.

Imagination’s programmable, multi-standard Ensigma radio processors (RPUs) can accommodate such emerging standards with a powerful and uniquely optimized balance of programmability and hardware configurability, delivering impressive functionality in compact silicon area.


The right IP for the application

Imagination’s IP is already integrated into wearable and IoT products that are shipping today. This includes a number of smartwatches that leverage the MIPS architecture and smart glasses with PowerVR graphics and video.


Imagination’s IP is already integrated into wearable products such as the SpeedUp Smartwatch, the world’s first Android 4.4 KitKat smartwatch

For example, Ingenic Semiconductor is offering a new MIPS-based IoT development platform called Newton. The Ingenic Newton platform integrates a MIPS-based XBurst CPU, multimedia (2D graphics, multi-standard VPU), low-power memory (mobile DDR3/DDR2/LPDDR and flash), 4-in-1 connectivity (Wi-Fi, Bluetooth, NFC, FM) and various sensors on a single board around the size of an SD card (find out more about Ingenic Newton here).

In addition, MIPS-based 32-bit PIC32MZ MCUs from Microchip Technology [all details are given here in the 2nd half of this post] are ideal for a number of wearable and IoT applications.

For designers of next-generation SoCs, Imagination’s broad IP portfolio offers scalable solutions for their specific application. This includes our MIPS Series5 Warrior CPUs including the new MIPS M-class M51xx cores, PowerVR Rogue GPUs including the PowerVR G6050, Ensigma Series4 Explorer RPUs with solutions for Wi-Fi, Bluetooth LE and more, PowerVR Series5 video processors (VPUs), PowerVR Raptor imaging processor cores, our unique Caskeid audio synchronization technology, and of course FlowCloud.

MIPS Powered Wearables from Imagination Technologies [RCR Wireless News YouTube channel, Jan 15, 2014]

Mike Hopkins, Marketing Manager for Imagination Technologies, talks about the innovation of their MIPS processor in creating smart wearable devices. All of the watches in the video are running full Android operating systems, capable of running any Android app. These smart watches are available now to the general public.

Smart watches: The first wave of wearable and connected devices integrating Imagination IP [Imagination Blog, Jan 27, 2014]

Over the past few months, we’ve seen a new wave of announcements related to Internet of Things (IoT) and other ultra-portable devices integrating Imagination IP. One of the biggest buzz words right now is wearable devices; there were several wearable concepts introduced at CES 2014, covering any and every use case, from augmented and virtual reality or entertainment to fitness, health, and many more.

At Imagination, we are well prepared to deliver innovative hardware and software IP that has been specifically designed to address the rapid growth in demand for these applications. Imagination is the only IP company that can deliver a full suite of low-power, feature-rich technologies encompassing CPU, graphics, video, vision, connectivity, cloud services and beyond. Our market-leading PowerVR GPUs and VPUs, efficient MIPS CPUs, innovative Ensigma RPUs and other IP solutions create the perfect foundation for developing new processors for ultra low-power wearables that will soon find their way into a myriad of devices such as smart watches, health and fitness devices and more.

MIPS and smart watches

One of the companies that have been at the forefront of innovation in the mobile and wearable market is Ingenic. Their MIPS-based XBurst SoC is an innovative MIPS32-based apps processor which redefines the performance and power consumption criteria for modern embedded SoCs.

Among the recent design wins, one interesting use case for the MIPS architecture is the smart watch. There were several smart watch designs on display on our booth at CES 2014; this article is a quick summary of what we and our partners were showcasing on the show floor.

  • The GEAK smart watch runs stock Android 4.1 out of the box, can be used to monitor your heartbeat and blood pressure, and acts as a pedometer or smartphone remote to snap pictures. The GEAK smart watch is a water-resistant (IP3X) device and comes with a 1.55″ color IPS screen.
  • The NextONE smart watch from YiFang Digital uses the Android 4.1 OS to create an open architecture system that can run any verified third party applications. The smart watch is customizable to every aspect of a user’s life, from communicating with work and friends to health and fitness. The NextONE smartwatch improves the smartphone experience by making the information a user wants accessible at any time.
  • Tomoon T-Fire is another exciting smart watch design coming out of China. It has an innovative curved E-ink screen measuring 1.73″, it runs Android 4.3 and is expected to ship soon. It currently comes in three colors and promises to deliver on the fitness front, with a trio of sensors (gyroscope, g-sensor, compass).
  • SmartQ Z Watch promises to deliver an incredible standby time, can record motion data and even analyzes the quality of your sleep. It provides good water resistance, can pair up with your smartphone and tablet and doubles as an MP3 player too.

The smart wearables of the future

Wearable electronics cannot accommodate the larger batteries of their bigger counterparts (smartphones, tablets) so ultra-portable devices must use SoCs that have low power consumption. Because our technologies have been built around efficiency, we can help our partners design highly competitive solutions that enable them to achieve design wins in multiple markets. Companies looking for proven, low power multimedia and connectivity IP can rely on Imagination to provide the building blocks for IoT-ready chips.

A recent example is Ineda who have licensed PowerVR GPU and MIPS CPU IP to design System-on-Chip solutions for portable consumer electronics like wearable devices. Ineda CEO Dasaradha Gude says that Imagination’s IP cores provide the power efficiency required for wearable devices to succeed but also accelerate time to market, since everything they needed was provided by Imagination which simplified all the integration work.

Smart glasses: The first wave of wearable and connected devices integrating Imagination IP [Imagination Blog, Jan 23 2014]

Over the past few months, we’ve seen a new wave of announcements related to Internet of Things (IoT) and other ultra-portable devices integrating Imagination IP. One of the biggest buzz words right now is wearable devices; there were several wearable concepts introduced at CES 2014, covering any and every use case, from augmented and virtual reality or entertainment to fitness, health, and many more.

At Imagination, we are well prepared to deliver innovative hardware and software IP that has been specifically designed to address the rapid growth in demand for these applications. Imagination is the only IP company that can deliver a full suite of low-power, feature-rich technologies encompassing CPU, graphics, video, vision, connectivity, cloud services and beyond. Our market-leading PowerVR GPUs and VPUs, efficient MIPS CPUs, innovative Ensigma RPUs and other IP solutions create the perfect foundation for developing new processors for ultra low-power wearables that will soon find their way into a myriad of devices such as smart watches, health and fitness devices and more.

PowerVR and smart glasses

An example of a type of wearable device that has benefited from Imagination’s IP is smart glasses. Google Glass has been the first; featuring a Texas Instruments OMAP4430 processor with a PowerVR SGX540 GPU, Glass is able to take pictures, record videos, search the internet, and navigate maps.

But in the hands of ingenious developers, it can do so much more. For example, a recent article in the MIT Technology Review highlights an app that can recognize objects in front of a person wearing a Google Glass device.


This type of functionality opens up a whole new range of applications related to computer vision and augmented reality, two applications where wearables have clear potential.

However, there were multiple PowerVR-based smart glasses introduced at CES 2014:

  • Recon Instruments introduced Snow2, an iPhone-connected HUD (Heads-Up Display) for winter sports. The Recon Snow2 project is a collaboration between Recon and Oakley and can be found as a complete kit called Oakley Airwave 1.5. Recon however is working with multiple companies to build several products that are tuned to their requirements. Recon Snow2 features an integrated GPS and can display your speed, altitude, location, and act as a navigation instrument. For example, there is an iOS app that allows you to share your position on a map and locate your friends or family on the slopes.


  • XOne is the first product from startup XOEye Technologies and took five years to design. XOne is a pair of safety glasses designed to improve efficiency and enhance safety for skilled labor jobs. The glasses rely entirely on audio and LEDs to communicate messages to the wearer. XOne integrates two 5MPx cameras (one inside each lens), speakers and a microphone, a gyroscope, and an accelerometer; the system is powered by a TI OMAP 4460 processor, running a custom version of Linux designed for enterprise use.
  • The Vuzix M100 is one of the first commercially available smart glasses. It is an Android-based wearable computer, featuring a monocular display, recording features and wireless connectivity capabilities. The Vuzix M100 has been designed to cover a range of applications; powerful, small and light, the M100 is well suited for a variety of industrial, medical, retail and prosumer users.
  • The Epson Moverio BT-200 smart glasses are designed for users who like to enjoy their multimedia and do their gaming on a pair of glasses. Epson have put a lot of effort into integrating the technology (an OMAP processor) with the physical design. Even better, the smart glasses run Android 4.0.4 and apps from the Epson store; another unique feature is how users interact with the device, which is mainly done via a hand-held touchpad controller wired to the glasses. Epson has been named a 2014 CES Innovations Awards honoree in wearable tech for its Moverio BT-200 smart glasses.
  • Lumus generated a lot of attention around its DK-40 wearable smart glasses at CES. They were very eager to show off the new developer unit in public, focusing on how the monocular headset overlays a full VGA digital image over the right eye instead of using a small window for notifications. Lumus DK-40 runs Android, includes an OMAP processor and comes in multiple colors.

I hope you’ve enjoyed our recap of some of the most interesting smart glass designs revealed at CES 2014. If you are interested in this category of devices and want to know more about the wearable gadgets that use our IP, make sure you follow us on Twitter (@ImaginationPR) and keep coming back to our blog.

Imagination and Google partner up for Android Wear and the wearable revolution [Imagination Blog, March 24, 2014]

Earlier this week Google announced a developer preview of Android Wear, a mobile operating system designed to extend the Android experience to wearable devices. This initiative will help jumpstart developers building innovative applications specifically targeting the next generation of innovation in wearables. The initial focus is on the smartwatch space and leverages the rich notification APIs already defined in Android.

Android Wear extends the Android platform to wearables, starting with a familiar form factor — watches. Download the developer preview at: developer.android.com/wear

Google is using this developer preview to give app developers the chance to experiment with enhanced notifications (e.g. weather, sports scores, navigation, etc.) for their applications to display on the smaller screen of smartwatches. For example, Android Wear supports notifications on a watch similar to how Google Now displays notifications on the smartphone. The next step for Google is to publish a full SDK that allows app developers to create complete, smartwatch-centric applications.

Delivering the ultimate wearable experience with MIPS  processor IP

Imagination has been a pioneer in delivering ultra-low power technologies across its entire IP portfolio. Following the acquisition of MIPS, one of the first things we did was to scrutinize all the CPUs from low end to high end to ensure we applied our leadership in low power design to MIPS CPUs. As a result, we believe MIPS is the ideal CPU for wearables, enabling our partners to build some of the most innovative solutions around for this growing market.

This year at MWC, wearables-focused startup Ineda demonstrated its ultra-low power Wearable Processor Unit (WPU) SoCs which deliver exceptional low power consumption. Ineda’s SoC devices integrate multiple IP processors from Imagination, including MIPS CPUs and PowerVR GPUs. Also, SpeedUp Technology announced its first wearable technology product, the SpeedUp SmartWatch, a revolutionary wearable device which incorporates an ultra-low power MIPS-based CPU from Ingenic.

Imagination is a Google launch partner for Android Wear – something we’re pretty proud of. Already a strong player in the Android ecosystem, Imagination’s MIPS architecture is one of the three CPU architectures fully supported by Google in every Android release including the latest Android 4.4 KitKat.


All MIPS CPUs are optimized to offer the best Android experience on smartphones, tablets, wearables and other mobile devices

Low power, high performance MIPS CPUs already power billions of products around the globe. Thanks to a flexible architecture that scales from entry-level 32-bit embedded processors to some of the industry’s highest performing 64-bit CPUs, MIPS CPUs pave the way for next-generation embedded designs, including a growing presence in wearables. The Series5 Warrior generation includes two new processors (MIPS M5100 and M5150) that provide key features ideal for wearables such as a high-performance DSP engine, small code size, virtualization, and ultra-secure processing. All Series5 Warrior CPUs deliver industry-leading CoreMark performance in a very efficient area and power envelope.

Look for a MIPS-based smartwatch in a store near you

Several of our licensees are working very hard to deliver MIPS-based, Android Wear-compliant devices that will be available in the market once the operating system is officially released.

By being a launch partner, we will work very closely to ensure that Android Wear will be optimized for MIPS CPUs as well as our other IP technologies such as PowerVR graphics, video and vision, and Ensigma RPUs.

The list of members in the Android Wear alliance includes several leading consumer electronics manufacturers (Asus, HTC, LG, Motorola and Samsung), chip makers (Broadcom, Intel, Mediatek and Qualcomm) and fashion brands (the Fossil Group), all keen to bring you watches powered by the new operating system later this year.


The list of official Android Wear partners

For more info about Android Wear and what was announced, visit:

Make sure you follow Imagination on Twitter (@ImaginationPR, @MIPSGuru) for the latest news and announcements from the wearable ecosystem.


I. Microchip Technology

From: IoT Era excites Semiconductor Players [Electronics Maker, May 6, 2014]
(companies other than Microchip Technology are covered in the Wearables Trend and Supply Chain, Samsung Gear Fit as the state-of-the-art wristband wearable, i.e. the hybrid of a smartwatch and a fitness band, as a demonstration [‘Experiencing the Cloud’, May 17, 2014] post)



Linear Technology

STMicroelectronics
(see in Wearables Trend and Supply Chain, Samsung Gear Fit as the state-of-the-art wristband wearable, i.e. the hybrid of a smartwatch and a fitness band, as a demonstration [‘Experiencing the Cloud’, May 17, 2014])

InvenSense, Inc.
(see in Wearables Trend and Supply Chain, Samsung Gear Fit as the state-of-the-art wristband wearable, i.e. the hybrid of a smartwatch and a fitness band, as a demonstration [‘Experiencing the Cloud’, May 17, 2014])

Texas Instruments

Microchip Technology [https://www.facebook.com/microchiptechnology]

Mike Ballard, Senior Manager, Home Appliance Solutions Group, Microchip Technology Inc.

Microchip has many devices that are well situated to enable IoT functionality, such as 8, 16 and 32-bit PIC® microcontrollers, analog, mixed-signal, memory, and embedded Wi-Fi® and Bluetooth® modules.  In addition, IoT designers can take advantage of Microchip’s flexible development environment, broad connectivity solutions and product longevity.

Microchip is so broad based, with 80,000+ global customers, that we do not see any singular market or application that will drive our growth in IoT.  Our customer value proposition is that we provide a very broad embedded portfolio, including both the hardware and software solutions to help companies create their IoT products.

Microchip has a significant number of products that fit well into the IoT markets.  We have close relationships with our customers and have been incorporating these technologies into our products, based on their feedback.  Technologies such as XLP in our MCUs (which enables low-power designs), Wi-Fi Modules (Microchip offers two approaches, giving customers flexibility), and power-measurement devices, all enable our customers to meet their design and cost goals.  In addition, we have been acquiring companies and technologies to ensure that we continue to meet these markets’ needs today and in the future.

What is Deep Sleep [MicrochipTechnology YouTube channel, April 22, 2009], in which minimum power consumption can be as low as 20 nA, allowing years of operation on a single battery:

http://www.microchip.com/xlp Learn about Microchip’s extreme low power mode that can drop microcontroller currents to virtually zero. This webseminar provides an introduction to Deep Sleep mode found on these microcontrollers.

Microchip Technology Inc., December 12, 2013

Our Home Appliance Solutions Group can help you implement the new features and functionality needed for your next design. This short video introduces you to our Induction Cooktop Reference Design, which can significantly shorten your design cycle: http://mchp.us/1hI8kip

Induction Cooktop Reference Design [MicrochipTechnology YouTube channel, Dec 5, 2013]

In this video we will introduce the Microchip Induction Cooktop Reference Design. http://microchip.com/appliance

microchip.com/appliance: Home Appliance

Appliance manufacturers face numerous challenges in today’s ever-changing global market. Government regulations, customer expectations, competitive forces and application innovations are fueling the integration of new technologies into many appliances. Bringing these technology advancements to market can be even more challenging with shorter deadlines, the pressure to maintain and grow market share and the constant need to innovate. In addition, finding partners with technical solutions to enable these goals can be daunting and drain your resources.

Microchip Technology can help you implement the new features and functionality required for your next appliance design. By providing Microchip’s solutions for user interface, motor control, sensing, connectivity and more, your design teams can focus on implementing the application.

Microchip’s cost-effective tools enable your design to reach the market faster.  Our free, award winning MPLAB®X Integrated Design Environment (IDE) provides a single development platform for all of our 8-, 16- and 32-bit microcontrollers and 16-bit Digital Signal Controllers (DSCs). Microchip makes it easy to develop your code and migrate to higher performance solutions as needed. Learning curves are minimized even when changing cores due to additional features, increased code size or the need for more computing power.

MIPS MCUs Outrun ARM [Processor Watch from The Linley Group, Feb 18, 2014]

Author: Tom R. Halfhill

Microchip’s newest 32-bit microcontrollers not only match the features of their Cortex-M4 competitors but also achieve higher EEMBC CoreMark scores. The new PIC32MZ EC family is powered by a MIPS microAptiv CPU core running at 200MHz—a speed demon by MCU standards.

These MCUs have more memory than comparable chips (up to 2MB of flash and 512KB of SRAM) plus Ethernet, Hi-Speed USB2.0, an LCD interface, and a cryptography accelerator. An early sample scored 654 CoreMarks—the highest EEMBC-certified score for any 32-bit MCU executing from internal flash memory.

Microchip’s earlier PIC32MX family uses the smaller MIPS32 M4K core running at a maximum clock speed of 100MHz. The microAptiv CPU in the new family not only runs twice as fast but also supports the microMIPS 32-bit instruction-set architecture. MicroMIPS combines 16- and 32-bit instructions to achieve better code density than previous MIPS32 cores or even Cortex-M cores using 16/32-bit Thumb-2 instructions. Microchip claims the PIC32MZ family has 30% better code density than similar ARM-based MCUs. Also, microAptiv adds 159 new signal-processing instructions.

The PIC32MZ family is designed for high-end controller applications, such as vehicle dashboard systems, building environmental controls, and consumer-appliance control modules. Some PIC32MZ chips will begin volume production in March, and the remainder by mid-year. Prices for 10,000-unit volumes will range from $6.68 to about $10—relatively expensive for MCUs but reasonable for the performance and features.

Leading performance and superior code density for new microAptiv-based PIC32MZ 32-bit MCU family from Microchip [Imagination Blog, Nov 25, 2013]

Although mainly known for our leadership position in CPU IP for digital home and networking, the MIPS architecture has recently seen rapid growth in the 32-bit microcontroller space thanks to the expanding list of silicon partners that are offering high-performance, feature-rich and low-power solutions at affordable price points.

The most recent example of our expansion into MCUs is the 200MHz 32-bit PIC32MZ family from Microchip. PIC32MZ MCUs integrate our microAptiv UP CPU IP core which enables Microchip to offer industry-leading performance at 330 DMIPS and 3.28 CoreMark™/MHz.

The PIC32MZ comes fully loaded with up to 2MB of Dual-Panel Flash with Live Update, 512KB SRAM and 16KB Instruction cache and 4KB data cache memories. This newest family in the PIC32 portfolio also offers a full suite of embedded connectivity options and peripherals, including 10/100 Ethernet MAC, Hi-Speed USB MAC/PHY (a first for PIC® MCUs), audio, graphics, crypto engine (supporting AES, 3DES, SHA) and dual CAN ports, all vital in supporting today’s complex applications.

By transitioning to the new MIPS microAptiv core, the PIC32MZ family offers a more than 3x increase in performance and better signal processing capabilities over the previous M4K-based PIC32MX families. In addition, the microAptiv core includes an Instruction Set Architecture (ISA) called microMIPS that reduces code size by up to 30% compared to executing 32-bit only code. This enables the PIC32MZ to load and execute application software in less memory.

The MIPS microAptiv family is available in two versions: microAptiv UC and microAptiv UP. microAptiv UC includes a SRAM controller interface and Memory Protection Unit designed for use in real-time, high performance low power microcontroller applications that are controlled by a Real Time OS (RTOS) or application-specific kernel. microAptiv UP contains a high performance cache controller and Memory Management Unit which enables it to be designed into Linux based systems.

A block diagram of the microAptiv UP CPU IP core inside PIC32MZ MCUs

Why choose MIPS32-based CPU IP for your MCUs?

MIPS-based MCUs are used in a wide and very diverse set of applications including industrial, office automation, automotive, consumer electronic systems and leading-edge technologies such as wireless communications. Furthermore, we’ve recently seen growing demand from the wearable and ultra-portable market; companies targeting these markets are looking to silicon IP providers like Imagination to deliver performance and power efficient solutions that can be easily integrated in fully-featured products.

CPU IP cores for microcontrollers need to be all-round flexible designs that are able to deliver higher levels of performance efficiency, improved real-time response, lower power and a broad tools and developer ecosystem. And the requirements continue to grow, especially with the new challenges presented by designing for the Internet of Things: better security, the ability to create more complex RTOS-controlled software and the ability to support a growing number of interfaces.

The microAptiv and future MIPS Series5 ‘Warrior’ M-class cores are perfectly positioned to provide an ideal 32-bit MCU solution for these next-generation applications. We understand that picking the right processor architecture is a key decision criterion to achieving performance, cost and time-to-market objectives in a MCU product. This is why we’ve made sure that the MIPS32 architecture enables our partners to design higher performance, lower power solutions with more advanced features and superior development support.

In the words of Jim Turley from his “Micro-Super-Computer-Chip” article inside the EE Journal: “With sub-$10 chips and sub-$150 computer boards, it looks like MIPS took over the world after all.”

We will be demonstrating the PIC32MZ on a Microchip multimedia board at the Embedded World 2014 event (February 25th – 27th) in Nürnberg, Germany, so make sure you drop by our booth if you are attending the conference. In the meantime, follow us on Twitter (@ImaginationPR and @MIPSGuru) for the latest news and announcements from Imagination and its partners.

Microchip’s PIC32MZ 32-bit MCUs Have Class-Leading Performance of 330 DMIPS and 3.28 CoreMarks™/MHz; 30% Better Code Density [Microchip press release, Nov 18, 2013]

New 24-Member Family Integrates 2 MB Flash, 512 KB RAM,
28 Msps ADC, Crypto Engine, Hi-Speed USB,
10/100 Ethernet, CAN and Many Serial Channels


Microchip Technology Inc., a leading provider of microcontroller, mixed-signal, analog and Flash-IP solutions, today announced the new 24-member PIC32MZ Embedded Connectivity (EC) family of 32-bit MCUs.  It provides class-leading performance of 330 DMIPS and 3.28 CoreMarks™/MHz, along with dual-panel, live-update Flash (up to 2 MB), large RAM (512 KB) and the connectivity peripherals—including a 10/100 Ethernet MAC, Hi-Speed USB MAC/PHY (a first for PIC® MCUs) and dual CAN ports—needed to support today’s demanding applications.  The PIC32MZ also has class-leading code density that is 30% better than competitors, along with a 28 Msps ADC that offers one of the best throughput rates for 32-bit MCUs.  Rounding out this family’s high level of integration is a full-featured hardware crypto engine with a random number generator for high-throughput data encryption/decryption and authentication (e.g., AES, 3DES, SHA, MD5 and HMAC), as well as the first SQI interface on a Microchip MCU and the PIC32’s highest number of serial channels.


View a brief presentation:  http://www.microchip.com/get/1WEC


Embedded designers are faced with ever-increasing demands for additional features that require more MCU performance and memory.  At the same time, they are looking to lower cost and complexity by utilizing fewer MCUs.  The PIC32MZ family provides 3x the performance and 4x the memory over the previous-generation PIC32MX families, along with a high level of advanced peripheral integration.  For applications requiring embedded connectivity, the family includes Hi-Speed USB, Ethernet and CAN, along with a broad set of wired and wireless protocol stacks.  Many embedded applications are adding better graphics displays, and the PIC32MZ can support up to a WQVGA [400×240] display without any external graphics chips.  Streaming/digital audio applications can take advantage of this family’s 159 DSP instructions, large memory, peripherals such as I2S, and available software.

Field updates are another growing challenge for design engineers and managers.  The PIC32MZ’s 2 MB of internal Flash enables live updates via dual independent panels that provide a fail-safe way to conduct field updates while operating at full speed.


“Our new PIC32MZ family was designed for high-end and next-generation embedded applications that require high levels of performance, memory and advanced-peripheral integration,” said Rod Drake, director of Microchip’s MCU32 Division.  “The PIC32MZ enables designers to add features such as improved graphics displays, faster real-time performance and increased security with a single MCU, lowering both cost and complexity.”

The PIC32MZ is Microchip’s first MCU to employ Imagination’s MIPS microAptiv™ core, which adds 159 new DSP instructions that enable the execution of DSP algorithms in up to 75% fewer cycles than the PIC32MX families.  This core also provides the microMIPS® instruction-set architecture, which improves code density while operating at near full rate, as well as instruction and data caches; its 200 MHz/330 DMIPS performance is 3x that of the PIC32MX.

“Microchip is a flag-bearer for the MIPS architecture in microcontrollers, having created its performance-leading PIC32 line around MIPS.  Additionally, Microchip was a valued partner in defining the feature set for the new MIPS microAptiv CPU, which is designed to fulfill next-generation application demands for increased performance and functionality,” said Tony King-Smith, EVP Marketing, Imagination Technologies.  “With its new microAptiv-based PIC32MZ family, Microchip is again taking MCU performance and feature innovation to new levels.  Imagination is delighted with this latest achievement of our strategic relationship with Microchip to address ever-evolving market needs.”

Development Support

Microchip is making four new PIC32MZ development tools available today.  The complete, turn-key PIC32MZ EC Starter Kit costs $119, and comes in two flavors to support family members with the integrated crypto engine (Part # DM320006-C) and those without (Part # DM320006).  The Multimedia Expansion Board II (Part # DM320005-2) is available at the introductory price of $299 for the first six months and can be used with either Starter Kit to develop graphics HMI, connectivity and audio applications.  The 168-pin to 132-pin Starter Kit Adapter (Part # AC320006, $59) enables development with Microchip’s extensive portfolio of application-specific daughter boards.  The PIC32MZ2048EC Plug-in Module (Part # MA320012, $25) is available for existing users of the Explorer 16 Modular Development Board.  For more information and to purchase these tools, visit http://www.microchip.com/get/JDVB.

Pricing & Availability

The first 12 members of the PIC32MZ family are expected starting in December for sampling and volume production, while the remaining 12, along with additional package options, are expected to become available at various dates through May 2014.  The crypto engine is integrated into eight of the PIC32MZ MCUs, and there is an even split of 12 MCUs with 1 MB of Flash and 12 MCUs with 2 MB of Flash.  Pricing starts at $6.68 each in 10,000-unit quantities.  The superset family members and their package options are the 64-pin QFN (9×9 mm) and TQFP (9×9 mm) for the PIC32MZ2048ECH064; 100-pin TQFP (12×12 and 14×14 mm) for the PIC32MZ2048ECH100; 124-pin VTLA (9×9 mm) for the PIC32MZ2048ECH124; and 144-pin TQFP (16×16 mm) and LQFP (20×20 mm) for the PIC32MZ2048ECH144.  The superset versions with an integrated crypto engine are the PIC32MZ2048ECM064, PIC32MZ2048ECM100, PIC32MZ2048ECM124 and PIC32MZ2048ECM144.

PIC32MZ EC Family
Device Details (Non Crypto)
image

Device Details (Crypto Engine)
image

For more information, contact any Microchip sales representative or authorized worldwide distributor, or visit Microchip’s Web site at http://www.microchip.com/get/ESJG.  To purchase products mentioned in this press release, go to microchipDIRECT or contact one of Microchip’s authorized distributors.

Follow Microchip

RSS Feed for Microchip Product News: http://www.microchip.com/get/E09A

Twitter:  http://www.microchip.com/get/VR8V

Facebook:  http://www.microchip.com/get/H7DH

YouTube:  http://www.microchip.com/get/KMKU

Microchip’s New Cloud-Based Development Platform Now Available on Amazon Web Services Marketplace [Microchip press release, Oct 22, 2013]

Allows Embedded Engineers to Easily Connect Designs
to Amazon EC2 Instances;
Bridges Cloud and Embedded Worlds, Enabling Internet of Things

Microchip Technology Inc., a leading provider of microcontroller, mixed-signal, analog and Flash-IP solutions, today announced a simple Cloud Development Platform that is available on the Amazon Web Services (AWS) Marketplace and enables embedded engineers to quickly learn cloud-based communication.  Microchip’s platform provides designers with the ability to easily create a working demo that connects an embedded application with the Amazon Elastic Compute Cloud (EC2) service.  At the heart of this platform is Microchip’s Wi-Fi® Client Module Development Kit (Part # DM182020), which offers developers a simple way to bridge the embedded world and the cloud, to create applications encompassing the Internet of Things.

A rapidly growing number of embedded engineers need to add cloud connectivity to their designs, but have limited experience in this area.  Microchip’s new Cloud Development Platform builds designer confidence by making it quick and easy for them to get up and running on the proven Amazon EC2 cloud infrastructure.

Amazon EC2 is a Web service that provides scalable, pay-as-you-go compute capacity in the cloud.  It is designed to make Web-scale computing easier for developers.

“I view this as a huge step forward for corporations who produce embedded products, to quickly develop infrastructure and connect their devices to the cloud,” said Mike Ballard, senior manager of Microchip’s Home Appliance Solutions Group and leader of its Cloud Enablement Team.  “With the vast amount of expertise and scalability provided by AWS, developers can easily customize their connectivity instances and the user’s experience.”

“With Microchip’s Wi-Fi Client Module Development Kit available via our AWS Marketplace, customers can easily learn to connect embedded products to AWS,” said Sajai Krishnan, GM, AWS Marketplace.  “This is an effective step to help bridge the embedded world and the cloud.”

Pricing & Availability

Microchip’s Cloud Development Platform is available today at http://www.microchip.com/get/R837.  As part of this platform, its Wi-Fi Client Module Development Kit (Part # DM182020) is available for purchase today for $99, at http://www.microchip.com/get/0D84.  For additional information, contact any Microchip sales representative or authorized worldwide distributor, or visit Microchip’s Web site at http://www.microchip.com/get/ST1C.  To purchase products mentioned in this press release, go to microchipDIRECT or contact one of Microchip’s authorized distribution partners.


Ineda Systems

Smart Move [Business Today [India], May 11, 2014]

Why venture funds are rushing to back Ineda, maker of chips for wearable devices.


Ineda Systems is just the sort of company you’d expect from Dasaradha R. Gude, who has spent a large part of his career in the world of processors. “We are processors” is how he describes himself and his team of nearly 200 people.

Gude, or GD as he is known to many of his colleagues and business associates, is clearly excited about the power of wearable chips. Ineda – the name is derived from ‘integrated electronics designs for advanced systems’ – designs chips for use in wearable devices.

From 2007 to 2010, Gude was Corporate Vice President at Advanced Micro Devices (AMD) Inc, and later Managing Director at AMD India. He founded Ineda in 2011, and members of his team have previously worked in global companies such as AMD and Intel. He says: “They are people with courage to leave big companies and step out to do something innovative.”

To his customers, he plans to offer chips in sizes of five, seven, nine and 12 square millimetres, which can fit into wearable devices such as smart watches, health and fitness trackers, and pretty much anything that needs to be connected to the emerging ‘Internet of Things’, which allows users to monitor connected devices remotely.

He promises chips that not only go easy on battery life, but also versions that can provide a range of features, almost like a smartphone. He says his potential customers are leaders in wearable technology, who would need tens of millions of chips a year, and this would bring his costs down.

The going has been good so far for Ineda. The company has just received funding from the US-based Walden Riverwood Ventures, from the venture capital arms of Samsung and Qualcomm, and a UK-based research and development company called Imagination Technologies. The total funding is to the tune of $17 million or Rs 103 crore, and Gude intends to use the money to ensure that the chips attain stability for mass production. In April 2013, Ineda raised $10 million (more than Rs 60 crore), with Imagination Technologies as the lead investor.

The chips will be manufactured in Taiwan, and Gude is in talks with about two dozen potential customers, big names in the wearable technology market such as Nike and Fitbit. “Because we have a unique proposition and will need huge volumes, we are talking to the really big guys,” he says.

Clearly, wearable technology is a growing market. Gude says it is already worth a couple of billion dollars globally, and is expected to be a $10-billion industry by 2016. Everyone, from Google to Intel to fitness companies, has its eye on this market. For instance, Theatro, a US-based company, is developing voice-controlled wearable computers for the retail and hospitality segments of the enterprise market. It emerged from stealth mode in December 2013 when it announced its product’s commercial availability and relationship with its first customer, The Container Store. Its tiny 35-gm WiFi-based wearable device enables voice-controlled human-to-human interaction (one-to-one, group and store-to-store) and replaces two-way radios. It also enables voice-controlled human-to-machine interaction with, say, in-store systems for inventory, pricing and loyalty programmes. Another potential use is in-store employee location-based services and analytics.

There is so much excitement about wearable technology that some companies are even crowdsourcing ideas. For instance, Intel has launched its ‘Make It Wearable’ challenge, which offers prize money to the best real-world applications submitted by designers, scientists and innovators.

So Ineda’s chips could be used in devices such as Google Glass, smart watches, and Nike’s FuelBand. And when does Ineda expect its chips to become commercially available? “End of this year or by the first quarter of 2015,” says Gude.

He says that at the moment, he has no direct competitor with whom he can do an apples-to-apples comparison. His rivals are either too big and expensive, or too small with few functionality options. He positions Ineda somewhere in between in terms of functionality and price. How the market will respond remains to be seen, but investors are clearly interested.

Ineda Systems Delivers Breakthrough Power Consumption for Wearable Devices and the Internet of Things [press release, April 8, 2014]

Extends Battery Life for Wearable Devices Up to a Month

Ineda Systems, a leader of low-power SoCs (system on a chip) for use in both consumer and enterprise applications, today announced its Dhanush family of Wearable Processing Units (WPU™). The Dhanush WPU family supports a large range of wearable devices including fitness bands, smart watches, glasses, athletic video recorders and the Internet of Things. The Dhanush WPUs will enable a new industry milestone for always-on battery life of up to one month.


The Dhanush WPU is powered by Ineda’s patent pending Hierarchical Computing architecture. Dhanush is sampling to tier-one customers now, and will be available in volume production in the second half of 2014.

The Hierarchical Computing architecture, along with low-power, high-performance MIPS-based microprocessor cores and PowerVR mobile graphics and video processors, enables the Dhanush WPU to offer leading performance with unprecedented low power consumption. The Dhanush family of SoCs also supports a scalable range of connectivity, from Bluetooth LE through Bluetooth to Wi-Fi, to address a range of applications.

“The Ineda engineering team in India has developed an innovative, low-power architecture designed specifically for wearable devices,” said Dasaradha Gude, CEO of Ineda Systems.

“The Dhanush family of WPUs offers better power consumption by an order of magnitude than smart phone processors that are currently being retrofitted for wearable devices.”

“The smart phone market grew substantially with the advent of smartphone-specific dedicated application processors. Dhanush WPU SoCs will enable a similar transformation in the wearable market segment,” Gude added.

Dhanush WPU

The Dhanush WPU is an industry-first wearable SoC that addresses all the needs of the wearable device market. It features a Hierarchical Computing architecture that allows applications and tasks to run at the right power-optimized performance and memory footprint, and has an always-on sensor hub optimized for wearable devices. The Dhanush WPU family consists of four products – Nano, Micro, Optima and Advanced – which are designed for specific applications and product segments. Each of these products will aim to provide 30-day always-on battery life, up to 10x power consumption reduction compared to the current generation of application processors, and be available at consumer price points.

“Ineda Systems is bringing the first wearable-specific chipset design to market,” said Chris Jones, VP and principal analyst at Canalys. “Strict power constraints are the greatest technological challenge for smart wearables, and Ineda is the first company taking this challenge truly seriously at the SoC level with Dhanush. Always-on sensor functionality is also critical and inherent to its design.”

The Dhanush family of SoCs comes in four different tiers that are designed for specific implementations:

  • Dhanush Advanced: Designed to include all the features required in a high-end wearable device – rich graphic and user interface – along with the capability to run a mobile class operating system such as Android™.
  • Dhanush Optima: This is a subset of the Dhanush Advanced and retains all the same features except the capability of running a mobile class operating system. It offers enough compute and memory footprint required to run mid-range wearable devices.
  • Dhanush Micro: Designed for use in low-end smartwatches that have increased compute and memory footprint. This contains a sensor hub CPU subsystem that takes care of the always-on functionality of wearable devices.
  • Dhanush Nano: Designed for simple wearable devices that require microcontroller-class compute and memory footprint.

Hierarchical Computing Architecture

Hierarchical Computing is a tiered multi-CPU architecture with shared peripherals and memory. This architecture allows multiple CPUs to run independently and together to create a unified application experience for the user – allowing optimal use of CPUs per use-case for power efficient performance.

With Hierarchical Computing, all the CPUs can be individually or simultaneously active, working in sync while handling specific tasks assigned to them independently. Based on the mode of operation and the applications being used, the corresponding CPU is enabled to provide optimal performance at optimal power consumption. Resource sharing further enables Hierarchical Computing to work on the same hardware resources at different performance and power levels.
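One way to picture this is as a power-aware scheduling problem: for each task, enable the lowest-power CPU tier that still meets its performance need, while all tiers share the same memory and peripherals. The sketch below only illustrates that idea; the tier names echo the Dhanush family, but the performance and power figures are invented, and none of this is Ineda’s SDK or actual silicon behavior.

```python
from dataclasses import dataclass

# Illustrative tiers: (name, relative performance, assumed active power in mW).
TIERS = [
    ("nano",     1,  1),    # always-on, microcontroller-class compute
    ("micro",    4,  10),   # sensor-hub plus modest compute
    ("optima",   16, 60),   # mid-range wearable workloads
    ("advanced", 64, 300),  # runs a mobile-class OS such as Android
]

@dataclass
class Task:
    name: str
    perf_needed: int  # relative performance units the task requires

def pick_tier(task: Task):
    """Return the lowest-power tier that satisfies the task's demand."""
    for name, perf, power_mw in TIERS:        # tiers are ordered by power
        if perf >= task.perf_needed:
            return name, power_mw
    return TIERS[-1][0], TIERS[-1][2]         # fall back to the biggest core

workload = [Task("step counting", 1), Task("notification UI", 12), Task("maps app", 50)]
for t in workload:
    tier, power = pick_tier(t)
    print(f"{t.name:>16} -> {tier} (~{power} mW while active)")
```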

Ineda’s reference design, SDK and APIs enable OEMs and third-party application developers to seamlessly realize the benefits of the Hierarchical Computing architecture and provide a better user experience for their end products.

Ineda Systems plans to begin producing its WPU this year and will offer multiple SoC variations that will correspond with a specific class of wearable device. Ineda’s development kits are available for evaluation to select customers today.

About Ineda Systems

Ineda Systems, Inc. (pronounced “E-ne-da”) is a startup company founded by industry veterans from the United States and India with an ultimate goal of becoming a leader in developing low power SoCs for use in both consumer and enterprise applications. The advisory and management team has world-class experience working in both blue chip companies as well as fast-paced technology startups. Ineda’s expertise is in the area of SoC/IP development, architecture and software that is necessary to design silicon and systems for next generation of low power consumer and enterprise applications.

The company has offices in Santa Clara, California, USA and Hyderabad, India.

Ineda Systems, Inc. has applied for the trademark of WPU. Android is a trademark of Google Inc. All other trademarks used herein are the property of their respective owners.

Justin Rosenstein of Asana: Be happy in a project-oriented teamwork environment made free of e-mail based communication hassle

Get Organized: Using Asana in Business [PCMag YouTube channel, Febr 24, 2014]

Tired of “work about work?” Some businesses are using Asana to streamline their communication and workflow. Here’s a bit about the tool and how it works

Steven Sinofsky, former head of Microsoft Office and (later) Windows at Microsoft:

We’ve all seen examples of the collaborative process playing out poorly by using email. There’s too much email and no ability to track and manage the overall work using the tool. Despite calls to ban the process, what is really needed is a new tool. So Asana is one of many companies working to build tools that are better suited to the work than one we currently all collectively seem to complain about.
in Don’t ban email—change how you work! [Learning by Shipping, Jan 31, 2014]

Asana is a simple example of an easy-to-use and modern tool that decreases (to zero) email flow, allows for everyone to contribute and align on what needs to be done, and to have a global view of what is left to do.
in You’re doing it wrong [Learning by Shipping, April 10, 2014] and Shipping is a Feature: Some Guiding Principles for People That Build Things [Learning by Shipping, April 17, 2014]

Making e-mail communication easier [Fox Business Video]
May. 06, 2014 – 3:22 – Asana co-founder Justin Rosenstein weighs in on his new email business.

How To Collaborate Effectively With Asana [Forbes YouTube channel, Feb 26, 2013]

Collaboration tool Asana provides a shared task list for companies to get work done. That means taking on the challenge of slaying the email inbox.

Dustin Moskovitz: How Asana Gets Work Done [Forbes YouTube channel, Feb 26, 2013]

Asana cofounder Dustin Moskovitz, who previously cofounded Facebook, talks about Asana’s company culture, which includes an emphasis on transparency, a company-wide one week strategy session every four months and employee perks.

Do Great Things: Keynote by Justin Rosenstein of Asana | Disrupt NY 2014 [TechCrunch YouTube channel, May 5, 2014]

Asana’s Justin Rosenstein thinks we’re poised to make the greatest change possible for the largest number of people: what are we going to do with that potential? What should we do? For the full interview click here: http://techcrunch.com/video/do-great-things-keynote-by-justin-rosenstein-of-asana/518220046/

Asana’s Justin Rosenstein: “I Flew Coach Here.” | Disrupt NY 2014 [TechCrunch YouTube channel, May 5, 2014]

At the end of their chat, Asana’s Justin Rosenstein and TechCrunch’s Alex Wilhelm failed to reconcile their views, but managed to land a high five. Click here to watch the full interview: http://techcrunch.com/video/do-great-things-keynote-by-justin-rosenstein-of-asana/518220046/

How we use Asana [asana blog, Oct 9, 2013]

We love to push the boundaries of what Asana can do. From creating meeting agendas to tracking bugs to maintaining snacks in the refrigerator, the Asana product is (unsurprisingly) integral to everything we do at Asana. We find many customers are also pushing the boundaries of Asana to fit their teams’ needs and processes. Since Asana was created to be flexible and powerful enough for every team, nothing makes us more excited than hearing about these unique use cases.

Recently, we invited some of our Bay Area-based customers to our San Francisco HQ to share best practices with one another and hear from our cofounder Justin Rosenstein about the ways we use Asana at Asana. We’re excited to pass on this knowledge through some video highlights from the event. You can watch the entire video here: The Asana Way to Coordinate Ambitious Projects with Less Effort

Capture steps in a Project
“The first thing we always do is create a Project that names what we’re trying to accomplish. Then we’ll get together as a team and think of, ‘What is every single thing we need to accomplish between now and the completion of that Project?’ Over the course of the Project, all of the Tasks end up getting assigned.”

Organize yourself
“Typically when I start my day, I’ll start by looking at all the things that are assigned to me. I’ll choose a few that I want to work on today. I try to be as realistic as possible, which means adding half as many things as I am tempted to add. After putting those into my ‘Today’ view, there are often a couple of other things I need to do. I just hit enter and add a few more tasks.”

Forward emails to Asana
“Because I want Asana to be the source of truth for everything I do, I want to put emails into my task list and prioritize them. I’ll just take the email and forward it to x@mail.asana.com. We chose ‘x’ so it wouldn’t conflict with anything else in your address book. Once I send that, it will show up in Asana with the attachments and everything right intact.”

Run great meetings
“We maintain one Project per meeting. If I’m looking at my Task list and see a Task I want to discuss at the meeting, I’ll just use Quick Add (tab + Q) to put the Task into the correct Project. Then when the meeting comes around, everything that everyone wants to talk about has already been constructed ahead of time.”

Track responsibility
“Often a problem comes up and someone asks, ‘Who’s responsible for that?’ So instead, we’ve built out a list of areas of responsibility (AoRs), which is all the things that someone at the company has to be responsible for. By having AoRs, we distribute responsibility. We can allow managers to focus on things that are more specific to management and empower everyone at the company to be a leader in their own field.”


Background on https://asana.com/

Asana

About Us

Connect

Support

How it all started and progressed?

asana demo & vision talk [Robert Marquardt YouTube channel, Feb 15, 2011]

“First public demo of Asana and deep-dived into the nuances of the product, the long-term mission that drives us, how the beta’s going, and more. We were really excited to be able to share what we’ve been working on and why we’re so passionate about it, and hope you enjoy this video of the talk.” http://blog.asana.com/2011/02/asana-demo-vision-talk/

The Asana Vision & Demo [asana blog, Feb 7, 2011]

We recently hosted an open house at our offices in San Francisco, where we showed the first public demo of Asana and deep-dived into the nuances of the product, the long-term mission that drives us, how the beta’s going, and more. We were really excited to be able to share what we’ve been working on and why we’re so passionate about it, and hope you enjoy the above video of the talk:

Asana will be available more broadly later this year. In the meantime,

  • if you’re interested in participating in the beta program, sign up here.
  • if these sound like problems you’d like to help tackle, we’re hiring.
  • and if you’d just like to receive updates about Asana going forward, use the form in the upper right of this page.

Introducing Asana: The Modern Way to Work Together [asana blog, Nov 2, 2011]


Asana is a modern web application that keeps teams in sync, a shared task list where everyone can capture, organize, track, and communicate what they are working on in service of their common goal. Rather than trying to stay organized through the tedious grind of emails and meetings, teams using Asana can move faster and do more — or even take on bigger and more interesting goals.

How Asana Works:

Asana re-imagines the way we work together by putting the fundamental unit of productivity – the task – at the center. Breaking down ambitious goals into small pieces, assigning ownership of those tasks, and tracking them to completion is how things get built, from software to skyscrapers. With Asana, you can:

  • capture everything your team is planning and doing in one place. When tasks and the conversations about them are collected together, instead of spread around emails, documents, whiteboards, and notebooks, they become the shared, trusted, collective memory for your organization.
  • keep your team in sync on the priorities, and what everyone is working on. When you have a single shared view of a project’s priorities, along with an accurate view into what each person is working on and when, everyone on the team knows exactly what matters, and what work remains between here and the goal.
  • get the right information at the right time. Follow tasks, and you’ll receive emails as their status evolves. Search, and you’ll see the full activity feed of all the discussions and changes to a task over its history. Now, it’s easy to stay on top of the details — without asking people to forward you a bunch of email threads.

Building tools for teamwork [asana blog, Nov 22, 2013]

Our co-founder, Justin, recently wrote in Wired about why we need to rethink the tools we use to work together. The article generated a lot of interesting comments, from ideas on knowledge management to fatigue with the “meeting lifestyle,” to this protest on the typical office culture:

“Isn’t the root of this problem that, within our own organizations, we fiercely guard information and our decision-making processes? Email exchanges and invite-only meetings shut out others– forcing the need for follow-up conversations, summary reports, and a trail of other status/staff meetings to relay content already covered some place/some time before.”

To reach its goals, we think a team needs clarity of purpose, plan and responsibility. Technology and tools can help us reach that kind of clarity, but only if they target the right problem. From their roles at Facebook, Asana’s founders have extensive knowledge of social networks, and the social graph technology they rely on. But Asana isn’t a social network. Why? Because, as Justin outlines, the social graph doesn’t target the problem of work:

image

Our personal and professional lives, even if they overlap, have two distinct goals — and they require different “graphs.”

For our personal lives, the goal is love (authentic interpersonal connection), and that requires a social graph with people at the center. For our work lives, the goal is creation (working together to realize our collective potential), and that requires a work graph, with the work at the center.

Don’t get me wrong: Human connection is valuable within a business. But it should be in service to the organizational function of getting work done, and doesn’t need to be the center of the graph.

So, how does this change the experience for you and your teammates? A work graph means having all the information you need when you need it. Instead of blasting messages at the whole team, like “Hey, has anyone started working on this yet?”, you should be able to efficiently find out exactly who’s working on that task and how much progress they’ve made. That’s the target Asana is aiming for. Read Justin’s full Wired article.

Organizations in Asana [asana blog, May 1, 2013]

Today, we’re excited to be launching a collection of new features aimed at helping companies use and support Asana across their entire enterprise. We call it Organizations.

Since we began, Asana has been on a mission to help great teams achieve more ambitious goals. We started 18 months ago with our free service, targeted at smaller teams and even individuals – helping them get and stay organized.

When we launched our first premium tiers six months later, we enabled medium sized teams and companies – think 10s to 100s of people – to go further with Asana. In the year between then and now, we’ve been continuously amazed by all the places and ways Asana is being used to organize a team: in industries as diverse as education, healthcare, finance, technology, and manufacturing; in companies from two-person partnerships to Fortune 100 enterprises; and in dozens of countries representing every continent but the frozen one. There’s a lot of important work being organized in Asana.

But we’re still just getting started – there remain teams that we haven’t been ready to support: the largest teams, those that grow from 100s to 1,000s of people. While it would be remarkable if it only took a small number of coworkers to design and manufacture electric cars, synthesize DNA, or deliver healthcare to villages across the globe – these missions are complex, and require more people to be involved in them to succeed. Many of the teams using Asana today are inside these bigger organizations, and they’ve been asking for Asana to work at enterprise-scale. So for the past several months, we’ve been working on just that.

Stories from our first year [asana blog, Nov 12, 2012]

… When we launched a year ago, we had an ambitious mission: to create a shared task management platform that empowers teams of like-minded people to do great things. … In the course of our first year, tens of thousands of teams looking for a better way to work together have adopted Asana. …

… we collected three of these stories from three distinct kinds of teams:
– a tech startup [Foursquare],
– a fast-growing organic food company [Bare Fruit & Sundia] and
– a leading Pacific Coast aquarium [Aquarium of the Bay].

Foursquare Launches 5.0

Right around the time Foursquare passed 100 employees over the last year, we started building Foursquare 5.0. This update was a big deal: we were overhauling Foursquare’s core mechanics, evolving from check-ins towards the spontaneous discovery of local businesses. As we built the new app, we needed a way to gather feedback from the entire team.

We tried what felt like every collaboration tool around. Group emails were a mess. Google Docs was impossible to parse. We’d heard about Asana and decided to give it a shot.

Using Asana, we were easily able to collect product feedback and bugs from everyone in the company, then parse, discuss, distribute and prioritize the work. It became an indispensable group communication tool.

Foursquare 5.0 was a giant success, and we couldn’t have done it without Asana.

Noah Weiss, Product Manager

Then, Of Course, There Is Us

It’s an understatement to say that we rely on Asana. We use our own product to manage every function of our business. Asana is where we plan, capture ideas, build meeting agendas, prioritize our product roadmap, document which bugs to fix and list the snacks to buy. It’s our CRM, our editorial calendar, our Applicant Tracking System, and our new-hire orientation system. Every team in the company – from product, design, and engineering to sales and marketing to recruiting and user operations – relies on the product we are building to stay in sync, connect our individual tasks to the bigger picture and accomplish our collective goals.

Q&A: Rising Realty Partners builds their business with Asana [asana blog, Feb 7, 2014]

…The Los Angeles development firm Rising Realty Partners shared with us how they used Asana, and our integration with Dropbox, to close a massive ten-property deal.

As our business expanded, we found ourselves relying heavily on email, faxes, and even FedEx to communicate with each other and collaborate with outside parties. We needed a better way to organize, prioritize and communicate around our work, and we found the answer in Asana.


I can’t imagine how complex our communications would have been if we weren’t using Asana. We had dozens of people internally, and more than 50 people externally, all involved in making this deal happen. Having all of that communication in Asana significantly cut down on the craziness.

Because of Asana’s Dropbox integration, our workflow is now fast, intuitive and organized — something that was impossible to achieve over email. For the acquisition, we used Asana and Dropbox simultaneously to keep track of everything; from what each team member was doing, to the current status of each transaction, to keeping a history of all related documents. We had more than 18,000 items in Dropbox that we would link to in Asana instead of attaching them in email. We removed more than 30 gigabytes of information per recipient from our inboxes and everything was neatly organized around the work we were doing in Asana. This meant that the whole team always had the latest and most relevant information.
 

For this entire project, maybe one percent of our total internal communication was happening in email. With Asana, anyone in the company could look at any aspect of the project, see where it stood, and add their input. No one had to remember to ‘cc’ or ‘reply all’.

….
The success of this deal was largely due to Asana and we plan to use it in future acquisitions –Asana has become essential to our team’s success.
….

Our iPhone App Levels Up [asana blog, Sept 6, 2012]

Until recently, we’ve focused most of our energy on the browser-based version of Asana. But, in the last few months, even as we’ve launched major new features in our web application, we’ve been putting much more time into improving the mobile experience. In June, we made several meaningful architectural improvements to pave the way for bigger and better things and hinted that these changes were in the works.

Today, we’ve taken the next step in that direction: Version 2.0 of our iPhone app is in the App Store now. We are really proud of this effort – almost everyone at Asana played a part in this release. This new version is a top-to-bottom redesign that really puts the power of the desktop web version of Asana right in your pocket.

Asana comes to Android [asana blog, Feb 28, 2013]

Five months ago, we launched our first bonafide mobile app, for the iPhone, and we’ve been steadily improving it ever since. Focusing on a single platform at first allowed us to be meticulous about our mobile experience, adding new features and honing the design until we knew it was something people loved. After strong positive feedback from our customers and a solid rating in the iTunes App Store, we knew it was time.

Today, we are happy to announce that Asana for Android is here. You can get it right now in the Google Play store


As of today (May 8, 2014) there are 70 employees and 15 open positions. The company has four investors: Benchmark Capital, Andreessen Horowitz, Founders Fund and Peter Thiel. The first two put in $9 million in November 2009. Then Founders Fund and Peter Thiel added $28 million to that in July 2012. Reuters reported on it in Facebook alumni line up $28 million for workplace app Asana [July 23, 2012]:

Asana, a Silicon Valley start-up, has lined up $28 million in a financing round led by PayPal co-founder Peter Thiel and his Founders Fund, the company said.

The funding round values the workplace-collaboration company at $280 million, a person familiar with the matter said.

“This investment allows us to attract the best and brightest designers and engineers,” said Asana co-founder Justin Rosenstein, who said that in turn would help the company build on its goal of making interaction among its client-companies’ employees easier.

Asana launched the free version last year of its company management software that makes it easier to collaborate on projects. It introduced a paid, premium service earlier this year. It declined to give revenue figures, but said “hundreds” of customers had upgraded to the premium version.

Although Rosenstein and co-founder Dustin Moskovitz are alumni of social-network Facebook – Moskovitz co-founded the service with his Harvard roommate Mark Zuckerberg – they were quick to distance Asana from social networking.

Instead, they say, they view the company as an alternative to email, in-person meetings, physical whiteboards, and spreadsheets.

“That’s what we see as our competition,” said Rosenstein. “Replacing those technologies.”

With its latest funding round, Asana has now raised a total of $38 million from investors including Benchmark Capital and Andreessen Horowitz.

Thiel, who got to know Moskovitz and Rosenstein thanks to his early backing of Facebook, had already invested in Asana when it raised its “angel” round in early 2009. Now, his high-profile Founders Fund is investing and Thiel is joining Asana’s board.

Facebook has 901 million monthly users and revenue last year of $3.7 billion. But its May initial public offering disappointed many investors after it priced at $38 per share and then quickly fell. It closed on Friday at $28.76.

Many investors speculate that start-ups will have to accept lower valuations in the wake of the Facebook IPO. The Asana co-founders said the terms of their latest funding round were set before Facebook debuted on public markets.

A few of Facebook’s longtime employees have gone on to work on their own ventures.

Bret Taylor, formerly chief technology officer, said last month he was leaving to start his own company.

Dave Morin, who joined Facebook in 2008 from Apple, left in 2010 to found social network Path. Facebook alumni Adam D’Angelo and Charlie Cheever left in 2009 to start Quora, their question-and-answer company, which is also backed by Thiel.

Another former roommate of Zuckerberg’s, Chris Hughes, also left a few years ago and coordinated online organizing for Barack Obama’s 2008 presidential campaign. Now, he is publisher of the New Republic magazine.

Matt Cohler, who joined Facebook from LinkedIn early in 2005, joined venture capital firm Benchmark Capital in 2008. His investments there include Asana and Quora.

Core technology used

Luna, our in-house framework for writing great web apps really quickly [asana blog, Feb 2, 2010]

At Asana, we’re building a Collaborative Information Manager that we believe will make it radically easier for groups of people to get work done. Writing a complex web application, we experienced pain all too familiar to authors of “Web 2.0″ software (and interactive software in general): there were all kinds of extremely difficult programming tasks that we were doing over and over again for every feature we wanted to write. So we’re developing Lunascript — an in-house programming language that lets you write rich web applications in about 10% of the time and code you can today.

Check out the video we made » 
[rather an article about Luna as of Nov 2, 2011]

Update: For now we’ve tabled using the custom DSL syntax in favor of a set of Javascript idioms and conventions on top of the “Luna” runtime. So while the contents of this post still accurately present the motivation and capabilities of the Luna framework, we’re using a slightly more cumbersome (JavaScript) syntax than what you see below, in exchange for having more control over the “object code” (primarily for hand-tuning performance).

Release the Kraken! An open-source pub/sub server for the real-time web [asana blog, March 5, 2013]

Today, we are releasing Kraken, the distributed pub/sub server we wrote to handle the performance and scalability demands of real-time web apps like Asana.

Before building Kraken, we searched for an existing open-source pub/sub solution that would satisfy our needs. At the time, we discovered that most solutions in this space were designed to solve a much wider set of problems than we had, and yet none were particularly well-suited to solve the specific requirements of real-time apps like Asana. Our team had experience writing routing-based infrastructure and ultimately decided to build a custom service that did exactly what we needed – and nothing more.

The decision to build Kraken paid off. For the last three years, Kraken has been fearlessly routing messages between our servers to keep your team in sync. During this time, it has yet to crash even once. We’re excited to finally release Kraken to the community!
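Kraken’s own open-source code is the authoritative reference; purely as an illustration of the pattern it implements, here is a minimal in-process publish/subscribe hub in Python. Publishers push a message onto a named channel and every subscriber of that channel receives it, which is the mechanism by which a change made on one server can fan out to all connected clients of a real-time web app.

```python
from collections import defaultdict
from typing import Callable, DefaultDict, List

class PubSub:
    """Minimal in-process publish/subscribe hub (illustrative only)."""

    def __init__(self) -> None:
        self._subscribers: DefaultDict[str, List[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, channel: str, callback: Callable[[dict], None]) -> None:
        """Register a callback to receive every message on a channel."""
        self._subscribers[channel].append(callback)

    def publish(self, channel: str, message: dict) -> None:
        """Fan the message out to every subscriber of the channel."""
        for callback in self._subscribers[channel]:
            callback(message)

hub = PubSub()
hub.subscribe("project:123", lambda msg: print("client A sees:", msg))
hub.subscribe("project:123", lambda msg: print("client B sees:", msg))
hub.publish("project:123", {"task": "Ship v2", "completed": True})
```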

Issues Moving to Amazon’s Elastic Load Balancer [asana blog, June 5, 2012]


Asana’s infrastructure runs almost entirely on top of Amazon Web Services (AWS). AWS provides us with the ability to launch managed production infrastructure in minutes with simple API calls. We use AWS for servers, databases, monitoring, and more. In general, we’ve been very happy with AWS. A month ago, we decided to use Amazon’s Elastic Load Balancer service to balance traffic between our own software load balancers.

Announcing the Asana API [asana blog, April 19, 2012]

Today we are excited to share that you can now add and access Asana data programmatically using our simple REST API.

The Asana API lets you build a variety of applications and scripts to integrate Asana with your business systems, show Asana data in other contexts, and create tasks from various locations.

Here are some examples of the things you can build (a minimal request sketch in Python follows the list):

  • Source Control Integration to mark a Task as complete and add a link to the code submission as a comment when submitting code.
  • A desktop app that shows the Tasks assigned to you
  • A dashboard page that shows a visual representation of complete and incomplete Tasks in a project
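To make the list above concrete, here is roughly what creating a task looks like from Python against the REST API, assuming you have an API key and a workspace ID (both placeholders below) and authenticating with HTTP Basic auth as the API did at the time of this announcement; the official API documentation remains the authoritative reference for field names and responses.

```python
import requests

API_KEY = "YOUR_ASANA_API_KEY"   # placeholder; issued per user in Asana
WORKSPACE_ID = 1234567890        # placeholder workspace ID

# The early Asana API authenticated with the API key as the HTTP Basic username.
response = requests.post(
    "https://app.asana.com/api/1.0/tasks",
    auth=(API_KEY, ""),
    json={"data": {
        "name": "Review latest code submission",
        "notes": "Created automatically from source control",
        "workspace": WORKSPACE_ID,
        "assignee": "me",
    }},
)
response.raise_for_status()
print("created task:", response.json()["data"])
```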

Asana comes to Internet Explorer [asana blog, Oct 16, 2013]


Asana is a fast and versatile web-based application that pushes the boundaries of what’s possible inside a browser. Our sophisticated Javascript app requires a modern browser platform, and up until now we could only provide the right user experience on Chrome, Firefox, and Safari. With IE10, Internet Explorer has drastically improved their developer tools and made a marked improvement in standards compliance. With these improvements, we were able to confidently develop Asana for IE10, and we’ve been pleasantly surprised by the process. Check out the blog post on our developer site to see what we learned during this project.

Amazon Web Services has not only achieved clearly dominant leader status in the Cloud Infrastructure as a Service (Cloud IaaS) market, but "the balance of new projects are going to AWS, not the other providers" – according to Gartner

According to the latest analysis by Gartner, Amazon Web Services (AWS) is:

  1. “overwhelmingly the dominant vendor” of the Cloud Infrastructure as a Service (Cloud IaaS) market
  2. a clear leader, with more than five times as much compute capacity in use as the aggregate total of the other fourteen providers included in the so-called Magic Quadrant (MQ)
  3. appreciated for being “innovative, exceptionally agile and very responsive to the market and the richest IaaS product portfolio”, which puts AWS far ahead even of CSC, the only other vendor currently in the Leaders quadrant

In addition, in July Amazon Web Services announced a price cut of up to 80% on its EC2 cloud computing platform.

Note that Gartner’s ranking is a complex evaluation, based on the points of view deemed most important from a vendor/supplier perspective (see the 3rd-party explanation of Gartner’s Magic Quadrant included in the Details part). It is not based on any kind of benchmarking, not even benchmarks run by customers against their specific application requirements. It is therefore a well-known fact that from a pure cloud engineering point of view, especially in terms of focused benchmarks, Amazon EC2 is far from being a leader. The latest example of that:
image

About the Test
UnixBench runs a set of individual benchmark tests, aggregates the scores, and creates a final, indexed score to gauge the performance of UNIX-like systems, which include Linux and its distributions (Ubuntu, CentOS, and Red Hat). From the UnixBench homepage:
The purpose of UnixBench is to provide a basic indicator of the performance of a Unix-like system; hence, multiple tests are used to test various aspects of the system’s performance. These test results are then compared to the scores from a baseline system to produce an index value, which is generally easier to handle than the raw scores. The entire set of index values is then combined to make an overall index for the system.
The UnixBench suite used for these tests ran tests that include: Dhrystone 2, Double-Precision Whetstone, numerous File Copy tests, Pipe Throughput, Process Creation, Shell Scripts, System Call Overhead, and Pipe-based Context Switching.
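To make the indexing concrete: each raw test result is divided by the result of a fixed baseline system and the per-test indices are then combined into the overall index. In the sketch below the raw scores and baseline values are invented, and the two implementation details (the baseline indexing at 10 and a geometric-mean combination) are assumptions about UnixBench’s scheme rather than something stated in the report; the real baseline values ship with UnixBench itself.

```python
from math import prod

# Invented raw scores and baseline values, purely to illustrate the indexing.
raw_scores = {"Dhrystone 2": 3_200_000, "Whetstone": 620.0, "Pipe Throughput": 410_000}
baseline   = {"Dhrystone 2":   116_700, "Whetstone":  55.0, "Pipe Throughput":  12_440}

# Each test is indexed against the baseline system (assumed to score 10 by definition).
indices = {name: 10.0 * raw_scores[name] / baseline[name] for name in raw_scores}

# The overall system index is taken here as the geometric mean of the per-test indices.
overall = prod(indices.values()) ** (1.0 / len(indices))
print({k: round(v, 1) for k, v in indices.items()}, "overall:", round(overall, 1))
```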

image

Price-Performance Value: The CloudSpecs Score
The CloudSpecs score calculates the relationship between the cost of a virtual server per hour and the average performance seen from each provider. The scores are relational to each other; e.g., if Provider A scores 50 and Provider B scores 100, then Provider B delivers 2x the performance value in terms of cost. The highest-value provider will always receive a score of 100, and every additional provider is pegged in relation to that score. The calculation is as follows (a worked example in code appears after the list):
  • (Provider Average Performance Score) / (Provider Cost per Hour) = VALUE
  • The largest VALUE is then taken as the denominator to peg other VALUES.
  • [(Provider’s VALUE) / (Largest VALUE)] * 100 = CloudSpecs Score (CS Score)
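Translated directly into code, that scoring works out as below; the provider names and numbers are invented purely for illustration.

```python
# Invented example data: average benchmark score and price per hour for each provider.
providers = {
    "Provider A": {"avg_performance": 1800.0, "cost_per_hour": 0.12},
    "Provider B": {"avg_performance": 1500.0, "cost_per_hour": 0.06},
    "Provider C": {"avg_performance": 2400.0, "cost_per_hour": 0.20},
}

# VALUE = average performance / cost per hour
values = {name: p["avg_performance"] / p["cost_per_hour"] for name, p in providers.items()}

# The best VALUE is pegged at 100; every other provider is scored relative to it.
best = max(values.values())
cloudspecs_scores = {name: round(100 * v / best, 1) for name, v in values.items()}

print(cloudspecs_scores)  # {'Provider A': 60.0, 'Provider B': 100.0, 'Provider C': 48.0}
```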
Source: IaaS Price Performance Analysis: Top 14 Cloud Providers – A study of performance among the Top 14 public cloud infrastructure providers [Cloud Spectator and the Cloud Advisory Council, Oct 15, 2013], where—in addition to UnixBench—even more focused benchmark results are reported from the Phoronix Test Suite (i.e. individual benchmark suites within PTS):
For “CPU Performance”, the 7-Zip File Compression benchmark, which runs p7zip’s integrated benchmark feature to calculate the number of instructions a CPU can handle per second (measured in millions of instructions per second, or MIPS) when compressing a file.
For “Disk Performance”, the Dbench benchmark, which can be used to stress a filesystem or a server to see at which workload it becomes saturated, and can also be used for prediction analysis to determine “How many concurrent clients/applications performing this workload can my server handle before response starts to lag?” It is an open-source benchmark that contains only file-system calls for testing disk performance. For the purpose of comparing disk performance, write results are recorded.
For “RAM Performance”, RAMspeed/SMP, which is a memory performance benchmark for multi-processor machines running UNIX-like operating systems, including Linux and its distributions (Ubuntu, CentOS, and Red Hat). Within the RAMspeed/SMP suite, the Phoronix Test Suite conducts benchmarks using a set of Copy, Scale, Add, and Triad tests from the *mem benchmarks (INTmem, FLOATmem, MMXmem, and SSEmem) in BatchRun mode to enable high-precision memory performance measurement through multiple passes, with averages calculated per pass and per run.
For “Internal Network”, the Iperf benchmark, which is a tool used to measure bandwidth performance. For the purpose of this benchmark, Cloud Spectator set up two virtual machines within the same availability zone/data center to measure internal network throughput.
Amazon EC2 performed “equally badly” in these particular benchmarks. Check the published report.

THE DETAILS BEHIND 

The 2013 Cloud IaaS Magic Quadrant [by Lydia Leong on Gartner blog, Aug 21, 2013]

Gartner’s Magic Quadrant for Cloud Infrastructure as a Service, 2013, has just been released (see the client-only interactive version, or the free reprint). Gartner clients can also consult the related charts, which summarize the offerings, features, and data center locations.

the best image obtained from the web:

image

We’re now updating this Magic Quadrant on a nine-month basis, and quite a bit has changed since the 2012 update (see the client-only 2012, or the free 2012 reprint).

In particular, market momentum has strongly favored Amazon Web Services. Many organizations have now had projects on AWS for several years, even if they hadn’t considered themselves to have “done anything serious” on AWS. Thus, as those organizations get serious about cloud computing, AWS is their incumbent provider — there are relatively few truly greenfield opportunities in cloud IaaS now. Many Gartner clients now actually have multiple incumbent providers (the most common combination is AWS and Terremark), but nearly all such customers tell us that the balance of new projects are going to AWS, not the other providers.

Little by little, AWS has systematically addressed the barriers to “mainstream”, enterprise adoption. While it’s still far from everything that it could be, and it has some specific and significant weaknesses, that steady improvement over the last couple of years has brought it to the “good enough” point. While we saw much stronger momentum for AWS than other providers in 2012, 2013 has really been a tipping point. We still hear plenty of interest in competitors, but AWS is overwhelmingly the dominant vendor.

At the same time, many vendors have developed relatively solid core offerings. That means that the number of differentiators in the market has decreased, as many features become common “table stakes” features that everyone has. It means that most offerings from major vendors are now fairly decent, but only a few really stand out for their capabilities.

That leads to an unusual Magic Quadrant, in which the relative strength of AWS in both Vision and Execution essentially forces the whole quadrant graphic to rescale. (To build an MQ, analysts score providers relative to each other, on all of the formal evaluation criteria, and the MQ tool automatically plots the graphic; there is no manual adjustment of placements.) That leaves you with centralized compression of all of the other vendors, with AWS hanging out in the upper right-hand corner.

Note that a Magic Quadrant is an evaluation of a vendor in the market; the actual offering itself is only a portion of the overall score. I’ll be publishing a Critical Capabilities research note in the near future that evaluates one specific public cloud IaaS offering from each of these vendors, against its suitability for a set of specific use cases. My colleagues Kyle Hilgendorf and Chris Gaun have also been publishing extremely detailed technical evaluations of individual offerings — AWS, Rackspace, and Azure, so far.

A Magic Quadrant is a tremendous amount of work — for the vendors as well as for the analyst team (and our extended community of peers within Gartner, who review and comment on our findings). Thanks to everyone involved. I know this year’s placements came as disappointments to many vendors, despite the tremendous hard work that they put into their offerings and business in this past year, but I think the new MQ iteration reflects the cold reality of a market that is highly competitive and is becoming even more so.

A 3rd-party explanation of the GARTNER IaaS MAGIC QUADRANT 2013 [cloud☁mania, Aug 29, 2013]

Gartner has just released the 2013 update of its traditional Magic Quadrant for Cloud Infrastructure-as-a-Service. Here are some considerations about the evaluation methodology and the MQ players.

In the context of this Magic Quadrant, IaaS is defined by Gartner as “a standardized, highly automated offering, where compute resources, complemented by storage and networking capabilities, are owned by a service provider and offered to the customer on demand. The resources are scalable and elastic in near-real-time, and metered by use. Self-service interfaces are exposed directly to the customer, including a Web-based UI and API optionally. The resources may be single-tenant or multitenant, and hosted by the service provider or on-premises in the customer’s datacentre.”

To be included in the Magic Quadrant, IaaS providers should target enterprise and midmarket customers, offering high-quality services with excellent availability, good performance, high security and good customer support. For each IaaS provider included in the MQ, Gartner offers a detailed description of the service offering: datacentre locations, compute capabilities, storage & network features, special notes, and recommended uses. Detailed comments about strengths and cautions in cloud adoption are also offered for each IaaS provider, regardless of its MQ positioning.

The Gartner Magic Quadrant for IaaS is a more than eloquent picture of the actual status of the major IaaS players. IaaS market momentum is strongly dominated by Amazon Web Services along both the Vision and Execution dimensions. According to Gartner analysts, AWS is a clear leader, with more than five times as much compute capacity in use as the aggregate total of the other fourteen providers included in the MQ. AWS is appreciated for being “innovative, exceptionally agile and very responsive to the market and the richest IaaS product portfolio”.

The Leaders Quadrant positions CSC as the second player, a traditional IT outsourcer with a broad range of datacentre outsourcing capabilities. CSC is appreciated for its commitment to embrace the highly standardized cloud model, and for a solid platform attractive to traditional IT operations organizations that still want to retain control but need to offer greater agility to the business.

The Challengers Quadrant includes Verizon Terremark – the market share leader in VMware-virtualized public cloud IaaS; Dimension Data – a large SI and VAR that entered the cloud IaaS market through the 2011 acquisition of OpSource; and Savvis – a CenturyLink company with a long track record of leadership in the hosting market.

The big surprise in the Visionaries Quadrant is the comfortable positioning of Microsoft with its Windows Azure platform. Previously strictly PaaS, Azure became IaaS as well in April 2013, when Microsoft launched Windows Azure Infrastructure Services, which include Virtual Machines and Virtual Networks.  Microsoft’s place in the Visionaries Quadrant is motivated by Gartner by its global vision of infrastructure and platform services “that are not only leading stand-alone offerings, but also seamlessly extend and interoperate with on-premises Microsoft infrastructure (rooted in Hyper-V, Windows Server, Active Directory and System Center) and applications, as well as Microsoft’s SaaS offerings.”

Among the IaaS providers in the Niche Players Quadrant, we have to note the presence of a triad of heavy players: IBM, HP, and Fujitsu. Gartner appreciates IBM for its wide range of cloud-related products and services, with the IaaS MQ analysis covering only its SmartCloud Enterprise (SCE) cloud offering and the cloud-enabled infrastructure service IBM SmartCloud Enterprise+. In the same way, from HP’s range of cloud-related products and services Gartner considered only HP Public Cloud and some cloud-enabled infrastructure services, such as HP Enterprise Services Virtual Private Cloud. Fujitsu is one of the few non-American cloud providers, appreciated by Gartner for its broad cloud IaaS offerings, including the Fujitsu Cloud IaaS Trusted Public S5 (formerly the Fujitsu Global Cloud Platform), multiple regional offerings based on a global reference architecture (Fujitsu Cloud IaaS Private Hosted, formerly known as Fujitsu Local Cloud Platform), and multiple private cloud offerings, especially in the Asia-Pacific area and Europe.

Speaking of non-American regions, we should observe that significant European-based providers like CloudSigma, Colt, Gigas, Orange Business Services, OVH and Skyscape Cloud Services were not included in this Magic Quadrant. The same goes for the Asia/Pacific region, with major players like Datapipe, NTT and Tata Communications.

Gartner also considered two offerings that are currently in beta, and therefore could not be included in this evaluation, but could be considered prospective players in the next MQ edition: Google Compute Engine (GCE), with a model similar to Amazon EC2′s, and VMware vCloud Hybrid Service (vCHS) – a full-featured offering with more functionality than vCloud Datacenter Service.

Additional Gartner blog posts related to that:

Cloud IaaS market share and the developer-centric world [by Lydia Leong on Gartner blog, Sept 4, 2013]

Bernard Golden recently wrote a CIO.com blog post in response to my announcement of Gartner’s 2013 Magic Quadrant for Cloud IaaS. He raised a number of good questions that I thought it would be useful to address. This is part 1 of my response. (See part 2 for more.)
(Broadly, as a matter of Gartner policy, analysts do not debate Magic Quadrant results in public, and so I will note here that I’m talking about the market, and not the MQ itself.)
Bernard: “Why is there such a distance between AWS’s offering and everyone else’s?”
In the Magic Quadrant, we rate not only the offering itself in its current state, but also a whole host of other criteria — the roadmap, the vendor’s track record, marketing, sales, etc. (You can go check out the MQ document itself for those details.) You should read the AWS dot positioning as not just indicating a good offering, but also that AWS has generally built itself into a market juggernaut. (Of course, AWS is still far from perfect, and depending on your needs, other providers might be a better fit.)
But Bernard’s question can be rephrased as, “Why does AWS have so much greater market share than everyone else?”
Two years ago, I wrote two blog posts that are particularly relevant here:
These posts were followed up with two research notes (links are Gartner clients only):
I have been beating the “please don’t have contempt for developers” drum for a while now. (I phrase it as “contempt” because it was often very clear that developers were seen as lesser, not real buyers doing real things — merely ignoring developers would have been one thing, but contempt is another.) But it’s taken until this past year before most of the “enterprise class” vendors acknowledged the legitimacy of the power that developers now hold.
Many service providers held tight to the view espoused by their traditional IT operations clientele: AWS was too dangerous, it didn’t have sufficient infrastructure availability, it didn’t perform sufficiently well or with sufficient consistency, it didn’t have enough security, it didn’t have enough manageability, it didn’t have enough governance, it wasn’t based on VMware — and it didn’t look very much like an enterprise’s data center architecture. The viewpoint was that IT operations would continue to control purchases, implementations would be relatively small-scale and would be built on traditional enterprise technologies, and that AWS would never get to the point that they’d satisfy traditional IT operations folks.
What they didn’t count on was the fact that developers, and the business management that they ultimately serve, were going to forge on ahead without them. Or that AWS would steadily improve its service and the way it did business, in order to meet the needs of the traditional enterprise. (My colleagues in GTP — the Gartner division that was Burton Group — do a yearly evaluation of AWS’s suitability for the enterprise, and each year, AWS gets steadily, materially better. Clients: see the latest.)
Today, AWS’s sheer market share speaks for itself. And it is definitely not just single developers with a VM or two, start-ups, or non-mission-critical stuff. Through the incredible amount of inquiry we take at Gartner, we know how cloud IaaS buyers think, source, succeed, and sometimes suffer. And every day at Gartner, we talk to multiple AWS customers (or prospects considering their options, though many have already bought something on the click-through agreement). Most are traditional enterprises of the G2000 variety (including some of the largest companies in the world), but over the last year, AWS has finally cracked the mid-market by working with systems integrator partners. The projected spend levels are clearly increasing dramatically, the use cases are extremely broad, the workloads increasingly have sensitive data and regulatory compliance concerns, and customers are increasingly thinking of AWS as a strategic vendor.
(Now, as my colleagues who cover the traditional data center like to point out, the spend levels are still trivial compared to what these customers are spending on the rest of their data center IT, but I think what’s critical here is the shift in thinking about where they’ll put their money in the future, and their desire to pick a strategic vendor despite how relatively early-stage the market is.)
But put another way — it is not just that AWS advanced its offering, but it convinced the market that this is what they wanted to buy (or at least that it was a better option than the other offerings), despite the sometimes strange offering constructs. They essentially created demand in a new type of buyer — and they effectively defined the category. And because they’re almost always first to market with a feature — or the first to make the market broadly aware of that capability — they force nearly all of their competitors into playing catch-up and me-too.
That doesn’t mean that the IT operations buyer isn’t important, or that there aren’t an array of needs that AWS does not address well. But the vast majority of the dollars spent on cloud IaaS are much more heavily influenced by developer desires than by IT operations concerns — and that means that market share currently favors the providers who appeal to development organizations. That’s an ongoing secular trend — business leaders are currently heavily growth-focused, and therefore demanding lots of applications delivered as quickly as possible, and are willing to spend money and take greater risks in order to obtain greater agility.
This also doesn’t mean that the non-developer-centric service providers aren’t important. Most of them have woken up to the new sourcing pattern, and are trying to respond. But many of them are also older, established organizations, and they can only move so quickly. They also have the comfort of their existing revenue streams, which allow them the luxury of not needing to move so quickly. Many have been able to treat cloud IaaS as an extension of their managed services business. But they’re now facing the threat of systems integrators like Cognizant and Capgemini entering this space, combining application development and application management with managed services on a strategic cloud IaaS provider’s platform — at the moment, normally AWS. Nothing is safe from the broader market shift towards cloud computing.
As always, every individual customer’s situation is different from another’s, and the right thing to do (or the safe, mainstream thing to do) evolves through the years. Gartner is appropriately cautionary when it discusses such things with clients. This is a good time to mention that Magic Quadrant placement is NEVER a good reason to include or exclude a vendor from a short list. You need to choose the vendor that’s right for your use case, and that might be a Niche Player, or even a vendor that’s not on the MQ at all — and even though AWS has the highest overall placement, they might be completely unsuited to your use case.

Where are the challengers to AWS? [by Lydia Leong on Gartner blog, Sept 4, 2013]

This is part 2 of my response to Bernard Golden’s recent CIO.com blog post in response to my announcement of Gartner’s 2013 Magic Quadrant for Cloud IaaS. (Part 1 was posted yesterday.)

Bernard: “What skill or insight has allowed AWS to create an offering so superior to others in the market?”

AWS takes a comprehensive view of “what does the customer need”, looks at what customers (whether current customers or future target customers) are struggling with, and tries to address those things. AWS not only takes customer feedback seriously, but it also iterates at shocking speed. And it has been willing to invest massively in engineering. AWS’s engineering organization and the structure of the services themselves allows multiple, parallel teams to work on different aspects of AWS with minimal dependencies on the other teams. AWS had a head start, and with every passing year their engineering lead has grown larger. (Even though they have a significant burden of technical debt from having been first, they’ve also solved problems that competitors haven’t had to yet, due to their sheer scale.)

Many competitors haven’t had the willingness to invest the resources to compete, especially if they think of this business as one that’s primarily about getting a VM fast and that’s all. They’ve failed to understand that this is a software business, where feature velocity matters. You can sometimes manage to put together brilliant, hyper-productive small teams, but this is usually going to get you something that’s wonderful in the scope of what they’ve been able to build, but simply missing the additional capabilities that better-resourced competitors can manage (especially if a competitor can muster both resources and hyper-productivity). There are some awesome smaller companies in this space, though.

Bernard: “Plainly stated, why hasn’t a credible competitor emerged to challenge AWS?”

I think there’s a critical shift happening in the market right now. Three very dangerous competitors are just now entering the market: Microsoft, Google, and VMware. I think the real war for market share is just beginning.

For instance, consider the following, off the cuff, thoughts on those vendors. These are by no means anything more than quick thoughts and not a complete or balanced analysis. I have a forthcoming research note called “Rise of the Cloud IaaS Mega-Vendors” that focuses on this shift in the competitive landscape, and which will profile these four vendors in particular, so stay tuned for more. So, that said:

Microsoft has brand, deep customer relationships, deep technology entrenchment, and a useful story about how all of those pieces are going to fit together, along with a huge army of engineers, and a ton of money and the willingness to spend wherever it gains them a competitive advantage; its weakness is Microsoft’s broader issues as well as the Microsoft-centricity of its story (which is also its strength, of course). Microsoft is likely to expand the market, attracting new customers and use cases to IaaS — including blended PaaS models.

Google has brand, an outstanding engineering team, and unrivaled expertise at operating at scale; its weakness is Google’s usual challenges with traditional businesses (whatever you can say about AWS’s historical struggle with the enterprise, you can say about Google many times over, and it will probably take them at least as long as AWS did to work through that). Google’s share gain will mostly come at the expense of AWS’s base of HPC customers and young start-ups, but it will worm its way into the enterprise via interactive agencies that use its cloud platform; it should have a strong blended PaaS model.

VMware has brand, a strong relationship with IT operations folks, technology it can build on, and a hybrid cloud story to tell; whether or not its enterprise-class technology can scale to global-class clouds remains to be seen, though, along with whether or not it can get its traditional customer base to drive sufficient volume of cloud IaaS. It might expand the market, but it’s likely that much of its share gain will come at the expense of VMware-based “enterprise-class” service providers.

Obviously, it will take these providers some time to build share, and there are other market players who will be involved, including the other providers that are in the market today (and for all of you wondering “what about OpenStack”, I would classify that under the fates of the individual providers who use it). However, if I were to place my bets, it would be on those four at the top of market share, five years from now. They know that this is a software business. They know that innovative capabilities are vitally necessary. And they know that this has turned into a market fixated on developer productivity and business benefits. At least for now, that view is dominating the actual spending in this market.

You can certainly argue that another market outcome should have happened, that users should have chosen differently, or even that users are making poor decisions now that they’ll regret later. That’s an interesting intellectual debate, but at this point, Sisyphus’s rock is rolling rapidly downhill, so anyone who wants to push it back up is going to have an awfully difficult time not getting crushed.

Verizon Cloud is technically innovative, but is it enough? [by Lydia Leong on Gartner blog, Oct 4, 2013]

Verizon Terremark has announced the launch of its new Verizon Cloud service built using its own technology stack.

Verizon already owns a cloud IaaS offering — in fact, it owns several. Terremark was an early AWS competitor with the Terremark Enterprise Cloud, a VMware-based offering that got strong enterprise traction during the early years of this market (and remains the second-most-common cloud provider amongst Gartner’s clients, with many companies using both AWS and Terremark), as well as a vCloud Express offering. Verizon entered the game later with Verizon Compute as a Service (now called Enterprise Cloud Managed Edition), also VMware-based. Since Verizon’s acquisition of Terremark, the company has continued to operate all the existing platforms, and intends to continue to do so for some time to come.

However, Verizon has had the ambition to be a bigger player in cloud; like many other carriers, it believes that network services are a commodity and a carrier needs to have stickier, value-added, higher-up-the-stack services in order to succeed in the future. At the same time, Verizon also understood that it would have to build technology, not depend on other people’s technology, if it wanted to be a truly competitive global-class cloud player versus Amazon (and Microsoft, Google, etc.).

With that in mind, in 2011, Verizon went and made a “manquisition” – acquiring CloudSwitch not so much for its product (essentially a hypervisor-within-a-hypervisor that allows workloads to be ported across cloud infrastructures using different technologies) as for its team. It gave them a directive to go build a cloud infrastructure platform with a global-class architecture that could run enterprise-class workloads, at global-class scale and at fully competitive price points.

Back in 2011, I conceived what I called the on-demand infrastructure fabric (see my blog post No World of Two Clouds, or, for Gartner clients, the research note, Market Trends: Public and Private Cloud Infrastructure Converge into On-Demand Infrastructure Fabrics) — essentially, a global-class infrastructure fabric with self-service selectable levels of availability, performance, and isolation. Verizon is the first company to have really built what I envisioned (though their project predates my note, and my vision was developed independently of any knowledge of what they were doing).

The Verizon Cloud architecture is actually very interesting, and, as far as I know, unique amongst cloud IaaS providers. It is almost purely a software-defined data center. Components are designed at a very low level — a custom hypervisor, SDN augmented with the use of NPUs, virtualized distributed storage. Verizon has generally tried to avoid using components for which they do not have source code. There are very few hardware components — there’s x86 servers, Arista switches, and commodity Flash storage (the platform is all-SSD). The network is flat, and high bandwidth is an expectation (Verizon is a carrier, after all). Oh, and there’s object-based storage, too (which I won’t discuss here).

The Verizon Cloud has a geographically distributed control plane designed for continuous availability, and it, along with the components, are supposed to be updatable without downtime (i.e., maintenance should not impact anything). It’s intended to provide fine-grained performance controls for the compute, network, and storage resource elements. It is also built to allow the user to select fault domains, allowing strong control of resource placement (such as “these two VMs cannot sit on the same compute hardware”); within a fault domain, workloads can be rebalanced in case of hardware failure, thus offering the kind of high availability that’s often touted in VMware-based clouds (including Terremark’s previous offerings). It is also intended to allow dynamic isolation of compute, storage, and networking components, allowing the creation of private clouds within a shared pool of hardware capacity.
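
[The “fault domain” controls described above amount to anti-affinity placement constraints. Below is a minimal, purely illustrative sketch of how such a rule – “these two VMs cannot sit on the same compute hardware” – could be enforced by a placement scheduler; the function and data shapes are invented for illustration and are not Verizon’s actual API.]

# Illustrative only: enforce a "not on the same compute hardware" rule.
def pick_host(hosts, vm, placements, groups):
    """Return a host for `vm` that holds no other VM from the same anti-affinity group."""
    group = groups.get(vm)                       # e.g. "web-tier"
    for host in hosts:
        peers = placements.get(host, set())      # VMs already running on this host
        if group and any(groups.get(p) == group for p in peers):
            continue                             # would violate the anti-affinity rule
        return host
    raise RuntimeError("no host satisfies the placement constraint")

# Two VMs in the same group land on different hosts:
hosts = ["host-a", "host-b"]
placements = {"host-a": {"web-1"}}
groups = {"web-1": "web-tier", "web-2": "web-tier"}
print(pick_host(hosts, "web-2", placements, groups))   # -> host-b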

The Verizon Cloud is intended to be as neutral as possible — the theory is that all VM hypervisors can run natively on Verizon’s hypervisor, many APIs can be supported (including its own API, the existing Terremark API, and the AWS, CloudStack, and OpenStack APIs), and there’ll be support for the various VM image formats. Initially, the supported hypervisor is a modified Xen. In other words, Verizon wants to take your workloads, wherever you’re running them now, and in whatever form you can export them.

It’s an enormously ambitious undertaking. It is, assuming it all works as promised, a technical triumph — it’s the kind of engineering you expect out of an organization like AWS or Google, or a software company like Microsoft or VMware, not a staid, slow-moving carrier (the mere fact that Verizon managed to launch this is a minor miracle unto itself). It is actually, in a way, what OpenStack might have aspired to be; the delta between this and the OpenStack architecture is, to me, full of sad might-have-beens of what OpenStack had the potential to be, but is not and is unlikely to become. (Then again, service providers have the advantage of engineering to a precisely-controlled environment. OpenStack, and for that matter, VMware, need to run on whatever junk the customer decides to use, instantly making the problem more complex.)

Unfortunately, the question at this stage is: Will anybody care?

Yes, I think this is an important development in the market, and the fact that Verizon is already a credible cloud player in the enterprise, with an entrenched base in the Terremark Enterprise Cloud, will help it. But in a world where developers control most IaaS purchasing, the bare-bones nature of the new Verizon offering means that it falls short of fulfilling the developer desire for greater productivity. In order to find a broader audience, Verizon will need to commit to developing all the richness of value-added capabilities that the market leaders will need — which likely means going after the PaaS market with the same degree of ambition, innovation, and investment, but certainly means committing to rapidly introducing complementing capabilities and bringing a rich ecosystem in the form of a software marketplace and other partnerships. Verizon needs to take advantage of its shiny new IaaS building blocks to rapidly introduce additional capabilities — much like Microsoft is now rapidly introducing new capabilities into Azure.

With that, assuming that this platform performs as designed, and Verizon can continue to treat Terremark’s cloud folks like they belong to a fast-moving start-up and not an ossified pipe provider, Verizon may have a shot at being one of the leaders in this market. Without that, the Verizon Cloud is likely to be relegated to a niche, just like every other provider whose capabilities stop at the level of offering infrastructure resources.


From: Amazon.com Announces Third Quarter Sales up 24% to $17.09 Billion [press release, Oct 24, 2013]

  • Amazon Web Services (AWS) introduced more than 15 new features and enhancements to its fully managed relational and NoSQL database services. Amazon Relational Database Service (RDS) now supports Oracle Statspack performance diagnostics and has expanded MySQL support, including capabilities for zero downtime data migration. Enhancements to Amazon DynamoDB include new cross-region support, a local test tool, and location-based query capabilities.
  • AWS continued to bolster its management services, making it easier to provision and manage more AWS resources with AWS CloudFormation and AWS OpsWorks, which both added support for Amazon Virtual Private Cloud (VPC). AWS also enhanced the AWS Console mobile app and introduced a new Command Line Interface. [An illustrative CloudFormation sketch follows this list.]
  • AWS continued to gain momentum in the public sector and now has more than 2,400 education institutions and 600 government agencies as customers, including recent new projects with customers such as the U.S. Federal Drug Administration.
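
[As a hedged illustration of the CloudFormation-plus-VPC item above, here is a minimal template fragment, expressed as a Python dict for brevity, that declares an EC2 instance inside a VPC subnet. The AMI and subnet IDs are placeholders and the template is trimmed to the essentials rather than copied from AWS documentation.]

import json

# Minimal CloudFormation template: one EC2 instance placed into a VPC subnet.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "WebServer": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "ImageId": "ami-12345678",      # placeholder AMI ID
                "InstanceType": "m1.small",
                "SubnetId": "subnet-12345678"   # placeholder subnet inside the VPC
            }
        }
    }
}
print(json.dumps(template, indent=2))           # this JSON is what CloudFormation consumes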

THE JULY PRICE CUT

From Amazon.com Announces Second Quarter Sales up 22% to $15.70 Billion [press release, July 25, 2013]

  • AWS announced it had lowered prices by up to 80% on Amazon EC2 Dedicated Instances, instances that run on single-tenant hardware dedicated to a single customer account. In addition, AWS lowered prices on Amazon RDS instances with On-Demand price reductions of up to 28% and Reserved Instance (RI) price reductions of up to 27%.
  • Amazon Web Services (AWS) became the first major cloud provider to achieve FedRAMP Compliance which recognizes the ability of AWS to meet extensive security requirements and compliance mandates for running sensitive US government applications and protecting data. FedRAMP certification simplifies and speeds the ability for government agencies to evaluate and adopt AWS for a wide range of applications and workloads.
  • AWS announced the launch of the AWS Certification Program, which recognizes IT professionals that possess the skills and technical knowledge necessary for building and maintaining applications and services on the AWS Cloud. AWS Certifications help organizations identify candidates and consultants who are proficient at architecting and developing for the cloud.
  • AWS further enhanced its security and identity management capabilities across several services – introducing resource-level permissions for Amazon Elastic Compute Cloud (EC2) and Amazon Relational Database Service (RDS), adding identity federation to AWS Identity and Access Management (IAM), extending Amazon Simple Storage Service (S3) Server Side Encryption support to Amazon Elastic Map Reduce (EMR), and adding custom SSL certificate support for CloudFront. These enhancements give customers more granular security controls over their AWS deployments, applications and sensitive data.
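
[The resource-level permissions mentioned in the last item can be illustrated with a small IAM policy. The sketch below, a Python dict printed as JSON, allows starting and stopping only one specific EC2 instance; the account and instance IDs are placeholders, not taken from the release.]

import json

# Resource-level EC2 permission: only this one instance may be started or stopped.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["ec2:StartInstances", "ec2:StopInstances"],
        "Resource": "arn:aws:ec2:us-east-1:123456789012:instance/i-0abc1234"
    }]
}
print(json.dumps(policy, indent=2))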

Some directly related and general/major previous press releases from that overall list:

$199 Kindle Fire: Android 2.3 with specific UI layer and cloud services

Follow-up: Kindle Fire with its $200 price pushing everybody up, down or out of the Android tablet market [Dec 8, 2011]

Suggested preliminary reading (although the 7″ Kindle Fire has an IPS screen, the 10″ coming in 2012 may have the FFS?):  Amazon Tablet PC with E Ink Holdings’ Hydis FFS screen [May 3, 2011]

Updates: Chimei Innolux to Supply Panels to 2nd-Gen. Kindle Fire [Dec 21, 2011]

Chimei Innolux Corp., the largest maker of thin film transistor-liquid crystal display (TFT-LCD) panels in Taiwan, recently won Amazon's order for panels used in its Kindle Fire second-generation tablet PCs.

The company is already a panel supplier to Apple's iPad 2, and the new order from Kindle Fire would further consolidate Chimei Innolux's leading position in Taiwan in supplying tablet-use panels.

Industry sources said that tablet-PC panels are one of the few panel models still generating profits for panel suppliers, so the new order is expected to have positive effects on Chimei Innolux's operation.

The first-generation Kindle Fire was contract assembled by local Quanta Computer Inc. using panels supplied by Korean company LG Display and Taiwanese maker E Ink Holdings Inc. (formerly known as Prime View International Co., Ltd., who contracted local Chunghwa Picture Tubes, Ltd., or CPT to produce the panels).

Hon Hai Group of Taiwan reportedly won the contract-assembly order for the second-generation Kindle Fire, allowing its affiliate Chimei Innolux to supply the panels.

Data compiled by market research firm iSuppli showed that Chimei Innolux ranked as the world's No. 3 supplier of tablet-PC panels, trailing only LG Display and Samsung. With the new order from Amazon, Chimei Innolux's market share is expected to rise further, industry sources said.

Jeff Bezos Owns the Web in More Ways Than You Think [Wired, Nov 13, 2011]

Bezos doesn’t consider the Fire a mere device, preferring to call it a “media service.” While he takes pride in the Fire, he really sees it as an advanced mobile portal to Amazon’s cloud universe. That’s how Amazon has always treated the Kindle: New models simply offer improved ways of buying and reading the content. Replacing the hardware is no more complicated or emotionally involved than changing a flashlight battery.

Competing Visions

The Kindle Fire isn’t just a rival to the iPad. It represents an alternate model of computing: It’s Apple’s post-PC vs. Amazon’s post-web.

Apple: Post-PC | Amazon: Post-Web
Device-centric | Cloud-centric
Own the OS | Forget the OS
Specialized apps | Specialized browser
Hardware is king | Content is king
Downloaded media | Streamed media

How Amazon Powers the Internet

It began as a way for Amazon’s engineers to work together efficiently. Now Amazon Web Services hosts some of the most popular sites on the web and is responsible for a significant amount of the world’s online traffic. Here’s a look at some of the companies that rely on Amazon’s cloud computing platform.

Customer | What it uses Amazon Web Services for
Foursquare | 3 million check-ins a day
Harvard Medical School | Vast database for developing genome-analysis models
NASA Jet Propulsion Lab | Processing of hi-res satellite images to help guide its robots
Netflix | Video streaming service that accounts for 25% of US Internet traffic
Newsweek/The Daily Beast | 1 million pageviews every hour
PBS | More than 1 petabyte of streaming video a month
SmugMug | Storage for 70 million photos
US Department of Agriculture | Geographic information system for food-stamp recipients
Virgin Atlantic | Crowdsourced travel review service
Yelp | Data storage for its 22 million-plus reviews

Levy: You’ve leveraged Amazon Web Services by making use of it in your new Silk browser. Why?

Bezos: One of the things that makes mobile web browsing slow is the fact that the average website pulls content from 13 different places on the Internet. On a mobile device, even with a good Wi-Fi connection, each round trip is typically 100 milliseconds or more. Some of that can be done in parallel, but you typically have a whole bunch, as many as eight or more round trips that each take 100 milliseconds. That adds up. We’ve broken apart this process. If you can be clever enough to move the computation onto our cloud platform, you get these huge computational resources. Our cloud services are really fast. What takes 100 milliseconds on Wi-Fi takes less than 5 milliseconds on Amazon’s Elastic Compute Cloud. So by moving some of the computation onto that cloud, we can accelerate a lot of what makes mobile web browsing slow.
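
[A quick back-of-the-envelope check of the argument Bezos makes above, using the rough figures from the quote – eight sequential round trips at roughly 100 ms each over Wi-Fi versus under 5 ms inside EC2. The numbers are illustrative, not measurements.]

# Rough latency arithmetic: device-only fetching vs. Silk-style offloading.
ROUND_TRIPS = 8        # sequential fetches a typical page needs (per the quote)
WIFI_RTT_MS = 100      # round trip from the device over Wi-Fi
EC2_RTT_MS = 5         # round trip between servers inside Amazon's cloud

device_only = ROUND_TRIPS * WIFI_RTT_MS               # every fetch goes over Wi-Fi
offloaded = ROUND_TRIPS * EC2_RTT_MS + WIFI_RTT_MS    # fetch in EC2, one hop back to the device
print(device_only, "ms vs", offloaded, "ms")          # 800 ms vs 140 ms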

Levy: Was it difficult to turn yourself from a retail company into a consumer electronics company?

Bezos: It’s not as different as you might think. A lot of our original approaches and techniques carried over very well. For example, we’ve always focused on reducing the time between order and delivery. In hardware, it’s the same principle. An example is the time between when we take delivery on a processor to when it’s being used in a device by a customer. That’s waste. Why would we own a processor that’s supposed to go into a Kindle Fire that’s not actually in a customer’s hands? That’s inventory management.

Levy: By the way, how many Kindles have you sold?

[Bezos gives a long, loud example of his famous laugh.]

Levy: You don’t even answer!

Bezos: I know you don’t expect me to.

Levy: For years you’ve been touting e-ink as superior to a backlit device for reading. But the Fire is backlit. Why should Kindle users switch?

Bezos: They should buy both. When you’re reading long-form, there’s no comparison. You want the e-ink. But you can’t watch a movie with that. And you can’t play Android games. And so on.

Levy: And you now are selling a new version of the basic Kindle for $79. At this point, why not give it away—offer a deal where if people buy a certain amount of books, they get a free Kindle?

Bezos: It’s an interesting marketing idea, and we should think about it over time. But $79 is low enough that it’s not a big deal for many people.

Levy: Speaking of pricing, I wanted to ask about your decision to include streaming video as part of Amazon Prime. Why not charge separately for that? It’s a completely different service, isn’t it?

Bezos: There are two ways to build a successful company. One is to work very, very hard to convince customers to pay high margins. The other is to work very, very hard to be able to afford to offer customers low margins. They both work. We’re firmly in the second camp. It’s difficult—you have to eliminate defects and be very efficient. But it’s also a point of view. We’d rather have a very large customer base and low margins than a smaller customer base and higher margins.

Media Powerhouse

Amazon has stealthily become a major player in the competitive content business, with a major footprint in every medium. Meanwhile, its web services division owns one-fifth of the cloud computing market.

Amazon increases Kindle Fire orders [Nov 10, 2011]

Amazon has recently increased its Kindle Fire orders to more than five million units before the end of 2011 as pre-orders for the machine remain strong, according to sources from upstream component suppliers.

Amazon already raised its order volume once in the middle of the third quarter, up from 3.5 million units originally to four million units.

Since the company estimates that demand for Kindle Fire will become even stronger at the end of 2011, Amazon has further increased its orders. Amazon’s upstream partners including Wintek, Chunghwa Picture Tubes (CPT), LG Display, Ilitek, Quanta Computer, Aces Connectors and Wah Hong Industrial will all benefit from the short-term orders.

UMC Becomes Exclusive Supplier of Kindle Fire’s Processors [Nov 10, 2011]

Benefitting from the launch of Amazon’s tablet PC Kindle Fire, Taiwan-based United Microelectronics Corp. (UMC), one of world’s largest semiconductor foundries, has landed orders from Texas Instruments to exclusively supply ARM processors for the devices, becoming part of Amazon’s supply chain.

With some 215,000 Kindle Fire tablets sold in the first week of launch, the device, ranked in the top-10 gifts for Christmas, is regarded as the biggest challenger to the Apple iPad. Optimistic about its constantly growing popularity, market researchers have also raised fourth-quarter sales projections for the Kindle Fire to 5 million units.

Hot sales of the Kindle Fire bode well for UMC, as the Taiwanese company is to exclusively supply Texas Instruments’ OMAP4430 made on the 45-nanometer process. The OMAP4430 is a dual-core 1GHz processor based on ARM architecture, and is widely adopted in a variety of smartphones and tablet PCs, including Motorola’s Droid 3 and Droid RAZR, Fujitsu-Toshiba’s Arrows Z, Panasonic’s Lumix and Toshiba’s Regza.

UMC’s business ties with Texas Instruments have increasingly grown recently, reflected in the influx of orders for the new OMAP4 series processors, contrasted against TI’s erstwhile reliance on mainly Korea’s Samsung Electronics for its older OMAP3 series processors.

Industry insiders indicated that UMC’s capacity utilization rate at the 12-inch wafer foundry will improve significantly in the fourth quarter, thanks to TI’s increasing orders.

Amazon.com Management Discusses Q3 2011 Results – Earnings Call Transcript – Q&A – Seeking Alpha [Oct 25, 2011]
HEAVY Amazon investments into the future:

We’re seeing the best growth which we’ve seen since 2000, meaning in 2010 and so far over the past 12 months ending September.

1. And so with this strong growth, we’re investing in a lot of capacity … we had announced 15 new fulfillment centers this year that’s on a basis of 52 from last year. And then we’d likely open one or two more. We are actually going to be opening 17 new fulfillment centers. …

2. We’re investing to support retail growth fulfilled by Amazon growth, fast-growing AWS business, as well as infrastructure to support our retail business.

3. We’re investing in our Kindle and Digital business. … if you take a look at our Kindle business, for example, we’ve launched 4 new products at the end of September, and we’re very, very excited about those products. They’re at great prices, and they are certainly premium products. And so we’re very excited about those. And we think about the economics of the Kindle business, we think about the totality. We think of the lifetime value of those devices. So we’re not just thinking about the economics of the device and the accessories. We’re thinking about the content. We are selling quite a bit of Special Offers devices which includes ads. We’re thinking about the advertisement and those Special Offers and those lifetime values.

Because according to Amazon.com Management Discusses Q3 2011 Results – Earnings Call Transcript [Oct 25, 2011]:

North America segment operating income decreased 23% to $144 million, a 2.4% operating margin. … Consolidated segment operating income decreased 35% to $260 million or 2.4% of revenue down approximately 290 basis points year-over-year. … For Q4 2011 … We anticipate consolidated segment operating income, which excludes stock-based compensation and other operating expense, to be between $0 and $450 million or between 100% decline and 28% decline.

End of Updates

Amazon Kindle Fire Official Presentation [Sept 28, 2011]

Check out the official presentation of the new Amazon Kindle Fire tablet.

Kindle Fire [product site]


Fast, Dual-Core Processor [1GHz TI OMAP 4, 512MB RAM]

Kindle Fire features a state-of-the-art dual-core processor for fast, powerful performance. Stream music while browsing the web or read books while downloading videos.

Amazon Whispersync

Like Kindle e-readers, Kindle Fire uses Amazon’s Whispersync technology to automatically sync your library, last page read, bookmarks, notes, and highlights across your devices. On Kindle Fire, Whispersync extends to video. Start streaming a movie on Kindle Fire, then pick up right where you left off on your TV – avoid the frustration of having to find your spot. Learn more
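
[As a rough sketch of what a Whispersync-style “resume where you left off” sync could look like, the snippet below keeps the most recently updated position and converges every device to it. This is a guess at one plausible policy, not Amazon’s actual protocol.]

from dataclasses import dataclass

@dataclass
class Position:
    location: int      # book location, or seconds into a video
    updated_at: float  # Unix timestamp of the last local update

def merge(cloud: Position, device: Position) -> Position:
    """Return the position all devices should converge on (latest update wins)."""
    return device if device.updated_at > cloud.updated_at else cloud

# The tablet paused a movie at 1310 s after the TV had stopped at 1250 s,
# so every device resumes at 1310 s.
print(merge(Position(1250, 1700000000.0), Position(1310, 1700000050.0)))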

Free Month of Amazon Prime

Experience the benefits that millions of Amazon Prime members already enjoy, including unlimited, instant streaming of over 10,000 popular movies and TV shows and Free Two-Day Shipping on millions of items. Learn more

Technical Details

Display 7″ multi-touch display with IPS (in-plane switching) technology and anti-reflective treatment, 1024 x 600 pixel resolution at 169 ppi, 16 million colors.
Size (in inches) 7.5″ x 4.7″ x 0.45″ (190 mm x 120 mm x 11.4 mm).
Weight 14.6 ounces (413 grams).
System Requirements None, because it’s wireless and doesn’t require a computer.
On-device Storage 8GB internal. That’s enough for 80 apps, plus either 10 movies or 800 songs or 6,000 books.
Cloud Storage Free cloud storage for all Amazon content
Battery Life Up to 8 hours of continuous reading or 7.5 hours of video playback, with wireless off. Battery life will vary based on wireless usage, such as web browsing and downloading content.
Charge Time Fully charges in approximately 4 hours via included U.S. power adapter. Also supports charging from your computer via USB.
Wi-Fi Connectivity Supports public and private Wi-Fi networks or hotspots that use the 802.11b, 802.11g, 802.11n, or 802.1X standard with support for WEP, WPA and WPA2 security using password authentication; does not support connecting to ad-hoc (or peer-to-peer) Wi-Fi networks.
USB Port USB 2.0 (micro-B connector)
Audio 3.5 mm stereo audio jack, top-mounted stereo speakers.
Content Formats Supported Kindle (AZW), TXT, PDF, unprotected MOBI, PRC natively, Audible (Audible Enhanced (AA, AAX)), DOC, DOCX, JPEG, GIF, PNG, BMP, non-DRM AAC, MP3, MIDI, OGG, WAV, MP4, VP8.
Documentation Quick Start Guide(included in box); Kindle User’s Guide (pre-installed on device)
Warranty and Service 1-year limited warranty and service included. Optional 2-year Extended Warranty available for U.S. customers sold separately. Use of Kindle is subject to the terms found here.
Included in the Box Kindle Fire device, U.S. power adapter (supports 100-240V), and Quick Start Guide.

Amazon launches Kindle Fire [The Telegraph, Sept 28, 2011]

Amazon chief executive Jeff Bezos shows off the Kindle Fire, a tablet device designed to build on the success of the company’s e-reader and to challenge the dominance of Apple’s iPad.

… Decked out in jeans, white shirt and a jacket, Amazon’s founder and chief executive, Jeff Bezos, told an audience in New York that “this is unbelievable value. What we’re doing is making premium products and offering them at non-premium prices.”

Mr Bezos also claimed that the ability of Amazon to store all the content users download on the internet will prove a key selling point. “All of the content on this device is backed up on the cloud,” said Mr Bezos. “The model where you have to back up your own content is a broken model.”

Live from the Amazon Kindle Fire Launch [Mashable, Sept 28, 2011]

Mashable gets up close with Amazon’s new Kindle Fire tablet at the official unveiling event in New York City.

The Fire’s interface bears no resemblance to any Android tablet (or phone) on the market. Its home screen looks like a bookshelf, with access to recently accessed content and Apps (books, movies and music) and another shelf to pin favorites or frequently used items. At the top of the screen is search and menu access to Newsstand (for magazines), books, music, movies, apps and docs.

… There are no ports to connect the Fire to your HDTV, but if you have a device that supports Amazon Prime connected to your TV, you can switch from watching a movie on the Fire to your TV. Whispersync will ensure that the movie starts just where you left off.

… The biggest innovation of all may be Amazon Silk, the company’s home-grown browser that uses the power of Amazon’s own cloud servers to offload Web page building duties. It can even, Amazon promised, prefetch the next page it thinks you’ll view.

Kindle Fire Tablet: The 3 Biggest Disappointments [Sept 29, 2011]

… the Kindle Fire lacks three really important features that a tablet needs to have.

#1. No memory expansion. There are no memory card slots, and no USB host (it has a mini USB port for transferring files). No matter what, you are stuck with the 8GB of storage that it comes with. Sure, the Kindle Fire comes with free cloud storage, but that only applies to Amazon’s content.

#2. No HDMI port. I can’t believe the Kindle Fire, with its access to 100,000 movies and TV shows, doesn’t have an HDMI port. Even crappy sub-$150 tablets like the Pandigital Star have an HDMI out port for connecting to a TV.

#3. The Kindle Fire runs on Android 2.3 Gingerbread, but it is closed off. It’s not like a regular open Android tablet with a customizable homescreen, widgets, Android Market, or any of that. It has Amazon’s customized interface and the Amazon appstore. The Kindle Fire may run Android but it is an Amazon tablet, not an Android tablet (hackers will fix that in about 2 days after its release).

Don’t get me wrong, the Kindle Fire is a good starter tablet for Amazon. It has a lot of nice features, especially the IPS screen and dual-core processor, and will compete with the Nook Color very well, but it certainly isn’t breaking any new ground in the tablet world.

Amazon: The Kindle Fire Will Get Rooted [Sept 28, 2011]

Amazon’s new Kindle Fire tablet has a great user interface, but many of our readers already want to get rid of it. That’s OK. Amazon isn’t doing anything special to prevent techies from “rooting” and rewriting the software on its powerful yet inexpensive new tablet, Jon Jenkins, director of Amazon’s Silk browser project, said.

“It’s going to get rooted, and what you do after you root it is up to you,” Jenkins said.

(Curious about rooting? Check out our Concise Guide to Android Rooting, which explains what the fuss is about.)

Amazon’s Kindle Fire is powered by the cloud [GigaOM, Sept 28, 2011]

The Kindle Fire also taps into Amazon’s cloud infrastructure to offer free cloud storage and backup of all content, so users don’t have to worry about irrevocably deleting something from local storage. And there’s also simple wireless syncing and integration of Amazon’s Whispersync technology in movies and TV shows, so users can keep their places in videos when they switch from one device to another.

Amazon has built its own interface layer that hides the Android underpinnings. It’s an approach that Barnes & Noble also undertook with its Nook Color. The interface on the Fire looks great and seems extremely snappy. Users get a search bar at the top and then a selection of books, music, video, docs, apps and the web. There’s a carousel of recently added content and then a shelf for favorites.

UPDATE: Here are some more details on the Kindle Fire. It will ship with its own email application that supports IMAP and POP3, but the Fire will rely on third-party apps to provide Exchange support for email. The device will also ship with contacts, shopping and gallery apps but no calendar app. Users will be able to sideload their own content, including photos and videos, with most of the popular formats accepted.

Amazon will go through its Appstore for Android, which has more than 15,000 apps, and filter out those apps that won’t work on the Kindle Fire for users who visit the store from a Kindle Fire. The company is approaching app developers to build new apps and optimize existing titles for the Kindle Fire, but it’s not putting out its own SDK. Instead it will encourage them to use Google’s existing tools. Amazon has started talks with Twitter, Facebook, Pandora and Netflix to optimize apps for Kindle Fire, but it’s too early to say what will happen.

Kindle Fire Live Demo [Sept 28, 2011]

A very detailed 4:39 long demo video with a lot of details.

Introducing Amazon Silk [Amazon Silk blog, Sept 28, 2011]

Today in New York, Amazon introduced Silk, an all-new web browser powered by Amazon Web Services (AWS) and available exclusively on the just announced Kindle Fire. You might be asking, “A browser? Do we really need another one?” As you’ll see in the video below, Silk isn’t just another browser. We sought from the start to tap into the power and capabilities of the AWS infrastructure to overcome the limitations of typical mobile browsers. Instead of a device-siloed software application, Amazon Silk deploys a split architecture. All of the browser subsystems are present on your Kindle Fire as well as on the AWS cloud computing platform. Each time you load a web page, Silk makes a dynamic decision about which of these subsystems will run locally and which will execute remotely. In short, Amazon Silk extends the boundaries of the browser, coupling the capabilities and interactivity of your local device with the massive computing power, memory, and network connectivity of our cloud.
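
[A minimal sketch of the per-page-load decision the split architecture implies: for each browser subsystem, choose whether to run it on the device or in the AWS cloud based on the factors Amazon names (network conditions, page complexity, cached content). The heuristic and thresholds below are invented for illustration and are in no way Silk’s real decision logic.]

def place_subsystem(rtt_ms, page_complexity, cached_in_cloud):
    """Return 'cloud' or 'device' for one browser subsystem on this page load."""
    if cached_in_cloud:
        return "cloud"                        # the cloud already holds the assets
    if rtt_ms > 80 and page_complexity > 5:   # slow link and heavy page: offload
        return "cloud"
    return "device"                           # otherwise keep the work local

for subsystem in ("networking", "html-parsing", "rendering"):
    print(subsystem, "->", place_subsystem(rtt_ms=120, page_complexity=8, cached_in_cloud=False))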

We’ll have a lot more to say about Amazon Silk in the coming weeks and months, so please check back with us often.  You can also follow us on Twitter at @AmazonSilk.  Finally, if you’re interested in learning more about career opportunities on the Amazon Silk team, please visit our jobs page.

Amazon Silk—Amazon’s Revolutionary Cloud-Accelerated Web Browser [Kindle, Sept 28, 2011]

The web browser on Kindle Fire introduces a radical new paradigm — a “split browser” architecture that accelerates the power of the mobile device hardware by using the computing speed and power of the Amazon Web Services Cloud. The result is a faster web browsing experience, and it’s available exclusively on Kindle Fire.

Amazon Silk: Bridging the gap between desktop and mobile web browsers [ExtremeTech article, Sept 28, 2011]

… Silk is WebKit-based, uses Google’s SPDY HTTP-replacement protocol, supports Flash 10 — and no, despite what it sounds like, Silk is not comparable to Opera Mini.

If you’ve used Opera Mini — an existing browser that you can use on almost every phone platform — Amazon Silk certainly sounds similar, but it’s important to note that Silk does not send out images of the content; all of the assets arrive on your Kindle Fire tablet, so you get a full browsing experience. With regards to video content, we are told that Amazon Silk doesn’t transcode content — but presumably the dual-core processor in the Kindle Fire and Flash support is enough to handle most YouTube videos.

By leveraging EC2 and S3, Amazon can also do a few other clever things with Silk. For a start, Amazon can cache static files in the cloud — images, CSS, JavaScript — further speeding up page load times on the Kindle Fire. Amazon says that EC2 keeps permanent connections open to popular sites like Facebook and Google, too, reducing latency by a few more milliseconds — and if that wasn’t enough, Amazon EC2 will also use predictive algorithms to pre-download the link that it thinks you will click next. Finally, the use of SPDY instead of HTTP between Kindle Fire and EC2 should result in Silk being much, much faster than comparable Android or iOS browsers.
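
[The predictive pre-fetch idea mentioned above can be illustrated with a toy aggregate-click model: count which link users most often follow from a page and warm the cache for that one. The data and approach are invented for illustration; Amazon has not published Silk’s actual algorithm.]

from collections import Counter

# Aggregate click history: from a given page, how often each next link was followed.
click_history = {
    "news.example.com/home": Counter({"news.example.com/sports": 40,
                                      "news.example.com/world": 25}),
}

def prefetch_candidate(current_url):
    """Return the most frequently followed next link, if any, to pre-fetch."""
    follows = click_history.get(current_url)
    return follows.most_common(1)[0][0] if follows else None

print(prefetch_candidate("news.example.com/home"))   # news.example.com/sports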

With regards to privacy, because all of your web requests will go through the cloud, your surfing will effectively be fully anonymous — target websites will see Amazon’s IP addresses, not yours. If you’re worried about Amazon sniffing your data, though, you can turn off “EC2 acceleration” in the browser’s settings.

All in all, then, Amazon Silk will be faster than the competition, it will save everyone (except Amazon) bandwidth costs, and it will even provide a little more security. One important fact is unknown, though: what version of WebKit is Amazon Silk using? Is it closer to desktop versions of Chrome and Safari, or is it like Android 2.3’s stock browser? Has Amazon designed the Kindle Fire to be a first-rate device for HTML5 web apps, or merely a content-consumption machine? We probably won’t find out until we receive a review unit for some real hands-on testing and benchmarks — which will hopefully be in the next few weeks.

Opera: Amazon’s Silk Browser is Flattering, But Five Years Late [Sept 28, 2011]

According to Mahi de Silva, executive vice president for Consumer Mobile at Opera Software ASA, however, the concept of rendering a complex Web page in the cloud and sending an optimized version down to the client is already in several Opera products today. Opera Mini applies compression to most interactions with the Internet while on a mobile device, and Opera Mobile refines this for the Web. Opera’s desktop browser also has a “turbo mode” that allows the optimization to take place on the desktop, as well.

In all, Opera already does the sort of cloud optimization that Amazon Silk claims to do, de Silva said. OnLive’s Steve Perlman, who runs a cloud gaming service, has also talked about how easy it would be to provide a cloud-based browser, given the fact that it can push an entire remotely-rendered video game down to the client. However, de Silva endorsed the Silk concept.

“It’s very helpful for the consumer because you get a snappier, consistent quality, and also a less expensive experience,” as well as a boon to operators to reduce their own network congestion, de Silva said.

“We’re very flattered that Amazon chose to replicate something that we’ve had in the marketplace for a long time,” de Silva added. “It’s a good reflection of sort of that value proposition of having cloud-based browsing solutions, and also having the ability to switch full featured version – for example, [within Opera] if you want to support full HTML 5 interaction, Javascript, and Flash, you’re in a native browsing mode, but if you don’t encounter a lot of that content, you can be in [an optimized] browsing mode, and you can overlay that to some extent.”

“We’ve been doing this in mobile for five years as a key feature, and with the Opera browser, even longer,” de Silva said.

The performance of Silk is helped by the fact that users normally have to wait for a browser to connect to and download dozens of Web objects, many of them relying on different domains, Amazon engineers said. The portion of the Amazon Silk browser that lies on the Amazon EC2 infrastructure can quickly negotiate and fetch those objects, connecting to the Web through Amazon’s “fat pipes”. Those who wish can also surf in “off-cloud” mode, somewhat anonymizing the experience.

“I’m sure you’ve had the experience, where you’re on a page, and you’re hanging, and you’re saying, I wish I was on a better network,” said Peter Voshall, a distinguished engineer for Amazon. “We’re on a better network. Our back end has some of the fattest pipes you’ll ever find, and we do all the heavy lifting on the back end.”

Still, de Silva said it was doubtful that users will ever see a marked difference in performance between Opera’s implementation and what Amazon offers, based on its infrastructure connections alone. Opera also caches data that’s frequently accessed by many users in a content delivery network (CDN) close by, so that all of Opera’s users don’t have to ping cnn.com to constantly download the logo graphic.

De Silva called Silk a “smart move” for Amazon, one that will provide an always-on, connected experience. Consumers will have to decide for themselves what the effect of Silk will be on their browsing experience, and whether or not it will differentiate it from other manufacturers.

“Over 200 million unique users per month use this,” de Silva said of the Opera cloud browser technology. “Will Amazon ship 200 million devices anytime soon? Probably not.”

Introducing the All-New Kindle Family: Four New Kindles, Four Amazing Price Points [Amazon press release, Sept 28, 2011]

  • New latest generation Kindle – world’s bestselling e-reader now lighter, faster, and more affordable than ever – only $79
  • New “Kindle Touch” with easy-to-use touch screen – only $99
  • New “Kindle Touch 3G” with free 3G – the top of the line Kindle e-reader – only $149
  • New “Kindle Fire” – the Kindle for movies, TV shows, music, books, magazines, apps, games, and web browsing with all the content, free storage in the Amazon Cloud, Whispersync, Amazon’s new revolutionary cloud-accelerated web browser, vibrant color touch screen, and powerful dual-core processor – all for only $199

… and Kindle Fire – a new class of Kindle that brings the same ease-of-use and deep integration of content that helped Kindle re-invent reading – to movies, TV shows, music, magazines, apps, books, games, and more.

… said Jeff Bezos, Amazon.com Founder and CEO. “Kindle Fire brings together all of the things we’ve been working on at Amazon for over 15 years into a single, fully-integrated service for customers. With Kindle Fire, you have instant access to all the content, free storage in the Amazon Cloud, the convenience of Amazon Whispersync, our revolutionary cloud-accelerated web browser, the speed and power of a state-of-the-art dual-core processor, a vibrant touch display with 16 million colors in high resolution, and a light 14.6 ounce design that’s easy to hold with one hand – all for only $199. We’re offering premium products, and we’re doing it at non-premium prices.”

New Class of Kindle–“Kindle Fire”–Only $199

All The Content–Over 18 Million Movies, TV Shows, Songs, Apps, Games, Books, and Magazines

Kindle Fire puts Amazon’s incredible selection of digital content at your fingertips:

  • Over 100,000 movies and TV shows from Amazon Instant Video, including thousands of new releases and popular TV shows, available to stream or download, purchase or rent – all just one tap away. Amazon Prime Members enjoy instant, unlimited, commercial-free streaming of over 11,000 movies and TV shows at no additional cost. Kindle Fire comes with one free month of Amazon Prime.
  • Over 17,000,000 songs from Amazon MP3, including new and bestselling albums from just $7.99 and individual songs from $0.69.
  • Over 1,000,000 Kindle books, including thousands of bestsellers, children’s books, comic books and cookbooks in rich color.
  • 100 exclusive graphic novels, including Watchmen, the bestselling – and considered by many to be the greatest – graphic novel of all time, which has never before been available in digital format, as well as Batman: Arkham City, Superman: Earth One, Green Lantern: Secret Origin, and 96 others from DC Entertainment.
  • Hundreds of magazines and newspapers – including The Wall Street Journal, The New York Times, USA Today, Wired, Elle, The New Yorker, Cosmopolitan and Martha Stewart Living – with full-color layouts, photographs, illustrations, built-in video, audio and other interactive features are available from the new Kindle Fire “Newsstand.” Kindle Fire customers will enjoy an exclusive free three-month trial to 17 Condé Nast magazines, including Vanity Fair, GQ and Glamour.
  • All the most popular Android apps and games, such as Angry Birds, Plants vs. Zombies, Cut the Rope and more. All apps are Amazon-tested on Kindle Fire to ensure quality and Amazon offers a new free paid app every day.

Cloud-Accelerated Web Browser – “Amazon Silk

The Kindle Fire web browser Amazon Silk introduces a radical new paradigm – a “split browser” architecture that accelerates the power of the mobile device hardware by using the computing speed and power of the Amazon Web Services Cloud. The Silk browser software resides both on Kindle Fire and on the massive server fleet that comprises the Amazon Elastic Compute Cloud (Amazon EC2). With each page request, Silk dynamically determines a division of labor between the mobile hardware and Amazon EC2 (i.e. which browser sub-components run where) that takes into consideration factors like network conditions, page complexity, and cached content. The result is a faster web browsing experience, and it’s available exclusively on Kindle Fire. Additional technical details are available in the Amazon Silk press release, released today at www.amazon.com/pr. To see a video about Amazon Silk go to www.amazon.com/silk.

Simple and Easy-To-Use

Amazon designed the Kindle Fire user interface from the ground up to make it easier than ever to purchase, manage, and enjoy your digital content. Just like with Kindle e-readers, Kindle Fire comes automatically pre-registered to your Amazon.com account so you can immediately start enjoying your digital content purchased from Amazon or shop for new content. All of your digital content is instantly available to enjoy and manage with a simple, consistent experience across all content types.

Free Cloud Storage

Just like Kindle e-readers, Kindle Fire offers free storage for all your Amazon digital content in the Amazon Cloud. Amazon digital content is automatically backed up for free in the Amazon Cloud’s Worry-Free Archive where it’s available for re-downloading anytime.

Amazon Whispersync Now for Movies & TV Too

Just like Kindle e-readers, Kindle Fire uses Amazon’s popular Whispersync technology to automatically synchronize your Kindle library, last page read, bookmarks, notes, and highlights across the widest range of devices and platforms. With the introduction of Kindle Fire, Amazon is expanding this technology to include video. Start streaming a movie on your Kindle Fire, and when you get home, you can resume streaming right where you left off on your TV – avoiding the frustration of needing to find your spot.

Easy to Hold in One Hand

Just like Kindle e-readers, Kindle Fire was designed to disappear so you can lose yourself in the content. Weighing in at just 14.6 ounces, Kindle Fire is small and light enough to hold in just one hand and carry everywhere you go. The lightweight, compact design makes Kindle Fire perfect for web browsing, playing games, reading and shopping on-the-go.

Brilliant Color Touchscreen

Content comes alive in rich color on a 7-inch full color LCD touchscreen that delivers 16 million colors in high resolution and 169 pixels per inch. Kindle Fire uses IPS (in-plane switching) technology – the same technology used on the iPad – for an extra-wide viewing angle, perfect for sharing your screen with others. In addition, the Kindle Fire display is chemically strengthened to be 20 times stiffer and 30 times harder than plastic, which means it is incredibly durable and will stand up to accidental bumps and scrapes.

Fast, Powerful Dual-Core Processor

Kindle Fire features a state-of-the-art dual-core processor for fast, powerful performance. Stream music while browsing the web or read books while downloading videos.

Free Month of Amazon Prime

Right out of the box, Kindle Fire users will experience the benefits that millions of Amazon Prime members already enjoy: unlimited, commercial-free, instant streaming of over 11,000 movies and TV shows with Prime Instant Video and the convenience of Free Two-Day Shipping on millions of items from Amazon.com.

Only $199

The all-new Kindle Fire – with all the content, Amazon’s revolutionary cloud-accelerated browser, free storage in the Amazon Cloud, Whispersync, a 14.6-ounce design that’s easy to hold with one hand, a brilliant color touchscreen, and a fast, powerful dual-core processor – is only $199. Customers in the U.S. can pre-order Kindle Fire starting today at www.amazon.com/kindlefire and it ships November 15.

For high resolution images and video of the all-new Kindle family, visit www.amazon.com/pr/kindle.

Introducing “Amazon Silk”: Amazon’s Revolutionary Cloud-Accelerated Web Browser, Available Exclusively on Kindle Fire [Amazon press release, Sept 28, 2011]

Amazon’s cloud computing infrastructure and eight years of cloud computing expertise come together in a new web browser for Kindle Fire – Amazon’s new Kindle for movies, music, books, magazines, apps, games, and web browsing

Amazon Silk introduces a radical new paradigm – a “split browser” architecture that accelerates the power of the mobile device hardware by using the computing speed and power of the Amazon Web Services cloud (AWS). The Silk browser software resides both on Kindle Fire and on the massive server fleet that comprises the Amazon Elastic Compute Cloud (Amazon EC2). With each page request, Silk dynamically determines a division of labor between the mobile hardware and Amazon EC2 (i.e. which browser sub-components run where) that takes into consideration factors like network conditions, page complexity and the location of any cached content. The result is a faster web browsing experience, and it’s available exclusively on Kindle Fire, Amazon’s new Kindle for movies, music, books, magazines, apps, games, and web browsing.
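
The per-request division-of-labor decision described above can be pictured with a small sketch. This is purely illustrative: Amazon has not published Silk’s actual heuristics, and every name and threshold below (PageRequest, decide_split, the 50 ms and 20-file cutoffs) is hypothetical.

```python
# Illustrative sketch of a split-browser placement decision; Silk's real
# heuristics are not public and every name/threshold here is hypothetical.
from dataclasses import dataclass

@dataclass
class PageRequest:
    rtt_ms: float           # measured round-trip time from the device to EC2
    resource_count: int     # how many files the page needs
    cached_fraction: float  # share of those already in the device cache

def decide_split(req: PageRequest) -> dict:
    """Assign each browser stage to 'device' or 'cloud' for this request."""
    # Slow links and complex pages favor fetching and assembling on EC2,
    # where backbone latency to origin servers is a few milliseconds.
    fetch_on_cloud = req.rtt_ms > 50 and req.resource_count > 20
    # Content already sitting in the local cache is cheapest to use on the device.
    use_local_cache = req.cached_fraction > 0.5
    return {
        "networking": "cloud" if fetch_on_cloud else "device",
        "resource_assembly": "cloud" if fetch_on_cloud else "device",
        "rendering": "device",  # pixels are always drawn on the device itself
        "cache_lookup": "device" if use_local_cache else "cloud",
    }

print(decide_split(PageRequest(rtt_ms=100, resource_count=80, cached_fraction=0.1)))
```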

“Kindle Fire introduces a revolutionary new web browser called Amazon Silk,” said Jeff Bezos, Amazon.com Founder and CEO. “We refactored and rebuilt the browser software stack and now push pieces of the computation into the AWS cloud. When you use Silk – without thinking about it or doing anything explicit – you’re calling on the raw computational horsepower of Amazon EC2 to accelerate your web browsing.”

Modern websites have become complex. For example, on a recent day, constructing the CNN.com home page required 161 files served from 25 unique domains. This degree of complexity is common. In fact, a typical web page requires 80 files served from 13 different domains. Latency over wireless connections is high – on the order of 100 milliseconds round trip. Serving a web page requires hundreds of such round trips, only some of which can be done in parallel. In aggregate, this adds seconds to page load times.

Conversely, Amazon EC2 is always connected to the backbone of the internet, where round-trip latency is 5 milliseconds or less to most web sites rather than the 100 milliseconds seen over wireless connections. In addition, EC2 servers have massive computational power. On EC2, available CPU, storage, and memory can be orders of magnitude larger than on mobile devices. Silk uses the power and speed of the EC2 server fleet to retrieve all of the components of a website and deliver them to Kindle Fire in a single, fast stream.
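
A quick back-of-envelope calculation shows why those latency figures matter. Only the round-trip times and file count come from the text; the parallel-connection factor is an illustrative assumption, not an Amazon number.

```python
# Back-of-envelope arithmetic for the figures quoted above; the parallelism
# factor is an illustrative assumption, not an Amazon number.
FILES_TYPICAL = 80          # files on a typical page
RTT_WIRELESS_MS = 100       # round trip over a wireless link
RTT_BACKBONE_MS = 5         # round trip from EC2 to most web sites
PARALLEL_CONNECTIONS = 6    # assumed concurrent fetches on a mobile browser

waves = FILES_TYPICAL / PARALLEL_CONNECTIONS  # ~13 sequential rounds of requests
print(f"device fetch latency: ~{waves * RTT_WIRELESS_MS / 1000:.1f} s")  # ~1.3 s
print(f"EC2 fetch latency:    ~{waves * RTT_BACKBONE_MS / 1000:.2f} s")  # ~0.07 s
```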

In addition to having more horsepower than a mobile processor, AWS has peering relationships with major internet service providers, and many top sites are hosted on EC2. This means that many web requests will never leave the extended infrastructure of AWS, reducing transit times to only a few milliseconds. Further, while processing and memory constraints lead most mobile browsers to limit the amount of work they attempt at any one time, using EC2 frees Silk from these constraints. If hundreds of files are required to build a web page across dozens of domains, Silk can request all of this content simultaneously with EC2, without overwhelming the mobile device processor or impacting battery life.
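
A minimal sketch of what that server-side fan-out might look like, assuming the cloud half of a split browser simply gathers every asset concurrently. This is standard-library Python for illustration, not Amazon’s code; the function names and worker count are invented.

```python
# Minimal sketch of server-side parallel asset fetching; illustrative only.
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

def fetch(url: str) -> bytes:
    """Fetch one asset; from EC2 this is typically a few-millisecond backbone hop."""
    with urlopen(url, timeout=5) as resp:
        return resp.read()

def gather_assets(urls: list[str]) -> dict[str, bytes]:
    # Hundreds of requests in flight at once are cheap for a server fleet,
    # while the same fan-out would tax a mobile CPU and drain its battery.
    with ThreadPoolExecutor(max_workers=50) as pool:
        return dict(zip(urls, pool.map(fetch, urls)))

if __name__ == "__main__":
    assets = gather_assets(["https://example.com/"])
    print({url: len(body) for url, body in assets.items()})
```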

Traditional browsers must wait to receive the HTML file in order to begin downloading the other page assets. Silk is different because it learns each page’s characteristics automatically by aggregating the results of millions of page loads and maintaining this knowledge on EC2. While another browser might still be setting up a connection with the host server, Silk has already pushed content that it knows is associated with the page to the Kindle Fire, before the site has even instructed the browser where to find it.

A typical web request begins with resolving the domain names associated with the server and establishing a TCP connection to issue the http request. Establishing TCP connections for each request consumes time and resources that slow down traditional browsers. Silk keeps a persistent connection open to EC2 so that there is always a connection at the ready to start loading the next page. Silk also uses EC2 to maintain a persistent connection to the top sites on the web. This approach reduces latency that would otherwise result from constantly establishing TCP connections. Further, Silk’s split architecture uses a pipelined, multiplexing protocol that can send all the content over a single connection.
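
The connection-reuse idea is easy to demonstrate with the standard library: pay for DNS resolution and the TCP handshake once, then issue later requests over the same socket. This is only an analogy for the persistent device-to-EC2 and EC2-to-origin channels; Silk’s actual pipelined, multiplexing protocol is not public.

```python
# Connection reuse over one persistent keep-alive link; an analogy for the
# always-open channels described above, not Silk's actual protocol.
from http.client import HTTPSConnection

conn = HTTPSConnection("example.com", timeout=5)   # DNS + TCP + TLS paid once
for path in ("/", "/"):
    conn.request("GET", path)                      # later requests skip that setup cost
    body = conn.getresponse().read()               # drain the response before reusing
    print(path, len(body), "bytes")
conn.close()
```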

Finally, Silk leverages the collaborative filtering techniques and machine learning algorithms Amazon has built over the last 15 years to power features such as “customers who bought this also bought…” As Silk serves up millions of page views every day, it learns more about the individual sites it renders and where users go next. By observing the aggregate traffic patterns on various web sites, it refines its heuristics, allowing for accurate predictions of the next page request. For example, Silk might observe that 85 percent of visitors to a leading news site next click on that site’s top headline. With that knowledge, EC2 and Silk together make intelligent decisions about pre-pushing content to the Kindle Fire. As a result, the next page a Kindle Fire customer is likely to visit will already be available locally in the device cache, enabling instant rendering to the screen.
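
At its simplest, the aggregate-prediction idea reduces to counting page-to-page transitions and pre-pushing the most likely next page once its share of traffic clears a threshold. The sketch below reuses the 85 percent headline example from the text; the function names and the 0.8 threshold are illustrative assumptions, not Amazon’s model.

```python
# Toy version of the aggregate traffic-pattern prediction described above;
# names, counts, and the 0.8 threshold are illustrative assumptions.
from collections import Counter, defaultdict
from typing import Optional

transitions: "defaultdict[str, Counter]" = defaultdict(Counter)

def observe(current: str, next_page: str) -> None:
    """Record one observed navigation from `current` to `next_page`."""
    transitions[current][next_page] += 1

def predict_prefetch(current: str, threshold: float = 0.8) -> Optional[str]:
    """Return the page worth pre-pushing, if one dominates the observed traffic."""
    counts = transitions[current]
    total = sum(counts.values())
    if not total:
        return None
    page, hits = counts.most_common(1)[0]
    return page if hits / total >= threshold else None

# e.g. 85 of 100 visitors to a news home page click its top headline next
for _ in range(85):
    observe("news-home", "top-headline")
for _ in range(15):
    observe("news-home", "sports")
print(predict_prefetch("news-home"))   # -> "top-headline"
```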

“Silk”

The name “Silk” is inspired by the idea that a thread of silk is an invisible yet incredibly strong connection between two different things. In the case of Amazon Silk, it’s the connection between the Kindle Fire and Amazon EC2 that creates a better, faster browsing experience. For more information on Amazon Silk, visit www.amazon.com/silk.

Exclusively on Kindle Fire

Silk is available exclusively on Kindle Fire. To pre-order Kindle Fire, visit www.amazon.com/kindlefire.

About Amazon Web Services

Launched in 2006, Amazon Web Services (AWS) provides Amazon’s developer customers with access to in-the-cloud infrastructure services based on Amazon’s own back-end technology platform, which developers can use to enable virtually any type of business. As one of the world’s most reliable, scalable, and cost-efficient web infrastructures, AWS has changed the way businesses think about technology infrastructure – there are no up-front expenses or long-term commitments, capital expense is turned into variable operating expense, resources can be added or shed as quickly as needed, and engineering resources are freed up from the undifferentiated heavy lifting of running onsite infrastructure – all without sacrificing operational performance, reliability, or security. AWS now offers over 21 different services, including Amazon Elastic Compute Cloud (EC2), Amazon Simple Storage Service (Amazon S3), and Amazon SimpleDB. AWS services are used by hundreds of thousands of enterprise, government, and startup customers in more than 190 countries around the world, powering everything from the most popular games on Facebook to NASA’s Mars Rover project to pharmaceutical drug research.


Amazon increases Kindle Fire orders


How Amazon Powers the Internet

It began as a way for Amazon’s engineers to work together efficiently. Now Amazon Web Services hosts some of the most popular sites on the web and is responsible for a significant amount of the world’s online traffic. Here’s a look at some of the companies that rely on Amazon’s cloud computing platform.

Customer – What it uses Amazon Web Services for:
  • Foursquare – 3 million check-ins a day
  • Harvard Medical School – Vast database for developing genome-analysis models
  • NASA Jet Propulsion Lab – Processing of hi-res satellite images to help guide its robots
  • Netflix – Video streaming service that accounts for 25% of US Internet traffic
  • Newsweek/The Daily Beast – 1 million pageviews every hour
  • PBS – More than 1 petabyte of streaming video a month
  • SmugMug – Storage for 70 million photos
  • US Department of Agriculture – Geographic information system for food-stamp recipients
  • Virgin Atlantic – Crowdsourced travel review service
  • Yelp – Data storage for its 22 million-plus reviews