Home » Posts tagged 'OpenStack'

Tag Archives: OpenStack

OpenStack adoption (by Q1 2016)

OpenStack Promise as per Moogsoft, June 3, 2015

For information on OpenStack provided earlier on this blog, see:
– Disaggregation in the next-generation datacenter and HP’s Moonshot approach for the upcoming HP CloudSystem “private cloud in-a-box” with the promised HP Cloud OS based on the 4 years old OpenStack effort with others, ‘Experiencing the Cloud’, Dec 10, 2013
– Red Hat Enterprise Linux OpenStack Platform 4 delivery and Dell as the first company to OEM it co-engineered on Dell infrastructure with Red Hat, ‘Experiencing the Cloud’, Feb 19, 2014
To understand the OpenStack V4 level state-of-technology-development as of June 25, 2015:
– go to my homepage: https://lazure2.wordpress.com/
– or to the OpenStack related part of Microsoft Cloud state-of-the-art: Hyper-scale Azure with host SDN — IaaS 2.0 — Hybrid flexibility and freedom, ‘Experiencing the Cloud’, July 11, 2015

May 19, 2016:

Oh, the places you’ll go with OpenStack! by Mark Collier, OpenStack Foundation COO on ‘OpenStack Superuser’:

With OpenStack in tow you’ll go far — be it your house, your bank, your city or your car.

Just look at all of the exciting places we’re going:

From the phone in your pocket

The telecom industry is undergoing a massive shift, away from hundreds of proprietary devices in thousands of central offices accumulated over decades, to a much more efficient and flexible software plus commodity hardware approach. While some carriers like AT&T have already begun routing traffic from the 4G networks over OpenStack powered clouds to millions of cellphone users, the major wave of adoption is coming with the move to 5G, including plans from AT&T, Telefonica, SK Telekom, and Verizon.

We are on the cusp of a revolution that will completely re-imagine what it means to provide services in the trillion dollar telecom industry, with billions of connected devices riding on OpenStack-powered infrastructure in just a few years.

To the living room socket

The titans of TV like Comcast, DirecTV, and Time Warner Cable all rely on OpenStack to bring the latest entertainment to our homes efficiently, and innovators like DigitalFilm Tree are producing that content faster than ever thanks to cloud-based production workflows.

Your car, too, will get smart

Speaking of going places, back here on earth many of the world’s top automakers, such as BMW and the Volkswagen group, which includes Audi, Lamborghini, and even Bentley, are designing the future of transportation using OpenStack and big data. The hottest trends to watch in the auto world are electric zero-emissions cars and self-driving cars. Like the “smart city” efforts described below, a proliferation of sensors plus connectivity calls for distributed systems to bring it all together, creating a huge opportunity for OpenStack.

And your bank will take part

Money moves faster than ever, with digital payments from startups and established players alike competing for consumer attention. Against this backdrop of enormous market change, banks must meet an increasingly rigid set of regulatory rules, not to mention growing security threats. To empower their developers to innovate while staying diligent on regs and security, financial leaders like PayPal, FICO, TD Bank, American Express, and Visa are adopting OpenStack.

Your city must keep the pace

Powering the world’s cities is a complex task and here OpenStack is again driving automation, this time in the energy sector. State Grid Corporation, the world’s largest electric utility, serves over 120 million customers in China while relying on OpenStack in production.

Looking to the future, cities will be transformed by the proliferation of fast networks combined with cheap sensors. Unlocking the power of this mix are distributed systems, including OpenStack, to process, store, and move data. Case in point: tcpcloud in Prague is helping introduce “smart city” technology by utilizing inexpensive Raspberry Pis embedded in street poles, backed by a distributed system based on Kubernetes and OpenStack. These systems give city planners insight into traffic flows of both pedestrians and cars, and even measure air quality. By routing not just packets but people, cities are literally load balancing their way to lower congestion and pollution.

From inner to outer space

The greatest medical breakthroughs of the next decade will come from analyzing massive data sets, thanks to the proliferation of distributed systems that put supercomputer power into the hands of every scientist. And OpenStack has a huge role to play empowering researchers all over the globe: from Melbourne to Madrid, Chicago to Chennai, or Berkeley to Beijing, everywhere you look you’ll find OpenStack.

To explore this world, I recently visited the Texas Advanced Computing Center (TACC) at the University of Texas at Austin, where I toured a facility that houses one of the top 10 supercomputers in the world, code-named “Stampede.”

But what really got me excited about the future was the sight of two large OpenStack clusters: one called Chameleon, and the newest addition, Jetstream, which put the power of more than 1,000 nodes and more than 15,000 cores into the hands of scientists at 350 universities. In fact, the Chameleon cloud was recently used in a class at the University of Arizona by students looking to discover exoplanets. Perhaps the next Neil deGrasse Tyson is out there using OpenStack to find a planet to explore for NASA’s Jet Propulsion Laboratory.

Where should we go next?

Mark Collier is OpenStack co-founder, and currently the OpenStack Foundation COO. This article was first published in Superuser Magazine, distributed at the Austin Summit.

May 9, 2016:

From OpenStack Summit Austin, Part 1: Vendors digging in for long haul by Al Sadowski, 451 Research, LLC: This report provides highlights from the most recent OpenStack Summit.

THE 451 TAKE

OpenStack mindshare continues to grow among enterprises interested in deploying cloud-native applications in greenfield private cloud environments. However, its appeal is limited for legacy applications and enterprises sold on hyperscale multi-tenant cloud providers like AWS and Azure. There are several marquee enterprises with OpenStack as the central component of cloud transformations, but many are still leery of the perceived complexity of configuring, deploying and maintaining OpenStack-based architectures. Over the last few releases, processes for installation and upgrades, tooling, and API standardization across projects have improved as operators have become more vocal during the requirements phase. Community membership continues to grow on a global basis, and the base of supporting organizations shows a similar geographic trend.

…  Horizontal scaling of Nova is much improved, based on input from CERN and Rackspace. CERN, an early OpenStack adopter, demonstrated the ability for the open source platform to scale – it now has 165,000 cores running OpenStack. However, Walmart, PayPal and eBay are operating larger OpenStack environments.

May 18, 2015:

Walmart‘s Cloud Journey by Amandeep Singh Juneja
Sr. Director, Cloud Engineering and Operations, WalmartLabs: an introduction to the world’s largest retailer and its journey to build a large private cloud.

Amandeep Singh Juneja is Senior Director for Cloud Operations and Engineering at WalmartLabs. In his current role, Amandeep is responsible for the build-out of the elastic cloud used by various Walmart eCommerce properties. Prior to his current role at WalmartLabs, Amandeep held various leadership roles at HP, WebOS (Palm) and eBay.

May 19, 2015:

OpenStack Update from eBay and PayPal by Subbu Allamaraju
Chief Engineer, Cloud, eBay Inc: the journey and future of OpenStack at eBay and PayPal.

Subbu is the Chief Engineer of cloud at eBay Inc. His team builds and operates a multi-tenant, geographically distributed, OpenStack-based private cloud. This cloud now serves 100% of PayPal web and mid-tier workloads, significant parts of eBay front end and services, and thousands of users for their dev/test activities.

May 18, 2015:

Architecting Organizational Change at TD Bank by Graeme Peacock, VP Engineering, TD Bank Group

Graeme cut his teeth in the financial services consulting industry by designing and developing real-time Trading, Risk and Clearing applications. He then joined NatWest Markets and J.P. Morgan in executive level roles within the Equity Derivatives business lines.
Graeme then moved to a Silicon Valley Startup to expand his skillset as V.P. of Engineering at Application Networks. His responsibility extended to Strategy, Innovation, Product Development, Release Management and Support to some of the biggest names in the Financial Services Sector.
For the last 10 years, he has held Divisional CIO roles at Citigroup and Deutsche Bank, both of which saw him responsible for Credit, Securitized and Emerging Market businesses.
Graeme moved back to a V.P. of Engineering role at TD Bank Group several years ago. He currently oversees all Infrastructure Innovation — everything from Mobile and Desktop to Database, Middleware and Cloud. His focus is on the transformational: software development techniques, infrastructure design patterns, and DevOps processes.

TD Bank uses cloud as catalyst for cultural change in IT
May 18, 2015 Written by Jonathan Brandon for Business Cloud News

North American retail banking outfit TD Bank is using OpenStack among a range of other open source cloud technologies to help catalyse cultural change as it looks to reduce costs and technology redundancy, explained TD Bank group vice president of engineering Graeme Peacock.

TD Bank is one of Canada’s largest retail banks, having divested many of its investment banking divisions over the past ten years while buying up smaller American retail banks in a bid to offer cross-border banking services.
Peacock, who was speaking at the OpenStack Summit in Vancouver this week, said TD Bank is in the midst of a massive transition in how it procures, deploys and consumes technology. The bank aims to have about 80 per cent of its estate of 4,000 applications moved over to the cloud over the next five years.
“If they can’t build it on cloud they need to get my permission to obtain a physical server. Which is pretty hard to get,” he said.
But the company’s legacy of acquisition over the past decade has shaped the evolution of both the technology and systems in place at the bank as well as the IT culture and the way those systems and technologies are managed.
“Growing from acquisition means we’ve developed a very project-based culture, and you’re making a lot of transactional decisions within those projects. There are consequences to growing through acquisition – TD is very vendor-centric,” he explained.
“There are a lot of vendors here and I’m fairly certain we’ve bought at least one of everything you’ve ever made. That’s led to the landscape that we’ve had, which has lots of customisation. It’s very expensive and there is little reuse.”
Peacock said much of what the bank wants to do is fairly straightforward: moving off highly customised, expensive equipment and services onto more open, standardised commodity platforms, with OpenStack but one infrastructure-centric tool helping the bank deliver on that goal (it’s using it to stand up an internal private cloud). But to reach its goals the company also has to deal with other legacies of its recent string of acquisitions, including the fact that its development teams are still quite siloed.
In order to standardise and reduce the number of services the firm’s developers use, the bank created an engineering centre in Manhattan and assembled a team of engineers and developers (currently numbering 30, expected to reach roughly 50 by the end of the year) spread between Toronto and New York City, all focused on helping it embrace a cloud-first, slimmed-down application landscape.
The centre and the central engineering team work with other development teams and infrastructure specialists across the bank, collecting feedback through fortnightly Q&As and feeding that back into the solutions being developed and the platforms being procured. Solving developer team fragmentation will ultimately help the bank move forward on this new path sustainably, he explained.
“When your developer community is so siloed you don’t end up adopting standards… you end up with 27 versions of Softcat. Which we have, by the way,” he said.
“This is a big undertaking, and one that has to be continuous. Business lines also have to move with us to decompose those applications and help deliver against those commitments,” he added.

May 9, 2016: From OpenStack Summit Austin, Part 1: Vendors digging in for long haul continued:

While OpenStack may have been conceived as an open source multi-tenant IaaS, its future success will mainly come from hosted and on-premises private cloud deployments. Yes, there are many pockets of success with regional or vertical-focused public clouds based on OpenStack, but none with the scale of AWS or the growth of Microsoft Azure. Hewlett Packard Enterprise shuttered its OpenStack Helion-based public cloud, and Rackspace shifted engineering resources away from its own public cloud. Rackspace, the service provider with the largest share of OpenStack-related revenue, says its private cloud is growing in the ‘high double digits.’ Currently, 56% of OpenStack’s service-provider revenue total is public cloud-based, but we expect private cloud will account for a larger portion over the next few years.

October 21, 2015:

A new model to deliver public cloud by Bill Hill, SVP and GM, HP Cloud

Over the past several years, HP has built its strategy on the idea that a hybrid infrastructure is the future of enterprise IT. In doing so, we have committed to helping our customers seamlessly manage their business across traditional IT and private, managed or public cloud environments, allowing them to optimize their infrastructure for each application’s unique requirements.
The market for hybrid infrastructure is evolving quickly. Today, our customers are consistently telling us that in order to meet their full spectrum of needs, they want a hybrid combination of efficiently managed traditional IT and private cloud, as well as access to SaaS applications and public cloud capabilities for certain workloads. In addition, they are pushing for delivery of these solutions faster than ever before.
With these customer needs in mind, we have made the decision to double-down on our private and managed cloud capabilities. For cloud-enabling software and solutions, we will continue to innovate and invest in our HP Helion OpenStack® platform. HP Helion OpenStack® has seen strong customer adoption and now runs our industry-leading private cloud solution, HP Helion CloudSystem, which continues to deliver strong double-digit revenue growth and win enterprise customers. On the cloud services side, we will focus our resources on our Managed and Virtual Private Cloud offerings. These offerings will continue to expand, and we will have some very exciting announcements on these fronts in the coming weeks.

Public cloud is also an important part of our customers’ hybrid cloud strategy, and our customers are telling us that the lines between all the different cloud manifestations are blurring. Customers tell us that they want the ability to bring together multiple cloud environments under a flexible and enterprise-grade hybrid cloud model. In order to deliver on this demand with best-of-breed public cloud offerings, we will move to a strategic, multiple partner-based model for public cloud capabilities, as a component of how we deliver these hybrid cloud solutions to enterprise customers.

Therefore, we will sunset our HP Helion Public Cloud offering on January 31, 2016. As we have before, we will help our customers design, build and run the best cloud environments suited to their needs – based on their workloads and their business and industry requirements.

To support this new model, we will continue to aggressively grow our partner ecosystem and integrate different public cloud environments. To enable this flexibility, we are helping customers build cloud-portable applications based on HP Helion OpenStack® and the HP Helion Development Platform. In Europe, we are leading the Cloud28+ initiative that is bringing together commercial and public sector IT vendors and EU regulators to develop common cloud service offerings across 28 different countries.
For customers who want access to existing large-scale public cloud providers, we have already added greater support for Amazon Web Services as part of our hybrid delivery with HP Helion Eucalyptus, and we have worked with Microsoft to support Office 365 and Azure. We also support our PaaS customers wherever they want to run our Cloud Foundry platform – in their own private clouds, in our managed cloud, or in a large-scale public cloud such as AWS or Azure.
All of these are key elements in helping our customers transform into a hybrid, multi-cloud IT world. We will continue to innovate and grow in our areas of strength, we will continue to help our partners and to help develop the broader open cloud ecosystem, and we will continue to listen to our customers to understand how we can help them with their entire end-to-end IT strategies.

December 1, 2015:

Hewlett Packard Enterprise and Microsoft announce plans to deliver integrated hybrid IT infrastructure press release

London, U.K. – December 1, 2015 – Today at Hewlett Packard Enterprise Discover, HPE and Microsoft Corp. announced new innovation in Hybrid Cloud computing through Microsoft Azure, HPE infrastructure and services, and new program offerings. The extended partnership appoints Microsoft Azure as a preferred public cloud partner for HPE customers while HPE will serve as a preferred partner in providing infrastructure and services for Microsoft’s hybrid cloud offerings.

“Hewlett Packard Enterprise is committed to helping businesses transform to hybrid cloud environments in order to drive growth and value,” said Meg Whitman, President and CEO, Hewlett Packard Enterprise. “Public cloud services, like those Azure provides, are an important aspect of a hybrid cloud strategy and Microsoft Azure blends perfectly with HPE solutions to deliver what our customers need most.”
The partnering companies will collaborate across engineering and services to integrate innovative compute platforms that help customers optimize their IT environment, leverage new consumption models and accelerate their business further, faster.
“Our mission to empower every organization on the planet is a driving force behind our broad partnership with Hewlett Packard Enterprise that spans Microsoft Azure, Office 365 and Windows 10,” said Satya Nadella, CEO, Microsoft. “We are now extending our longstanding partnership by blending the power of Azure with HPE’s leading infrastructure, support and services to make the cloud more accessible to enterprises around the globe.”
Product Integration and Collaboration

HPE and Microsoft are introducing the first hyper-converged system with true hybrid cloud capabilities, the HPE Hyper-Converged 250 for Microsoft Cloud Platform System Standard. Bringing together industry-leading HPE ProLiant technology and Microsoft Azure innovation, the jointly engineered solution brings Azure services to customers’ datacenters, empowering users to choose where and how they want to leverage the cloud. An Azure management portal enables business users to self-deploy Windows and Linux workloads, while ensuring IT has central oversight. Azure services provide reliable backup and disaster recovery, and with HPE OneView for Microsoft System Center, customers get an integrated management experience across all system components. HPE offers hardware and software support, installation and startup services to customers to speed deployment to just a matter of hours, lower risk and decrease total cost of ownership. The CS 250 is available to order today.
As part of the expanded partnership, HPE will enable Azure consumption and services on every HPE server, which allows customers to rapidly realize the benefits of hybrid cloud.
Extended Support and Services to Simplify Cloud
HPE and Microsoft will create HPE Azure Centers of Excellence in Palo Alto, Calif. and Houston, Texas, to ensure customers have a seamless hybrid cloud experience when leveraging Azure across HPE infrastructure, software and services. Through the work at these centers, both companies will invest in continuing advancements in Hybrid IT and Composable Infrastructure.
Because Azure is a preferred provider of public cloud for HPE customers, HPE also plans to certify an additional 5,000 Azure Cloud Architects through its Global Services Practice. This will extend its Enterprise Services offerings to bring customers an open, agile hybrid cloud with improved security that integrates with Azure.
Partner Program Collaboration
Microsoft will join the HPE Composable Infrastructure Partner Program to accelerate innovation for the next-generation infrastructure and advance the automation and integration of Microsoft System Center and HPE OneView orchestration tools with today’s infrastructure.
Likewise, HPE joined two Microsoft programs that help customers accelerate their hybrid cloud journey through end-to-end cloud, mobility, identity and productivity solutions. As a participant in Microsoft’s Cloud Solution Provider program, HPE will sell Microsoft cloud solutions across Azure, the Microsoft Enterprise Mobility Suite and Office 365.

May 9, 2016: From OpenStack Summit Austin, Part 1: Vendors digging in for long haul continued:

VENDOR DEVELOPMENTS

As of the Mitaka release, two new gold members were added: UnitedStack and EasyStack, both from China. Other service providers and vendors shared their customer momentum and product updates with 451 Research during the summit. Among the highlights are:

  • AT&T has cobbled together a DevOps team from 67 different organizations, in order to transform into a software company.
  • All of GoDaddy’s new servers are going into its OpenStack environment. It is also using the Ironic (bare metal) project and exploring containers on OpenStack.
  • SwiftStack built a commercial product with an AWS-like consumption model using the Swift (object storage) project. It now has over 60 customers, including eBay, PayPal, Burton Snowboards and Ancestry.com.
  • OVH is based in France and operates a predominantly pan-European public cloud. It added Nova compute in 2014, and currently has 75PB on Swift storage.
  • Unitas Global says OpenStack-related enterprise engagements are a large part of its 100% Y/Y growth. While it does not contribute code, it is helping to develop operational efficiencies and working with Canonical to deploy ‘vanilla’ OpenStack using Juju charms. Tableau Software is a client.
  • DreamHost is operating an OpenStack public cloud, DreamCompute, and is a supporter of the Astara (network orchestration) project. It claims 2,000 customers for DreamCompute and 10,000 customers for its object storage product.
  • Platform9 is a startup that uniquely delivers OpenStack as SaaS and has 20 paying customers. Clients bring their own hardware, and the software provides the management functions and takes care of patching and upgrades.
  • AppFormix is a software startup focused on cloud operators and application developers that has formed a licensing agreement with Rackspace. Its analytics and capacity-planning dashboard software will now be deployed on Rackspace’s OpenStack private cloud. The software also works with Azure and AWS.
  • Tesora is leveraging the Trove project to offer DBaaS. The vendor built a plug-in for Mirantis’ Fuel installer. The collaboration claims to make commercial, open source relational and NoSQL databases easier for administrators to deploy.

April 25, 2016:

AT&T’s Cloud Journey with OpenStack by Sorabh Saxena SVP, Software Development & Engineering, AT&T

OpenStack + AT&T Innovation = AT&T Integrated Cloud.

AT&T’s network has experienced enormous growth in traffic in the last several years and the trend continues unabated. Our software defined network initiative addresses the escalating traffic demands and brings greater agility and velocity to delivering features to end customers. The underlying fabric of this software defined network is AT&T Integrated Cloud (AIC).

Sorabh Saxena, AT&T’s SVP of Software Development & Engineering, will share several use cases that will highlight a multi-dimensional strategy for delivering an enterprise & service provider scale cloud. The use cases will illustrate OpenStack as the foundational element of AIC, AT&T solutions that complement it, and how it’s integrated with the larger AT&T ecosystem.

http://att.com/ecomp


As the Senior Vice President of Software Development and Engineering at AT&T, Sorabh Saxena is leading AT&T’s transformation to a software-based company.  Towards that goal, he is leading the development of platforms that include AT&T’s Integrated Cloud (AIC), API, Data, and Business Functions. Additionally, he manages delivery and production support of AT&T’s software defined network.

Sorabh and his organization are also responsible for technology solutions and architecture for all IT projects, AT&T Operation Support Systems and software-driven business transformation programs that are positioning AT&T to be a digital-first, integrated communications company with a best-in-class cost structure. Sorabh is also championing a cultural shift with a focus on workforce development and software & technology skills development.

Through Sorabh and his team’s efforts associated with AIC, AT&T is implementing an industry leading, highly complex and massively scaled OpenStack cloud.  He is an advocate of OpenStack and his organization contributes content to the community that represents the needs of large enterprises and communication services providers.

April 25, 2016: And the Superuser Award goes to… AT&T takes the fourth annual Superuser Award.

AUSTIN, Texas — The OpenStack Austin Summit kicked off day one by awarding the Superuser Award to AT&T.

NTT, winners of the Tokyo edition, passed the baton onstage to the crew from AT&T.

AT&T is a legacy telco that is transforming itself by adopting virtual infrastructure and a software-defined networking focus in order to compete in the market and create value for customers in the next five years and beyond. They have almost too many OpenStack accomplishments to list; read their full application here.


Sorabh Saxena gives a snapshot of AT&T’s OpenStack projects during the keynote.

The OpenStack Foundation launched the Superuser Awards to recognize, support and celebrate teams of end-users and operators that use OpenStack to meaningfully improve their businesses while contributing back to the community.

The legacy telecom is in the top 20 percent for upstream contributions with plans to increase this significantly in 2016.

It’s time for the community to determine the winner of the Superuser Award to be presented at the OpenStack Austin Summit. Based on the nominations received, the Superuser Editorial Advisory Board conducted the first round of judging and narrowed the pool to four finalists.

Now, it’s your turn.

The team from AT&T is one of the four finalists. Review the nomination criteria below, check out the other nominees and cast your vote before the deadline, Friday, April 8 at 11:59 p.m. Pacific Daylight Time. Voting is limited to one ballot per person.

How has OpenStack transformed your business?

AT&T is a legacy telco that is transforming itself by adopting virtual infrastructure and a software-defined networking focus in order to compete in the market and create value for customers in the next five years and beyond.

  1. Virtualization and virtual network functions (VNFs) are of critical importance to the Telecom industry to address growth and agility. AT&T’s Domain 2.0 Industry Whitepaper released in 2013 outlines the need as well as direction.
  2. AT&T chose OpenStack as the core foundation of their cloud and virtualization strategy
  3. OpenStack has reinforced AT&T’s open source strategy and strengthened our dedication to the community as we actively promote and invest resources in OpenStack
  4. AT&T is committing staff and resources to drive the vision and innovation in the OpenStack and OPNFV communities to help drive OpenStack as the default cloud orchestrator for the Telecom industry
  5. AT&T, as a founding member of the ETSI NFV ISG, helped drive OpenStack as the cloud orchestrator in the NFV platform framework. OpenStack was positioned as the VIM (Virtual Infrastructure Manager). This accelerated the Telco industry’s convergence onto OpenStack.

OpenStack serves as a critical foundation for AT&T’s software-defined networking (SDN) and NFV future and we take pride in the following:

  • AT&T has deployed 70+ OpenStack (Juno- and Kilo-based) clouds globally, which are currently operational. Of the 70+ clouds, 57 are production application and network clouds.
  • AT&T plans 90% growth, going to 100+ production application and network clouds by the end of 2016.
  • AT&T connects more than 14 million wireless customers via virtualized networks, with significant subscriber cut-over planned again in 2016.
  • AT&T controls 5.7% of our network resources (29 Telco production-grade VNFs) with OpenStack, with plans to reach 30% by the end of 2016 and 75% by 2020.
  • AT&T trained more than 100 staff in OpenStack in 2015.

AT&T plans to expand its community team of 50+ employees in 2016. As the chosen cloud platform, OpenStack enabled AT&T in the following SDN- and NFV-related initiatives:

  • Our recently announced 5G field trials in Austin
  • Re-launch of unlimited data to mobility customers
  • Launch of AT&T Collaborate, a next-generation communication tool for enterprise
  • Provisioning of a Network on Demand platform to more than 500 enterprise customers
  • Connected Car and MVNO (Mobile Virtual Network Operator)
  • Mobile Call Recording
  • Internally we are virtualizing our control services like DNS, NAT, NTP, DHCP, radius, firewalls, load balancers and probes for fault and performance management.

Since 2012, AT&T has developed all of our significant new applications in a cloud-native fashion hosted on OpenStack. We also architected OpenStack to support legacy apps.

  • AT&T’s SilverLining Cloud (predecessor to AIC) leveraged the OpenStack Diablo release, dating as far back as 2011
  • OpenStack currently resides on over 15,000 VMs worldwide, with the expectation of further, significant growth coming in 2016-17
  • AT&T’s OpenStack integrated Orchestration framework has resulted in a 75% reduction in turnaround time for requests for virtual resources
  • AT&T plans to move 80% of our legacy IT into the OpenStack-based virtualized cloud environment in the coming years
  • Uniform set of APIs exposed by OpenStack allows AT&T business units to leverage a “develop-once-run-everywhere” set of tools. OpenStack supports AT&T’s strategy of adopting best-of-breed solutions at five 9s of reliability for:
    • NFV
    • Internet-scale storage service
    • SDN
  • Putting all AT&T’s workloads on one common platform

Deployment Automation: OpenStack modules have enabled AT&T to cost-effectively manage the OpenStack configuration in an automated, holistic fashion.
  • Using OpenStack Heat, AT&T pushed rolling updates and incremental changes across 70+ OpenStack clouds; doing it manually would take many more people and a much longer schedule (see the sketch after this list).
  • Using OpenStack Fuel as a pivotal component in its cloud deployments, AT&T accelerates the otherwise time-consuming, complex, and error-prone process of deploying, testing, and maintaining various configuration flavors of OpenStack at scale. AT&T was a major contributor to the Fuel 7.0 and Fuel 8.0 requirements.
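For a feel of what such fleet-wide automation can look like, here is a minimal illustrative sketch (not AT&T’s actual tooling) that uses the openstacksdk cloud layer to push the same updated Heat template to a list of clouds; the cloud names and template path are hypothetical entries that would live in clouds.yaml:

```python
# Hypothetical sketch: rolling one Heat template change across many
# OpenStack clouds. Cloud names and the template path are illustrative;
# each name must exist as an entry in clouds.yaml.
import openstack

CLOUDS = ["site-nyc-01", "site-dfw-01", "site-sfo-01"]  # hypothetical
STACK = "platform-baseline"                             # hypothetical

for name in CLOUDS:
    conn = openstack.connect(cloud=name)
    # Heat computes the delta between the running stack and the new
    # template, so only changed resources are touched.
    conn.update_stack(
        STACK,
        template_file="baseline.yaml",  # the updated template
        wait=True,                      # block until UPDATE_COMPLETE
    )
    print(f"{name}: stack '{STACK}' updated")
```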

OpenStack has also been a pivotal driver of AT&T’s overall culture shift: the organization is in the midst of a massive transition from a legacy telco to a company where new skills, techniques and solutions are embraced. It has been a key driver of this transformation in the following ways:

  • AT&T is now building 50 percent of all software on open source technologies
  • Allowing for the adoption of a DevOps model that creates a more unified team working towards a better end product
  • Development transitioned from waterfall to cloud-native CI/CD methodologies
  • Developers continue to support OpenStack and make their applications cloud-native whenever possible.

How has the organization participated in or contributed to the OpenStack community?

AT&T was the first U.S. telecom service provider to sign up for and adopt the then early-stage, NASA-spawned OpenStack cloud initiative, back in 2011.

  • AT&T has been an active OpenStack contributor since the Bexar release.
  • AT&T has been a Platinum Member of the OpenStack Foundation since its origins in 2012 after helping to create its bylaws.
  • Toby Ford, AVP AT&T Cloud Technology, has provided vision, technology leadership, and innovation to the OpenStack ecosystem as an OpenStack Foundation board member since late 2012.
  • AT&T is a founding member of the ETSI NFV ISG and of OPNFV.
  • AT&T has invested in building an OpenStack upstream contribution team with 25 current employees and a target for 50+ employees by the end of 2016.
  • During the early years of OpenStack, AT&T brought many important use-cases to the community. AT&T worked towards solving those use-cases by leveraging various OpenStack modules, in turn encouraging other enterprises to have confidence in the young ecosystem.
  • AT&T drove the following Telco-grade blueprint contributions to past releases of OpenStack:
    • VLAN-aware VMs (i.e., trunked vNICs)
    • Support for BGP VPN, and shared volumes between guest VMs
    • Complex query support for statistics in Ceilometer
    • Spell checker gate job
    • Metering support for PCI/PCIe per VM tenant
    • PCI passthrough measurement in Ceilometer
    • Coverage measurement gate job
    • Nova using ephemeral storage with Cinder
    • Climate subscription mechanism
    • Access switch port discovery for bare metal nodes
    • SLA enforcement per vNIC
    • MPLS VPNaaS
    • NIC-state aware scheduling
  • Toby Ford has regularly been invited to present keynotes, sessions, and panel talks at a number of OpenStack Summits, for instance:
    • Role of OpenStack in a Telco: User Case Study (Atlanta Summit, May 2014)
    • Leveraging OpenStack to Solve Telco Needs: Intro to SDN/NFV (Atlanta Summit, May 2014)
    • Telco OpenStack Roadmap Panel Talk (Tokyo Summit, October 2015)
    • OpenStack Roadmap Software Trajectory (Atlanta Summit, May 2014)
    • Cloud Control to Major Telco (Paris Summit, November 2014)
  • Greg Stiegler, assistant vice president – AT&T cloud tools & development organization represented the AT&T technology development organization at the Tokyo Summit.
  • AT&T Cloud and D2 Architecture team members were invited to present various keynote sessions, summit sessions and panel talks, including:
    • Participation at the Women of OpenStack event (Tokyo Summit, 2015)
    • Empower Your Cloud Through Neutron Service Function Chaining (Tokyo Summit, October 2015)
    • OPNFV Panel (Vancouver Summit, May 2015)
    • OpenStack as a Platform for Innovation (keynote at OpenStack Silicon Valley, August 2015)
    • Taking OpenStack From Zero to Production in a Fortune 500 (Tokyo Summit, October 2015)
    • Operating at Web-scale: Containers and OpenStack Panel Talk (Tokyo Summit, October 2015)
  • AT&T strives to collaborate with other leading industry partners in the OpenStack ecosystem. This has led to the entire community benefiting from AT&T’s innovation.
  • Margaret Chiosi gives talks worldwide on AT&T’s D2.0 vision at many Telco conferences, ranging from optics (OFC) to SDN/NFV, advocating OpenStack as the de facto cloud orchestrator.
  • AT&T Entertainment Group (DirecTV) architected a multi-hypervisor hybrid OpenStack cloud by designing a Neutron ML2 plugin. This innovation helped achieve integration between legacy virtualization and OpenStack.
  • AT&T is proud to drive OpenStack adoption by sharing knowledge back to the OpenStack community in the form of these summit sessions at the upcoming Austin summit:
    • Telco Cloud Requirements: What VNFs Are Asking For
    • Using a Service VM as an IPv6 vRouter
    • Service Function Chaining
    • Technology Analysis Perspective
    • Deploying Lots of Teeny Tiny Telco Clouds
    • Everything You Ever Wanted to Know about OpenStack At Scale
    • Valet: Holistic Data Center Optimization for OpenStack
    • Gluon: An Enabler for NFV
    • Among the Cloud: Open Source NFV + SDN Deployment
    • AT&T: Driving Enterprise Workloads on KVM and vCenter using OpenStack as the Unified Control Plane
    • Striving for High-Performance NFV Grid on OpenStack. Why you, and every OpenStack community member should be excited about it
    • OpenStack at Carrier Scale
  • AT&T is the “first to market” with deployment of OpenStack-supported carrier-grade Virtual Network Functions. We provide the community with integral data, information, and first-hand knowledge on the trials and tribulations experienced deploying NFV technology.
  • AT&T ranks in the top 20 percent of all companies in terms of upstream contribution (code, documentation, blueprints), with plans to increase this significantly in 2016.
    • Commits: 1200+
    • Lines of Code: 116,566
    • Change Requests: 618
    • Patch Sets: 1490
    • Draft Blueprints: 76
    • Completed Blueprints: 30
    • Filed Bugs: 350
    • Resolved Bugs: 250

What is the scale of the OpenStack deployment?

  • AT&T’s OpenStack-based AIC is deployed at 70+ sites across the world. Of the 70+, 57 are production app and network clouds.
  • AT&T plans 90% growth, going to 100+ production app and network clouds by end of 2016.
  • AT&T connects more than 14 million of its 134.5 million wireless customers via virtualized networks, with significant subscriber cutover planned again in 2016.
  • AT&T controls 5.7% of our network resources (29 Telco production-grade VNFs) on OpenStack, with a goal of reaching the high 80s by end of 2016.
  • Production workloads also include AT&T’s Connected Car, Network on Demand, and AT&T Collaborate among many more.

How is this team innovating with OpenStack?

  • AT&T and AT&T Labs are leveraging OpenStack to innovate with Containers and NFV technology.
  • Containers are a key part of AT&T’s Cloud Native Architecture. AT&T chairs the Open Container Initiative (OCI) to drive the standardization around container formats.
  • AT&T is leading the effort to improve Nova and Neutron’s interface to SDN controllers.
  • Margaret Chiosi, an early design collaborator on Neutron and ETSI NFV, now serves as president of OPNFV. AT&T is utilizing its position with OPNFV to help shape the future of OpenStack/NFV.

OpenStack has enabled AT&T to innovate extensively.

The following recent unique workloads would not be possible without the SDN and NFV capabilities that OpenStack enables:

  • Our recent announcements of 5G field trials in Austin
  • Re-launch of unlimited data to mobility customers
  • Launch of AT&T Collaborate
  • Network on Demand platform to more than 500 enterprise customers
  • Connected Car and MVNO (Mobile Virtual Network Operator)
  • Mobile Call Recording

New services by AT&T Entertainment Group (DirecTV) that will use OpenStack-based cloud infrastructure in the coming years:

  • NFL Sunday Ticket with up to 8 simultaneous games
  • DirecTV streaming service without the need for a satellite dish

In summary, the innovation with OpenStack lies not just in our unique workloads, but also in supporting them together under the same framework, management systems, development/test, CI/CD pipelines, and deployment automation toolset(s).

Who are the team members?

  • AT&T Cloud and D2 architecture team
  • AT&T Integrated Cloud (AIC) members: Margaret Chiosi, distinguished member of technical staff, president of OPNFV; Toby Ford, AVP, AT&T cloud technology & D2 architecture (strategy, architecture & planning) and OpenStack Foundation board member; Sunil Jethwani, director, cloud & SDN architecture, AT&T Entertainment Group; Andrew Leasck, director, AT&T Integrated Cloud development; Janet Morris, director, AT&T Integrated Cloud development; Sorabh Saxena, senior vice president, AT&T software development & engineering organization; Praful Shanghavi, director, AT&T Integrated Cloud development; Bryan Sullivan, director member of technical staff; Ryan Van Wyk, executive director, AT&T Integrated Cloud development.
  • AT&T’s project teams top contributors: Paul Carver, Steve Wilkerson, John Tran, Joe D’andrea, Darren Shaw.

April 30, 2016: Swisscom in Production with OpenStack and Cloud Foundry

Swisscom has one of the largest in-production industry-standard Platform-as-a-Service offerings built on OpenStack. The offering is focused on providing an enterprise-grade PaaS environment to customers worldwide, with various delivery models based on Cloud Foundry and OpenStack. Swisscom embarked early on the OpenStack journey to deploy its app cloud, partnering with Red Hat, Cloud Foundry, and PLUMgrid. With services such as MongoDB, MariaDB, RabbitMQ, ELK, and object storage, the PaaS cloud offers what developers need to get started right away. Join this panel for take-away lessons on Swisscom’s journey, the technologies, partnerships, and the developers who are building apps every day on Swisscom’s OpenStack cloud.

May 23, 2016: How OpenStack public cloud + Cloud Foundry = a winning platform for telecoms, interview on ‘OpenStack Superuser’ with Marcel Härry, chief architect, PaaS at Swisscom

Swisscom has one of the largest in-production industry standard platform-as-a-service built on OpenStack.

Their offering focuses on providing an enterprise-grade PaaS environment to customers worldwide and with various delivery models based on Cloud Foundry and OpenStack. Swisscom, Switzerland’s leading telecom provider, embarked early on the OpenStack journey to deploy their app cloud partnering with Red Hat, Cloud Foundry and PLUMgrid.

Superuser interviewed Marcel Härry, chief architect, PaaS at Swisscom and member of the Technical Advisory Board of the Cloud Foundry Foundation, to find out more.

How are you using OpenStack?

OpenStack has allowed us to rapidly develop and deploy our Cloud Foundry-based PaaS offering, as well as to rapidly develop new features within SDN and containers. OpenStack is the true enabler for rapid development and delivery.

An example: within half a year of the initial design and setup, we had already delivered two production instances of our PaaS offering built on multiple OpenStack installations on different sites. Today we are running multiple production deployments for high-profile customers, who further develop their SaaS offerings using our platform. Additionally, we are providing the infrastructure for numerous lab and development instances. These environments allow us to harden and stabilize new features while maintaining a rapid pace of innovation, while still ensuring a solid environment.

We are running numerous OpenStack stacks, all limited – by design – to a single region, and single availability zone. Their size ranges from a handful of compute nodes, to multiple dozens of compute nodes, scaled based on the needs of the specific workloads. Our intention is not to build overly large deployments, but rather to build multiple smaller stacks, hosting workloads that can be migrated between environments. These stacks are hosting thousands of VMs, which in turn are hosting tens of thousands of containers to run production applications or service instances for our customers.

What kinds of applications or workloads are you currently running on OpenStack?

We’ve been using OpenStack for almost three years now as our infrastructure orchestrator. Swisscom built its Elastic Cloud on top of OpenStack. On top of this we run Swisscom’s Application Cloud, or PaaS, built on Cloud Foundry with PLUMgrid as the SDN layer. Together, the company’s clouds deliver IaaS to IT architects, SaaS to end users and PaaS to app developers, among other services and applications. We mainly run our PaaS/Cloud Foundry environment on OpenStack, as well as the correlated managed services (e.g., DBaaS, messaging-as-a-service), which themselves run in Docker containers.

What challenges have you faced in your organization regarding OpenStack, and how did you overcome them?

The learning curve for OpenStack is pretty steep. When we started three years ago almost no reference architectures were available, especially none with enterprise-grade requirements such as dual-site, high availability (HA) capabilities on various levels and so forth. In addition, we went directly into the SDN, SDS levels of implementation which was a big, but very successful step at the end of the day.

What were your major milestones?

Swisscom’s go-live for its first beta environment was in spring 2014, go-live for an internal development environment (at Swisscom) was spring 2015, and go-live for its public Cloud Foundry environment fully hosted on OpenStack was in fall 2015. The go-live for enterprise-grade, business-critical workloads on top of our stack, from various multinational companies in verticals like finance and industry, was spring 2016, and Swisscom recently announced Swiss Re as one of its first large enterprise cloud customers.

What have been the biggest benefits to your organization as a result of using OpenStack?

Pluggability and multi-vendor interoperability (for instance with an SDN like PLUMgrid or an SDS like ScaleIO) to avoid vendor lock-in and create a seamless system. OpenStack enabled Swisscom to experiment with deployments utilizing a DevOps model and environment to deploy and develop applications faster. It simplified the move from PoC to production environments and enabled us to easily scale out services utilizing a distributed cluster-based architecture.

What advice do you have for companies considering a move to OpenStack?

It’s hard in the beginning but it’s really worth it. Be wise when you select your partners and vendors; this will help you to be online in a very short amount of time. Think about driving your internal organization towards a DevOps model to be ready for the first deployments, as well as enabling your firm to change deployment models (e.g. going cloud-native) for your workloads when needed.

How do you participate in the community?

This year’s Austin event was our second OpenStack Summit where we provided insights into our deployment and architecture, contributing back to the community in terms of best practices as well as real-world production use cases. Furthermore, we directly contribute patches and improvements to various OpenStack projects. Some of these patches have already been accepted, while a few are in the pipeline to be further polished for publishing. Additionally, we work very closely with our vendors (Red Hat, EMC, ClusterHQ/Flocker, PLUMgrid, as well as the Cloud Foundry Foundation) to further improve their integration and stability within the OpenStack project. For example, we worked closely with Flocker on their Cinder-based driver to orchestrate persistency among containers. Furthermore, we have filed many bug reports through our vendors and worked with them on fixes, which have then made their way back into the OpenStack community.

What’s next?

We have a perfect solution for non-persistent container workloads for our customers. We are constantly evolving this product and are working especially hard to meet the enterprise and finance verticals’ requirements when it comes to the infrastructure orchestration of OpenStack.

Härry spoke about OpenStack in production at the recent Austin Summit, along with Pere Monclus of PLUMgrid, Chip Childers of the Cloud Foundry Foundation, Chris Wright of Red Hat and analyst Rosalyn Roseboro. 

May 10, 2016: Lenovo‘s Highly-Available OpenStack Enterprise Cloud Platform Practice with EasyStack press release by EasyStack

BEIJING, May 10, 2016 /PRNewswire/ — In 2015, the Chinese IT superpower Lenovo chose EasyStack to build an OpenStack-based enterprise cloud platform to carry out their “Internet Strategy”. In six months, this platform has evolved into an enterprise-level OpenStack production environment of over 3000 cores with data growth peaking at 10TB/day. It is expected that by the end of 2016, 20% of the IT system will be migrated onto the Cloud.

OpenStack is the foundation for cloud, and has arguably matured in overseas markets. In China, however, noteworthy OpenStack practices usually come from the relatively new category of Internet companies; though long marketed as “enterprise-ready”, OpenStack is still approached warily by traditional industries. This article aims to turn that perception around by presenting an OpenStack practice from the Chinese IT superpower Lenovo, detailing its transformation, in both technology and business terms, to a private cloud built on OpenStack. Although OpenStack will still largely carry Internet-facing businesses, Lenovo plans to migrate 20% of its IT systems onto the cloud before the end of 2016, a much-applauded step forward.

Whether in the traditional PC or the cellphone, technology is evolving fast amid the move towards mobile and social networking, and competition is fierce. In response to rapidly changing market dynamics, the Lenovo Group moved from a product-oriented to a user-oriented strategy, one that can only be supported by an agile, flexible and scalable enterprise-level cloud platform capable of rapid iteration. After thorough consideration and careful evaluation, Lenovo chose OpenStack as the basis for the enterprise cloud platform carrying out this “Internet Strategy”. After six months in practice, this platform has evolved into an enterprise-level OpenStack production environment of over 3,000 cores with data growth peaking at 10TB/day. It is expected that 20% of the IT system will be migrated onto the cloud by the end of 2016.

Transformation and Picking the Right Cloud

In the past, internal IT at Lenovo had always been channel- and key-client-oriented, with a traditional architecture consisting of IBM Power, AIX, PowerVM, DB2 and, more recently, VMware virtualization. In the move towards becoming an Internet company, such traditional architecture was far from able to support the user and business volume brought by the B2C model. Cost-wise, Lenovo’s large-scale deployments of commercial solutions were reliable but complex to scale and extremely expensive.

This traditional IT architecture was also inadequate in terms of operational efficiency, security and compliance, and unable to support Lenovo’s transition towards eCommerce and mobile business. In 2015, Lenovo’s IT entered a stage of infrastructural revamp, needing a cloud computing platform to support new businesses.

To find the right makeup for the cloud platform, Lenovo performed meticulous analyses and comparisons of mainstream x86 virtualization technologies, private cloud platforms, and public cloud platforms. After evaluating stability, usability, openness, and ecosystem vitality and comprehensiveness, Lenovo deemed the OpenStack cloud platform able to fulfill its enterprise needs and decided to use OpenStack as the infrastructural cloud platform supporting its constant business innovation.

Disaster recovery plans for virtual machines, cloud disks and databases were considered early in the OpenStack architectural design, to ensure prompt switchover when needed to maintain business availability.

A Highly Available Architectural Design

Logically, Lenovo’s enterprise cloud platform manages infrastructure through a software-defined environment, using x86 servers and 10GbE networking at the base layer alongside Internet-style monitoring and maintenance, while employing the OpenStack platform for overall resource management.

To ensure high availability and improve the cloud platform’s efficiency, Lenovo designed a hyper-converged physical architecture: well-provisioned servers combine compute and storage in a single box, and OpenStack integrates them into a single resource pool, placing compute nodes and storage nodes on the same physical node.

Two-way System x3650 servers and four-way ThinkServer RQ940 servers form the backbone of the hardware layer. Each node carries five SSDs and 12 SAS hard drives, which make up the storage module. The SSDs not only act as the storage cache but also form a high-performance storage resource pool; VMs access the distributed storage to achieve high availability.

Lenovo had to resolve a number of problems and overcome numerous hurdles to elevate OpenStack to the enterprise-level.

Compute

Here, Lenovo utilized high-density virtual machine deployment. At the base is KVM virtualization technology, optimized in multiple ways to maximize physical server performance, isolating CPU, memory and other hardware resources under the converged compute-storage architecture. The outcome is the ability to run over 50 VMs smoothly and efficiently on every two-way compute node.

In a cloud environment, high availability is best achieved through software design rather than hardware. Yet some traditional applications still tie requirements to a single host server. For such applications, unable to achieve high availability on their own, Lenovo used compute-HA technology to make the compute nodes highly available: faults are detected through various methods, and virtual machines on a failed physical machine are migrated to other available physical machines when needed. The entire process is automated, minimizing the business disruption caused by physical machine breakdowns.
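To make the recovery step concrete, here is a minimal sketch of the evacuation flow (my illustration, not Lenovo’s actual tooling) using python-novaclient: once an external health check declares a hypervisor dead, its instances are rebuilt on healthy hosts. The endpoint, credentials and host name are placeholders, and shared storage (as in the converged Ceph setup described below) is assumed:

```python
# Hypothetical sketch of the recovery half of compute-node HA: after a
# fault detector declares a hypervisor dead, evacuate its instances so
# Nova rebuilds them on healthy hosts. Assumes shared storage and admin
# credentials; all names below are placeholders.
from keystoneauth1 import loading, session
from novaclient import client

loader = loading.get_plugin_loader("password")
auth = loader.load_from_options(
    auth_url="http://controller:5000/v3",   # placeholder endpoint
    username="admin", password="secret",    # placeholder credentials
    project_name="admin",
    user_domain_id="default", project_domain_id="default",
)
nova = client.Client("2.29", session=session.Session(auth=auth))

failed_host = "compute-07"  # reported dead by the fault detector
for server in nova.servers.list(search_opts={"host": failed_host,
                                             "all_tenants": 1}):
    # With microversion >= 2.14, Nova picks the target host itself.
    nova.servers.evacuate(server)
```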

Network

Network Isolation

Different NICs, switches and VLANs isolate the various networks (the stand-alone OpenStack management network, virtual production networks, storage networks, public networks, and PXE networks), so that they do not interfere with one another, increasing overall bandwidth and enabling better network control.

Multi-Public Network

Network agility is achieved through multiple public networks, which also make security policies easier to manage. The public networks from Unicom and Telecom, and the office network, are some examples.

Network Optimization

The VLAN network model integrates well with the traditional data center network; its data packet processing was then optimized to improve throughput, bringing virtual machine bandwidth closer to that of the physical network.
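As an illustration of the VLAN model (an assumed example, not Lenovo’s actual configuration), a Neutron provider network that rides an existing data-center VLAN can be created with openstacksdk; the physnet label and segmentation ID must match the operator’s ML2 configuration:

```python
# Hypothetical sketch: a Neutron VLAN provider network that maps onto
# the existing data-center VLAN fabric. The physnet label and VLAN ID
# must match what is configured in the ML2 plugin; all names assumed.
import openstack

conn = openstack.connect(cloud="prod")  # assumed clouds.yaml entry

net = conn.network.create_network(
    name="prod-vlan-205",
    provider_network_type="vlan",
    provider_physical_network="physnet-dc",  # assumed physnet mapping
    provider_segmentation_id=205,            # assumed VLAN ID
)
conn.network.create_subnet(
    network_id=net.id,
    ip_version=4,
    cidr="10.20.5.0/24",     # illustrative addressing
    gateway_ip="10.20.5.1",
)
```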

Dual-NIC Bonding Across Switches

High availability of the physical network is achieved by bonding dual NICs that connect to different switches.

Network Node HA

Public-network load balancing, high availability and high performance are achieved through multiple network nodes, using an active/standby arrangement at the router level for HA, backed by independent monitoring of the network routing services.

Storage

The Lenovo OpenStack cloud platform uses Ceph as the unified storage backend: Glance image storage, Nova virtual machine system disks, and Cinder cloud disks are all provided by Ceph RBD. By using Ceph’s copy-on-write cloning (with small revisions to the OpenStack code), virtual machines can be deployed within seconds.
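The second-scale deployment hinges on RBD copy-on-write cloning: rather than copying a multi-gigabyte image for every VM, the new disk is created as a thin clone of a protected snapshot of the base image. Here is a minimal sketch with the official rados/rbd Python bindings; the pool and image names are purely illustrative:

```python
# Hypothetical sketch of the copy-on-write path behind second-scale VM
# provisioning: the new disk is a thin clone of a protected snapshot of
# the base image, so no data is copied up front. Pool and image names
# are illustrative; the parent image needs the layering feature.
import rados
import rbd

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
images = cluster.open_ioctx("images")  # Glance pool (typical layout)
vms = cluster.open_ioctx("vms")        # Nova disk pool (typical layout)

# One-time preparation: snapshot the base image and protect the
# snapshot so clones can hang off it.
with rbd.Image(images, "golden-image") as base:
    if "base-snap" not in [s["name"] for s in base.list_snaps()]:
        base.create_snap("base-snap")
        base.protect_snap("base-snap")

# Per-VM step: a copy-on-write clone, created in well under a second.
rbd.RBD().clone(images, "golden-image", "base-snap", vms,
                "instance-0001-disk")

images.close()
vms.close()
cluster.shutdown()
```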

With Ceph as the unified storage backend, storage performance is undoubtedly a key metric of whether an enterprise’s critical applications can be virtualized and moved to the cloud. In a hyper-converged deployment architecture where compute and storage run alongside each other, storage optimization must not only maximize storage performance but also preserve the isolation between storage and compute resources to maintain system stability. Lenovo optimized the IO stack bottom-up, layer by layer:

On the Network

Enable jumbo frames to improve data transfer efficiency, and use 10Gb Ethernet to carry the Ceph cluster network traffic, improving the efficiency of Ceph data replication.

On Functionality

Solid-state disks serve as the Ceph OSD journal to improve overall cluster IO performance, meeting the performance demands of critical businesses (for example, the eCommerce system's databases) while balancing performance against cost. SSDs offer low power consumption, prompt response, high IOPS and high throughput; since the Ceph journal is accessed by multiple threads, replacing mechanical drives with SSDs fully exploits the SSD's random access, rapid response and high IO throughput. Appropriately tuning the IO scheduling strategy to suit SSDs further lowers overall IO latency.
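One concrete form of that IO-scheduler tuning on the pre-blk-mq kernels of this era is switching the journal SSDs to the "noop" elevator (appropriate for devices with no seek penalty) while leaving the SAS spindles on the default; the device names here are assumptions.

```python
# Sketch: set the noop scheduler on the assumed journal SSDs via sysfs.
JOURNAL_SSDS = ["sdb", "sdc"]  # hypothetical device names; adjust to the layout

for dev in JOURNAL_SSDS:
    with open(f"/sys/block/{dev}/queue/scheduler", "w") as f:
        f.write("noop")
```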

Purposeful Planning

The number of Ceph OSDs on each hyper-converged node is planned according to the virtual machine density on the server, with CPU and memory resources assigned in advance. Tools such as cgroups and taskset can then enforce resource isolation between QEMU-KVM and the Ceph OSDs, as sketched below.
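A minimal taskset-based sketch (the core range is an assumption and must mirror whichever cores the VMs are pinned away from):

```python
# Illustrative: pin every ceph-osd process onto the reserved cores so it
# cannot contend with QEMU-KVM guests.
import subprocess

OSD_CORES = "0-1"  # must match the cores excluded from VM pinning

for pid in subprocess.check_output(["pgrep", "-f", "ceph-osd"]).split():
    subprocess.run(["taskset", "-pc", OSD_CORES, pid.decode()], check=True)
```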

Parameter Tuning

Ceph performance can be improved considerably by fine-tuning parameters such as FileStore's default queue settings and the OSD op thread count. Iterative testing then finds the most suitable values for the given hardware environment, as sketched below.
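A sketch of one such iteration test, sweeping the OSD op thread count with injectargs and benchmarking each setting with "rados bench"; the pool name and the parameter grid are made up for the example.

```python
# Sketch: inject one candidate value at a time, benchmark, compare.
import subprocess

for threads in (2, 4, 8, 16):
    subprocess.run(["ceph", "tell", "osd.*", "injectargs",
                    f"--osd_op_threads {threads}"], check=True)
    out = subprocess.check_output(
        ["rados", "bench", "-p", "bench-pool", "30", "write"])
    print(threads, out.decode().splitlines()[-1])  # summary line per setting
```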

Data HA

Regarding data HA, besides existing OpenStack data protection measures, Lenovo has planned a comprehensive disaster recovery protocol for its three centers at two locations:

By employing dedicated low-latency fiber-optic links, data is written synchronously to the local backup center and replicated asynchronously to the remote center, maximizing data safety.

AD Integration

In addition, Lenovo has integrated its own business demands into the OpenStack enterprise cloud platform. As a mega company with tens of thousands of employees, Lenovo needed Active Directory (AD) accounts for authentication and authorization, so that staff would not need individually created user accounts. Through customized development by its implementation partner, Lenovo successfully integrated AD into its OpenStack Enterprise Cloud Platform.

Overall Outcomes

Lenovo’s transformation towards being “internet-driven” could begin once this OpenStack Enterprise Cloud Platform was built. eCommerce, big data and analytics, IM, online mobile phone support and other internet-based businesses are all supported by the cloud platform. Judging from the team’s feedback, the Lenovo OpenStack Enterprise Cloud Platform is performing as expected.

In the process of building this OpenStack based enterprise cloud platform, Lenovo chose EasyStack, the leading Chinese OpenStack company, to provide professional implementation and consulting services, helping to build the initial platform and fostering a number of in-house OpenStack experts. For Lenovo, community compatibility and continuous upgrades, as well as experience in delivering enterprise-level services, were the main factors when choosing an OpenStack business partner.

Red Hat Enterprise Linux OpenStack Platform 4 delivery and Dell as the first company to OEM it co-engineered on Dell infrastructure with Red Hat

Red Hat Enterprise Linux OpenStack Platform: Community-invented, Red Hat hardened [RedHatCloud YouTube channel, Aug 5, 2013]

Learn how Red Hat Enterprise Linux OpenStack Platform allows you to deploy a supported version of OpenStack on an enterprise-hardened Linux platform to build a massively scalable public-cloud-like platform for managing and deploying cloud-enabled workloads. With Red Hat Enterprise Linux OpenStack Platform, you can focus resources on building applications that add value to your organization, while Red Hat provides support for OpenStack and the Linux platform it runs on.

From community to enterprise-ready: Red Hat’s momentum with OpenStack [RedHatCloud YouTube channel, Jan 21, 2014]

The open development model is only successful if you are as committed to the community as you are to the products you create. Our primary goal was to become truly integrated in the OpenStack community. Now, we are excited about getting OpenStack in the hands of many. Hear what Red Hat has to say about their momentum around OpenStack.

Cloud and virtualization in RHEL6 ~ Redhat Linux Video [Redhat Linux Video YouTube channel, Feb 17, 2014]

Red Hat Enterprise Linux 6.5 is designed for those who build and manage large, complex IT projects, especially enterprises that require an open hybrid cloud. From security and networking to virtualization, Red Hat Enterprise Linux 6.5 provides the capabilities needed to manage these environments, such as tools that aid in quickly tuning the system to run SAP applications based on published best practices from SAP. Jim Totton, vice president and general manager, Platform Business Unit, Red Hat: “Red Hat Enterprise Linux 6.5 provides the innovation expected from the industry’s leading enterprise Linux operating system while also delivering a mature platform for business operations, be it standardizing operating environments or supporting critical applications. The newest version of Red Hat Enterprise Linux 6 forms the building blocks of the entire Red Hat portfolio, including OpenShift and OpenStack, making it a perfect foundation for enterprises looking to explore the open hybrid cloud.”

Dell and Red Hat Creating Open, Innovative Solutions ~ Redhat Linux Video [Redhat Linux Video YouTube channel, Feb 18, 2014]

Dell and Red Hat to Co-Engineer Enterprise-Grade, OpenStack Private Cloud Solutions [joint press release, Dec 12, 2013]

Dell and Red Hat to Co-Engineer Enterprise-Grade, OpenStack Private Cloud Solutions

  • Dell and Red Hat collaboration to enable customers worldwide to build and use highly-scalable, open, private cloud solutions based on OpenStack
  • Dell becomes first company to OEM Red Hat Enterprise Linux OpenStack Platform
  • Dell joins the Red Hat OpenStack Cloud Infrastructure Partner Network as an Alliance Partner
  • Dell to deliver Red Hat Enterprise Linux OpenStack Platform through a dedicated practice within Dell Cloud Services

Dell and Red Hat Inc. (NYSE: RHT), the world’s leading provider of open source solutions, today announced the companies will jointly engineer enterprise-grade, private cloud solutions based on OpenStack to help customers move to and deploy highly-scalable cloud computing models. As part of the expanded relationship, Dell becomes the first company to OEM Red Hat Enterprise Linux OpenStack Platform. The co-engineered solution will be built on Dell infrastructure and Red Hat Enterprise Linux OpenStack Platform. The solution will be delivered by a Red Hat Enterprise Linux OpenStack Platform practice within Dell Cloud Services.

Dell and Red Hat have partnered for more than 14 years to bring global customers value by collaborating on Red Hat solutions across Dell’s enterprise offerings. Just as Dell and Red Hat collaborated in the early days of Linux, Dell is showing its vision by becoming the first to OEM Red Hat Enterprise Linux OpenStack Platform. With today’s announcement, Dell and Red Hat are strengthening their longstanding collaboration and commitment to help businesses confidently embrace open source-based cloud computing models. With this development, customers worldwide will benefit not only from the co-engineered solutions, but also from the companies’ combined cloud expertise, enterprise innovation, dedicated support and portfolio of services.

Dell and Red Hat will also jointly contribute code to the OpenStack community and collaborate on Red Hat Enterprise Linux OpenStack Platform 4, currently in beta, which integrates OpenStack Havana, Red Hat Enterprise Virtualization Hypervisor, and Red Hat Enterprise Linux 6.5. In addition, Dell plans to work closely with Red Hat on several future-state projects including:

  • OpenStack Networking (Neutron) to enable Software-Defined Networking and Networking-as-a-Service between interface devices such as virtual network interface cards, and
  • OpenStack Telemetry (Ceilometer) to provide OpenStack resource instrumentation, which can help support service monitoring and customer billing systems.
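Editorial note: to make the two projects above concrete, here is a minimal sketch (not part of the announcement) of Neutron's Networking-as-a-Service API via python-neutronclient; the credentials, names and addresses are placeholders.

```python
# Editorial illustration only: create an isolated tenant network and subnet.
from neutronclient.v2_0 import client

neutron = client.Client(username="admin", password="secret",
                        tenant_name="admin",
                        auth_url="http://controller:5000/v2.0")

net = neutron.create_network({"network": {"name": "app-tier"}})
neutron.create_subnet({"subnet": {"network_id": net["network"]["id"],
                                  "ip_version": 4,
                                  "cidr": "10.10.1.0/24"}})
```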

Lastly, Dell is joining the Red Hat OpenStack Cloud Infrastructure Partner Network as an Alliance Partner, the highest tier of program membership. The Red Hat OpenStack Cloud Infrastructure Partner Network connects both business and technical resources to third-party technology companies who are aligning with Red Hat’s OpenStack product offerings.

Red Hat Enterprise Linux OpenStack Platform combines the power of Red Hat Enterprise Linux with Red Hat’s OpenStack cloud platform to deliver an enterprise-grade, scalable and secure foundation for building a private cloud. The alliance with Red Hat complements Dell’s cloud strategy of offering customers open, flexible and scalable technology to build, use and control cloud infrastructures.

Additionally, Dell now offers Dell Cloud Consulting and Application Services to provide expert guidance in helping assess, build, operate and run cloud environments and enable and accelerate enterprise OpenStack adoption. Dell’s expertise spans the hybrid cloud spectrum, with service options ranging from cloud readiness assessment, infrastructure design and operations, and application design and modernization. As a result, Dell customers can achieve increased efficiency and greater realization of the business benefits of cloud computing.

Supporting Quotes

Paul Cormier, President, Products and Technologies, Red Hat

“Our collaboration with Dell keeps getting better and today’s announcement to co-engineer OpenStack solutions marks a significant milestone for both companies and customers. Just as we successfully collaborated with Dell to establish Red Hat Enterprise Linux as an enterprise industry standard, we’re now extending our collaboration to help establish Red Hat Enterprise Linux OpenStack Platform as the standard for open private cloud in the enterprise. Dell and Red Hat are committed to jointly developing and delivering enterprise-grade OpenStack offerings to help customers pursue private cloud today, and advanced computing models in the future.”

Marius Haas, Chief Commercial Officer and President, Enterprise Solutions, Dell

“Dell has been a long-time advocate and participant in the open source and OpenStack communities, pushing the charter of an open alternative to proprietary, enterprise computing systems. Our agreement to co-engineer OpenStack solutions with Red Hat takes our commitment a step further in helping customers obtain and deploy OpenStack solutions for an enterprise-grade, private cloud infrastructure to meet their evolving business needs. We will extend our work with Red Hat to apply our combined experience in commercializing open source for the benefit of our mutual customers as well as the open-source community on its development of networking, storage and compute capabilities.”

Availability

The joint Dell-Red Hat solution is scheduled to be available in 2014.

Get more Red Hat news or subscribe to the Red Hat news RSS feed

About Red Hat, Inc.

Red Hat (NYSE: RHT) is the world’s leading provider of open source software solutions, using a community-powered approach to reliable and high-performing cloud, Linux, middleware, storage and virtualization technologies. Red Hat also offers award-winning support, training, and consulting services. As the connective hub in a global network of enterprises, partners, and open source communities, Red Hat helps create relevant, innovative technologies that liberate resources for growth and prepare customers for the future of IT. Learn more at http://www.redhat.com.

Dell and Red Hat – Enabling the Enterprise with OpenStack [Dell4Enterprise Blog, Dec 17, 2013] by Joseph George, Executive Director, Cloud and Big Data Solutions, Dell Inc.

In 1999, Dell became the first OEM vendor to deliver factory-installed Linux workstations and PowerEdge servers with Red Hat, to enable enterprise customers with Linux. Since those early days, Red Hat Enterprise Linux (RHEL) has become the world’s most deployed enterprise Linux platform, and in 2012, Red Hat became the industry’s first billion dollar open source company.

Now in 2013, Dell and Red Hat are joining forces again to enable enterprises.

And this time, it’s to enable the enterprise with OpenStack.

Our enterprise customers have complex business needs and scalability requirements. In the case of cloud, customers rarely say “We want a cloud.” Rather, many of them say things like “We need to create a content delivery network that scales and is cost effective,” or “We need a test/dev environment to develop our applications.”

And many times the design tenets that are pervasive in large scale distributed cloud environments, such as continuous deployment and devops, are still being translated into how the enterprise does IT.

Dell has been committed to enabling our customers with open source solution options to enable emerging technology areas like cloud – solutions that are open, flexible, scalable and secure. OpenStack provides the foundation for driving multi-tenancy and elasticity, giving our customers a fully open platform with the ability to scale very quickly.

Dell has had a rich and valuable history with Red Hat, with a longstanding joint commitment to our customers, to understand their need for flexibility and choice in the marketplace in a number of technology areas. And OpenStack has been a passion for both companies individually for some time.

With this announcement, Dell is now the first company to OEM RHEL OpenStack Platform, and we will co-engineer enterprise-grade OpenStack private cloud solutions with Red Hat, bringing together the best that both companies have to offer. It’s great news for enterprise customers seeking the products, services, and best practices to bring OpenStack into their IT environments.

Together, we are providing a fast onramp to enable our enterprise customers to get to the cloud, and to capture value from cloud by solving real business problems as quickly as possible. Dell and Red Hat will also jointly contribute code to the OpenStack community, specifically on projects like OpenStack Networking (Neutron) and OpenStack Telemetry (Ceilometer).

Needless to say, after being a part of Dell’s very first steps into OpenStack, I am excited about this next step Dell and Red Hat are taking together, and I see the strong innovation that will come out of it benefiting both our customers and the OpenStack community.

Dell and Red Hat cloud solutions powered by OpenStack [RedHatCloud YouTube channel, Dec 17, 2013]

On December 12, 2013, Red Hat and Dell announced the companies will jointly engineer enterprise-grade, private cloud solutions based on OpenStack to help customers move to and deploy highly-scalable cloud computing models. Hear from Joseph George, Executive Director, Cloud and Big Data Solutions, Dell Inc. on this announcement.

Red Hat’s 7 bold OpenStack predictions for 2014 [Dell Software News Blog, Feb 10, 2014]

Directors and managers from all over Red Hat’s OpenStack team share their visions for 2014, including: OpenStack in 2014 is ready for enterprise adoption; a select few enterprise OpenStack distributions and providers will rise to the top; hybrid cloud management, including OpenStack, will be in demand; and telco companies, banks, and government agencies will embrace OpenStack.

Dell and Red Hat will be collaborating on OpenStack – read the blog

OpenStack in 2014: Ready for enterprise adoption. “OpenStack is in 2013 what Amazon was in 2008/2009 – people are very interested but they are not spending money to use OpenStack in enterprise IT environments yet. 2014 should change that as the solution has matured and people are readier to embrace it. OpenStack is now enterprise-ready with stable, reliable versions, and that, combined with the support available from the OpenStack ecosystem, will lead to further adoption of OpenStack in the enterprise.” – Krishnan Subramaniam, director, OpenShift strategy, Red Hat

2014: The Year of the OpenStack Ecosystem. “2014 will be the year of the enterprise OpenStack ecosystem. Hardware and software providers will have more products in the market backed by certifications for a peace of mind value proposition. Given the focus on “as-a-service” solutions there will be a new range of offerings that will be created with OpenStack as a fabric for the datacenter. Finally, I expect that large system integrators will add OpenStack to their service offerings in 2014.” – Radhesh Balakrishnan, general manager, Virtualization and OpenStack, Red Hat

A select few enterprise OpenStack distributions – and providers – will rise to the top. “In 2013 we saw the proliferation of OpenStack distributions, to the point where it feels very similar to the early days of Linux – everyone seems to have a Linux distribution. In 2014, we’re going to see OpenStack distributions collapse. That’s because it’s not enough to just repackage bits; providers need really broad and deep knowledge of both OpenStack and Linux. Customers will look toward the organizations that have this deep knowledge as they seek credible solutions that combine OpenStack and Linux. The few companies that have the ability to offer tight integration between the two will be the last ones left standing.” – Chuck Dubuque, director, Product Marketing, Virtualization and OpenStack, Red Hat

Telco companies, Banks, and Government Agencies will embrace OpenStack. “In the coming year, the public sector and other highly regulated industries, such as financial, will reach the stage of production deployments of enterprise-grade OpenStack. Security will continue to be an aspect that these industries need to address as they move to the cloud. Driven by security, privacy and compliance needs, the public sector and financial industries will turn to OpenStack to keep their most confidential data with them.” – Radhesh Balakrishnan, general manager, Virtualization and OpenStack, Red Hat

“In 2014, OpenStack will make its way into the infrastructure of many large stakeholders. I’ll be bold and predict that within the next year, we’ll see OpenStack in five out of the top ten banks and eight out of the top ten telcos.” – Bryan Che, general manager, Red Hat CloudForms

“2014 will be the year where telecommunications-specific OpenStack offerings will enter in the marketplace and be adopted.” -Radhesh Balakrishnan, general manager, Virtualization and OpenStack, Red Hat

Hybrid cloud management – including OpenStack – will be in-demand. “As enterprises move OpenStack deployments out of a testing environment into a realtime, enterprise deployment environment, they need to be able to manage it. This year, Red Hat debuted CloudForms 3.0 with OpenStack management capabilities, and we are looking forward to developing those capabilities further in 2014. Looking at current data and analyst reports, cloud management is cited as the number one problem enterprises face when they are looking to mobilize their cloud computing resources. 2014 will be the year where large-scale cloud deployments are managed with enterprise-class cloud management solutions, such as Red Hat CloudForms.” – Bryan Che, general manager, Red Hat CloudForms

Continued reinforcement of PaaS and OpenStack interoperability. “In 2014, interoperability between Platform-as-a-Service (PaaS) offerings and OpenStack will continue to be reinforced. Many people believe OpenStack will replace PaaS. In reality, the two are complementary – PaaS generates workloads, while OpenStack offers a place to store them. We’re going to continue to work toward tighter integration and better operability between PaaS and OpenStack.” – Chuck Dubuque, director, Product Marketing, Virtualization and OpenStack, Red Hat

Building the Industry’s Broadest OpenStack Ecosystem: A Decade in the Making [Red Hat press release, Feb 18, 2014]

Red Hat OpenStack Cloud Infrastructure Partner Network team

For those of us in the technology industry, it is sometimes difficult to take a moment to think about the impact and scale of the work that we accomplish on a day-to-day basis. While we are all lucky to be in an amazingly innovative and fast paced industry, it is important to spend a reflective moment or two to gain some perspective on the projects that we work on at our respective companies and in our open communities.

At Red Hat, we have been working steadily to help bring OpenStack from a project to a product for nearly two years. As you would expect, our efforts span the spectrum from contributors and developers across every key OpenStack.org project to enabling our partners and customers with enterprise-grade OpenStack products designed to help them take their computing infrastructure to the cloud.

A key aspect of the inherent value proposition that Red Hat brings to the table is our co-investment with partners in making sure that our products work together as expected, and are supported in a collaborative and well understood manner to reduce customer complexity. This technology certification is an important element that has helped build Red Hat into one of the world’s most trusted brands.

Over the next few months, at Red Hat Summit and at the OpenStack Summit in Atlanta, you’ll hear more from Red Hat on our incredible momentum and progress as we bring OpenStack to global partners and customers around the globe. In the meantime, I’d like to take an opportunity to reflect on our ecosystem progress to date.

In April 2013, we announced the creation of the Red Hat OpenStack Cloud Infrastructure Partner Network at the OpenStack Summit in Portland, Oregon. Since that time, we’ve been impressed with the growth and energy with participants from all over the globe, representing all industries and covering all types of technologies.

In June 2013, we launched Red Hat Enterprise Linux OpenStack Platform, and along with it, our first set of certifications focused on Compute, Storage and Networking. Behind the scenes, our teams worked closely with hundreds of partners to develop testing and automation tools, exchanged ideas and feedback on the process, and created the entire infrastructure necessary to build collaborative support agreements for our customers.

Many of these relationships with our OEM, ISV, IHV and SI partners have been established over years of work together. My colleague Gordon Haff just published a great article reflecting on how OpenStack is paralleling the adoption of Linux in the enterprise. It’s true.

More than a decade’s experience in bringing customers true choice has taught us many things. It showed us that our ongoing commitment to maintaining several multifaceted customer benefits, including a long and stable product lifecycle; tested and secure enterprise-grade solutions; and robust integration through standard interfaces and APIs, helped make Linux enterprise-ready. We’re bringing that same know-how to OpenStack.

It also taught us that creating a tightly coupled and certified solution means more than a press release. It requires deep commitment to rolling up your sleeves and working with engineering teams on real technical issues and repeating that process build after build.

Our partners understand what it takes to make commercially viable solutions. A platform is only as good as the applications, solutions and technologies that work with it, and we are proud of how strong our ecosystem of partners has become.

Led by our Alliance Partners – Cisco, Dell, IBM, and Intel – we have seen hundreds of systems and thousands of applications moving towards certification on the Red Hat Enterprise Linux OpenStack Platform. Our commitment here does not waver as we work across competitive boundaries with many companies in building a broad range of enterprise solutions.

In November 2013 at the OpenStack Summit in Hong Kong we expanded our certification scope to include other OpenStack services, offered additional partner benefits for system integrators, MSPs and cloud providers, and enhanced Red Hat Marketplace. It was a proud moment when we were able to announce that in only seven months, we had built the industry’s largest OpenStack ecosystem in support of commercial deployments.

With all of the investments we made in 2013 in our OpenStack ecosystem and certification programs, it may seem as if we just started to build these Red Hat Cloud Infrastructure Partner Network efforts. It wasn’t. The truth is that the foundation for this momentum was laid out 12 years ago when Red Hat first launched Red Hat Enterprise Linux.

Trust is the core for everything that we do; it is our model, and our open approach. While OpenStack as a set of technologies may be new, the relationships with our partners, the excitement of our customers, and the energy within our company to work together to build the next generation of trusted computing is well established and energized. We look forward to a 2014 filled with exciting product, program and partnership announcements.

I invite you to join us at Red Hat Summit in April, and the OpenStack Summit in May, to hear more about our vision and continued momentum.

AMD’s dense server strategy of mixing next-gen x86 Opterons with 64-bit ARM Cortex-A57 based Opterons on the SeaMicro Freedom™ fabric to disrupt the 2014 datacenter market using open source software (so far)

… so far, as Microsoft was in a “shut-up and ship” mode of operation during 2013 and could deliver its revolutionary Cloud OS with its even more disruptive Big Data solution for x86 only (that is likely to change as 64-bit ARM will be delivered with servers in H2 CY14).

Update: Disruptive Technologies for the Datacenter – Andrew Feldman, GM and CVP, AMD [Open Compute Project, Jan 28, 2014]

OCP Summit V – January 28, 2014, San Jose Convention Center, San Jose, California Disruptive Technologies for the Datacenter – Andrew Feldman, GM and CVP, AMD

image

image

image
Note from the press release given below that “The AMD Opteron A-Series development kit is packaged in a Micro-ATX form factor”. Also note the topmost message: “Optimized for dense compute. High-density, power-sensitive scale-out workloads: web hosting, data analytics, caching, storage”.

image

image

image

image

AMD to Accelerate the ARM Server Ecosystem with the First ARM-based CPU and Development Platform from a Server Processor Vendor [press release, Jan 28, 2014]

AMD also announced the imminent sampling of the ARM-based processor, named the AMD Opteron™ A1100 Series, and a development platform, which includes an evaluation board and a comprehensive software suite.

image
This should be the evaluation board for the development platform with imminent sampling.

In addition, AMD announced that it would be contributing to the Open Compute Project a new micro-server design using the AMD Opteron A-Series, as part of the common slot architecture specification for motherboards dubbed “Group Hug.”

From OCP Summit IV: Breaking Up the Monolith [blog of the Open Compute Project, Jan 16, 2013]
…  “Group Hug” board: Facebook is contributing a new common slot architecture specification for motherboards. This specification — which we’ve nicknamed “Group Hug” — can be used to produce boards that are completely vendor-neutral and will last through multiple processor generations. The specification uses a simple PCIe x8 connector to link the SOCs to the board. …

How does AMD support the Open Compute common slot architecture? [AMD YouTube channel, Oct 3, 2013]

Learn more about AMD Open Compute: http://bit.ly/AMD_OpenCompute Dense computing is the latest trend in datacenter technology, and the Open Compute Project is driving standards codenamed Common Slot. In this video, AMD explains Common Slot and how the AMD APU and ARM offerings will power next generation data centers.

See also: Facebook Saved Over A Billion Dollars By Building Open Sourced Servers [TechCrunch, Jan 28, 2014]
image
from which I copied here the above image showing the “Group Hug” motherboards.
Below you can see an excerpt from Andrew Feldman’s presentation showing such a motherboard with Opteron™ A1100 Series SoCs (further down there is an image of Feldman showing that motherboard to the audience during his talk):

image

The AMD Opteron A-Series processor, codenamed “Seattle,” will sample this quarter along with a development platform that will make software design on the industry’s premier ARM–based server CPU quick and easy. AMD is collaborating with industry leaders to enable a robust 64-bit software ecosystem for ARM-based designs from compilers and simulators to hypervisors, operating systems and application software, in order to address key workloads in Web-tier and storage data center environments. The AMD Opteron A-Series development platform will be supported by a broad set of tools and software including a standard UEFI boot and Linux environment based on the Fedora Project, a Red Hat-sponsored, community-driven Linux distribution.

image
AMD continues to drive the evolution of the open-source data center from vision to reality and to bring choice among processor architectures. It is contributing to the Open Compute Project the new AMD Open CS 1.0 Common Slot design, based on the AMD Opteron A-Series processor and compliant with the new Common Slot specification, also announced today.

AMD announces plans to sample 64-bit ARM Opteron A “Seattle” processors [AMD Blogs > AMD Business, Jan 28, 2014]

AMD’s rich history in server-class silicon includes a number of notable firsts including the first 64-bit x86 architecture and true multi-core x86 processors. AMD adds to that history by announcing that its revolutionary AMD Opteron™ A-series 64-bit ARM processors, codenamed “Seattle,” will be sampling this quarter.

AMD Opteron A-Series processors combine AMD’s expertise in delivering server-class silicon with ARM’s trademark low-power architecture, while contributing to the open-source software ecosystem that is rapidly growing around the ARM 64-bit architecture. AMD Opteron A-Series processors make use of ARM’s 64-bit ARMv8 architecture to provide true server-class features in a power-efficient solution.

AMD plans for the AMD Opteron™ A1100 processors to be available in the second half of 2014 with four or eight ARM Cortex A57 cores, up to 4MB of shared Level 2 cache and 8MB of shared Level 3 cache. The AMD Opteron A-Series processor supports up to 128GB of DDR3 or DDR4 ECC memory as unbuffered DIMMs, registered DIMMs or SODIMMs.

The ARMv8 architecture is the first from ARM to have 64-bit support, something that AMD brought to the x86 market in 2003 with the AMD Opteron processor. Not only can the ARMv8-based Cortex A-57 architecture address large pools of memory, it has been designed from the ground up to provide the optimal balance of performance and power efficiency to address the broad spectrum of scale-out data center workloads.

With more than a decade of experience in designing server-class silicon, AMD took the ARM Cortex-A57 core, added a server-class memory controller, and included features resulting in a processor that meets the demands of scale-out workloads. A requirement of scale-out workloads is high-performance connectivity, and the AMD Opteron A1100 processor has extensive integrated I/O, including eight PCI Express Gen 3 lanes, two 10Gb/s Ethernet ports and eight SATA 3 ports.

Scale-out workloads are becoming critical building blocks in today’s data centers. These workloads scale over hundreds or thousands of servers, making power efficient performance critical in keeping total cost of ownership (TCO) low. The AMD Opteron A-Series meets the demand of these workloads through intelligent silicon design and by supporting a number of operating system and software projects.

As part of delivering a server-class solution, AMD has invested in the software ecosystem that will support AMD Opteron A-Series processors. AMD is a gold member of the Linux Foundation, the organisation that oversees the development of the Linux kernel, and is a member of Linaro, a significant contributor to the Linux kernel. Alongside collaboration with the Linux Foundation and Linaro, AMD itself is listed as a top 20 contributor to the Linux kernel. A number of operating system vendors have stated they will support the 64-bit ARM ecosystem, including Canonical, Red Hat and SUSE, while virtualization will be enabled through KVM and Xen.

Operating system support is supplemented with programming language support, with Oracle and the community-driven OpenJDK porting versions of Java onto the 64-bit ARM architecture. Other popular languages that will run on AMD Opteron A-Series processors include Perl, PHP, Python and Ruby. The extremely popular GNU C compiler and the critical GNU C Library have already been ported to the 64-bit ARM architecture.

Through the combination of kernel support and development tools such as libraries, compilers and debuggers, the foundation has been set for developers to port applications to a rapidly growing ecosystem.

As AMD Opteron A-Series processors are well suited to web hosting and big data workloads, AMD is a gold sponsor of the Apache Foundation, the organisation that manages the Hadoop and HTTP Server projects. Up and down the software stack, the ecosystem is ready for the data center revolution that will take place when AMD Opteron A-Series are deployed.

Soon, AMD’s partners will start to realise what a true server-class 64-bit ARM processor can do. By using AMD’s Opteron A-Series Development Kit, developers can contribute to the fast growing software ecosystem that already includes operating systems, compilers, hypervisors and applications. Combining AMD’s rich history in designing server-class solutions with ARM’s legendary low-power architecture, the Opteron A-Series ushers in the era of personalised performance.

Introducing the industry’s only 64-bit ARM-based server SoC from AMD [AMD YouTube channel, Jan 21, 2014]

Hear from AMD & ARM executives on why AMD is well-suited to bring ARM to the datacenter. AMD is introducing “Seattle,” a 64-bit ARM-based server SoC built on the same technology that powers billions of today’s most popular mobile devices. By fusing AMD’s deep expertise in the server processor space along with ARM’s low-power, parallel processing capabilities, Seattle makes it possible for servers to be tuned for targeted workloads such as web/cloud hosting, multi-media delivery, and data analytics to enable optimized performance at low power thresholds. Subscribe: http://bit.ly/Subscribe_to_AMD

It Begins: AMD Announces Its First ARM Based Server SoC, 64-bit/8-core Opteron A1100 [AnandTech, Jan 28, 2014]

… AMD will be making a reference board available to interested parties starting in March, with server and OEM announcements to come in Q4 of this year

It’s still too early to talk about performance or TDPs, but AMD did indicate better overall performance than its Opteron X2150 (4-core 1.9GHz Jaguar) at a comparable TDP:

image

AMD alluded to substantial cost savings over competing Intel solutions with support for similar memory capacities. AMD tells me we should expect a total “solution” price somewhere around 1/10th that of a competing high-end Xeon box, but it isn’t offering specifics beyond that just yet. Given the Opteron X2150 performance/TDP comparison, I’m guessing we’re looking at a similar ~$100 price point for the SoC. There’s also no word on whether or not the SoC will leverage any of AMD’s graphics IP. …

End of Update

AMD is also in a quite unique market position now, as its only real competitor, Calxeda, shut down its operations on December 19, 2013 and went into restructuring. The reason was a lack of further funding from venture capitalists, attributed mainly to Calxeda’s initial 32-bit Cortex-A15 based approach and the unwillingness of customers and software partners to port their already 64-bit x86 software back to 32-bit.

With the only remaining competitor in the 64-bit ARM server SoC race so far*, Applied Micro’s X-Gene SoC, being built on a purpose-built core of its own (see also my Software defined server without Microsoft: HP Moonshot [‘Experiencing the Cloud’, April 10, Dec 6, 2013] post), i.e. with only an architecture license taken from ARM Holdings, the volume 64-bit ARM server SoC market starting in 2014 already belongs to AMD. I base that prediction on the AppliedMicro’s X-Gene: 2013 Year in Review [Dec 20, 2013] post, which states that the first-generation X-Gene product is just nearing volume production, and that a pilot X-Gene solution is planned only for early 2014 delivery by Dell.

* There is also Cavium, which likewise holds only an ARMv8 architecture license (obtained in August 2012), but the latest information on its product (as of Oct 30, 2013) was that: “In terms of the specific announcement of the product, we want to do it fairly close to silicon. We believe that this is a very differentiated product, and we would like to kind of keep it under the covers as long as we can. Obviously our customers have all the details of the products, and they’re working with them, but on a general basis for competitive reasons, we are kind of keeping this a little bit more quieter than we normally do.”

Meanwhile the 64-bit x86 based SeaMicro solution has been on the market since July 30, 2010, after 3 years in development. At the time of the SeaMicro acquisition by AMD (Feb 29, 2012) this already represented a quite well thought-out and engineered solution, as one can easily grasp from the information included below:

image

1. IOVT: I/O-Virtualization Technology
2. TIO: Turn It Off

image

3. Freedom™ Supercomputer Fabric: 3D torus network fabric
– 8 x 8 x 8 Fabric nodes
– Diameter (max hop) 4 + 4 + 4 = 12
– Theoretical cross-section bandwidth = 2 (periodic) × 8 × 8 (section) × 2 (bidirectional) × 2.0 Gb/s per link = 512 Gb/s
– Compute, storage, mgmt cards are plugged into the network fabric
– Support for hot plugged compute cards
The first three—IOVT, TIO, and the Freedom™ Supercomputer Fabric—live in SeaMicro’s Freedom™ ASIC. Freedom™ ASICs are paired with each CPU and with DRAM, forming the foundational building block of a SeaMicro system.
4. DCAT: Dynamic Computation-Allocation Technology™
– CPU management and load balancing
– Dynamic workload allocation to specific CPUs on the basis of power-usage metrics
– Users can create pools of compute for a given application
– Compute resources can be dynamically added to the pool based on predefined utilization thresholds
The DCAT technology resides in the SeaMicro system software and custom-designed FPGAs/NPUs, which control and direct the I/O traffic.
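DCAT itself is proprietary SeaMicro system software, but the threshold policy described above can be modeled in a few lines. Everything in this sketch, names included, is invented for illustration.

```python
# Toy model of threshold-based pool allocation in the spirit of DCAT.
def rebalance(pool, spares, util, high=0.80, low=0.20):
    """Grow the compute pool past `high` average utilization; shrink below `low`."""
    avg = sum(util[n] for n in pool) / len(pool)
    if avg > high and spares:
        pool.append(spares.pop())    # bring a spare compute card into the pool
    elif avg < low and len(pool) > 1:
        spares.append(pool.pop())    # return a card to idle for power savings
    return pool, spares
```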
More information:
SeaMicro SM10000-64 Server [SeaMicro presentation on Hot Chips 23, Aug 19, 2011] for slides in PDF format while the presentation itself is the first one in the following recorded video (just the first 20 minutes + 7 minutes of—quite valuable—Q&A following that):
Session 7, Hot Chips 23 (2011), Friday, August 19, 2011:
– SeaMicro SM10000-64 Server: Building Data Center Servers Using “Cell Phone” Chips (Ashutosh Dhodapkar, Gary Lauterbach, Sean Lie, Dhiraj Mallick, Jim Bauman, Sundar Kanthadai, Toru Kuzuhara, Gene Shen, Min Xu, and Chris Zhang, SeaMicro)
– Poulson: An 8-Core, 32nm, Next-Generation Intel Itanium Processor (Stephen Undy, Intel)
– T4: A Highly Threaded Server-on-a-Chip with Native Support for Heterogeneous Computing (Robert Golla and Paul Jordan, Oracle)
SeaMicro Technology Overview [Anil Rao from SeaMicro, January 2012]
System Overview for the SM10000 Family [Anil Rao from SeaMicro, January 2012]
Note that the above describes just the 1st generation; after the AMD acquisition (Feb 29, 2012) a second-generation solution came out with the SM15000 enclosure (Sept 10, 2012, with more info in the details section later), and there will certainly be a 3rd-generation solution with the fabric integrated into each of the x86 and 64-bit ARM based SoCs coming in 2014.

With the “only production ready, production tested supercompute fabric” (as touted by Rory Read, CEO of AMD, more than a year ago), the SeaMicro Freedom™ fabric will now be integrated into the upcoming 64-bit ARM Cortex-A57 based “Seattle” chips from AMD, sampling in the first quarter of 2014. Consequently I would argue that even the high-end market will be captured by the company. Moreover, I think this will happen not only in the SoC realm but in the enclosure space as well (although that 3rd type of enclosure is still to come), to the detriment of HP’s highly marketed Moonshot and CloudSystem initiatives.

Here are two recent quotes from AMD’s top executive duo showing how important they themselves consider their upcoming solution:

Rory Read – AMD’s President and CEO [Oct 17, 2013]:

In the server market, the industry is at the initial stages of a multiyear transition that will fundamentally change the competitive dynamic. Cloud providers are placing a growing importance on how they get better performance from their datacenters while also reducing the physical footprint and power consumption of their server solution.

image

Lisa Su – AMD’s Senior Vice President and General Manager, Global Business Units [Oct 17, 2013]:

We are fully top to bottom in 28 nanometer now across all of our products, and we are transitioning to both 20 nanometer and to FinFETs over the next couple of quarters in terms of designs. … [Regarding] the SeaMicro business, we are very pleased with the pipeline that we have there. Verizon was the first major datacenter win that we can talk about publicly. We have been working that relationship for the last two years. …

We’re very excited about the server space. It’s a very good market. It’s a market where there is a lot of innovation and change. In terms of 64-bit ARM, you will see us sampling that product in the first quarter of 2014. That development is on schedule and we’re excited about that. All of the customer discussions have been very positive and then we will combine both the [?x86 and the?]64-bit ARM chip with our SeaMicro servers that will have full solution as well. You will see SeaMicro plus ARM in 2014.

So I think we view this combination of IP as really beneficial to accelerating the dense server market both on the chip side and then also on the solution side with the customer set.

AMD SeaMicro has been extensively working with key platform software vendors, especially in the open source space:

image

The current state of that collaboration is reflected in the corresponding numbered sections, which come after the detailed discussion given below:

  1. Verizon (as its first big name cloud customer, actually not using OpenStack)
  2. OpenStack (inc. Rackspace, excl. Red Hat)
  3. Red Hat
  4. Ubuntu
  5. Big Data, Hadoop


So let’s take a detailed look at the major topic:

AMD in the Demo Theater [OpenStack Foundation YouTube channel, May 8, 2013]

AMD presented its demo at the April 2013 OpenStack Summit in Portland, OR. For more summit videos, visit: http://www.openstack.org/summit/portland-2013/session-videos/
Note that the OpenStack Quantum networking project was renamed Neutron after April, 2013. Details on the OpenStack effort will be provided later in the post.

Rory Read – AMD President and CEO [Oct 30, 2012]:

That SeaMicro Freedom™ fabric is ultimately very-very important. It is the only production ready, production tested supercompute fabric on the planet.

Lisa Su – AMD Senior Vice President and General Manager, Global Business Units [Oct 30, 2012]:

The biggest change in the datacenter is that there is no one size fits all. So we will offer ARM-based CPUs with our fabric. We will offer x86-based CPUs with our fabric. And we will also look at opportunities where we can merge the CPU technology together with graphics compute in an APU form-factor that will be very-very good for specific workloads in servers as well. So AMD will be the only company that’s able to offer the full range of compute horsepower with the right workloads in the datacenter.

AMD makes ARM Cortex-A57 64bit Server Processor [Charbax YouTube channel, Oct 30, 2012]

AMD has announced that it is launching a new ARM Cortex-A57 64-bit ARMv8 processor in 2014, targeted at the server market. This is an interview with Andrew Feldman, VP and GM of the Data Center Server Solutions Group at AMD and founder of SeaMicro, now acquired by AMD.

From AMD Changes Compute Landscape as the First to Bridge Both x86 and ARM Processors for the Data Center [press release, Oct 29, 2012]

This strategic partnership with ARM represents the next phase of AMD’s strategy to drive ambidextrous solutions in emerging mega data center solutions. In March, AMD announced the acquisition of SeaMicro, the leader in high-density, energy-efficient servers. With this announcement, AMD will integrate the AMD SeaMicro Freedom fabric across its leadership AMD Opteron x86- and ARM technology-based processors that will enable hundreds, or even thousands of processor clusters to be linked together to provide the most energy-efficient solutions.

AMD ARM Oct 29, 2012 Full length presentation [Manny Janny YouTube channel, Oct 30, 2012]

I do not have any affiliation with AMD or ARM. This video is posted to provide the general public with information and provide an area for comments
Rory Read – AMD President and CEO: [3:27] That SeaMicro Freedom™ fabric is ultimately very-very important in this announcement. It is the only production ready, production tested supercompute fabric on the planet. [3:41]
Lisa Su – Senior Vice President and General Manager, Global Business Units: [13:09] The biggest change in the datacenter is that there is no one size fits all. So we will offer ARM-based CPUs with our fabric. We will offer x86-based CPUs with our fabric. And we will also look at opportunities where we can merge the CPU technology together with graphics compute in an APU form-factor that will be very-very good for specific workloads in servers as well. So AMD will be the only company that’s able to offer the full range of compute horsepower with the right workloads in the datacenter [13:41]

From AMD to Acquire SeaMicro: Accelerates Disruptive Server Strategy [press release, Feb 29, 2012]

AMD (NYSE: AMD) today announced it has signed a definitive agreement to acquire SeaMicro, a pioneer in energy-efficient, high-bandwidth microservers, for approximately $334 million, of which approximately $281 million will be paid in cash. Through the acquisition of SeaMicro, AMD will be accelerating its strategy to deliver disruptive server technology to its OEM customers serving cloud-centric data centers. With SeaMicro’s fabric technology and system-level design capabilities, AMD will be uniquely positioned to offer industry-leading server building blocks tuned for the fastest-growing workloads such as dynamic web content, social networking, search and video. …
… “Cloud computing has brought a sea change to the data center–dramatically altering the economics of compute by changing the workload and optimal characteristics of a server,” said Andrew Feldman, SeaMicro CEO, who will become general manager of AMD’s newly created Data Center Server Solutions business. “SeaMicro was founded to dramatically reduce the power consumed by servers, while increasing compute density and bandwidth.  By becoming a part of AMD, we will have access to new markets, resources, technology, and scale that will provide us with the opportunity to work tightly with our OEM partners as we fundamentally change the server market.”

ARM TechCon 2012 SoC Partner Panel: Introducing the ARM Cortex-A50 Series [ARMflix YouTube channel, recorded on Oct 30, published on Nov 13, 2012]

Moderator: Simon Segars, EVP and GM, Processor and Physical IP Divisions, ARM
Panelists:
– Andrew Feldman, Corporate VP & GM, Data Center Server Solutions (need to confirm his title with AMD), AMD
– Martyn Humphries, VP & General Manager, Mobile Applications Group, Broadcom
– Karl Freund, VP, Marketing, Calxeda**
– John Kalkman, VP, Marketing, Samsung Semiconductor
– Bob Krysiak, EVP and President of the Americas Region, STMicroelectronics
** Note that nearly 14 months later, on Dec 19, 2013, Calxeda ran out of the ~$100M venture capital it had accumulated earlier. As the company was not able to secure further funding, it shut down its operations by dismissing most of its employees (except for 12 workers serving existing customers) and went into “restructuring”, posting on its company website only: “We will update you as we conclude our restructuring process”. This is despite the pioneering role the company had, especially with HP’s Moonshot and CloudSystem initiatives, and the relatively short-term promise of delivering its server cartridge for HP’s next-gen Moonshot enclosure, as was well reflected in my Software defined server without Microsoft: HP Moonshot [‘Experiencing the Cloud’, April 10, Dec 6, 2013] post. The major problem was that “it tried to get to market with 32-bit chip technology, at a time most x86 servers boast 64-bit technology … [and as] customers and software companies weren’t willing to port their software to run on 32-bit systems” – reported the Wall Street Journal. I would also say that AMD’s “only production ready, production tested supercompute fabric on the planet” (see Rory Read’s statement already given above), with its upcoming “Seattle” 64-bit ARM SoC on track for delivery in H2 CY14, was another major reason for the lack of additional venture funds for Calxeda.

AMD’s 64-bit “Seattle” ARM processor brings best of breed hardware and software to the data center [AMD Business blog, Dec 12, 2013]

Going into 2014, the server market is set to face the biggest disruption since AMD launched the 64-bit x86 AMD Opteron™ processor – the first 64-bit x86 processor – in 2003. Processors based on ARM’s 64-bit ARMv8 architecture will start to appear next year, and just like the x86 AMD Opteron™ processors a decade ago, AMD’s ARM 64-bit processors will offer enterprises a viable option for efficiently handling vast amounts of data.

image

From: AMD Unveils Server Strategy and Roadmap [press release June 18, 2013]

These forthcoming AMD Opteron™ processors bring important innovations to the rapidly changing compute market, including integrated CPU and GPU compute (APU); high core-count ARM servers for high-density compute in the data center; and substantial improvements in compute per-watt per-dollar and total cost of ownership.
“Our strategy is to differentiate ourselves by using our unique IP to build server processors that are particularly well matched to a target workload and thereby drive down the total cost of owning servers. This strategy unfolds across both the enterprise and data centers and includes leveraging our graphics processing capabilities and embracing both x86 and ARM instruction sets,” said Andrew Feldman, general manager of the Server Business Unit, AMD. “AMD led the world in the transition to multicore processors and 64-bit computing, and we intend to do it again with our next-generation AMD Opteron families.”
In 2014, AMD will set the bar in power-efficient server compute with the industry’s premier ARM server CPU. The 64-bit CPU, code named “Seattle,” is based on ARM Cortex-A57 cores and is expected to provide category-leading throughput as well as setting the bar in performance-per-watt. AMD will also deliver a best-in-class APU, code named “Berlin.” “Berlin” is an x86 CPU and APU, based on a new generation of cores namedSteamroller.”  Designed to double the performance of the recently available “Kyoto” part, “Berlin” will offer extraordinary compute-per-watt that will enable massive rack density. The third processor announced today is code named “Warsaw,” AMD’s next-generation 2P/4P offering. It is optimized to handle the heavily virtualized workloads found in enterprise environments including the more complex compute needs of data analytics, xSQL and traditional databases. “Warsaw” will provide significantly improved performance-per-watt over today’s AMD Opteron 6300 family. 
Seattle
“Seattle” will be the industry’s only 64-bit ARM-based server SoC from a proven server processor supplier.  “Seattle” is an 8- and then 16-core CPU based on the ARM Cortex-A57 core and is expected to run at or greater than 2 GHz.  The “Seattle” processor is expected to offer 2-4X the performance of AMD’s recently announced AMD Opteron X-Series processor with significant improvement in compute-per-watt.  It will deliver 128GB DRAM support, extensive offload engines for better power efficiency and reduced CPU loading, server caliber encryption, and compression and legacy networking including integrated 10GbE.  It will be the first processor from AMD to integrate AMD’s advanced Freedom™ Fabric for dense compute systems directly onto the chip. AMD plans to sample “Seattle” in the first quarter of 2014 with production in the second half of the year.
Berlin
Berlin” is an x86-based processor that will be available both as a CPU and APU. The processor boasts four next-generation “Steamroller” cores and will offer almost 8X the gigaflops per-watt compared to current AMD Opteron™ 6386SE processor.  It will be the first server APU built on AMD’s revolutionary Heterogeneous System Architecture (HSA), which enables uniform memory access for the CPU and GPU and makes programming as easy as C++. “Berlin” will offer extraordinary compute per-watt that enables massive rack density. It is expected to be available in the first half of 2014
Warsaw
Warsaw” is an enterprise server CPU optimized to deliver unparalleled performance and total cost of ownership for two- and four-socket servers.  Designed for enterprise workloads, it will offer improved performance-per-watt, which drives down the cost of owning a “Warsaw”-based server while enabling seamless migration from the AMD Opteron 6300 Series family.  It is a fully compatible socket with identical software certifications, making it ideal for the AMD Open 3.0 Server – the industry’s most cost effective Open Compute platform.  It is expected to be available in the first quarter of 2014.

Note also AMD Details Embedded Product Roadmap [press release, Sept 9, 2013], in which there is also a:

“Hierofalcon” CPU SoC
“Hierofalcon” is the first 64-bit ARM-based platform from AMD targeting embedded data center applications, communications infrastructure and industrial solutions. It will include up to eight ARM Cortex™-A57 CPUs expected to run up to 2.0 GHz, and provides high-performance memory with two 64-bit DDR3/4 channels with error correction code (ECC) for high reliability applications. The highly integrated SoC includes 10 Gb KR Ethernet and PCI-Express Gen 3 for high-speed network connectivity, making it ideal for control plane applications. The “Hierofalcon” series also provides enhanced security with support for ARM TrustZone® technology and a dedicated cryptographic security co-processor, aligning to the increased need for networked, secure systems. “Hierofalcon” is expected to be sampling in the second quarter of 2014 with production in the second half of the year.

image

The AMD Opteron processor came at a time when x86 processors were seen by many as silicon that could only power personal computers, with specialized processors running on architectures such as SPARC™ and Power™ being the ones that were handling server workloads. Back in 2003, the AMD Opteron processor did more than just offer another option, it made the x86 architecture a viable contender in the server market – showing that processors based on x86 architectures could compete effectively against established architectures. Thanks in no small part to the AMD Opteron processor, today the majority of servers shipped run x86 processors.

In 2014, AMD will once again disrupt the datacenter as x86 processors will be joined by those that make use of ARM’s 64-bit architecture. Codenamed “Seattle,” AMD’s first ARM-based Opteron processor will use the ARMv8 architecture, offering low-power processing in the fast growing dense server space.

To appreciate what the first ARM-based AMD Opteron processor is designed to deliver to those wanting to deploy racks of servers, it is important to realize that the ARMv8 architecture offers a clean slate on which to build both hardware and software.

ARM’s ARMv8 architecture is much more than a doubling of word-length from previous generation ARMv7 architecture: it has been designed from the ground-up to provide higher performance while retaining the trademark power efficiencies that everyone has come to expect from the ARM architecture. AMD’s “Seattle” processors will have either four or eight cores, packing server-grade features such as support for up to 128 GB of ECC memory, and integrated 10Gb/sec of Ethernet connectivity with AMD’s revolutionary Freedom™ fabric, designed to cater for dense compute systems.

From: AMD Delivers a New Generation of AMD Opteron and Intel Xeon “Ivy Bridge” Processors in its New SeaMicro SM15000 Micro Server Chassis [press release, Sept 10, 2012]

With the new AMD Opteron processor, AMD’s SeaMicro SM15000 provides 512 cores in a ten rack unit system with more than four terabytes of DRAM and supports up to five petabytes of Freedom Fabric Storage. Since AMD’s SeaMicro SM15000 server is ten rack units tall, a one-rack, four-system cluster provides 2,048 cores, 16 terabytes of DRAM, and is capable of supporting 20 petabytes of storage. The new and previously unannounced AMD Opteron processor is a custom-designed octal-core 2.3 GHz part based on the new “Piledriver” core, and supports up to 64 gigabytes of DRAM per CPU. The SeaMicro SM15000 system with the new AMD Opteron processor sets the high-water mark for core density for micro servers.
Configurations based on the AMD Opteron processor and Intel Xeon Processor E3-1265Lv2 (“Ivy Bridge” microarchitecture) will be available in November 2012. …


AMD off-chip interconnect fabric IP designed to enable significantly lower TCO

• Links hundreds to thousands of SoC modules

• Shares hundreds of TBs storage and virtualizes I/O

• 160Gbps Ethernet Uplink

• Instruction Set:
– x86
– ARM (coming in 2014 when the fabric will be integrated into the SoCs as well, including the x86 SoCs)

From: SM15000-OP: 64 Octal Core Servers
with AMD Opteron™ processors (2.0/2.3/2.8 GHz, 8 “Piledriver” cores)


Freedom™ ASIC 2.0 – Industry’s only Second Generation Fabric Technology
The Freedom™ ASIC is the building block of SeaMicro Fabric Compute Systems, enabling interconnection of energy-efficient servers in a 3-dimensional Torus Fabric (a minimal topology sketch follows this list). The second-generation Freedom ASIC includes high-performance network interfaces, storage connectivity, and advanced server management, thereby eliminating the need for multiple sets of network adapters, HBAs, cables, and switches. This results in unmatched density, energy efficiency, and lowered TCO. Some of the key technologies in ASIC 2.0 include:
  • SeaMicro Input/Output Virtualization Technology (IOVT™) eliminates all but three components from SeaMicro’s motherboard—CPU, DRAM, and the ASIC itself—thereby shrinking the motherboard while reducing power, cost and space.
  • SeaMicro’s new TIO™ (Turn It Off) technology further power-optimizes the mini-motherboard by turning off unneeded CPU and chipset functions. Together, SeaMicro’s I/O Virtualization Technology and TIO technology produce the smallest and most power-efficient server motherboards available.
  • The SeaMicro Freedom Supercompute Fabric is built of multiple Freedom ASICs working together, creating a 1.28 terabits-per-second fabric that ties together 64 of the power-optimized mini-motherboards at low latency and low power with massive bandwidth.
  • SeaMicro Freedom Fabric Storage technology allows the Freedom supercompute fabric to extend out of the chassis and across the data center, linking not just components inside the chassis but those outside as well.
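
The materials above describe the fabric as a 3-D torus tying 64 mini-motherboards together. As a rough illustration only (AMD has not published the exact wiring; the 4×4×4 shape below is an assumption that happens to yield 64 nodes), here is how nodes in such a torus can address their six wraparound neighbors:

```python
# Hypothetical sketch of a 4x4x4 3-D torus: 64 nodes, matching the
# SM15000's 64 mini-motherboards. Each node has six links that wrap
# around at the edges, so no path depends on a single central switch.

def torus_neighbors(x, y, z, dim=4):
    """Return the six wraparound neighbors of node (x, y, z)."""
    return [
        ((x + 1) % dim, y, z), ((x - 1) % dim, y, z),
        (x, (y + 1) % dim, z), (x, (y - 1) % dim, z),
        (x, y, (z + 1) % dim), (x, y, (z - 1) % dim),
    ]

print(torus_neighbors(0, 0, 0))  # corner nodes wrap to the far faces
# 64 nodes * 6 links / 2 = 192 bidirectional links in the whole fabric.
```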


Unified Management – Easily Provision and Manage Servers, Network, and Storage Resources on Demand
The SeaMicro SM15000 implements a rich management system providing unified management of servers, network, and storage. Resources can be rapidly deployed, managed, and repurposed remotely, enabling lights-out data center operations. It offers a broad set of management APIs, including an industry-standard CLI, SNMP, IPMI, syslog, and XEN APIs, allowing customers to seamlessly integrate the SeaMicro SM15000 into existing data center management environments.
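
Since the SM15000 exposes industry-standard interfaces such as IPMI, ordinary tooling applies. As a hedged illustration (the hostname and credentials are placeholders, and this uses the generic ipmitool CLI rather than any SeaMicro-specific API), a management script might poll chassis power state like this:

```python
# Hedged sketch: query a chassis BMC over IPMI-on-LAN with the
# standard ipmitool CLI. Host and credentials below are placeholders.
import subprocess

def chassis_power_status(host, user, password):
    """Return the chassis power state reported by the BMC."""
    out = subprocess.run(
        ["ipmitool", "-I", "lanplus", "-H", host,
         "-U", user, "-P", password, "chassis", "power", "status"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.strip()  # e.g. "Chassis Power is on"

print(chassis_power_status("sm15000-mgmt.example.com", "admin", "secret"))
```
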
Redundancy and Availability – Engineered from the Ground Up to Eliminate Single Points of Failure
The SeaMicro SM15000 is designed for the most demanding environments, helping to ensure availability of compute, network, storage, and system management. At the heart of the system is the Freedom Fabric, interconnecting all resources in the system, with the ability to sustain multiple points of failure and allow live component servicing. All active components in the system can be configured redundantly and are hot-swappable, including server cards, network uplink cards, storage controller cards, system management cards, disks, fan trays, and power supplies. Key resources can also be configured to be protected in the following ways:
Compute – A shared spare server can be configured to act as a standby spare for multiple primary servers. In the event of failure, the primary server’s personality, including MAC address, assigned disks, and boot configuration, can be migrated to the standby spare and brought back online, ensuring fast restoration of services from a remote location (sketched in code below).
Network – The highly available fabric ensures network connectivity is maintained between servers and storage in the event of path failure. For uplink high-availability, the system can be configured with multiple uplink modules and port channels providing redundant active/active interfaces.
Storage – The highly available fabric ensures that servers can access fabric storage in the event of failures. The fabric storage system also provides an efficient, high utilization optional hardware RAID to protect data in case of disk failure.
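
To make the compute failover concrete, here is a purely illustrative model (names and fields are hypothetical; the real SM15000 performs this through its fabric management plane) of migrating a failed primary’s “personality” to a standby spare:

```python
# Illustrative model of SM15000-style compute failover: the primary's
# personality (MAC, disks, boot config) moves to the standby spare.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ServerPersonality:
    mac_address: str
    assigned_disks: List[str] = field(default_factory=list)
    boot_config: str = "pxe"

@dataclass
class ComputeCard:
    slot: int
    personality: Optional[ServerPersonality] = None

def fail_over(primary: ComputeCard, spare: ComputeCard) -> ComputeCard:
    """Move the failed primary's personality onto the standby spare."""
    spare.personality, primary.personality = primary.personality, None
    return spare  # the spare now answers on the primary's MAC and disks

spare = fail_over(
    ComputeCard(1, ServerPersonality("00:22:99:aa:bb:cc", ["disk-07"])),
    ComputeCard(2),
)
print(spare.personality.mac_address)  # service restored under the old MAC
```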


The Industry’s First Data Center in a Box
AMD’s SeaMicro SM15000 family of Fabric Compute Systems provides the equivalent of 32 1RU dual-socket servers, massive bandwidth, top-of-rack Ethernet switching, and high-capacity shared storage, with centralized management in a small, compact 10RU form factor. In addition, it provides integrated server console management for unified management. The SeaMicro SM15000 dramatically reduces the CAPEX and significantly reduces the ongoing OPEX of deploying discrete compute, networking, storage, and management systems.
More information:
An Overview of AMD|SeaMicro Technology [Anil Rao from AMD|SeaMicro, October 2012]
System Overview for the SM15000 Family [Anil Rao from AMD|SeaMicro, October 2012]
What a Difference 0.09 Percent Makes [The Wave Newsletter from AMD, September 2013]
Today’s cloud services have helped companies consolidate infrastructure and drive down costs; however, recent service interruptions point to a big downside of relying on public cloud services. Most are built using commodity, off-the-shelf servers to save costs and are standardized around the same computing and storage SLAs of 99.95 and 99.9 percent. This is significantly lower than the four-nines availability standard in the data networking world. Leading companies are realizing that the performance and reliability of their applications is inextricably linked to their underlying server architecture. In this issue, we discuss the strategic importance of selecting the right hardware. Whether building an enterprise-caliber cloud service or implementing Apache™ Hadoop® to process and analyze big data, hardware matters.
more >
Where Does Software End and Hardware Begin? [The Wave Newsletter from AMD, September 2013]
Lines are blurring between software and hardware with some industry leaders choosing to own both. Software companies are realizing that the performance and value of their software depends on their hardware choices.  more >
Improving Cloud Service Resiliency with AMD’s SeaMicro Freedom Fabric [The Wave Newsletter from AMD, December 2013]
Learn why AMD’s SeaMicro Freedom™ Fabric ASIC is the server industry’s first viable solution to cost-effectively improve the resiliency and availability of cloud-based services.

We realize that having an impressive set of hardware features in the first ARM-based Opteron processors is only half of the story, which is why we are hard at work making sure the software ecosystem will support our cutting-edge hardware. Work on software enablement has been happening throughout the stack – from UEFI to the operating system, and on to application frameworks and developer tools such as compilers and debuggers. This ensures that the software will be ready for ARM-based servers.

AMD developing Linux on ARM at Linaro Connect 2013 [Charbax YouTube channel, March 11, 2013]

[Recorded at Linaro Connect Asia 2013, March 4-8, 2013] Dr. Leendert van Doorn, Corporate Fellow at AMD, talks about what AMD does with Linaro to optimize Linux on ARM. He talks about the expectations that AMD has for results to come from Linaro in terms of achieving a better and more fully featured Linux world on ARM, especially for the ARM Cortex-A57 ARMv8 processor that AMD has announced for the server market.

AMD’s participation in software projects is well documented: it is a gold member of the Linux Foundation, the organization that stewards the development of the Linux kernel, and a group member of Linaro. AMD is a gold sponsor of the Apache Foundation, which oversees projects such as Hadoop, HTTP Server and Samba among many others, and the company’s engineers are contributors to the OpenJDK project. This is just a small selection of the work AMD is taking part in, and these projects in particular highlight how important AMD feels open source software is to the data center, and in particular to micro servers that make use of ARM-based processors.

And running ARM-based processors doesn’t mean giving up on the flexibility of virtual machines, with KVM already ported to the ARMv8 architecture. Another popular hypervisor, Xen, is already available for 32-bit ARM architectures with a 64-bit port planned, ensuring that two popular and highly capable hypervisors will be available.
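
Because KVM on ARMv8 is managed through the same stack as on x86, existing automation carries over. A minimal sketch (assuming the python libvirt bindings and a local qemu/KVM host; identical code runs on an x86 or ARM host):

```python
# Hedged sketch: list domains via libvirt, the same way on x86 and ARM.
import libvirt

conn = libvirt.open("qemu:///system")  # same connection URI on both architectures
for dom in conn.listAllDomains():
    state, _reason = dom.state()
    running = state == libvirt.VIR_DOMAIN_RUNNING
    print(dom.name(), "running" if running else f"state={state}")
conn.close()
```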

The Linux kernel has supported the 64-bit ARMv8 architecture since Linux 3.7, and a number of popular Linux distributions have already signaled their support for the architecture, including Canonical’s Ubuntu and the Red Hat-sponsored Fedora distribution. In fact, there is a downloadable, bootable Ubuntu distribution available in anticipation of ARMv8-based processors.

It’s not just operating systems and applications that are available. Developer tools such as the extremely popular open source GCC compiler and the vital GNU C Library (Glibc) have already been ported to the ARMv8 architecture and are available for download. With GCC and Glibc good to go, a solid foundation for developers to target the ARMv8 architecture is forming.
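
In practice, that foundation means an ordinary x86 workstation can already produce ARMv8 binaries. A hedged sketch (assuming the aarch64-linux-gnu cross toolchain is installed, as packaged by the major distributions):

```python
# Build a trivial AArch64 binary from an x86 host with GCC's ARMv8 port.
import subprocess

with open("hello.c", "w") as f:
    f.write('#include <stdio.h>\nint main(void){ puts("hello ARMv8"); return 0; }\n')

subprocess.run(
    ["aarch64-linux-gnu-gcc", "-O2", "-o", "hello-arm64", "hello.c"],
    check=True,
)
# `file hello-arm64` should report: ELF 64-bit LSB executable, ARM aarch64
```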

All of this work on both hardware and software should shed some light on just how big ARM processors will be in the data center. AMD, an established enterprise semiconductor vendor, is uniquely placed to ship both 64-bit ARMv8 and 64-bit x86 processors that enable “mixed rack” environments. And thanks to the army of software engineers at AMD, as well as others around the world who have committed significant time and effort, the software ecosystem will be there to support these revolutionary processors. 2014 is set to see the biggest disruption in the data center in over a decade, with AMD again at the center of it.

Lawrence Latif is a blogger and technical communications representative at AMD. His postings are his own opinions and may not represent AMD’s positions, strategies or opinions. Links to third party sites, and references to third party trademarks, are provided for convenience and illustrative purposes only. Unless explicitly stated, AMD is not responsible for the contents of such links, and no third party endorsement of AMD or any of its products is implied.

End of AMD’s 64-bit “Seattle” ARM processor brings best of breed hardware and software to the data center [AMD Business blog, Dec 12, 2013]

AMD at ARM Techcon 2013 [Charbax YouTube channel, recorded at the ARM Techcon 2013 (Oct 29-31), published on Dec 25, 2013]

AMD in 2014 will be delivering a 64-bit ARM processor for servers. The ARM architecture and ecosystem enable servers to achieve greater performance per watt and greater performance per dollar. The code name for the product is Seattle. AMD Seattle is expected to reach mass-market cloud servers in the second half of 2014.

From: Advanced Micro Devices’ CEO Discusses Q3 2013 Results – Earnings Call Transcript [Seeking Alpha, Oct 17, 2013]

Rory Read – President and CEO:

The three step turnaround plan we outlined a year ago to restructure, accelerate and ultimately transform AMD is clearly paying off. We completed the restructuring phase of our plan, maintaining cash at optimal levels and beating our $450 million quarterly operating expense goal in the third quarter. We are now in the second phase of our strategy – accelerating our performance by consistently executing our product roadmap while growing our new businesses to drive a return to profitability and positive free cash flow.
We are also laying the foundation for the third phase of our strategy, as we transform AMD to compete across a set of high growth markets. Our progress on this front was evident in the third quarter as we generated more than 30% of our revenue from our semi-custom and embedded businesses. Over the next two years we will continue to transform AMD to expand beyond a slowing, transitioning PC industry, as we create a more diverse company and look to generate approximately 50% of our revenue from these new high growth markets.

We have strategically targeted the semi-custom, ultra-low power client, embedded, dense server and professional graphics markets, where we can offer differentiated products that leverage our APU and graphics IP. Our strategy allows us to continue to invest in the products that will drive growth, while effectively managing operating expenses. …

… Several of our growth businesses passed key milestones in the third quarter. Most significantly, our semi-custom business ramped in the quarter. We successfully shipped millions of units to support Sony and Microsoft as they prepared to launch their next-generation game consoles. Our game console wins are generating a lot of customer interest, as we demonstrate our ability to design and reliably ramp production on two of the most complex SoCs ever built for high-volume consumer devices. We have several strong semi-custom design opportunities moving through the pipeline as customers look to tap into AMD’s IP, design and integration expertise to create differentiated winning solutions. … It’s our intention to win and mix in a whole set of semi-custom offerings as we build out this exciting and important new business.
We made good progress in our embedded business in the third quarter. We expanded our current embedded SOC offering and detailed our plans to be the only company to offer both 64-bit x86 and ARM solutions beginning in 2014. We have developed a strong embedded design pipeline which, we expect, will drive further growth for this business across 2014.
We also continue to make steady progress in another of our growth businesses in the third quarter, as we delivered our fifth consecutive quarter of revenue and share growth in the professional graphics area. We believe we can continue to gain share in this lucrative part of the GPU market, based on our product portfolio, design wins in place and enhanced channel programs.

In the server market, the industry is at the initial stages of a multiyear transition that will fundamentally change the competitive dynamic. Cloud providers are placing growing importance on how they get better performance from their datacenters while also reducing the physical footprint and power consumption of their server solutions.

This will become the defining metric of this industry and will be a key growth driver for the market and the new AMD. AMD is leading this emerging trend in the server market and we are committed to defining a leadership position.

Earlier this quarter, we had a significant public endorsement of our dense server strategy as Verizon announced a high performance public cloud that uses our SeaMicro technology and Opteron processors. We remain on track to introduce new, low-power x86 and 64-bit ARM processors next year, and we believe we will offer the industry-leading ARM-based servers. …

Two years ago 90% to 95% of our business was centered on PCs, and we’ve launched a clear strategy to diversify our portfolio, taking our leadership IP in graphics and CPUs into adjacent segments where there is high growth for three, five, seven years and stickier opportunities.
We see that as an opportunity to drive 50% or more of our business over that time horizon. And if you look at the results in the third quarter, we are already seeing the benefits of that opportunity with over 30% of our revenue now coming from semi-custom and our embedded businesses.
We see PC as an important business, but its time is changing and the go-go era is over. We need to move and attack the new opportunities where the market is going, and that’s what we are doing.

Lisa Su – Senior Vice President and General Manager, Global Business Units:

We are fully top to bottom in 28 nanometer now across all of our products, and we are transitioning to both 20 nanometer and to FinFETs over the next couple of quarters in terms of designs. We will do 20 nanometer first, and then we will go to FinFETs. …

The game console semi-custom product is a long-lifecycle product, over five to seven years. Certainly when we look at cost reduction opportunities, one of the important ones is to move technology nodes. So we will in this timeframe certainly move from 28 nanometer to 20 nanometer, and the reason to do that is both the pure die cost savings as well as all the power savings that our customer benefits from. … so expect the cost to go down on a unit basis as we move to 20.

[Regarding] the SeaMicro business, we are very pleased with the pipeline that we have there. Verizon was the first major datacenter win that we can talk about publicly. We have been working that relationship for the last two years. So it’s actually nice to be able to talk about it. We do see it as a major opportunity that will give us revenue potential in 2014. And we continue to see a strong pipeline of opportunities with SeaMicro as more of the datacenter guys are looking at how to incorporate these dense servers into their new cloud infrastructures. …

… As I said the Verizon engagement has lasted over the past two years. So some of the initial deployments were with the Intel processors but we do have significant deployments with AMD Opteron as well. We do see the percentage of Opteron processors increasing because that’s what we’d like to do. …

We’re very excited about the server space. It’s a very good market. It’s a market where there is a lot of innovation and change. In terms of 64-bit ARM, you will see us sampling that product in the first quarter of 2014. That development is on schedule and we’re excited about that. All of the customer discussions have been very positive, and then we will combine both the [x86 and the] 64-bit ARM chips with our SeaMicro servers so that we will have a full solution as well. You will see SeaMicro plus ARM in 2014.

So I think we view this combination of IP as really beneficial to accelerating the dense server market both on the chip side and then also on the solution side with the customer set.

Amazon’s James Hamilton: Why Innovation Wins [AMD SeaMicro YouTube channel, Nov 12, 2012], a video which was included in the Headline News and Events section of Volume 1 (December 2012) of The Wave Newsletter from AMD SeaMicro with the following intro:

James Hamilton, VP and Distinguished Engineer at Amazon called AMD’s co-announcement with ARM to develop 64-bit ARM technology-based processors “A great day for the server ecosystem.” Learn why and hear what James had to say about what this means for customers and the broader server industry.

James Hamilton of Amazon discusses the four basic tenets of why he thinks data center server innovation needs to go beyond just absolute performance. He believes server innovation delivering improved volume economics, storage performance, price/performance and power/performance will win in the end.

AMD Changes Compute Landscape as the First to Bridge Both x86 and ARM Processors for the Data Center [press release, Oct 29, 2012]

Company to Complement x86-based Offerings with New Processors Based on ARM 64-bit Technology, Starting with Server Market

SUNNYVALE, Calif. —10/29/2012

In a bold strategic move, AMD (NYSE: AMD) announced that it will design 64-bit ARM® technology-based processors in addition to its x86 processors for multiple markets, starting with cloud and data center servers. AMD’s first ARM technology-based processor will be a highly-integrated, 64-bit multicore System-on-a-Chip (SoC) optimized for the dense, energy-efficient servers that now dominate the largest data centers and power the modern computing experience. The first ARM technology-based AMD Opteron™ processor is targeted for production in 2014 and will integrate the AMD SeaMicro Freedom™ supercompute fabric, the industry’s premier high-performance fabric.

AMD’s new design initiative addresses the growing demand to deliver better performance-per-watt for dense cloud computing solutions. Just as AMD introduced the industry’s first mainstream 64-bit x86 server solution with the AMD Opteron processor in 2003, AMD will be the only processor provider bridging the x86 and 64-bit ARM ecosystems to enable new levels of flexibility and drive optimal performance and power-efficiency for a range of enterprise workloads.

“AMD led the data center transition to mainstream 64-bit computing with AMD64, and with our ambidextrous strategy we will again lead the next major industry inflection point by driving the widespread adoption of energy-efficient 64-bit server processors based on both the x86 and ARM architectures,” said Rory Read, president and chief executive officer, AMD. “Through our collaboration with ARM, we are building on AMD’s rich IP portfolio, including our deep 64-bit processor knowledge and industry-leading AMD SeaMicro Freedom supercompute fabric, to offer the most flexible and complete processing solutions for the modern data center.”

“The industry needs to continuously innovate across markets to meet customers’ ever-increasing demands, and ARM and our partners are enabling increasingly energy-efficient computing solutions to address these needs,” said Warren East, chief executive officer, ARM. “By collaborating with ARM, AMD is able to leverage its extraordinary portfolio of IP, including its AMD Freedom supercompute fabric, with ARM 64-bit processor cores to build solutions that deliver on this demand and transform the industry.”

The explosion of the data center has brought with it an opportunity to optimize compute with vastly different solutions. AMD is providing a compute ecosystem filled with choice, offering solutions based on AMD Opteron x86 CPUs, new server-class Accelerated Processing Units (APUs) that leverage Heterogeneous Systems Architecture (HSA), and new 64-bit ARM-based solutions.

This strategic partnership with ARM represents the next phase of AMD’s strategy to drive ambidextrous solutions in emerging mega data center solutions. In March, AMD announced the acquisition of SeaMicro, the leader in high-density, energy-efficient servers. With this announcement, AMD will integrate the AMD SeaMicro Freedom fabric across its leadership AMD Opteron x86- and ARM technology-based processors that will enable hundreds, or even thousands of processor clusters to be linked together to provide the most energy-efficient solutions.

“Over the past decade the computer industry has coalesced around two high-volume processor architectures – x86 for personal computers and servers, and ARM for mobile devices,” observed Nathan Brookwood, research fellow at Insight 64. “Over the next decade, the purveyors of these established architectures will each seek to extend their presence into market segments dominated by the other. The path on which AMD has now embarked will allow it to offer products based on both x86 and ARM architectures, a capability no other semiconductor manufacturer can likely match.”

At an event hosted by AMD in San Francisco, representatives from Amazon, Dell, Facebook and Red Hat participated in a panel discussion on opportunities created by ARM server solutions from AMD. A replay of the event can be found here as of 5 p.m. PDT, Oct. 29.

Supporting Resources

  • AMD bridges the x86 and ARM ecosystems for the data center announcement press resources
  • Follow AMD on Twitter at @AMD
  • Follow the AMD and ARM announcement on Twitter at #AMDARM
  • Like AMD on Facebook.

AMD SeaMicro SM15000 with Freedom Fabric Storage [AMD YouTube channel, Sept 11, 2012]

AMD Extends Leadership in Data Center Innovation – First to Optimize the Micro Server for Big Data [press release, Sept 10, 2012]

AMD’s SeaMicro SM15000™ Server Delivers Hyper-efficient Compute for Big Data and Cloud Supporting Five Petabytes of Storage; Available with AMD Opteron™ and Intel® Xeon® “Ivy Bridge”/”Sandy Bridge” Processors
SUNNYVALE, Calif. —9/10/2012
AMD (NYSE: AMD) today announced the SeaMicro SM15000™ server, another computing innovation from its Data Center Server Solutions (DCSS) group that cements its position as the technology leader in the micro server category. AMD’s SeaMicro SM15000 server revolutionizes computing with the invention of Freedom™ Fabric Storage, which extends its Freedom™ Fabric beyond the SeaMicro chassis to connect directly to massive disk arrays, enabling a single ten rack unit system to support more than five petabytes of low-cost, easy-to-install storage. The SM15000 server combines industry-leading density, power efficiency and bandwidth with a new generation of storage technology, enabling a single rack to contain thousands of cores, and petabytes of storage – ideal for big data applications like Apache™ Hadoop™ and Cassandra™ for public and private cloud deployments.
AMD’s SeaMicro SM15000 system is available today and currently supports the Intel® Xeon® Processor E3-1260L (“Sandy Bridge”). In November, it will support the next generation of AMD Opteron™ processors featuring the “Piledriver” core, as well as the newly announced Intel Xeon Processor E3-1265Lv2 (“Ivy Bridge”). In addition to these latest offerings, the AMD SeaMicro fabric technology continues to deliver a key building block for AMD’s server partners to build extremely energy efficient micro servers for their customers.
“Historically, server architecture has focused on the processor, while storage and networking were afterthoughts. But increasingly, cloud and big data customers have sought a solution in which storage, networking and compute are in balance and are shared. In a legacy server, storage is a captive resource for an individual processor, limiting the ability of disks to be shared across multiple processors, causing massive data replication and necessitating the purchase of expensive storage area networking or network attached storage equipment,” said Andrew Feldman, corporate vice president and general manager of the Data Center Server Solutions group at AMD. “AMD’s SeaMicro SM15000 server enables companies, for the first time, to share massive amounts of storage across hundreds of efficient computing nodes in an exceptionally dense form factor. We believe that this will transform the data center compute and storage landscape.”
AMD’s SeaMicro products transformed the data center with the first micro server to combine compute, storage and fabric-based networking in a single chassis. Micro servers deliver massive efficiencies in power, space and bandwidth, and AMD set the bar with its SeaMicro product that uses one-quarter the power, takes one-sixth the space and delivers 16 times the bandwidth of the best-in-class alternatives. With the SeaMicro SM15000 server, the innovative trajectory broadens the benefits of the micro server to storage, solving the most pressing needs of the data center.
Combining the Freedom™ Supercompute Fabric technology with the pioneering Freedom™ Fabric Storage technology enables data centers to provide more than five petabytes of storage with 64 servers in a single ten rack unit (17.5 inches tall) SM15000 system. Once these disks are interconnected with the fabric, they are seen and shared by all servers in the system. This approach provides the benefits typically delivered by expensive and complex solutions such as network-attached storage and storage area networking with the simplicity and low cost of direct attached storage.
“AMD’s SeaMicro technology is leading innovation in micro servers and data center compute,” said Zeus Kerravala, founder and principal analyst of ZK Research. “The team invented the micro server category, was the first to bring small-core servers and large-core servers to market in the same system, the first to market with a second-generation fabric, and the first to build a fabric that supports multiple processors and instruction sets. It is not surprising that they have extended the technology to storage. The bringing together of compute and petabytes of storage demonstrates the flexibility of the Freedom Fabric. They are blurring the boundaries of compute, storage and networking, and they have once again challenged the industry with bold innovation.”
Leaders Across the Big Data Community Agree
Dr. Amr Awadallah, CTO and Founder at Cloudera, the category leader that is setting the standard for Hadoop in the enterprise, observes: “The big data community is hungry for innovations that simplify the infrastructure for big data analysis while reducing hardware costs. As we hear from our vast big data partner ecosystem and from customers using CDH and Cloudera Enterprise, companies that are seeking to gain insights across all their data want their hardware vendors to provide low cost, high density, standards-based compute that connects to massive arrays of low cost storage. AMD’s SeaMicro delivers on this promise.”
Eric Baldeschwieler, co-founder and CTO of Hortonworks and a pioneer in Hadoop technology, notes: “Petabytes of low cost storage, hyper-dense energy-efficient compute, connected with a supercompute-style fabric is an architecture particularly well suited for big data analytics and Hortonworks Data Platform. At Hortonworks, we seek to make Apache Hadoop easier to use, consume and deploy, which is in line with AMD’s goal to revolutionize and commoditize the storage and processing of big data. We are pleased to see leaders in the hardware community inventing technology that extends the reach of big data analysis.”
Matt Pfeil, co-founder and VP of customer solutions at DataStax, the leader in real-time mission-critical big data platforms, agrees: “At DataStax, we believe that extraordinary databases, such as Cassandra, running mission-critical applications, can be used by nearly every enterprise. To see AMD’s DCSS group bringing together efficient compute and petabytes of storage over a unified fabric in a single low-cost, energy-efficient solution is enormously exciting. The combination of the SM15000 server and best-in-class database, Cassandra, offer a powerful threat to the incumbent makers of both databases and the expensive hardware on which they reside.”
AMD’s SeaMicro SM15000™ Technology
AMD’s SeaMicro SM15000 server is built around the industry’s first and only second-generation fabric, the Freedom Fabric. It is the only fabric technology designed and optimized to work with Central Processing Units (CPUs) that have both large and small cores, as well as x86 and non-x86 CPUs. The Freedom Fabric contains innovative technology including:
  • SeaMicro IOVT (Input/Output Virtualization Technology), which eliminates all but three components from the SeaMicro motherboard – CPU, DRAM, and the ASIC itself – thereby shrinking the motherboard, while reducing power, cost and space;
  • SeaMicro TIO™ (Turn It Off) technology, which enables further power optimization on the mini motherboard by turning off unneeded CPU and chipset functions. Together, SeaMicro IOVT and TIO technology produce the smallest and most power efficient motherboards available;
  • Freedom Supercompute Fabric creates a 1.28 terabits-per-second fabric that ties together 64 of the power-optimized mini-motherboards at low latency and low power with massive bandwidth;
  • SeaMicro Freedom Fabric Storage, which allows the Freedom Supercompute Fabric to extend out of the chassis and across the data center, linking not just components inside the chassis, but those outside as well.
AMD’s SeaMicro SM15000 Server Details
AMD’s SeaMicro SM15000 server will be available with 64 compute cards, each holding a new custom-designed single-socket octal core 2.0/2.3/2.8 GHz AMD Opteron processor based on the “Piledriver” core, for a total of 512 heavy-weight cores per system or 2,048 cores per rack. Each AMD Opteron processor can support 64 gigabytes of DRAM, enabling a single system to handle more than four terabytes of DRAM and over 16 terabytes of DRAM per rack. AMD’s SeaMicro SM15000 system will also be available with a quad core 2.5 GHz Intel Xeon Processor E3-1265Lv2 (“Ivy Bridge”) for 256 2.5 GHz cores in a ten rack unit system or 1,024 cores in a standard rack. Each processor supports up to 32 gigabytes of memory so a single SeaMicro SM15000 system can deliver up to two terabytes of DRAM and up to eight terabytes of DRAM per rack.
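
A quick back-of-the-envelope check of those figures (the four-systems-per-rack assumption follows from the 10 RU chassis height in a standard rack):

```python
# Verify the quoted Opteron configuration numbers for the SM15000.
cards, cores_per_card, dram_gb_per_card = 64, 8, 64
systems_per_rack = 4  # four 10 RU systems fit a standard rack

cores_per_system = cards * cores_per_card              # 512
dram_tb_per_system = cards * dram_gb_per_card / 1024   # 4 TB
print(cores_per_system * systems_per_rack)             # 2,048 cores per rack
print(dram_tb_per_system * systems_per_rack)           # 16 TB DRAM per rack
```
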
AMD’s SeaMicro SM15000 server also contains 16 fabric extender slots, each of which can connect to any of three Freedom Fabric Storage arrays with different capacities (a quick capacity check follows this list):
  • FS 5084-L is an ultra-dense capacity-optimized storage system. It supports up to 84 SAS/SATA 3.5 inch or 2.5 inch drives in 5 rack units for up to 336 terabytes of capacity per-array and over five petabytes per SeaMicro SM15000 system;
  • FS 2012-L is a capacity-optimized storage system. It supports up to 12 3.5 inch or 2.5 inch drives in 2 rack units for up to 48 terabytes of capacity per-array or up to 768 terabytes of capacity per SeaMicro SM15000 system;
  • FS 2024-S is a performance-optimized storage system. It supports up to 24 2.5 inch drives in 2 rack units for up to 24 terabytes of capacity per-array or up to 384 terabytes of capacity per SM15000 system.
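
The petabyte claim checks out arithmetically under one assumption (4 TB drives, which the quoted 336 TB per 84-drive array implies):

```python
# Rough capacity check for a fully populated SM15000 with FS 5084-L arrays.
slots, drives_per_array, tb_per_drive = 16, 84, 4

tb_per_array = drives_per_array * tb_per_drive  # 336 TB, as quoted
pb_per_system = slots * tb_per_array / 1000     # ~5.4 PB: "over five petabytes"
print(tb_per_array, pb_per_system)
```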

In summary, AMD’s SeaMicro SM15000 system:

  • Stands ten rack units or 17.5 inches tall;
  • Contains 64 slots for compute cards for AMD Opteron or Intel Xeon processors;
  • Provides up to ten gigabits per-second of bandwidth to each CPU;
  • Connects up to 1,408 solid state or hard drives with Freedom Fabric Storage;
  • Delivers up to 16 10 GbE uplinks or up to 64 1 GbE uplinks;
  • Runs standard off-the-shelf operating systems, including Windows® and Linux (such as Red Hat), plus VMware and Citrix XenServer hypervisors.
Availability
AMD’s SeaMicro SM15000 server with Intel’s Xeon Processor E3-1260L (“Sandy Bridge”) is now generally available in the U.S. and in select international regions. Configurations based on AMD Opteron processors and the Intel Xeon Processor E3-1265Lv2 with the “Ivy Bridge” microarchitecture will be available in November 2012. More information on AMD’s revolutionary SeaMicro family of servers can be found at www.seamicro.com/products.


1. Verizon

Verizon Cloud on AMD’s SeaMicro SM15000 [AMD YouTube channel, Oct 7, 2013]

Find out more about SeaMicro and AMD at http://bit.ly/AMD_SeaMicro. Verizon and AMD partner to create an enterprise-class cloud service that was not possible using off-the-shelf servers. Verizon Cloud is based on the SeaMicro SM15000, the industry’s first and only programmable server hardware. The new services redefine the benchmarks for public cloud computing and storage performance and reliability.

Verizon Cloud Compute and Verizon Cloud Storage [The Wave Newsletter from AMD, December 2013]

With enterprise adoption of public cloud services at 10 percent, Verizon identified a need for a cloud service that was secure, reliable and highly flexible with enterprise-grade performance guarantees. Large, global enterprises want to take advantage of the agility, flexibility and compelling economics of the public cloud, but the performance and reliability are not up to par for their needs. To fulfill this need, Verizon spent over two years identifying and developing software using AMD’s SeaMicro SM15000, the industry’s first and only programmable server hardware. The new services redefine the benchmarks for public cloud computing and storage performance and security.

Designed specifically for enterprise customers, the new services allow companies to use the same policies and procedures across the enterprise network and the public cloud. The close collaboration has resulted in cloud computing services with unheralded performance level guarantees that are offered with competitive pricing. The new cloud services are backed by the power of Verizon, including global data centers, global IP network and enterprise-grade managed security services. The performance and security innovations are expected to accelerate public cloud adoption by the enterprise for their mission critical applications. more >

Verizon Selects AMD’s SeaMicro SM15000 for Enterprise Class Services: Verizon Cloud Compute and Verizon Cloud Storage [AMD-Seamicro press release, Oct 7, 2013]

Verizon and AMD create technology that transforms the public cloud, delivering the industry’s most advanced cloud capabilities

SUNNYVALE, Calif. —10/7/2013

AMD (NYSE: AMD) today announced that Verizon is deploying SeaMicro SM15000™ servers for its new global cloud platform and cloud-based object storage service, whose public beta was recently announced. AMD’s SeaMicro SM15000 server links hundreds of cores together in a single system using a fraction of the power and space of traditional servers. To enable Verizon’s next generation solution, technology has been taken one step further: Verizon and AMD co-developed additional hardware and software technology on the SM15000 server that provides unprecedented performance and best-in-class reliability backed by enterprise-level service level agreements (SLAs). The combination of these technologies co-developed by AMD and Verizon ushers in a new era of enterprise-class cloud services by enabling a higher level of control over security and performance SLAs. With this technology underpinning the new Verizon Cloud Compute and Verizon Cloud Storage, enterprise customers can for the first time confidently deploy mission-critical systems in the public cloud.

“We reinvented the public cloud from the ground up to specifically address the needs of our enterprise clients,” said John Considine, chief technology officer at Verizon Terremark. “We wanted to give them back control of their infrastructure – providing the speed and flexibility of a generic public cloud with the performance and security they expect from an enterprise-grade cloud. Our collaboration with AMD enabled us to develop revolutionary technology, and it represents the backbone of our future plans.”

As part of their joint development, AMD and Verizon co-developed hardware and software to reserve, allocate and guarantee application SLAs. AMD’s SeaMicro Freedom™ fabric-based SM15000 server delivers the industry’s first and only programmable server hardware, which includes a high-bandwidth, low-latency programmable interconnect fabric and a programmable data and control plane for both network and storage traffic. Leveraging AMD’s programmable server hardware, Verizon developed unique software to guarantee and deliver reliability, unheralded performance guarantees and SLAs for enterprise cloud computing services.

“Verizon has a clear vision for the future of the public cloud services—services that are more flexible, more reliable and guaranteed,” said Andrew Feldman, corporate vice president and general manager, Server, AMD. “The technology we developed turns the cloud paradigm upside down by creating a service that an enterprise can configure and control as if the equipment were in its own data center. With this innovation in cloud services, I expect enterprises to migrate their core IT services and mission critical applications to Verizon’s cloud services.”

“The rapid, reliable and scalable delivery of cloud compute and storage services is the key to competing successfully in any cloud market from infrastructure, to platform, to application; and enterprises are constantly asking for more as they alter their business models to thrive in a mobile and analytic world,” said Richard Villars, vice president, Datacenter & Cloud at IDC. “Next generation integrated IT solutions like AMD’s SeaMicro SM15000 provide a flexible yet high-performance platform upon which companies like Verizon can use to build the next generation of cloud service offerings.”

Innovative Verizon Cloud Capabilities on AMD’s SeaMicro SM15000 Server: Industry Firsts

Verizon leveraged the SeaMicro SM15000 server’s ability to disaggregate server resources to create a cloud optimized for computing and storage services. Verizon and AMD’s SeaMicro engineers worked for over two years to create a revolutionary public cloud platform with enterprise class capabilities.

These new capabilities include (an illustrative provisioning model follows this list):

  • Virtual machine server provisioning in seconds, a fraction of the time of a legacy public cloud;
  • Fine-grained server configuration options that match real-life requirements, not just small, medium and large sizing, including processor speed (500 MHz to 2,000 MHz) and DRAM (0.5 GB increments) options;
  • Shared disks across multiple server instances versus requiring each virtual machine to have its own dedicated drive;
  • Defined storage quality of service by specifying performance up to 5,000 IOPS to meet the demands of the application being deployed, compared to best-effort performance;
  • Consistent network security policies and procedures across the enterprise network and the public cloud;
  • Strict traffic isolation, data encryption, and data inspection with full featured firewalls that achieve Department of Defense and PCI compliance levels;
  • Guaranteed network performance for every virtual machine with reserved network performance up to 500 Mbps compared to no guarantees in many other public clouds.
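
As a purely illustrative model of those options (this is not Verizon’s API; the ranges simply encode the bullets above):

```python
# Hypothetical request model enforcing the fine-grained limits listed above.
from dataclasses import dataclass

@dataclass
class VmRequest:
    cpu_mhz: int       # 500 to 2,000 MHz processor speed options
    dram_gb: float     # DRAM in 0.5 GB increments
    storage_iops: int  # storage QoS up to 5,000 IOPS
    net_mbps: int      # reserved network bandwidth up to 500 Mbps

    def validate(self):
        assert 500 <= self.cpu_mhz <= 2000, "CPU outside offered range"
        assert self.dram_gb > 0 and self.dram_gb % 0.5 == 0, "DRAM not a 0.5 GB step"
        assert 0 < self.storage_iops <= 5000, "IOPS guarantee out of range"
        assert 0 < self.net_mbps <= 500, "bandwidth reservation out of range"

VmRequest(cpu_mhz=1500, dram_gb=2.5, storage_iops=3000, net_mbps=250).validate()
```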

The public beta for Verizon Cloud will launch in the fourth quarter. Companies interested in becoming a beta customer can sign up through the Verizon Enterprise Solutions website: www.verizonenterprise.com/verizoncloud.

AMD’s SeaMicro SM15000 Server

AMD’s SeaMicro SM15000 system is the highest-density, most energy-efficient server in the market. In 10 rack units, it links 512 compute cores, 160 gigabits of I/O networking and more than five petabytes of storage with a 1.28 terabits-per-second high-performance supercompute fabric, called Freedom™ Fabric. The SM15000 server eliminates top-of-rack switches, terminal servers, hundreds of cables and thousands of unnecessary components for a more efficient and simple operational environment.

AMD’s SeaMicro server product family currently supports the next generation AMD Opteron™ (“Piledriver”) processor, Intel® Xeon® E3-1260L (“Sandy Bridge”) and E3-1265Lv2 (“Ivy Bridge”) and Intel® Atom™ N570 processors. The SeaMicro SM15000 server also supports the Freedom Fabric Storage products, enabling a single system to connect with more than five petabytes of storage capacity in two racks. This approach delivers the benefits of expensive and complex solutions such as network attached storage (NAS) and storage area networking (SAN) with the simplicity and low cost of direct attached storage.

For more information on the Verizon Cloud implementation, please visit: www.seamicro.com/vzcloud.

About AMD

AMD (NYSE: AMD) designs and integrates technology that powers millions of intelligent devices, including personal computers, tablets, game consoles and cloud servers that define the new era of surround computing. AMD solutions enable people everywhere to realize the full potential of their favorite devices and applications to push the boundaries of what is possible. For more information, visit www.amd.com.

4:01 PM – 10 Dec 13:

AMD SeaMicro @SeaMicroInc

correction…Verizon is not using OpenStack, but they are using our hardware. @cloud_attitude


2. OpenStack

OpenStack 101 – What Is OpenStack? [Rackspace YouTube channel, Jan 14, 2013]

OpenStack is an open source cloud operating system and community founded by Rackspace and NASA in July 2010. Here is a brief look at what OpenStack is, how it works and what people are doing with it. See: http://www.openstack.org/

OpenStack: The Open Source Cloud Operating System

Why OpenStack? [The Wave Newsletter from AMD, December 2013]

OpenStack continues to gain momentum in the market as more and larger established technology and service companies move from evaluation to deployment. But why has OpenStack become so popular? In this issue, we discuss the business drivers behind the widespread adoption and why AMD’s SeaMicro SM15000 server is the industry’s best choice for a successful OpenStack deployment. If you’re considering OpenStack, learn about the options and hear winning strategies from experts featured in our most recent OpenStack webcasts. And in case you missed it, read about AMD’s exciting collaboration with Verizon enabling them to offer enterprise-caliber cloud services. more >

OpenStack the SeaMicro SM15000 – From Zero to 2,048 Cores in Less than One Hour [The Wave Newsletter from AMD, March 2013]

The SeaMicro SM15000 is optimized for OpenStack, a solution that is being adopted by both public and private cloud operators. Red 5 Studios recently deployed OpenStack on a 48-foot bus to power their new massively multiplayer online game Firefall. The SM15000 uniquely excels at object storage, providing more than 5 petabytes of direct attached storage in two data center racks. more >

State of the Stack [OpenStack Foundation YouTube channel, recorded on Nov 8 under official title “Stack Debate: Understanding OpenStack’s Future”, published on Nov 9, 2013]

OpenStack in three short years has become one of the most successful, most talked-about and most community-driven open source projects in history. In this joint presentation Randy Bias (Cloudscaling) and Scott Sanchez (Rackspace) will examine the progress from Grizzly to Havana and delve into new areas like refstack, TripleO, baremetal/Ironic, the move from “projects” to “programs”, and AWS compatibility. They will show updated statistics on project momentum and a deep dive on OpenStack Orchestration (Heat), which has the opportunity to change the game for OpenStack in the greater private cloud game. The duo will also highlight the challenges ahead of the project and what should be done to avoid failure. Joint presenters: Scott Sanchez, Randy Bias

The biggest issue with the OpenStack project, which “started without a benevolent dictator and/or architect”, was mentioned there (watch from [6:40]) as: “The worst architectural decision you can make is to stay with default networking for a production system, because the default networking model in OpenStack is broken for use at scale.”

Then Randy Bias summarized that particular issue later in Neutron in Production: Work in Progress or Ready for Prime Time? [Cloudscaling blog, Dec 6, 2013] as:

Ultimately, it’s unclear whether all networking functions ever will be modeled behind the Neutron API with a bunch of plug-ins. That’s part of the ongoing dialogue we’re having in the community about what makes the most sense for the project’s future.

The bottom-line consensus is that Neutron is a work in progress. Vanilla Neutron is not ready for production, so you should get a vendor if you need to move into production soon.

AMD’s SeaMicro SM15000 Is the First Server to Provide Bare Metal Provisioning to Scale Massive OpenStack Compute Deployments [press release, Nov 5, 2013]

Provides Foundation to Leverage OpenStack Compute for Large Networks of Virtualized and Bare Metal Servers

SUNNYVALE, Calif. and Hong Kong, OpenStack Summit —11/5/2013

AMD (NYSE: AMD) today announced that the SeaMicro SM15000™ server supports bare metal features in OpenStack® Compute. AMD’s SeaMicro SM15000 server is ideally suited for massive OpenStack deployments by integrating compute, storage and networking into a 10 rack unit system. The system is built around the Freedom™ fabric, the industry’s premier supercomputing fabric for scale out data center applications. The Freedom fabric disaggregates compute, storage and network I/O to provide the most flexible, scalable and resilient data center infrastructure in the industry. This allows customers to match the compute performance, storage capacity and networking I/O to their application needs. The result is an adaptive data center where any server can be mapped to any hard disk/SSD or network I/O to expand capacity or recover from a component failure.

“OpenStack Compute’s bare metal capabilities provide the scalability and flexibility to build and manage large-scale public and private clouds with virtualized and dedicated servers,” said Dhiraj Mallick, corporate vice president and general manager, Data Center Server Solutions, at AMD. “The SeaMicro SM15000 server’s bare metal provisioning capabilities should simplify enterprise adoption of OpenStack and accelerate mass deployments since not all work loads are optimized for virtualized environments.”

Bare metal computing provides more predictable performance than a shared server environment using virtual servers. In a bare metal environment there are no delays caused by different virtual machines contending for shared resources, since the entire server’s resources are dedicated to a single user instance. In addition, in a bare metal environment the performance penalty imposed by the hypervisor is eliminated, allowing the application software to make full use of the processor’s capabilities.

In addition to leading in bare metal provisioning, AMD’s SeaMicro SM15000 server provides the ability to boot and install a base server image from a central server for massive OpenStack deployments. A cloud image containing KVM, the OpenStack Compute image and other applications can be configured by the central server. The coordination and scheduling of this workflow can be managed by Heat, the orchestration application that manages the entire lifecycle of an OpenStack cloud for bare metal and virtual machines.
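
From an operator’s point of view the appeal is that bare metal rides the same Nova API as virtual machines. A hedged sketch using python-novaclient of that era (endpoint, image and flavor names are placeholders, and the bare-metal flavor is hypothetical):

```python
# Boot a node through Nova; with the bare-metal driver the same call
# provisions a whole compute card instead of a VM. Names are placeholders.
from novaclient import client

nova = client.Client("2", "admin", "secret", "demo",
                     auth_url="http://keystone.example.com:5000/v2.0")

image = nova.images.find(name="cloud-image-with-kvm")    # central base image
flavor = nova.flavors.find(name="baremetal.octal-core")  # hypothetical flavor

server = nova.servers.create(name="stack-node-01", image=image, flavor=flavor)
print(server.id, server.status)  # scheduling can then be orchestrated by Heat
```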

Supporting Resources

Scalable Fabric-based Object Storage with the SM15000 [The Wave Newsletter from AMD, March 2013]

The SeaMicro SM15000 is changing the economics of deploying object storage, delivering the storage of unprecedented amounts of data while using 1/2 the power and 1/3 the space of traditional servers. more >

SwiftStack with OpenStack Swift Overview [SwiftStack YouTube channel, Oct 4, 2012]

SwiftStack manages and operates OpenStack Swift. SwiftStack is built from the ground up for web, mobile and as-a-service applications. Designed to store and serve content for many concurrent users, SwiftStack contains everything you need to set up, integrate and operate a private storage cloud on hardware that you control.
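
For a feel of the Swift object API that SwiftStack manages, here is a minimal sketch with python-swiftclient (the auth URL and credentials are placeholders for whatever the cluster exposes):

```python
# Upload an object to a Swift cluster and read back its ETag.
import swiftclient

conn = swiftclient.Connection(
    authurl="http://swift.example.com:8080/auth/v1.0",  # placeholder endpoint
    user="account:user",
    key="secret",
)

conn.put_container("media")
with open("clip-001.mp4", "rb") as f:
    conn.put_object("media", "clip-001.mp4", contents=f,
                    content_type="video/mp4")

print(conn.head_object("media", "clip-001.mp4")["etag"])
```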

AMD’s SeaMicro SM15000 Server Achieves Certification for Rackspace Private Cloud, Validated for OpenStack [press release, Jan 30, 2013]

Providing unprecedented computing efficiency for “Nova in a Box” and object storage capacity for “Swift in a Rack”


3. Red Hat

OpenStack + SM15000 Server = 1,000 Virtual Machines for Red Hat [The Wave Newsletter from AMD, June 2013]

Red Hat deploys one SM15000 server to quickly and cost-effectively build out a high-capacity server cluster to meet the growing demands for OpenShift demonstrations and to accelerate sales. Red Hat OpenShift, which runs on Red Hat OpenStack, is Red Hat’s cloud computing Platform-as-a-Service (PaaS) offering. The service provides built-in support for many popular open source languages and runtimes, including Node.js, Ruby, Python, PHP, Perl, and Java. OpenShift can also be expanded with customizable modules that allow developers to add other languages.
more >

Red Hat Enterprise Linux OpenStack Platform: Community-invented, Red Hat-hardened [RedHatCloud YouTube channel, Aug 5, 2013]

Learn how Red Hat Enterprise Linux OpenStack Platform allows you to deploy a supported version of OpenStack on an enterprise-hardened Linux platform to build a massively scalable public-cloud-like platform for managing and deploying cloud-enabled workloads. With Red Hat Enterprise Linux OpenStack Platform, you can focus resources on building applications that add value to your organization, while Red Hat provides support for OpenStack and the Linux platform it runs on.

AMD’s SeaMicro SM15000 Server Achieves Certification for Red Hat OpenStack [press release, June 12, 2013]

BOSTON – Red Hat Summit —6/12/2013

AMD (NYSE: AMD) today announced that its SeaMicro SM15000™ server is certified for Red Hat® OpenStack, and that the company has joined the Red Hat OpenStack Cloud Infrastructure Partner Network. The certification ensures that the SeaMicro SM15000 server provides a rigorously tested platform for organizations building private or public cloud Infrastructure as a Service (IaaS), based on the security, stability and support available with Red Hat OpenStack. AMD’s SeaMicro solutions for OpenStack include “Nova in a Box” and “Swift in a Rack” reference architectures that have been validated to ensure consistent performance, supportability and compatibility.

The SeaMicro SM15000 server integrates compute, storage and networking into a compact, 10 RU (17.5 inches) form factor with a 1.28 Tbps supercompute fabric. The technology enables users to install and configure thousands of computing cores more efficiently than any other server. Complex, time-consuming tasks are completed within minutes due to the integration of compute, storage and networking. Operational fire drills, such as setting up servers on short notice, manually configuring hundreds of machines and re-provisioning the network to optimize traffic, are all handled through a single, easy-to-use management interface.

“AMD has shown leadership in providing a uniquely differentiated server for OpenStack deployments, and we are excited to have them as a seminal member of the Red Hat OpenStack Cloud Infrastructure Partner Network,” said Mike Werner, senior director, ISV and Developer Ecosystems at Red Hat. “The SeaMicro server is an example of incredible innovation, and I am pleased that our customers will have the SM15000 system as an option for energy-efficient, dense computing as part of the Red Hat Certified Solution Marketplace.”

AMD’s SeaMicro SM15000 system is the highest-density, most energy-efficient server in the market. In 10 rack units, it links 512 compute cores, 160 gigabits of I/O networking and more than five petabytes of storage with a 1.28 Terabits-per-second high-performance supercompute fabric, called Freedom™ Fabric. The SM15000 server eliminates top-of-rack switches, terminal servers, hundreds of cables and thousands of unnecessary components for a more efficient and simple operational environment.

“We are excited to be a part of the Red Hat OpenStack Cloud Infrastructure Partner Network because the company has a strong track record of bridging the communities that create open source software and the enterprises that use it,” said Dhiraj Mallick, corporate vice president and general manager, Data Center Server Solutions, AMD. “As cloud deployments accelerate, AMD’s certified SeaMicro solutions ensure enterprises are able to realize the benefits of increased efficiency and simplified operations, providing them with a competitive edge and the lowest total cost of ownership.”

AMD’s SeaMicro server product family currently supports the next-generation AMD Opteron™ (“Piledriver”) processor, Intel® Xeon® E3-1260L (“Sandy Bridge”) and E3-1265Lv2 (“Ivy Bridge”) and Intel® Atom™ N570 processors. The SeaMicro SM15000 server also supports the Freedom Fabric Storage products, enabling a single system to connect with more than five petabytes of storage capacity in two racks. This approach delivers the benefits of expensive and complex solutions such as network attached storage (NAS) and storage area networking (SAN) with the simplicity and low cost of direct attached storage.


4. Ubuntu

Ubuntu Server certified hardware SeaMicro [one of Ubuntu certification pages]

Canonical works closely with SeaMicro to certify Ubuntu on a range of their hardware.

The following are all Certified. More and more devices are being added with each release, so don’t forget to check this page regularly.

Ubuntu on SeaMicro SM15000-OP | Ubuntu [Sept 1, 2013]

Ubuntu on SeaMicro SM15000-XN | Ubuntu [Oct 1, 2013]

Ubuntu on SeaMicro SM15000-XH | Ubuntu [Dec 18, 2013]

Ubuntu OIL announced for broadest set of cloud infrastructure options [Ubuntu Insights, Nov 5, 2013]

Today at the OpenStack Design Summit in Hong Kong, we announced the Ubuntu OpenStack Interoperability Lab (Ubuntu OIL). The programme will test and validate the interoperability of hardware and software in a purpose-built lab, giving Ubuntu OpenStack users the reassurance and flexibility of choice.
We’re launching the programme with many significant partners on board, such as Dell, EMC, Emulex, Fusion-io, HP, IBM, Inktank/Ceph, Intel, LSI, Open Compute, SeaMicro and VMware.
The OpenStack ecosystem has grown rapidly giving businesses access to a huge selection of components for their cloud environments. Most will expect that, whatever choices they make or however complex their requirements, the environment should ‘just work’, where any and all components are interoperable. That’s why we created the Ubuntu OpenStack Interoperability Lab.
Ubuntu OIL is designed to offer integration and interoperability testing as well as validation to customers, ISVs and hardware manufacturers. Ecosystem partners can test their technologies’ interoperability with Ubuntu OpenStack and a range of software and hardware, ensuring they work together seamlessly as well as with existing processes and systems. It means that manufacturers can get to market faster and with less cost, while users can minimise integration efforts required to connect Ubuntu OpenStack with their infrastructure.
Ubuntu is about giving customers choice. Over the last few releases, we’ve introduced new hypervisors, software-defined networking (SDN) stacks, and capabilities for workloads running on different types of public cloud options. Ubuntu OIL will test all of these options as well as other technologies to ensure Ubuntu OpenStack offers the broadest set of validated and supported technology options compatible with user deployments. Ubuntu OIL will test and validate for all supported and future releases of Ubuntu, Ubuntu LTS and OpenStack.
Involvement in the lab is through our Canonical Partner Programme. New partners can sign up here.
Learn more about Ubuntu OIL


5. Big Data, Hadoop

Storing Big Data – The Rise of the Storage Cloud [The Wave Newsletter from AMD, December 2012]

Data is everywhere and growing at unprecedented rates. Each year, there are over one hundred million new Internet users generating thousands of terabytes of data every day. Where will all this data be stored? more >

AMD’s SeaMicro SM15000 Achieves Certification for CDH4, Cloudera’s Distribution Including Apache Hadoop Version 4 [press release, March 20, 2013]

“Hadoop-in-a-Box” package accelerates deployments by providing 512 cores and over five petabytes of storage in two racks

The Hidden Truth: Hadoop is a Hardware Investment [The Wave Newsletter from AMD, September 2013]

Apache Hadoop is a leading software application for analyzing big data, but its performance and reliability are tied to a company’s underlying server architecture. Learn how AMD’s SeaMicro SM15000™ server compares with other minimum scale deployments. more >