
HP splits into two, HP Enterprise and HP Inc. (devices and printers), for the growth phase of its turnaround


HP share price — Sept 2011 – Oct 2014. Meg Whitman was named CEO on September 22, 2011.

As well as renewing focus on HP’s Research & Development division, Whitman’s major decision during her first year as CEO was to retain the PC business and recommit the firm to it, after her predecessor had announced he was considering discarding it (see the August 2011 post on this blog). After that “stabilization and foundation” year, on October 3, 2012 she announced an ambitious 5-year turnaround strategy that promised new products by FY14 and, finally, growth by 2015. The plan promised changes in HP’s four primary businesses: Enterprise Services got an entirely different operating model; the Enterprise Group planned to make further use of the cloud; the Printing and Personal Systems Group simplified its operating model by reducing its product line; and the Software Group implemented a new cloud-based consumption model. With the split now announced, Meg Whitman writes that “Hewlett-Packard Enterprise … will define the next generation of infrastructure, software, and services for the New Style of IT” while “HP Inc. will be extremely well-positioned to leverage its impressive portfolio and strong innovation pipeline across areas such as multi-function printing, Ink in the office, notebooks, mobile workstations, tablets and phablets, as well as 3-D printing and new computing experiences”. By separating into two companies they will “be able to accelerate the progress” they’ve made to date, “unlock additional value”, and “more aggressively go after the opportunities in front” of them.

The company is also seeing a total of 55,000 job cuts this year, with 45,000-50,000 cuts already done as of Q2. CEO Meg Whitman (age 58) is enjoying huge bonus payments via those job cuts; after the split she will lead Hewlett-Packard Enterprise as CEO and will also become the non-executive Chairman of HP Inc.’s Board of Directors.

Detailed information on this blog about the new direction set for the Personal Systems Group part of HP Inc. (very few posts):

Latest news from HP Personal Systems Group:
– Revamped Z desktop and ZBook mobile workstations [Sept 10, 2014]
– HP Stream series of skinny Windows 8.1 laptops and tablets targeted for the holidays [Sept 29, 2014]
– HP 10 Plus 10.1-Inch 16 GB Android Full HD IPS Tablet with Allwinner A31 quad-core 1.0 GHz on Amazon and elsewhere for $280 [July 13, 2014]
– HP Slate 21-k100 21.5″ All-in-One Full HD IPS Android PC with NVIDIA Tegra 4 for $400 [Sept 28, 2014]; a 17″ version, the HP Slate 17, will be hitting stores by New Year

Note that this large-screen All-in-One Full HD IPS strategy, aimed at desktop replacement as well as at being a great home device, with a completely flat tabletop mode for multi-orientational applications, was started with the Windows 8-based HP ENVY Rove [June 23, 2013], which uses an Intel® Core™ i3-4010U and is now selling for $980.

Detailed information on this blog about the new direction set up for HP Enterprise (quite extensive and deep):


* Note here that, as of now, Microsoft Windows Server is not available on the emerging 64-bit ARM platforms (not even the upcoming Windows Server 10 for “the Future of the datacenter from Microsoft“). See: Intel: ARM Server Competition ‘Imminent,’ But Not Yet There, Says MKM [Barrons.com, Oct 2, 2014], in which the current state is characterized as:

ARM highlighted progress in servers by citing two data center end-customers (sharing the stage with Sandia Labs but not Paypal) that use HP blades for their Moonshot server chassis based on 64-bit Applied Micro (AMCC, NR, $6.90) and 32-bit Texas Instruments silicon.

HP Moonshot program and the 1st 64-bit ARM server (ARM TechCon 2014, Oct 1-3)

HP’s ARM-powered ProLiant m400 (Moonshot) is ready for DDR4 [ARM Connected Community, Oct 8, 2014]

AppliedMicro and Hewlett-Packard recently introduced the first commercially available 64-bit ARMv8 server. Dubbed the ProLiant m400, the cartridge is specifically designed to fit HP’s Moonshot server framework. The new server – targeted at web caching workloads – is based on AppliedMicro’s X-Gene System-on-a-Chip (SoC) and runs Canonical’s versatile Ubuntu operating system.

… One of the key advantages of the X-Gene based m400? The doubling of addressable memory to 64GB per cartridge. … “You put 10 of these enclosures in a rack and you have 3,600 cores and 28 TB of memory to hook together to run a distributed application,” … “The m400 node burns about 55 watts with all of its components on the board, so a rack is in the neighborhood of 25 kilowatts across 450 nodes.” …
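My insert: the quoted rack-level figures are easy to verify with back-of-the-envelope arithmetic. The sketch below assumes 45 m400 cartridges per Moonshot chassis, 8 X-Gene cores and 64 GB of memory per cartridge, and 55 W per node, as stated above:

```python
# Back-of-the-envelope check of the quoted Moonshot rack figures.
cartridges_per_chassis = 45   # m400 cartridges per Moonshot chassis
chassis_per_rack = 10         # "10 of these enclosures in a rack"
cores_per_cartridge = 8       # 8-core X-Gene SoC per cartridge
mem_gb_per_cartridge = 64     # 64 GB addressable memory per cartridge
watts_per_node = 55           # "burns about 55 watts"

nodes = cartridges_per_chassis * chassis_per_rack  # 450 nodes
cores = nodes * cores_per_cartridge                # 3,600 cores
mem_tb = nodes * mem_gb_per_cartridge / 1024       # ~28 TB of memory
rack_kw = nodes * watts_per_node / 1000            # ~25 kW per rack

print(nodes, cores, round(mem_tb, 1), rack_kw)
```

The results match the article: 450 nodes, 3,600 cores, about 28 TB of memory and just under 25 kW per rack. End of my insert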

Loren Shalinsky, a Strategic Development Director at Rambus, points out that each ProLiant m400 cartridge is actually a fully contained server with its own dedicated memory, which, in the default launch version, carries a payload of DDR3L DIMMs.

“However, future generations of the cartridges can be upgraded from DDR3 to DDR4, without affecting the other cartridges in the rack. This should allow for even higher memory bandwidth and lower power consumption,” he added. “Our expectation is that DDR4 will ramp on the server side – both in terms of x86  and ARM – before finding its way into desktop PCs, laptops and consumer applications like digital TVs and set-top boxes.”

As we’ve previously discussed on Rambus Press, DDR4 memory delivers a 40-50 percent increase in bandwidth, along with a 35 percent reduction in power consumption, compared to the DDR3 memory currently in servers. In addition, internal data transfers are faster with DDR4, while in-memory applications such as databases – where a significant amount of processing takes place in DRAM – are expected to benefit as well.
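My insert: those percentages are consistent with first principles. The sketch below compares a DDR3-1600 module at 1.5 V against a DDR4-2400 module at 1.2 V (the speed grades and voltages are my assumptions, not figures from the article), with dynamic power scaling roughly as the square of the supply voltage:

```python
# Rough sanity check of the quoted DDR4 gains (assumed parts:
# DDR3-1600 at 1.5 V vs DDR4-2400 at 1.2 V).
ddr3_gbps = 1600 * 8 / 1000          # 12.8 GB/s per 64-bit channel
ddr4_gbps = 2400 * 8 / 1000          # 19.2 GB/s per 64-bit channel
bw_gain = ddr4_gbps / ddr3_gbps - 1  # 0.50, the top of the quoted 40-50%

# Dynamic power scales roughly with voltage squared, so the 1.5 V -> 1.2 V
# drop alone gives about a 36% cut, close to the quoted 35%.
power_cut = 1 - (1.2 / 1.5) ** 2
print(round(bw_gain, 2), round(power_cut, 2))
```

End of my insert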

Compare the above to what was written in Choosing chips for next-generation datacentres [ComputerWeekly.com, Sept 22, 2014]:

HP CEO Meg Whitman has high hopes for the company’s Moonshot low-energy server family as a differentiator in the commodity server market. Moonshot is based on Intel Atom and AMD Opteron system-on-a-chip (SoC) processors, optimised for desktop virtualisation and web content delivery applications. These servers can run Windows Server 2012 R2 or Red Hat, Canonical or Suse Linux distributions.

Semiconductor companies Cavium and Applied Micro are taking two different approaches to the ARM microserver market. Cavium is specialising in low-powered cores, while Applied Micro is taking a high-performance computing (HPC) approach.

AMD is building its chips based on the ARM Cortex-A57 core. … Servers with AMD’s Seattle [Opteron A-Series] ARM-based chip are not expected to ship until mid-2015.

Note here as well that AMD’s Seattle, i.e. Opteron A-Series, strategy also serves the company’s own dense server infrastructure strategy (going against HP’s Moonshot fabric solution), as described earlier in the AMD’s dense server strategy of mixing next-gen x86 Opterons with 64-bit ARM Cortex-A57 based Opterons on the SeaMicro Freedom™ fabric to disrupt the 2014 datacenter market using open source software (so far) [Dec 31, 2013 – Jan 28, 2014] post.

“HP has supported ARM’s standardization effort since its inception, recognizing the benefits of an extensible platform with value-added features,” said Dong Wei, HP fellow. “With the new SBSA specification [Server Base System Architecture from ARM], we are able to establish a simplified baseline for deploying ARM-based solutions and look forward to future HP [server] products based on the ARM architecture.”

 

AMD’s Heterogeneous System Architecture (HSA) and Graphics Core Next (GCN) are coming to notebooks

Why AMDers are excited about “Kaveri” [AMD YouTube channel, Jan 15, 2014]

Hear from the team behind “Kaveri” why they are excited about it and how it will affect the PC market. http://www.amd.com/nextgenapu

The GCN architecture behind the Xbox One and Sony PS4 (among others) and HSA (quite probably present in the Xbox One and PS4 as well) are now coming to notebook APUs.
OR: how much could AMD reap the benefits, for the first time, of its ATI acquisition in 2006?
OR: how much will the 28nm SHP (Super High Performance) process from GlobalFoundries help AMD to compete?
OR: will the next-gen Steamroller microarchitecture be sufficient to compete?

How “Kaveri” is Going to Change the World of Compute Capabilities [AMD YouTube channel, Jan 16, 2014]

AMD’s John Byrne, Chief Sales Officer, sat down with us at “Kaveri” Tech Day in Las Vegas to discuss why he is excited about “Kaveri” and the effect it is going to have on the computer market. Learn more: http://www.amd.com/nextgenapu

If it can game, imagine what else it can do. [AMD YouTube channel, Jan 6, 2014]

See what else the AMD APU can do at amd.com/ifitcangame. AMD APUs are found in everything from the leading game consoles to PCs. AMD has brought it all together to bring you incredible, new experiences. Our AMD A-Series APUs combine the performance of multicore processors and the power of AMD Radeon™ graphics technology on a single chip for a whole new level of immersion and interactivity with your PC. Whether gaming, watching videos or multitasking on your PC, we give you the performance you need to fit your life.

The Four Technologies that make up AMD’s Kaveri APU [AMD YouTube channel, Jan 14, 2014]

AMD’s Joe Macri, Corporate VP, Product CTO of Global Business Units, sat down with us at Kaveri Tech Day in Las Vegas to highlight the four technologies that make up Kaveri.

In AMD Kaveri Review: A8-7600 and A10-7850K Tested [AnandTech, Jan 14, 2014] it was touted as:

The first major component launch of 2014 falls at the feet of AMD and the next iteration of its APU platform, Kaveri. Kaveri has been the aim for AMD for several years, it’s actually the whole reason the company bought ATI back in 2006. As a result many different prongs of AMD’s platform come together: HSA, hUMA, offloading compute, unifying GPU architectures, developing a software ecosystem around HSA and a scalable architecture. This is, on paper at least, a strong indicator of where the PC processor market is heading in the mainstream segment.

My insert: AMD Kaveri APU Tech Day at CES [on Jan 5, 2014] [AMD YouTube channel, Jan 14, 2014]

End of my insert

Final Words
As with all previous AMD APU launches, we’re going to have to break this one down into three parts: CPU, the promise of HSA and GPU.
In a vacuum where all that’s available are other AMD parts, Kaveri and its Steamroller cores actually look pretty good. At identical frequencies there’s a healthy increase in IPC, and AMD has worked very hard to move its Bulldozer family down to a substantially lower TDP. While Trinity/Richland were happy shipping at 100W, Kaveri is clearly optimized for a much more modern TDP. Performance gains at lower TDPs (45/65W) are significant. In nearly all of our GPU tests, a 45W Kaveri ends up delivering very similar gaming performance to a 100W Richland. The mainstream desktop market has clearly moved to smaller form factors and it’s very important that AMD move there as well. Kaveri does just that.
In the broader sense however, Kaveri doesn’t really change the CPU story for AMD. Steamroller comes with a good increase in IPC, but without a corresponding increase in frequency AMD fails to move the single threaded CPU performance needle. To make matters worse, Intel’s dual-core Haswell parts are priced very aggressively and actually match Kaveri’s CPU clocks. With a substantial advantage in IPC and shipping at similar frequencies, a dual-core Core i3 Haswell will deliver much better CPU performance than even the fastest Kaveri at a lower price.
The reality is quite clear by now: AMD isn’t going to solve its CPU performance issues with anything from the Bulldozer family. What we need is a replacement architecture, one that I suspect we’ll get after Excavator concludes the line in 2015.
In the past AMD has argued that for the majority of users, the CPU performance it delivers today is good enough. While true, it’s a dangerous argument to make (one that eventually ends up with you recommending an iPad or Nexus 7). I have to applaud AMD’s PR this time around as no one tried to make the argument that CPU performance was somehow irrelevant. Although we tend to keep PR critique off of AnandTech, the fact of the matter is that for every previous APU launch AMD tried its best to convince the press that the problem wasn’t with its CPU performance but rather with how we benchmark. With Kaveri, the arguments more or less stopped. AMD has accepted its CPU performance is what it is and seems content to ride this one out. It’s a tough position to be in, but it’s really the only course of action until Bulldozer goes away.
It’s a shame that the CPU story is what it is, because Kaveri finally delivers on the promise of the ATI acquisition from 2006. AMD has finally put forth a truly integrated APU/SoC, treating both CPU and GPU as first class citizens and allowing developers to harness both processors, cooperatively, to work on solving difficult problems and enabling new experiences. In tests where both the CPU and GPU are used, Kaveri looks great as this is exactly the promise of HSA. The clock starts now. It’ll still be a matter of years before we see widespread adoption of heterogeneous programming and software, but we finally have the necessary hardware and priced at below $200.


Until then, outside of specific applications and GPU compute workloads, the killer app for Kaveri remains gaming. Here the story really isn’t very different than it was with Trinity and Richland. With Haswell Intel went soft on (socketed) desktop graphics, and Kaveri continues to prey on that weakness. If you are building an entry level desktop PC where gaming is a focus, there really isn’t a better option. I do wonder how AMD will address memory bandwidth requirements going forward. A dual-channel DDR3 memory interface works surprisingly well for Kaveri. We still see 10 – 30% GPU performance increases over Richland despite not having any increase in memory bandwidth. It’s clear that AMD will have to look at something more exotic going forward though.

My insert: Kaveri Tech Day: Thief running on a 7850K APU with Dual Graphics [AMD YouTube channel, Jan 14, 2014]

Learn more at http://www.amd.com/nextgenapu At Kaveri Tech Day in Las Vegas we showed off a ton of awesome demos including the upcoming Eidos Montreal title Thief. Check it out running on dual graphics!

End of my insert

For casual gaming, AMD is hitting the nail square on the head in its quest for 1080p gaming at 30 frames per second, albeit generally at lower quality settings. There are still a few titles that are starting to stretch the legs of a decent APU (Company of Heroes is particularly brutal), but it all comes down to perspective. Let me introduce you to my Granddad. He’s an ex-aerospace engineer, and likes fiddling with stuff. He got onboard the ‘build-your-own’ PC train in about 2002 and stopped there – show him a processor more than a Pentium 4 and he’ll shrug it off as something new-fangled. My grandfather has one amazing geeky quality that shines through though – he has played and completed every Tomb Raider game on the PC he can get his hands on.
It all came to a head this holiday season when he was playing the latest Tomb Raider game. He was running the game on a Pentium D with an NVIDIA 7200GT graphics card. His reactions are not the sharpest, and he did not seem to mind running at sub-5 FPS at a 640×480 resolution. I can imagine many of our readers recoiling at the thought of playing a modern game at 480p with 5 FPS. In the true spirit of the season, I sent him a HD 6750, an identical model to the one in the review today. Despite some issues he had finding drivers (his Google-fu needs a refresher), he loves his new card and can now play reasonably well at 1280×1024 on his old monitor.
The point I am making with this heart-warming/wrenching family story is that the Kaveri APU is probably the ideal fit for what he needs. Strap him up with an A8-7600 and away he goes. It will be faster than anything he has used before, it will play his games as well as that new HD 6750, and when my grandmother wants to surf the web or edit some older images, she will not have to wait around for them to happen. It should all come in with a budget they would like as well.

The Importance of AMD’s TrueAudio Technology in Thief [AMD YouTube channel, Jan 10, 2014]

Eidos Montreal joined us onsite at CES to demo their upcoming game title Thief. Hear from Jean-Normand Bucci, Technical Art Director, Square Enix on the importance of audio in games and how AMD’s TrueAudio is making a difference.

Johan Andersson explains how Mantle [API] will leverage AMD’s new “Kaveri” APU [AMD YouTube channel, Dec 3, 2013]

At APU13 DICE/EA’s Technical Director Johan Andersson explains how Mantle is bringing the level of performance experienced on next generation consoles to AMD powered PCs. With the extra performance on Frostbite, there’s bound to be things never seen before in gaming!

In AMD Surrounds 2014 International CES Visitors with Breakthrough Visual and Audio Experiences [press release, Jan 6, 2014] it was touted as:

“Kaveri” – AMD’s most powerful APUs ever, the AMD A10 7850K and 7700K (codenamed “Kaveri”), are now shipping and will be on shelves in desktops early next week, with pre-orders starting today from select system builders. “Kaveri” is the world’s first APU to include Heterogeneous System Architecture (HSA) features, the immersive sound of AMD TrueAudio Technology and the performance gaming experiences of Mantle API. “Kaveri”-based notebooks will be available in the first half of this year.

“Surround House 2: Monsters in the Orchestra”

Bringing AMD’s Surround Computing vision to life in an overwhelming and unique way, “Surround House 2: Monsters in the Orchestra” engages show-goers in an instrumental performance by a collection of misfit monsters performing in a 360-degree domed theater. This immersive experience uses many of AMD’s current and developing technologies including gesture control optimized by HSA features on the new “Kaveri” APU, next-generation AMD FirePro™ graphics driving 14 million pixels across six projectors, and 32.4 channels of audio processed with AMD TrueAudio technology and presented with Discrete Digital Multipoint Audio.

Building of AMD Surround House 2: Monsters in the Orchestra at CES 2014 [AMD YouTube channel, Jan 6, 2014]

Time lapse video of the construction of AMD Surround House 2: Monsters in the Orchestra dome as part of AMD’s booth at the 2014 Consumer Electronics Show (CES)

Oxide Games AMD Mantle Presentation and Demo [AMD YouTube channel, Dec 17, 2013]

At APU13 Oxide Games showed off the first live demo of AMD’s Mantle API. Watch their full presentation and see the results for yourself. Learn more: http://bit.ly/AMD_Mantle

Now the company says that AMD Revolutionizes Compute and UltraHD Entertainment with 2014 AMD A-Series Accelerated Processors [press release, Jan 14, 2014]:

Heterogeneous System Architecture (HSA) features enable groundbreaking compute performance and define next-gen application acceleration
SUNNYVALE, Calif. —1/14/2014

AMD (NYSE: AMD) today launched the 2014 AMD A-Series Accelerated Processing Units (APUs), the most advanced and developer friendly performance APUs from AMD to date. The AMD A-Series APUs with AMD Radeon™ R7 graphics, codenamed “Kaveri”, are designed with industry-changing new features that deliver superior compute and heart-pounding gaming performance.

New and improved features of the AMD A-Series APUs include: 

  • Up to 12 Compute Cores (4 CPU and 8 GPU) unlocking full APU potential1;
  • Heterogeneous System Architecture (HSA) features, a new intelligent computing architecture that enables the CPU and GPU to work in harmony by seamlessly streamlining the right tasks to the most suitable processing element, resulting in performance and efficiency gains for both consumers and developers;
  • Award-winning Graphics Core Next (GCN) Architecture with powerful AMD Radeon™ R7 Series graphics for performance that commands respect, with support for DirectX 11.22;
  • AMD’s acclaimed Mantle, an API that simplifies game optimizations for programmers and developers to raise gaming performance to unprecedented levels when unlocked3;
  • AMD TrueAudio Technology, 32-channel surround audio delivering the best in audio realism and immersion4;
  • Support for UltraHD (4K) resolutions and new video post-processing enhancements that will make 1080p videos look even better when upscaled on an UltraHD-enabled monitor or TV5;
  • FM2+ socket compatibility for a unifying infrastructure that works with APUs and CPUs.
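My insert: the “right tasks to the most suitable processing element” idea can be illustrated with a toy dispatcher. This is a conceptual sketch only, not AMD’s HSA runtime or API; the task fields and the 64-work-item threshold are invented for illustration:

```python
# Toy illustration of HSA-style heterogeneous dispatch (conceptual,
# not AMD's API): data-parallel work goes to the GPU cores, branchy
# serial work to the CPU cores, all over shared (hUMA) memory.
def dispatch(task):
    # Many independent work-items favor the wide GPU SIMDs;
    # everything else runs on the latency-optimized CPU cores.
    return "GPU" if task["work_items"] > 64 else "CPU"

tasks = [
    {"name": "image filter", "work_items": 10_000},
    {"name": "parse config", "work_items": 1},
]
for t in tasks:
    print(t["name"], "->", dispatch(t))
```

End of my insert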

“AMD maintains our technology leadership with the 2014 AMD A-Series APUs, a revolutionary next generation APU that marks a new era of computing,” said Bernd Lienhard, corporate vice president and general manager, Client Business Unit, AMD. “With world-class graphics and compute technology on a single chip, the AMD A-Series APU is an effective and efficient solution for our customers and enables industry-leading computing experiences.”

The A10-7850K and A10-7700K APUs will be bundled with EA’s Battlefield 4, to bring a first-in-class APU gaming experience6.

Product Specifications (all models with Radeon™ R7 Graphics)

Model                   AMD A10-7850K   AMD A10-7700K   AMD A8-7600
Price7                  $173 USD        $152 USD        $119 USD
Power                   95W             95W             65W/45W
Compute Cores           12              10              10
CPU Cores               4               4               4
GPU Cores1              8               6               6
Max Turbo Core          4.0GHz          3.8GHz          3.8/3.3GHz
Default CPU Frequency   3.7GHz          3.4GHz          3.3/3.1GHz
GPU Frequency           720MHz          720MHz          720MHz
L2 Cache                4MB             4MB             4MB

The AMD A-Series APU processor-in-a-box (PIBs) for the AMD A10-7850K and AMD A10-7700K, which started shipping in Q4 2013, are available starting today. The AMD A8-7600 will be shipping in Q1 2014. Additionally, the AMD Radeon™ R9 2400 Gamer Series memory is tested and certified for AMD A10 APUs, unleashing their full potential with AMD Memory Profile technology (AMP) offering speeds up to 2400MHz. For more information, please visit the Radeon Memory product page.  

The AMD A-Series APUs are also available today in PCs from our partner system builders. For more information, please visit our product information page.

Supporting Resources

  1. AMD defines a “Radeon Core” as one Shader/Shader Array. The term “GPU Core” is an evolution of the term “Radeon Core”. A “GPU Core” is defined as having 4 SIMDs, each comprising 16 Shaders/Shader Arrays. For example, 512 “Radeon Cores” equals 8 “GPU Cores” (8 GPU Cores x 4 SIMDs x 16 Shader Arrays = 512 Radeon Cores). Visit www.amd.com/computecores for more information.
  2. The GCN Architecture and its associated features (AMD Enduro™, AMD ZeroCore Power technology, DDM Audio, and 28nm production) are exclusive to the AMD Radeon™ HD 7700M, HD 7800M and HD 7900M Series Graphics and select AMD A-Series APUs. Not all technologies are supported in all system configurations—check with your system manufacturer for specific model capabilities.
  3. Mantle application support is required.
  4. AMD TrueAudio technology is offered by select AMD Radeon™ R9 and R7 200 Series GPUs and select AMD A-Series APUs and is designed to improve acoustic realism.  Requires enabled game or application.  Not all audio equipment supports all audio effects; additional audio equipment may be required for some audio effects. Not all products feature all technologies—check with your component or system manufacturer for specific capabilities.
  5. Requires 4K display and content. Supported resolution varies by GPU model and board design; confirm specifications with manufacturer before purchase.
  6. Battlefield 4 is valued at MSRP $59.99 USD. Bundle offered while supplies last. For more information, please visit: www.amd.com/battlefield4offer.
  7. SEP [suggested e-tail pricing] as of January 14, 2014.
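My insert: footnote 1’s arithmetic is internally consistent once you note that a GCN “GPU Core” (compute unit) contains 4 SIMDs of 16 shaders each:

```python
# Verifying the "Compute Cores" arithmetic from footnote 1.
simds_per_gpu_core = 4
shaders_per_simd = 16
shaders_per_gpu_core = simds_per_gpu_core * shaders_per_simd  # 64

gpu_cores = 8                                    # A10-7850K
radeon_cores = gpu_cores * shaders_per_gpu_core  # 512 "Radeon Cores"
compute_cores = 4 + gpu_cores                    # 4 CPU + 8 GPU = 12

print(radeon_cores, compute_cores)
```

End of my insert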

See also:
AMD Kaveri Review: A8-7600 and A10-7850K Tested [AnandTech, Jan 14, 2014]
Surround House 2: Monsters in the Orchestra [AMD ‘Innovations We Pioneer’, Jan 8, 2014]
AMD Announces New Unified SDK, Tools and Accelerated Libraries for Heterogeneous Computing Developers [press release, Nov 11, 2013]

APU13 serves as launch platform for new developer tools and sheds light on upcoming third generation APU, “Kaveri”

… AMD also announced today at APU13 details about “Kaveri,” the third generation performance APU from AMD, during a keynote delivered by Dr. Lisa Su, senior vice president and general manager, Global Business Units, AMD.

“Kaveri” is the first APU with HSA features, AMD TrueAudio technology and AMD’s Mantle API combining to bring the next level of graphics, compute and efficiency to desktops (FM2+), notebooks, embedded APUs and servers.  FM2+ shipments to customers are slated to begin in late 2013 with initial availability in customer desktop offerings scheduled for Jan. 14, 2014. Further details will be announced at CES 2014. …

AMD Unveils Innovative New APUs and SoCs that Give Consumers a More Exciting and Immersive Experience [press release, Jan 7, 2013]

… AMD also introduced the new APU codenamed “Richland” which is currently shipping to OEMs and delivers visual performance increases ranging from more than 20 percent to up to 40 percent over the previous generation of AMD A-Series APUs1. “Richland” is expected to come bundled with new software for consumers such as gesture- and facial-recognition to dramatically expand and enhance consumers’ user experiences. The follow-on to “Richland” will be the 28nm APU codenamed “Kaveri” with revolutionary heterogeneous system architecture (HSA) features which is expected to begin shipping to customers in the second half of 2013. …

AMD’s dense server strategy of mixing next-gen x86 Opterons with 64-bit ARM Cortex-A57 based Opterons on the SeaMicro Freedom™ fabric to disrupt the 2014 datacenter market using open source software (so far)

… so far, as Microsoft was in a “shut-up and ship” mode of operation during 2013 and could deliver its revolutionary Cloud OS with its even more disruptive Big Data solution for x86 only (that is likely to change as 64-bit ARM will be delivered with servers in H2 CY14).

Update: Disruptive Technologies for the Datacenter – Andrew Feldman, GM and CVP, AMD [Open Compute Project, Jan 28, 2014]

OCP Summit V – January 28, 2014, San Jose Convention Center, San Jose, California. Disruptive Technologies for the Datacenter – Andrew Feldman, GM and CVP, AMD

Note from the press release given below that “The AMD Opteron A-Series development kit is packaged in a Micro-ATX form factor”. Note also the key message from Feldman’s slides: “Optimized for dense compute: high-density, power-sensitive scale-out workloads (web hosting, data analytics, caching, storage)”.


AMD to Accelerate the ARM Server Ecosystem with the First ARM-based CPU and Development Platform from a Server Processor Vendor [press release, Jan 28, 2014]

AMD also announced the imminent sampling of the ARM-based processor, named the AMD Opteron™ A1100 Series, and a development platform, which includes an evaluation board and a comprehensive software suite.

image
This should be the evaluation board for the development platform with imminent sampling.

In addition, AMD announced that it would be contributing to the Open Compute Project a new micro-server design using the AMD Opteron A-Series, as part of the common slot architecture specification for motherboards dubbed “Group Hug.”

From OCP Summit IV: Breaking Up the Monolith [blog of the Open Compute Project, Jan 16, 2013]
…  “Group Hug” board: Facebook is contributing a new common slot architecture specification for motherboards. This specification — which we’ve nicknamed “Group Hug” — can be used to produce boards that are completely vendor-neutral and will last through multiple processor generations. The specification uses a simple PCIe x8 connector to link the SOCs to the board. …

How does AMD support the Open Compute common slot architecture? [AMD YouTube channel, Oct 3, 2013]

Learn more about AMD Open Compute: http://bit.ly/AMD_OpenCompute Dense computing is the latest trend in datacenter technology, and the Open Compute Project is driving standards codenamed Common Slot. In this video, AMD explains Common Slot and how the AMD APU and ARM offerings will power next generation data centers.

See also: Facebook Saved Over A Billion Dollars By Building Open Sourced Servers [TechCrunch, Jan 28, 2014]
image
from which I copied here the above image showing the “Group Hug” motherboards.
Below you can see an excerpt from Andrew Feldman’s presentation showing such a motherboard with Opteron™ A1100 Series SoCs (even further down there is an image of Feldman showing that motherboard to the audience during his talk):

image

The AMD Opteron A-Series processor, codenamed “Seattle,” will sample this quarter along with a development platform that will make software design on the industry’s premier ARM–based server CPU quick and easy. AMD is collaborating with industry leaders to enable a robust 64-bit software ecosystem for ARM-based designs from compilers and simulators to hypervisors, operating systems and application software, in order to address key workloads in Web-tier and storage data center environments. The AMD Opteron A-Series development platform will be supported by a broad set of tools and software including a standard UEFI boot and Linux environment based on the Fedora Project, a Red Hat-sponsored, community-driven Linux distribution.

image

AMD continues to drive the evolution of the open-source data center from vision to reality and to bring choice among processor architectures. It is contributing to the Open Compute Project the new AMD Open CS 1.0 Common Slot design, based on the AMD Opteron A-Series processor and compliant with the new Common Slot specification, also announced today.

AMD announces plans to sample 64-bit ARM Opteron A “Seattle” processors [AMD Blogs > AMD Business, Jan 28, 2014]

AMD’s rich history in server-class silicon includes a number of notable firsts including the first 64-bit x86 architecture and true multi-core x86 processors. AMD adds to that history by announcing that its revolutionary AMD Opteron™ A-series 64-bit ARM processors, codenamed “Seattle,” will be sampling this quarter.

AMD Opteron A-Series processors combine AMD’s expertise in delivering server-class silicon with ARM’s trademark low-power architecture, and contribute to the open-source software ecosystem that is rapidly growing around the ARM 64-bit architecture. AMD Opteron A-Series processors make use of ARM’s 64-bit ARMv8 architecture to provide true server-class features in a power-efficient solution.

AMD plans for the AMD Opteron™ A1100 processors to be available in the second half of 2014 with four or eight ARM Cortex A57 cores, up to 4MB of shared Level 2 cache and 8MB of shared Level 3 cache. The AMD Opteron A-Series processor supports up to 128GB of DDR3 or DDR4 ECC memory as unbuffered DIMMs, registered DIMMs or SODIMMs.

The ARMv8 architecture is the first from ARM to have 64-bit support, something that AMD brought to the x86 market in 2003 with the AMD Opteron processor. Not only can the ARMv8-based Cortex-A57 architecture address large pools of memory, it has been designed from the ground up to provide the optimal balance of performance and power efficiency across the broad spectrum of scale-out data center workloads.

With more than a decade of experience in designing server-class silicon, AMD took the ARM Cortex-A57 core, added a server-class memory controller, and included features resulting in a processor that meets the demands of scale-out workloads. One requirement of scale-out workloads is high-performance connectivity, and the AMD Opteron A1100 processor has extensive integrated I/O, including eight PCI Express Gen 3 lanes, two 10 Gb/s Ethernet ports and eight SATA 3 ports.
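My insert: for a sense of scale, that integrated I/O adds up to roughly 15 GB/s of peak theoretical bandwidth. The per-link rates below are standard figures for these interfaces, not numbers from the press release, and real-world throughput is lower:

```python
# Rough peak aggregate bandwidth of the Opteron A1100's integrated I/O
# (per-link rates are standard interface figures, not from the article).
pcie_gbs = 8 * 0.985   # PCIe Gen 3: ~985 MB/s usable per lane, x8 lanes
eth_gbs = 2 * 10 / 8   # two 10 Gb/s Ethernet ports, converted to GB/s
sata_gbs = 8 * 0.6     # SATA 3: ~600 MB/s effective per port, 8 ports

total_gbs = pcie_gbs + eth_gbs + sata_gbs
print(round(total_gbs, 1))  # ~15.2 GB/s aggregate peak
```

End of my insert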

Scale-out workloads are becoming critical building blocks in today’s data centers. These workloads scale over hundreds or thousands of servers, making power efficient performance critical in keeping total cost of ownership (TCO) low. The AMD Opteron A-Series meets the demand of these workloads through intelligent silicon design and by supporting a number of operating system and software projects.

As part of delivering a server-class solution, AMD has invested in the software ecosystem that will support AMD Opteron A-Series processors. AMD is a gold member of the Linux Foundation, the organisation that oversees the development of the Linux kernel, and is a member of Linaro, a significant contributor to the Linux kernel. Alongside collaboration with the Linux Foundation and Linaro, AMD itself is listed as a top 20 contributor to the Linux kernel. A number of operating system vendors have stated they will support the 64-bit ARM ecosystem, including Canonical, Red Hat and SUSE, while virtualization will be enabled through KVM and Xen.

Operating system support is supplemented with programming language support, with Oracle and the community-driven OpenJDK porting versions of Java onto the 64-bit ARM architecture. Other popular languages that will run on AMD Opteron A-Series processors include Perl, PHP, Python and Ruby. The extremely popular GNU C compiler and the critical GNU C Library have already been ported to the 64-bit ARM architecture.
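Since the same interpreters and compilers named above run on both x86 and 64-bit ARM, a cross-platform script sometimes needs to know which architecture it is on. A minimal, hypothetical sketch of such a runtime check (the helper name is an illustration, not an API from the post):

```python
# Hypothetical sketch: detect at runtime whether the interpreter is
# executing on a 64-bit ARM (AArch64) machine.
import platform

def is_aarch64(machine=None):
    # platform.machine() reports "aarch64" on 64-bit ARM Linux and
    # "arm64" on some other OSes; "x86_64" on 64-bit x86.
    m = (machine or platform.machine()).lower()
    return m in ("aarch64", "arm64")

print(is_aarch64("aarch64"))  # True
print(is_aarch64("x86_64"))   # False
```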

Through the combination of kernel support and development tools such as libraries, compilers and debuggers, the foundation has been set for developers to port applications to a rapidly growing ecosystem.

As AMD Opteron A-Series processors are well suited to web hosting and big data workloads, AMD is a gold sponsor of the Apache Foundation, the organisation that manages the Hadoop and HTTP Server projects. Up and down the software stack, the ecosystem is ready for the data center revolution that will take place when AMD Opteron A-Series are deployed.

Soon, AMD’s partners will start to realise what a true server-class 64-bit ARM processor can do. By using AMD’s Opteron A-Series Development Kit, developers can contribute to the fast growing software ecosystem that already includes operating systems, compilers, hypervisors and applications. Combining AMD’s rich history in designing server-class solutions with ARM’s legendary low-power architecture, the Opteron A-Series ushers in the era of personalised performance.

Introducing the industry’s only 64-bit ARM-based server SoC from AMD [AMD YouTube channel, Jan 21, 2014]

Hear from AMD & ARM executives on why AMD is well-suited to bring ARM to the datacenter. AMD is introducing “Seattle,” a 64-bit ARM-based server SoC built on the same technology that powers billions of today’s most popular mobile devices. By fusing AMD’s deep expertise in the server processor space along with ARM’s low-power, parallel processing capabilities, Seattle makes it possible for servers to be tuned for targeted workloads such as web/cloud hosting, multi-media delivery, and data analytics to enable optimized performance at low power thresholds. Subscribe: http://bit.ly/Subscribe_to_AMD

It Begins: AMD Announces Its First ARM Based Server SoC, 64-bit/8-core Opteron A1100 [AnandTech, Jan 28, 2014]

… AMD will be making a reference board available to interested parties starting in March, with server and OEM announcements to come in Q4 of this year

It’s still too early to talk about performance or TDPs, but AMD did indicate better overall performance than its Opteron X2150 (4-core 1.9GHz Jaguar) at a comparable TDP:

image

AMD alluded to substantial cost savings over competing Intel solutions with support for similar memory capacities. AMD tells me we should expect a total “solution” price somewhere around 1/10th that of a competing high-end Xeon box, but it isn’t offering specifics beyond that just yet. Given the Opteron X2150 performance/TDP comparison, I’m guessing we’re looking at a similar ~$100 price point for the SoC. There’s also no word on whether or not the SoC will leverage any of AMD’s graphics IP. …

End of Update

AMD is also in a unique market position now, as its only real competitor, Calxeda, shut down its operations on December 19, 2013 and went into restructuring. The reason was a lack of further funding from venture capitalists, attributed mainly to Calxeda’s initial 32-bit Cortex-A15 based approach and the unwillingness of customers and software partners to port their already 64-bit x86 software back to 32-bit.

Since the only remaining competitor in the 64-bit ARM server SoC race so far*, Applied Micro’s X-Gene SoC, is built on a purpose-built core of its own (see also my Software defined server without Microsoft: HP Moonshot [‘Experiencing the Cloud’, April 10, Dec 6, 2013] post), i.e. with only an architecture license taken from ARM Holdings, the volume 64-bit ARM server SoC market starting in 2014 already belongs to AMD. I base that prediction on AppliedMicro’s X-Gene: 2013 Year in Review [Dec 20, 2013] post, which states that the first-generation X-Gene product is only now nearing volume production, and that a pilot X-Gene solution is planned only for early-2014 delivery by Dell.

* There is also Cavium, which likewise holds only an ARMv8 architecture license (obtained in August 2012), but the latest information on its product (as of Oct 30, 2013) was that: “In terms of the specific announcement of the product, we want to do it fairly close to silicon. We believe that this is a very differentiated product, and we would like to kind of keep it under the covers as long as we can. Obviously our customers have all the details of the products, and they’re working with them, but on a general basis for competitive reasons, we are kind of keeping this a little bit more quieter than we normally do.”

Meanwhile the 64-bit x86 based SeaMicro solution has been on the market since July 30, 2010, after three years in development. At the time of the SeaMicro acquisition by AMD (Feb 29, 2012) it already represented a well-thought-out and well-engineered solution, as one can easily grasp from the information included below:

image

1. IOVT: I/O-Virtualization Technology
2. TIO: Turn It Off

image

3. Freedom™ Supercomputer Fabric: 3D torus network fabric
– 8 x 8 x 8 Fabric nodes
– Diameter (max hop) 4 + 4 + 4 = 12
– Theor. cross-section bandwidth = 2 (periodic) x 8 x 8 (section) x 2 (bidir) x 2.0 Gb/s/link = 512 Gb/s
– Compute, storage, mgmt cards are plugged into the network fabric
– Support for hot plugged compute cards
The first three—IOVT, TIO, and the Freedom™ Supercomputer Fabric—live in SeaMicro’s Freedom™ ASIC. Freedom™ ASICs are paired with each CPU and with DRAM, forming the foundational building block of a SeaMicro system.
4. DCAT: Dynamic Computation-Allocation Technology™
– CPU management and load balancing
– Dynamic workload allocation to specific CPUs on the basis of power-usage metrics
– Users can create pools of compute for a given application
– Compute resources can be dynamically added to the pool based on predefined utilization thresholds
The DCAT technology resides in the SeaMicro system software and custom-designed FPGAs/NPUs, which control and direct the I/O traffic.
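The torus figures quoted above (diameter and theoretical cross-section bandwidth) can be re-derived with a short sketch. This assumes, as the original “2 (bidir)” factor does, bidirectional 2.0 Gb/s links:

```python
# Sketch re-deriving the Freedom fabric figures for an 8 x 8 x 8
# 3D torus with 2.0 Gb/s bidirectional links.

def torus_diameter(dims):
    # With wrap-around links, the worst-case hop count per dimension
    # is floor(size / 2); the diameter is the sum over dimensions.
    return sum(d // 2 for d in dims)

def cross_section_bw_gbps(dims, link_gbps):
    # Bisecting one dimension of a torus cuts two planes of links
    # (the periodic wrap adds the second plane); each of the y * z
    # node positions in a plane contributes a bidirectional link.
    x, y, z = dims
    return 2 * (y * z) * 2 * link_gbps

print(torus_diameter((8, 8, 8)))              # 12 (= 4 + 4 + 4)
print(cross_section_bw_gbps((8, 8, 8), 2.0))  # 512.0 Gb/s
```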
More information:
SeaMicro SM10000-64 Server [SeaMicro presentation on Hot Chips 23, Aug 19, 2011] for slides in PDF format while the presentation itself is the first one in the following recorded video (just the first 20 minutes + 7 minutes of—quite valuable—Q&A following that):
Session 7, Hot Chips 23 (2011), Friday, August 19, 2011:
– SeaMicro SM10000-64 Server: Building Data Center Servers Using “Cell Phone” Chips (Ashutosh Dhodapkar, Gary Lauterbach, Sean Lie, Dhiraj Mallick, Jim Bauman, Sundar Kanthadai, Toru Kuzuhara, Gene Shen, Min Xu, and Chris Zhang, SeaMicro)
– Poulson: An 8-Core, 32nm, Next-Generation Intel Itanium Processor (Stephen Undy, Intel)
– T4: A Highly Threaded Server-on-a-Chip with Native Support for Heterogeneous Computing (Robert Golla and Paul Jordan, Oracle)
SeaMicro Technology Overview [Anil Rao from SeaMicro, January 2012]
System Overview for the SM10000 Family [Anil Rao from SeaMicro, January 2012]
Note that the above is just the 1st-generation solution: after the AMD acquisition (Feb 29, 2012) a second-generation solution came out with the SM15000 enclosure (Sept 10, 2012, with more info in the details section later), and certainly there will be a 3rd-generation solution with the fabric integrated into each of the x86 and 64-bit ARM based SoCs coming in 2014.

With the “only production ready, production tested supercompute fabric” (as touted by Rory Read, CEO of AMD, more than a year ago), the SeaMicro Freedom™ fabric will now be integrated into the upcoming 64-bit ARM Cortex-A57 based “Seattle” chips from AMD, sampling in the first quarter of 2014. Consequently I would argue that even the high-end market will be captured by the company. Moreover, I think this will happen not only in the SoC realm but in the enclosure space as well (although that 3rd type of enclosure is still to come), to the detriment of HP’s highly marketed Moonshot and CloudSystem initiatives.

Then here are two recent quotes from the top executive duo of AMD showing the importance of their upcoming solution as they view it themselves:

Rory Read – AMD’s President and CEO [Oct 17, 2013]:

In the server market, the industry is at the initial stages of a multiyear transition that will fundamentally change the competitive dynamic. Cloud providers are placing a growing importance on how they get better performance from their datacenters while also reducing the physical footprint and power consumption of their server solution.

image

Lisa Su – AMD’s Senior Vice President and General Manager, Global Business Units [Oct 17, 2013]:

We are fully top to bottom in 28 nanometer now across all of our products, and we are transitioning to both 20 nanometer and to FinFETs over the next couple of quarters in terms of designs. … [Regarding] the SeaMicro business, we are very pleased with the pipeline that we have there. Verizon was the first major datacenter win that we can talk about publicly. We have been working that relationship for the last two years. …

We’re very excited about the server space. It’s a very good market. It’s a market where there is a lot of innovation and change. In terms of 64-bit ARM, you will see us sampling that product in the first quarter of 2014. That development is on schedule and we’re excited about that. All of the customer discussions have been very positive and then we will combine both the [?x86 and the?]64-bit ARM chip with our SeaMicro servers that will have full solution as well. You will see SeaMicro plus ARM in 2014.

So I think we view this combination of IP as really beneficial to accelerating the dense server market both on the chip side and then also on the solution side with the customer set.

AMD SeaMicro has been extensively working with key platform software vendors, especially in the open source space:

image

The current state of that collaboration is reflected in the correspondingly numbered sections that come after the detailed discussion below:

  1. Verizon (as its first big name cloud customer, actually not using OpenStack)
  2. OpenStack (inc. Rackspace, excl. Red Hat)
  3. Red Hat
  4. Ubuntu
  5. Big Data, Hadoop


So let’s take a detailed look at the major topic:

AMD in the Demo Theater [OpenStack Foundation YouTube channel, May 8, 2013]

AMD presented its demo at the April 2013 OpenStack Summit in Portland, OR. For more summit videos, visit: http://www.openstack.org/summit/portland-2013/session-videos/
Note that the OpenStack Quantum networking project was renamed Neutron after April, 2013. Details on the OpenStack effort will be provided later in the post.

Rory Read – AMD President and CEO [Oct 30, 2012]:

That SeaMicro Freedom™ fabric is ultimately very-very important. It is the only production ready, production tested supercompute fabric on the planet.

Lisa Su – AMD Senior Vice President and General Manager, Global Business Units [Oct 30, 2012]:

The biggest change in the datacenter is that there is no one size fits all. So we will offer ARM-based CPUs with our fabric. We will offer x86-based CPUs with our fabric. And we will also look at opportunities where we can merge the CPU technology together with graphics compute in an APU form-factor that will be very-very good for specific workloads in servers as well. So AMD will be the only company that’s able to offer the full range of compute horsepower with the right workloads in the datacenter.

AMD makes ARM Cortex-A57 64bit Server Processor [Charbax YouTube channel, Oct 30, 2012]

AMD has announced that they are launching a new ARM Cortex-A57 64-bit ARMv8 processor in 2014, targeted at the server market. This is an interview with Andrew Feldman, VP and GM of the Data Center Server Solutions Group at AMD and founder of SeaMicro, now acquired by AMD.

From AMD Changes Compute Landscape as the First to Bridge Both x86 and ARM Processors for the Data Center [press release, Oct 29, 2012]

This strategic partnership with ARM represents the next phase of AMD’s strategy to drive ambidextrous solutions in emerging mega data center solutions. In March, AMD announced the acquisition of SeaMicro, the leader in high-density, energy-efficient servers. With this announcement, AMD will integrate the AMD SeaMicro Freedom fabric across its leadership AMD Opteron x86- and ARM technology-based processors that will enable hundreds, or even thousands of processor clusters to be linked together to provide the most energy-efficient solutions.

AMD ARM Oct 29, 2012 Full length presentation [Manny Janny YouTube channel, Oct 30, 2012]

I do not have any affiliation with AMD or ARM. This video is posted to provide the general public with information and provide an area for comments
Rory Read – AMD President and CEO: [3:27] That SeaMicro Freedom™ fabric is ultimately very-very important in this announcement. It is the only production ready, production tested supercompute fabric on the planet. [3:41]
Lisa Su – Senior Vice President and General Manager, Global Business Units: [13:09] The biggest change in the datacenter is that there is no one size fits all. So we will offer ARM-based CPUs with our fabric. We will offer x86-based CPUs with our fabric. And we will also look at opportunities where we can merge the CPU technology together with graphics compute in an APU form-factor that will be very-very good for specific workloads in servers as well. So AMD will be the only company that’s able to offer the full range of compute horsepower with the right workloads in the datacenter [13:41]

From AMD to Acquire SeaMicro: Accelerates Disruptive Server Strategy [press release, Feb 29, 2012]

AMD (NYSE: AMD) today announced it has signed a definitive agreement to acquire SeaMicro, a pioneer in energy-efficient, high-bandwidth microservers, for approximately $334 million, of which approximately $281 million will be paid in cash. Through the acquisition of SeaMicro, AMD will be accelerating its strategy to deliver disruptive server technology to its OEM customers serving cloud-centric data centers. With SeaMicro’s fabric technology and system-level design capabilities, AMD will be uniquely positioned to offer industry-leading server building blocks tuned for the fastest-growing workloads such as dynamic web content, social networking, search and video. …
… “Cloud computing has brought a sea change to the data center–dramatically altering the economics of compute by changing the workload and optimal characteristics of a server,” said Andrew Feldman, SeaMicro CEO, who will become general manager of AMD’s newly created Data Center Server Solutions business. “SeaMicro was founded to dramatically reduce the power consumed by servers, while increasing compute density and bandwidth.  By becoming a part of AMD, we will have access to new markets, resources, technology, and scale that will provide us with the opportunity to work tightly with our OEM partners as we fundamentally change the server market.”

ARM TechCon 2012 SoC Partner Panel: Introducing the ARM Cortex-A50 Series [ARMflix YouTube channel, recorded on Oct 30, published on Nov 13, 2012]

Moderator: Simon Segars, EVP and GM, Processor and Physical IP Divisions, ARM
Panelists:
– Andrew Feldman, Corporate VP & GM, Data Center Server Solutions (need to confirm his title with AMD), AMD
– Martyn Humphries, VP & General Manager, Mobile Applications Group, Broadcom
– Karl Freund, VP, Marketing, Calxeda**
– John Kalkman, VP, Marketing, Samsung Semiconductor
– Bob Krysiak, EVP and President of the Americas Region, STMicroelectronics
** Note that nearly 14 months later, on Dec 19, 2013, Calxeda ran out of the ~$100M in venture capital it had accumulated earlier. As the company was not able to secure further funding, it shut down its operations by dismissing most of its employees (except 12 workers serving existing customers) and went into “restructuring”, posting on its company website only: “We will update you as we conclude our restructuring process”. This is despite the pioneering role the company had, especially with HP’s Moonshot and CloudSystem initiatives, and the relatively short-term promise of delivering its server cartridge for HP’s next-gen Moonshot enclosure, as was well reflected in my Software defined server without Microsoft: HP Moonshot [‘Experiencing the Cloud’, April 10, Dec 6, 2013] post. The major problem was that “it tried to get to market with 32-bit chip technology, at a time most x86 servers boast 64-bit technology … [and as] customers and software companies weren’t willing to port their software to run on 32-bit systems” – reported the Wall Street Journal. I would also say that AMD’s “only production ready, production tested supercompute fabric on the planet” (see Rory Read’s statement already given above), with its upcoming “Seattle” 64-bit ARM SoC on track for delivery in H2 CY14, was another major reason for the lack of additional venture funds for Calxeda.

AMD’s 64-bit “Seattle” ARM processor brings best of breed hardware and software to the data center [AMD Business blog, Dec 12, 2013]

Going into 2014, the server market is set to face the biggest disruption since AMD launched the 64-bit x86 AMD Opteron™ processor – the first 64-bit x86 processor – in 2003. Processors based on ARM’s 64-bit ARMv8 architecture will start to appear next year, and just like the x86 AMD Opteron™ processors a decade ago, AMD’s ARM 64-bit processors will offer enterprises a viable option for efficiently handling vast amounts of data.

image

From: AMD Unveils Server Strategy and Roadmap [press release June 18, 2013]

These forthcoming AMD Opteron™ processors bring important innovations to the rapidly changing compute market, including integrated CPU and GPU compute (APU); high core-count ARM servers for high-density compute in the data center; and substantial improvements in compute per-watt per-dollar and total cost of ownership.
“Our strategy is to differentiate ourselves by using our unique IP to build server processors that are particularly well matched to a target workload and thereby drive down the total cost of owning servers. This strategy unfolds across both the enterprise and data centers and includes leveraging our graphics processing capabilities and embracing both x86 and ARM instruction sets,” said Andrew Feldman, general manager of the Server Business Unit, AMD. “AMD led the world in the transition to multicore processors and 64-bit computing, and we intend to do it again with our next-generation AMD Opteron families.”
In 2014, AMD will set the bar in power-efficient server compute with the industry’s premier ARM server CPU. The 64-bit CPU, code named “Seattle,” is based on ARM Cortex-A57 cores and is expected to provide category-leading throughput as well as setting the bar in performance-per-watt. AMD will also deliver a best-in-class APU, code named “Berlin.” “Berlin” is an x86 CPU and APU, based on a new generation of cores named “Steamroller.” Designed to double the performance of the recently available “Kyoto” part, “Berlin” will offer extraordinary compute-per-watt that will enable massive rack density. The third processor announced today is code named “Warsaw,” AMD’s next-generation 2P/4P offering. It is optimized to handle the heavily virtualized workloads found in enterprise environments including the more complex compute needs of data analytics, xSQL and traditional databases. “Warsaw” will provide significantly improved performance-per-watt over today’s AMD Opteron 6300 family.
Seattle
“Seattle” will be the industry’s only 64-bit ARM-based server SoC from a proven server processor supplier.  “Seattle” is an 8- and then 16-core CPU based on the ARM Cortex-A57 core and is expected to run at or greater than 2 GHz.  The “Seattle” processor is expected to offer 2-4X the performance of AMD’s recently announced AMD Opteron X-Series processor with significant improvement in compute-per-watt.  It will deliver 128GB DRAM support, extensive offload engines for better power efficiency and reduced CPU loading, server-caliber encryption and compression, and legacy networking including integrated 10GbE.  It will be the first processor from AMD to integrate AMD’s advanced Freedom™ Fabric for dense compute systems directly onto the chip. AMD plans to sample “Seattle” in the first quarter of 2014 with production in the second half of the year.
Berlin
“Berlin” is an x86-based processor that will be available both as a CPU and APU. The processor boasts four next-generation “Steamroller” cores and will offer almost 8X the gigaflops per-watt compared to the current AMD Opteron™ 6386SE processor.  It will be the first server APU built on AMD’s revolutionary Heterogeneous System Architecture (HSA), which enables uniform memory access for the CPU and GPU and makes programming as easy as C++. “Berlin” will offer extraordinary compute per-watt that enables massive rack density. It is expected to be available in the first half of 2014.
Warsaw
“Warsaw” is an enterprise server CPU optimized to deliver unparalleled performance and total cost of ownership for two- and four-socket servers.  Designed for enterprise workloads, it will offer improved performance-per-watt, which drives down the cost of owning a “Warsaw”-based server while enabling seamless migration from the AMD Opteron 6300 Series family.  It is socket-compatible with identical software certifications, making it ideal for the AMD Open 3.0 Server – the industry’s most cost effective Open Compute platform.  It is expected to be available in the first quarter of 2014.

Note that AMD also published AMD Details Embedded Product Roadmap [press release, Sept 9, 2013], which includes a:

“Hierofalcon” CPU SoC
“Hierofalcon” is the first 64-bit ARM-based platform from AMD targeting embedded data center applications, communications infrastructure and industrial solutions. It will include up to eight ARM Cortex™-A57 CPUs expected to run up to 2.0 GHz, and provides high-performance memory with two 64-bit DDR3/4 channels with error correction code (ECC) for high reliability applications. The highly integrated SoC includes 10 Gb KR Ethernet and PCI-Express Gen 3 for high-speed network connectivity, making it ideal for control plane applications. The “Hierofalcon” series also provides enhanced security with support for ARM TrustZone® technology and a dedicated cryptographic security co-processor, aligning to the increased need for networked, secure systems. “Hierofalcon” is expected to be sampling in the second quarter of 2014 with production in the second half of the year.
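The memory subsystem quoted above, two 64-bit DDR3/4 channels, can be sanity-checked with a quick peak-bandwidth calculation. A minimal sketch, assuming DDR3-1600 (1600 MT/s) as an example speed grade, since the press release names none:

```python
# Back-of-envelope sketch for the "Hierofalcon" memory subsystem:
# two 64-bit DDR channels; DDR3-1600 is an assumed example grade.

def peak_mem_bw_gbs(channels, bus_width_bits, mega_transfers):
    # peak bytes/s = channels * bytes-per-transfer * transfers/s
    return channels * (bus_width_bits // 8) * mega_transfers * 1e6 / 1e9

print(peak_mem_bw_gbs(2, 64, 1600))  # ~25.6 GB/s theoretical peak
```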

image

The AMD Opteron processor came at a time when x86 processors were seen by many as silicon that could only power personal computers, with specialized processors running on architectures such as SPARC™ and Power™ being the ones that were handling server workloads. Back in 2003, the AMD Opteron processor did more than just offer another option, it made the x86 architecture a viable contender in the server market – showing that processors based on x86 architectures could compete effectively against established architectures. Thanks in no small part to the AMD Opteron processor, today the majority of servers shipped run x86 processors.

In 2014, AMD will once again disrupt the datacenter as x86 processors will be joined by those that make use of ARM’s 64-bit architecture. Codenamed “Seattle,” AMD’s first ARM-based Opteron processor will use the ARMv8 architecture, offering low-power processing in the fast growing dense server space.

To appreciate what the first ARM-based AMD Opteron processor is designed to deliver to those wanting to deploy racks of servers, it is important to realize that the ARMv8 architecture offers a clean slate on which to build both hardware and software.

ARM’s ARMv8 architecture is much more than a doubling of word length from the previous-generation ARMv7 architecture: it has been designed from the ground up to provide higher performance while retaining the trademark power efficiency that everyone has come to expect from the ARM architecture. AMD’s “Seattle” processors will have either four or eight cores, packing server-grade features such as support for up to 128 GB of ECC memory and integrated 10 Gb/s Ethernet connectivity, with AMD’s revolutionary Freedom™ fabric designed to cater for dense compute systems.

From: AMD Delivers a New Generation of AMD Opteron and Intel Xeon “Ivy Bridge” Processors in its New SeaMicro SM15000 Micro Server Chassis [press release, Sept 10, 2012]

With the new AMD Opteron processor, AMD’s SeaMicro SM15000 provides 512 cores in a ten rack unit system with more than four terabytes of DRAM and supports up to five petabytes of Freedom Fabric Storage. Since AMD’s SeaMicro SM15000 server is ten rack units tall, a one-rack, four-system cluster provides 2,048 cores, 16 terabytes of DRAM, and is capable of supporting 20 petabytes of storage.  The new and previously unannounced AMD Opteron processor is a custom designed octal core 2.3 GHz part based on the new “Piledriver” core, and supports up to 64 gigabytes of DRAM per CPU. The SeaMicro SM15000 system with the new AMD Opteron processor sets the high watermark for core density for micro servers.
Configurations based on the AMD Opteron processor and Intel Xeon Processor E3-1265Lv2 (“Ivy Bridge” microarchitecture) will be available in November 2012. …
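The per-chassis figures in the press release scale to its one-rack, four-system cluster as a simple multiplication; a quick sketch:

```python
# Sketch: scale the quoted per-chassis SM15000 figures (512 cores,
# 4 TB DRAM, 5 PB Freedom Fabric Storage per 10RU system) to the
# four-system rack described in the press release.

def rack_totals(systems=4, cores=512, dram_tb=4, storage_pb=5):
    return {
        "cores": systems * cores,
        "dram_tb": systems * dram_tb,
        "storage_pb": systems * storage_pb,
    }

print(rack_totals())  # {'cores': 2048, 'dram_tb': 16, 'storage_pb': 20}
```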

image

AMD off-chip interconnect fabric IP designed to enable significantly lower TCO

• Links hundreds to thousands of SoC modules

• Shares hundreds of TBs storage and virtualizes I/O

• 160Gbps Ethernet Uplink

• Instruction Set:
– x86
– ARM (coming in 2014 when the fabric will be integrated into the SoCs as well, including the x86 SoCs)

From: SM15000-OP: 64 Octal Core Servers
with AMD Opteron™ processors (2.0/2.3/2.8 GHz, 8 “Piledriver” cores)

image

Freedom™ ASIC 2.0 – Industry’s only Second Generation Fabric Technology
The Freedom™ ASIC is the building block of SeaMicro Fabric Compute Systems, enabling interconnection of energy efficient servers in a 3-dimensional Torus Fabric. The second generation Freedom ASIC includes high performance network interfaces, storage connectivity, and advanced server management, thereby eliminating the need for multiple sets of network adapters, HBAs, cables, and switches. This results in unmatched density, energy efficiency, and lowered TCO. Some of the key technologies in ASIC 2.0 include:
  • SeaMicro Input/Output Virtualization Technology (IOVT™) eliminates all but three components from SeaMicro’s motherboard—CPU, DRAM, and the ASIC itself—thereby shrinking the motherboard, while reducing power, cost and space.
  • SeaMicro new TIO™ (Turn It Off) technology enables SeaMicro to further power-optimize the mini motherboard by turning off unneeded CPU and chipset functions. Together, SeaMicro’s I/O Virtualization Technology and TIO technology produce the smallest and most power efficient server motherboards available.
  • SeaMicro Freedom Supercompute Fabric built of multiple Freedom ASICs working together, creating a 1.28 terabits per-second fabric that ties together 64 of the power-optimized mini-motherboards at low latency and low power with massive bandwidth.
  • SeaMicro Freedom Fabric Storage technology allows the Freedom supercompute fabric to extend out of the chassis and across the data center linking not just components inside the chassis, but also those outside as well.
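As a back-of-envelope cross-check (an assumption on my part, not a vendor figure): if the 1.28 Tb/s fabric bandwidth quoted above were divided evenly across the 64 mini-motherboards it ties together, each board’s share would be:

```python
# Assumed even split of the quoted 1.28 Tb/s aggregate fabric
# bandwidth across the 64 power-optimized mini-motherboards.
FABRIC_TBPS = 1.28
BOARDS = 64
per_board_gbps = FABRIC_TBPS * 1000 / BOARDS
print(per_board_gbps)  # ~20 Gb/s per mini-motherboard
```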

image

Unified Management – Easily Provision and Manage Servers, Network, and Storage Resources on Demand
The SeaMicro SM15000 implements a rich management system providing unified management of servers, network, and storage. Resources can be rapidly deployed, managed, and repurposed remotely, enabling lights-off data center operations. It offers a broad set of management API including an industry standard CLI, SNMP, IPMI, syslog, and XEN APIs, allowing customers to seamlessly integrate the SeaMicro SM15000 into existing data center management environments.
Redundancy and Availability – Engineered from the Ground Up to Eliminate Single Points of Failure
The SeaMicro SM15000 is designed for the most demanding environments, helping to ensure availability of compute, network, storage, and system management. At the heart of the system is the Freedom Fabric, interconnecting all resources in the system, with the ability to sustain multiple points of failure and allow live component servicing. All active components in the system can be configured as redundant and are hot-swappable, including server cards, network uplink cards, storage controller cards, system management cards, disks, fan trays, and power supplies. Key resources can also be configured to be protected in the following ways:
Compute – A shared spare server can be configured to act as a standby spare for multiple primary servers. In the event of failure, the primary server’s personality, including MAC address, assigned disks, and boot configuration can be migrated to the standby spare and brought back online – ensuring fast restoration of services from a remote location.
Network – The highly available fabric ensures network connectivity is maintained between servers and storage in the event of path failure. For uplink high-availability, the system can be configured with multiple uplink modules and port channels providing redundant active/active interfaces.
Storage – The highly available fabric ensures that servers can access fabric storage in the event of failures. The fabric storage system also provides an efficient, high utilization optional hardware RAID to protect data in case of disk failure.


The Industry’s First Data Center in a Box
AMD’s SeaMicro SM15000 family of Fabric Compute Systems provides the equivalent of 32 1RU dual socket servers, massive bandwidth, top of rack Ethernet switching, and high capacity shared storage, with centralized management in a small, compact 10RU form factor. In addition, it provides integrated server console management for unified management. The SeaMicro SM15000 dramatically reduces CAPEX and significantly reduces the ongoing OPEX of deploying discrete compute, networking, storage, and management systems.
More information:
An Overview of AMD|SeaMicro Technology [Anil Rao from AMD|SeaMicro, October 2012]
System Overview for the SM15000 Family [Anil Rao from AMD|SeaMicro, October 2012]
What a Difference 0.09 Percent Makes [The Wave Newsletter from AMD, September 2013]
Today’s cloud services have helped companies consolidate infrastructure and drive down costs; however, recent service interruptions point to a big downside of relying on public cloud services. Most are built using commodity, off-the-shelf servers to save costs and are standardized around the same computing and storage SLAs of 99.95 and 99.9 percent. This is significantly lower than the four-nines availability standard in the data networking world. Leading companies are realizing that the performance and reliability of their applications are inextricably linked to their underlying server architecture. In this issue, we discuss the strategic importance of selecting the right hardware. Whether building an enterprise-caliber cloud service or implementing Apache™ Hadoop® to process and analyze big data, hardware matters.
more >
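The availability gap is easier to appreciate as downtime per year. A short sketch using the figures above (the 99.95 and 99.9 percent cloud SLAs versus the four-nines networking standard); note that the 0.09 percent in the newsletter title is exactly the gap between 99.99 and 99.9 percent:

```python
# Annual downtime implied by each availability level cited above.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

for availability in (0.999, 0.9995, 0.9999):
    downtime = (1 - availability) * MINUTES_PER_YEAR
    print(f"{availability:.2%} available -> ~{downtime:.0f} minutes/year of downtime")
```

At 99.9 percent a service can be down for roughly 8.8 hours a year, while four nines allows under an hour; the seemingly small 0.09-point gap is a factor of ten in downtime.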
Where Does Software End and Hardware Begin? [The Wave Newsletter from AMD, September 2013]
Lines are blurring between software and hardware with some industry leaders choosing to own both. Software companies are realizing that the performance and value of their software depends on their hardware choices.  more >
Improving Cloud Service Resiliency with AMD’s SeaMicro Freedom Fabric [The Wave Newsletter from AMD, December 2013]
Learn why AMD’s SeaMicro Freedom™ Fabric ASIC is the server industry’s first viable solution to cost-effectively improve the resiliency and availability of cloud-based services.

We realize that having an impressive set of hardware features in the first ARM-based Opteron processors is half of the story, and that is why we are hard at work on making sure the software ecosystem will support our cutting edge hardware. Work on software enablement has been happening throughout the stack – from the UEFI, to the operating system and onto application frameworks and developer tools such as compilers and debuggers. This ensures that the software will be ready for ARM-based servers.

AMD developing Linux on ARM at Linaro Connect 2013 [Charbax YouTube channel, March 11, 2013]

[Recorded at Linaro Connect Asia 2013, March 4-8, 2013] Dr. Leendert van Doorn, Corporate Fellow at AMD, talks about what AMD does with Linaro to optimize Linux on ARM. He talks about the expectations that AMD has for results to come from Linaro in terms of achieving a better and more fully featured Linux world on ARM, especially for the ARM Cortex-A57 ARMv8 processor that AMD has announced for the server market.

AMD’s participation in software projects is well documented: the company is a gold member of the Linux Foundation, the organization that manages the development of the Linux kernel, and a group member of Linaro. AMD is also a gold sponsor of the Apache Foundation, which oversees projects such as Hadoop, HTTP Server and Samba among many others, and the company’s engineers are contributors to the OpenJDK project. This is just a small selection of the work AMD is taking part in, and these projects in particular highlight how important AMD believes open source software is to the data center, and in particular to micro servers that make use of ARM-based processors.

And running ARM-based processors doesn’t mean giving up on the flexibility of virtual machines, with KVM already ported to the ARMv8 architecture. Another popular hypervisor, Xen, is already available for 32-bit ARM architectures with a 64-bit port planned, ensuring that two popular and highly capable hypervisors will be available.

The Linux kernel has supported the 64-bit ARMv8 architecture since Linux 3.7, and a number of popular Linux distributions have already signaled their support for the architecture, including Canonical’s Ubuntu and the Red Hat-sponsored Fedora distribution. In fact, there is a downloadable, bootable Ubuntu distribution available in anticipation of ARMv8-based processors.

It’s not just operating systems and applications that are available. Developer tools such as the extremely popular open source GCC compiler and the vital GNU C Library (Glibc) have already been ported to the ARMv8 architecture and are available for download. With GCC and Glibc good to go, a solid foundation for developers to target the ARMv8 architecture is forming.

All of this work on both hardware and software should shed some light on just how big ARM processors will be in the data center. AMD, an established enterprise semiconductor vendor, is uniquely placed to ship both 64-bit ARMv8 and 64-bit x86 processors that enable “mixed rack” environments. And thanks to the army of software engineers at AMD, as well as others around the world who have committed significant time and effort, the software ecosystem will be there to support these revolutionary processors. 2014 is set to see the biggest disruption in the data center in over a decade, with AMD again at the center of it.

Lawrence Latif is a blogger and technical communications representative at AMD. His postings are his own opinions and may not represent AMD’s positions, strategies or opinions. Links to third party sites, and references to third party trademarks, are provided for convenience and illustrative purposes only. Unless explicitly stated, AMD is not responsible for the contents of such links, and no third party endorsement of AMD or any of its products is implied.

End of AMD’s 64-bit “Seattle” ARM processor brings best of breed hardware and software to the data center [AMD Business blog, Dec 12, 2013]

AMD at ARM Techcon 2013 [Charbax YouTube channel, recorded at the ARM Techcon 2013 (Oct 29-31), published on Dec 25, 2013]

In 2014 AMD will be delivering a 64-bit ARM processor for servers. The ARM architecture and ecosystem enable servers to achieve greater performance per watt and greater performance per dollar. The code name for the product is Seattle. AMD Seattle is expected to reach mass-market cloud servers in the second half of 2014.

From: Advanced Micro Devices’ CEO Discusses Q3 2013 Results – Earnings Call Transcript [Seeking Alpha, Oct 17, 2013]

Rory Read – President and CEO:

The three step turnaround plan we outlined a year ago to restructure, accelerate and ultimately transform AMD is clearly paying off. We completed the restructuring phase of our plan, maintaining cash at optimal levels and beating our $450 million quarterly operating expense goal in the third quarter. We are now in the second phase of our strategy – accelerating our performance by consistently executing our product roadmap while growing our new businesses to drive a return to profitability and positive free cash flow.
We are also laying the foundation for the third phase of our strategy, as we transform AMD to compete across a set of high growth markets. Our progress on this front was evident in the third quarter as we generated more than 30% of our revenue from our semi-custom and embedded businesses. Over the next two years we will continue to transform AMD to expand beyond a slowing, transitioning PC industry, as we create a more diverse company and look to generate approximately 50% of our revenue from these new high growth markets.

We have strategically targeted the semi-custom, ultra-low power client, embedded, dense server and professional graphics markets where we can offer differentiated products that leverage our APU and graphics IP. Our strategy allows us to continue to invest in the products that will drive growth, while effectively managing operating expenses. …

… Several of our growth businesses passed key milestones in the third quarter. Most significantly, our semi-custom business ramped in the quarter. We successfully shipped millions of units to support Sony and Microsoft, as they prepared to launch their next-generation game consoles. Our game console wins are generating a lot of customer interest, as we demonstrate our ability to design and reliably ramp production on two of the most complex SOCs ever built for high-volume consumer devices. We have several strong semi-custom design opportunities moving through the pipeline as customers look to tap into AMD’s IP, design and integration expertise to create differentiated winning solutions. … it’s our intention to win and mix in a whole set of semi-custom offerings as we build out this exciting and important new business.
We made good progress in our embedded business in the third quarter. We expanded our current embedded SOC offering and detailed our plans to be the only company to offer both 64-bit x86 and ARM solutions beginning in 2014. We have developed a strong embedded design pipeline which, we expect, will drive further growth for this business across 2014.
We also continue to make steady progress in another of our growth businesses in the third quarter, as we delivered our fifth consecutive quarter of revenue and share growth in the professional graphics area. We believe we can continue to gain share in this lucrative part of the GPU market, based on our product portfolio, design wins [in place] [ph] and enhanced channel programs.

In the server market, the industry is at the initial stages of a multiyear transition that will fundamentally change the competitive dynamic. Cloud providers are placing growing importance on how they get better performance from their datacenters while also reducing the physical footprint and power consumption of their server solutions.

This will become the defining metric of this industry and will be a key growth driver for the market and the new AMD. AMD is leading this emerging trend in the server market and we are committed to defining a leadership position.

Earlier this quarter, we had a significant public endorsement of our dense server strategy as Verizon announced a high performance public cloud that uses our SeaMicro technology and Opteron processor. We remain on track to introduce new, low-power X86 and 64-bit ARM processors next year and we believe we will offer the industry leading ARM-based servers. …

Two years ago 90% to 95% of our business was centered on PCs, and we launched a clear strategy to diversify our portfolio, taking our leadership IP in CPUs and graphics into adjacent segments where there is high growth for three, five, seven years and stickier opportunities.
We see that as an opportunity to drive 50% or more of our business over that time horizon. And if you look at the results in the third quarter, we are already seeing the benefits of that opportunity with over 30% of our revenue now coming from semi-custom and our embedded businesses.
We see the PC as an important business, but its time is changing and the go-go era is over. We need to move and attack the new opportunities where the market is going, and that’s what we are doing.

Lisa Su – Senior Vice President and General Manager, Global Business Units:

We are fully top to bottom in 28 nanometer now across all of our products, and we are transitioning to both 20 nanometer and to FinFETs over the next couple of quarters in terms of designs. We will do 20 nanometer first, and then we will go to FinFETs. …

[The] game console semi-custom product is a long life cycle product over five to seven years. Certainly when we look at cost reduction opportunities, one of the important ones is to move technology nodes. So we will in this timeframe certainly move from 28 nanometer to 20 nanometer, and the reason to do that is both pure die cost savings as well as all the power savings that our customer benefits from. … so expect the cost to go down on a unit basis as we move to 20.

[Regarding] the SeaMicro business, we are very pleased with the pipeline that we have there. Verizon was the first major datacenter win that we can talk about publicly. We have been working that relationship for the last two years. So it’s actually nice to be able to talk about it. We do see it as a major opportunity that will give us revenue potential in 2014. And we continue to see a strong pipeline of opportunities with SeaMicro as more of the datacenter guys are looking at how to incorporate these dense servers into their new cloud infrastructures. …

… As I said the Verizon engagement has lasted over the past two years. So some of the initial deployments were with the Intel processors but we do have significant deployments with AMD Opteron as well. We do see the percentage of Opteron processors increasing because that’s what we’d like to do. …

We’re very excited about the server space. It’s a very good market. It’s a market where there is a lot of innovation and change. In terms of 64-bit ARM, you will see us sampling that product in the first quarter of 2014. That development is on schedule and we’re excited about that. All of the customer discussions have been very positive and then we will combine both the [?x86 and the?]64-bit ARM chip with our SeaMicro servers that will have full solution as well. You will see SeaMicro plus ARM in 2014.

So I think we view this combination of IP as really beneficial to accelerating the dense server market both on the chip side and then also on the solution side with the customer set.

Amazon’s James Hamilton: Why Innovation Wins [AMD SeaMicro YouTube channel, Nov 12, 2012], a video included in the Headline News and Events section of Volume 1, December 2012 of The Wave Newsletter from AMD SeaMicro with the following intro:

James Hamilton, VP and Distinguished Engineer at Amazon called AMD’s co-announcement with ARM to develop 64-bit ARM technology-based processors “A great day for the server ecosystem.” Learn why and hear what James had to say about what this means for customers and the broader server industry.

James Hamilton of Amazon discusses the four basic tenets of why he thinks data center server innovation needs to go beyond just absolute performance. He believes server innovation delivering improved volume economics, storage performance, price/performance and power/performance will win in the end.

AMD Changes Compute Landscape as the First to Bridge Both x86 and ARM Processors for the Data Center [press release, Oct 29, 2012]

Company to Complement x86-based Offerings with New Processors Based on ARM 64-bit Technology, Starting with Server Market

SUNNYVALE, Calif. —10/29/2012

In a bold strategic move, AMD (NYSE: AMD) announced that it will design 64-bit ARM® technology-based processors in addition to its x86 processors for multiple markets, starting with cloud and data center servers. AMD’s first ARM technology-based processor will be a highly-integrated, 64-bit multicore System-on-a-Chip (SoC) optimized for the dense, energy-efficient servers that now dominate the largest data centers and power the modern computing experience. The first ARM technology-based AMD Opteron™ processor is targeted for production in 2014 and will integrate the AMD SeaMicro Freedom™ supercompute fabric, the industry’s premier high-performance fabric.

AMD’s new design initiative addresses the growing demand to deliver better performance-per-watt for dense cloud computing solutions. Just as AMD introduced the industry’s first mainstream 64-bit x86 server solution with the AMD Opteron processor in 2003, AMD will be the only processor provider bridging the x86 and 64-bit ARM ecosystems to enable new levels of flexibility and drive optimal performance and power-efficiency for a range of enterprise workloads.

“AMD led the data center transition to mainstream 64-bit computing with AMD64, and with our ambidextrous strategy we will again lead the next major industry inflection point by driving the widespread adoption of energy-efficient 64-bit server processors based on both the x86 and ARM architectures,” said Rory Read, president and chief executive officer, AMD. “Through our collaboration with ARM, we are building on AMD’s rich IP portfolio, including our deep 64-bit processor knowledge and industry-leading AMD SeaMicro Freedom supercompute fabric, to offer the most flexible and complete processing solutions for the modern data center.”

“The industry needs to continuously innovate across markets to meet customers’ ever-increasing demands, and ARM and our partners are enabling increasingly energy-efficient computing solutions to address these needs,” said Warren East, chief executive officer, ARM. “By collaborating with ARM, AMD is able to leverage its extraordinary portfolio of IP, including its AMD Freedom supercompute fabric, with ARM 64-bit processor cores to build solutions that deliver on this demand and transform the industry.”

The explosion of the data center has brought with it an opportunity to optimize compute with vastly different solutions. AMD is providing a compute ecosystem filled with choice, offering solutions based on AMD Opteron x86 CPUs, new server-class Accelerated Processing Units (APUs) that leverage Heterogeneous Systems Architecture (HSA), and new 64-bit ARM-based solutions.

This strategic partnership with ARM represents the next phase of AMD’s strategy to drive ambidextrous solutions in emerging mega data center solutions. In March, AMD announced the acquisition of SeaMicro, the leader in high-density, energy-efficient servers. With this announcement, AMD will integrate the AMD SeaMicro Freedom fabric across its leadership AMD Opteron x86- and ARM technology-based processors that will enable hundreds, or even thousands of processor clusters to be linked together to provide the most energy-efficient solutions.

“Over the past decade the computer industry has coalesced around two high-volume processor architectures – x86 for personal computers and servers, and ARM for mobile devices,” observed Nathan Brookwood, research fellow at Insight 64. “Over the next decade, the purveyors of these established architectures will each seek to extend their presence into market segments dominated by the other. The path on which AMD has now embarked will allow it to offer products based on both x86 and ARM architectures, a capability no other semiconductor manufacturer can likely match.”

At an event hosted by AMD in San Francisco, representatives from Amazon, Dell, Facebook and Red Hat participated in a panel discussion on opportunities created by ARM server solutions from AMD. A replay of the event can be found here as of 5 p.m. PDT, Oct. 29.

Supporting Resources

  • AMD bridges the x86 and ARM ecosystems for the data center announcement press resources
  • Follow AMD on Twitter at @AMD
  • Follow the AMD and ARM announcement on Twitter at #AMDARM
  • Like AMD on Facebook.

AMD SeaMicro SM15000 with Freedom Fabric Storage [AMD YouTube channel, Sept 11, 2012]

AMD Extends Leadership in Data Center Innovation – First to Optimize the Micro Server for Big Data [press release, Sept 10, 2012]

AMD’s SeaMicro SM15000™ Server Delivers Hyper-efficient Compute for Big Data and Cloud Supporting Five Petabytes of Storage; Available with AMD Opteron™ and Intel® Xeon® “Ivy Bridge”/”Sandy Bridge” Processors
SUNNYVALE, Calif. —9/10/2012
AMD (NYSE: AMD) today announced the SeaMicro SM15000™ server, another computing innovation from its Data Center Server Solutions (DCSS) group that cements its position as the technology leader in the micro server category. AMD’s SeaMicro SM15000 server revolutionizes computing with the invention of Freedom™ Fabric Storage, which extends its Freedom™ Fabric beyond the SeaMicro chassis to connect directly to massive disk arrays, enabling a single ten rack unit system to support more than five petabytes of low-cost, easy-to-install storage. The SM15000 server combines industry-leading density, power efficiency and bandwidth with a new generation of storage technology, enabling a single rack to contain thousands of cores, and petabytes of storage – ideal for big data applications like Apache™ Hadoop™ and Cassandra™ for public and private cloud deployments.
AMD’s SeaMicro SM15000 system is available today and currently supports the Intel® Xeon® Processor E3-1260L (“Sandy Bridge”). In November, it will support the next generation of AMD Opteron™ processors featuring the “Piledriver” core, as well as the newly announced Intel Xeon Processor E3-1265Lv2 (“Ivy Bridge”). In addition to these latest offerings, the AMD SeaMicro fabric technology continues to deliver a key building block for AMD’s server partners to build extremely energy efficient micro servers for their customers.
“Historically, server architecture has focused on the processor, while storage and networking were afterthoughts. But increasingly, cloud and big data customers have sought a solution in which storage, networking and compute are in balance and are shared. In a legacy server, storage is a captive resource for an individual processor, limiting the ability of disks to be shared across multiple processors, causing massive data replication and necessitating the purchase of expensive storage area networking or network attached storage equipment,” said Andrew Feldman, corporate vice president and general manager of the Data Center Server Solutions group at AMD. “AMD’s SeaMicro SM15000 server enables companies, for the first time, to share massive amounts of storage across hundreds of efficient computing nodes in an exceptionally dense form factor. We believe that this will transform the data center compute and storage landscape.”
AMD’s SeaMicro products transformed the data center with the first micro server to combine compute, storage and fabric-based networking in a single chassis. Micro servers deliver massive efficiencies in power, space and bandwidth, and AMD set the bar with its SeaMicro product that uses one-quarter the power, takes one-sixth the space and delivers 16 times the bandwidth of the best-in-class alternatives. With the SeaMicro SM15000 server, the innovative trajectory broadens the benefits of the micro server to storage, solving the most pressing needs of the data center.
Combining the Freedom™ Supercompute Fabric technology with the pioneering Freedom™ Fabric Storage technology enables data centers to provide more than five petabytes of storage with 64 servers in a single ten rack unit (17.5 inch tall) SM15000 system. Once these disks are interconnected with the fabric, they are seen and shared by all servers in the system. This approach provides the benefits typically provided by expensive and complex solutions such as network-attached storage and storage area networking, with the simplicity and low cost of direct attached storage.
“AMD’s SeaMicro technology is leading innovation in micro servers and data center compute,” said Zeus Kerravala, founder and principal analyst of ZK Research. “The team invented the micro server category, was the first to bring small-core servers and large-core servers to market in the same system, the first to market with a second-generation fabric, and the first to build a fabric that supports multiple processors and instruction sets. It is not surprising that they have extended the technology to storage. The bringing together of compute and petabytes of storage demonstrates the flexibility of the Freedom Fabric. They are blurring the boundaries of compute, storage and networking, and they have once again challenged the industry with bold innovation.”
Leaders Across the Big Data Community Agree
Dr. Amr Awadallah, CTO and Founder at Cloudera, the category leader that is setting the standard for Hadoop in the enterprise, observes: “The big data community is hungry for innovations that simplify the infrastructure for big data analysis while reducing hardware costs. As we hear from our vast big data partner ecosystem and from customers using CDH and Cloudera Enterprise, companies that are seeking to gain insights across all their data want their hardware vendors to provide low cost, high density, standards-based compute that connects to massive arrays of low cost storage. AMD’s SeaMicro delivers on this promise.”
Eric Baldeschwieler, co-founder and CTO of Hortonworks and a pioneer in Hadoop technology, notes: “Petabytes of low cost storage, hyper-dense energy-efficient compute, connected with a supercompute-style fabric is an architecture particularly well suited for big data analytics and Hortonworks Data Platform. At Hortonworks, we seek to make Apache Hadoop easier to use, consume and deploy, which is in line with AMD’s goal to revolutionize and commoditize the storage and processing of big data. We are pleased to see leaders in the hardware community inventing technology that extends the reach of big data analysis.”
Matt Pfeil, co-founder and VP of customer solutions at DataStax, the leader in real-time mission-critical big data platforms, agrees: “At DataStax, we believe that extraordinary databases, such as Cassandra, running mission-critical applications, can be used by nearly every enterprise. To see AMD’s DCSS group bringing together efficient compute and petabytes of storage over a unified fabric in a single low-cost, energy-efficient solution is enormously exciting. The combination of the SM15000 server and best-in-class database, Cassandra, offer a powerful threat to the incumbent makers of both databases and the expensive hardware on which they reside.”
AMD’s SeaMicro SM15000™ Technology
AMD’s SeaMicro SM15000 server is built around the industry’s first and only second-generation fabric, the Freedom Fabric. It is the only fabric technology designed and optimized to work with Central Processing Units (CPUs) that have both large and small cores, as well as x86 and non-x86 CPUs. Freedom Fabric contains innovative technology including:
  • SeaMicro IOVT (Input/Output Virtualization Technology), which eliminates all but three components from the SeaMicro motherboard – CPU, DRAM, and the ASIC itself – thereby shrinking the motherboard, while reducing power, cost and space;
  • SeaMicro TIO™ (Turn It Off) technology, which enables further power optimization on the mini motherboard by turning off unneeded CPU and chipset functions. Together, SeaMicro IOVT and TIO technology produce the smallest and most power efficient motherboards available;
  • Freedom Supercompute Fabric creates a 1.28 terabits-per-second fabric that ties together 64 of the power-optimized mini-motherboards at low latency and low power with massive bandwidth;
  • SeaMicro Freedom Fabric Storage, which allows the Freedom Supercompute Fabric to extend out of the chassis and across the data center, linking not just components inside the chassis, but those outside as well.
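Dividing the aggregate fabric bandwidth across the 64 mini-motherboards gives a per-node figure; reading it as 10 Gb/s per direction is our assumption, but it is consistent with the “up to ten gigabits per-second of bandwidth to each CPU” quoted later in the press release:

```python
# Per-node share of the Freedom fabric's aggregate bandwidth
# (1.28 Tb/s across 64 mini-motherboards, figures from the text).
aggregate_gbps = 1280
boards = 64
per_board = aggregate_gbps / boards
print(f"{per_board:.0f} Gb/s per mini-motherboard")  # 20 Gb/s
```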
AMD’s SeaMicro SM15000 Server Details
AMD’s SeaMicro SM15000 server will be available with 64 compute cards, each holding a new custom-designed single-socket octal core 2.0/2.3/2.8 GHz AMD Opteron processor based on the “Piledriver” core, for a total of 512 heavy-weight cores per system or 2,048 cores per rack. Each AMD Opteron processor can support 64 gigabytes of DRAM, enabling a single system to handle more than four terabytes of DRAM and over 16 terabytes of DRAM per rack. AMD’s SeaMicro SM15000 system will also be available with a quad core 2.5 GHz Intel Xeon Processor E3-1265Lv2 (“Ivy Bridge”) for 256 2.5 GHz cores in a ten rack unit system or 1,024 cores in a standard rack. Each processor supports up to 32 gigabytes of memory so a single SeaMicro SM15000 system can deliver up to two terabytes of DRAM and up to eight terabytes of DRAM per rack.
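The memory figures above are easy to verify. A small sketch; note the four-chassis-per-rack multiplier is inferred from the quoted core counts (2,048 cores per rack / 512 per system), not stated explicitly:

```python
# Sanity check of the DRAM figures quoted in the press release.
GB_PER_TB = 1024
CARDS_PER_SYSTEM = 64
CHASSIS_PER_RACK = 4  # inferred: 2,048 cores/rack over 512 cores/system

opteron_tb = CARDS_PER_SYSTEM * 64 / GB_PER_TB  # 64 GB DRAM per Opteron card
xeon_tb = CARDS_PER_SYSTEM * 32 / GB_PER_TB     # 32 GB DRAM per Xeon card

assert opteron_tb == 4 and opteron_tb * CHASSIS_PER_RACK == 16
assert xeon_tb == 2 and xeon_tb * CHASSIS_PER_RACK == 8
print("Opteron: 4 TB/system, 16 TB/rack; Xeon: 2 TB/system, 8 TB/rack")
```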
AMD’s SeaMicro SM15000 server also contains 16 fabric extender slots, each of which can connect to three different Freedom Fabric Storage arrays with different capacities:
  • FS 5084-L is an ultra-dense capacity-optimized storage system. It supports up to 84 SAS/SATA 3.5 inch or 2.5 inch drives in 5 rack units for up to 336 terabytes of capacity per-array and over five petabytes per SeaMicro SM15000 system;
  • FS 2012-L is a capacity-optimized storage system. It supports up to 12 3.5 inch or 2.5 inch drives in 2 rack units for up to 48 terabytes of capacity per-array or up to 768 terabytes of capacity per SeaMicro SM15000 system;
  • FS 2024-S is a performance-optimized storage system. It supports up to 24 2.5 inch drives in 2 rack units for up to 24 terabytes of capacity per-array or up to 384 terabytes of capacity per SM15000 system.
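The quoted capacities are mutually consistent if one assumes 4 TB drives in the capacity-optimized arrays and 1 TB drives in the performance-optimized array; the drive sizes are our inference, not stated in the release:

```python
# Per-array and per-system capacity check for the three Freedom Fabric
# Storage arrays, with 16 fabric extender slots per SM15000 system.
# TB-per-drive values are assumptions chosen to match the quoted totals.
SLOTS = 16
arrays = {
    "FS 5084-L": (84, 4),  # (drives per array, assumed TB per drive)
    "FS 2012-L": (12, 4),
    "FS 2024-S": (24, 1),
}
for name, (drives, tb_per_drive) in arrays.items():
    per_array = drives * tb_per_drive
    per_system = per_array * SLOTS
    print(f"{name}: {per_array} TB/array, {per_system} TB/system")
```

The FS 5084-L case works out to 5,376 TB per system, which matches the “more than five petabytes” headline figure.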

In summary, AMD’s SeaMicro SM15000 system:

  • Stands ten rack units or 17.5 inches tall;
  • Contains 64 slots for compute cards for AMD Opteron or Intel Xeon processors;
  • Provides up to ten gigabits per-second of bandwidth to each CPU;
  • Connects up to 1,408 solid state or hard drives with Freedom Fabric Storage;
  • Delivers up to 16 10 GbE uplinks or up to 64 1GbE uplinks;
  • Runs standard off-the-shelf operating systems, including Windows® and Linux (Red Hat among others), as well as VMware and Citrix XenServer hypervisors.
Availability
AMD’s SeaMicro SM15000 server with Intel’s Xeon Processor E3-1260L (“Sandy Bridge”) is now generally available in the U.S. and in select international regions. Configurations based on AMD Opteron processors and the Intel Xeon Processor E3-1265Lv2 with the “Ivy Bridge” microarchitecture will be available in November 2012. More information on AMD’s revolutionary SeaMicro family of servers can be found at www.seamicro.com/products.


1. Verizon

Verizon Cloud on AMD’s SeaMicro SM15000 [AMD YouTube channel, Oct 7, 2013]

Find out more about SeaMicro and AMD at http://bit.ly/AMD_SeaMicro. Verizon and AMD partner to create an enterprise-class cloud service that was not possible using off-the-shelf servers. Verizon Cloud is based on the SeaMicro SM15000, the industry’s first and only programmable server hardware. The new services redefine the benchmarks for public cloud computing and storage performance and reliability.

Verizon Cloud Compute and Verizon Cloud Storage [The Wave Newsletter from AMD, December 2013]

With enterprise adoption of public cloud services at 10 percent¹, Verizon identified a need for a cloud service that was secure, reliable and highly flexible with enterprise-grade performance guarantees. Large, global enterprises want to take advantage of the agility, flexibility and compelling economics of the public cloud, but the performance and reliability are not up to par for their needs. To fulfill this need, Verizon spent over two years identifying and developing software using AMD’s SeaMicro SM15000, the industry’s first and only programmable server hardware. The new services redefine the benchmarks for public cloud computing and storage performance and security.

Designed specifically for enterprise customers, the new services allow companies to use the same policies and procedures across the enterprise network and the public cloud. The close collaboration has resulted in cloud computing services with unheralded performance level guarantees that are offered with competitive pricing. The new cloud services are backed by the power of Verizon, including global data centers, global IP network and enterprise-grade managed security services. The performance and security innovations are expected to accelerate public cloud adoption by the enterprise for their mission critical applications. more >

Verizon Selects AMD’s SeaMicro SM15000 for Enterprise Class Services: Verizon Cloud Compute and Verizon Cloud Storage [AMD-Seamicro press release, Oct 7, 2013]

Verizon and AMD create technology that transforms the public cloud, delivering the industry’s most advanced cloud capabilities

SUNNYVALE, Calif. —10/7/2013

AMD (NYSE: AMD) today announced that Verizon is deploying SeaMicro SM15000™ servers for its new global cloud platform and cloud-based object storage service, whose public beta was recently announced. AMD’s SeaMicro SM15000 server links hundreds of cores together in a single system using a fraction of the power and space of traditional servers. To enable Verizon’s next generation solution, technology has been taken one step further: Verizon and AMD co-developed additional hardware and software technology on the SM15000 server that provides unprecedented performance and best-in-class reliability backed by enterprise-level service level agreements (SLAs). The combination of these technologies co-developed by AMD and Verizon ushers in a new era of enterprise-class cloud services by enabling a higher level of control over security and performance SLAs. With this technology underpinning the new Verizon Cloud Compute and Verizon Cloud Storage, enterprise customers can for the first time confidently deploy mission-critical systems in the public cloud.

“We reinvented the public cloud from the ground up to specifically address the needs of our enterprise clients,” said John Considine, chief technology officer at Verizon Terremark. “We wanted to give them back control of their infrastructure – providing the speed and flexibility of a generic public cloud with the performance and security they expect from an enterprise-grade cloud. Our collaboration with AMD enabled us to develop revolutionary technology, and it represents the backbone of our future plans.”

As part of its joint development, AMD and Verizon co-developed hardware and software to reserve, allocate and guarantee application SLAs. AMD’s SeaMicro Freedom™ fabric-based SM15000 server delivers the industry’s first and only programmable server hardware, with a high-bandwidth, low-latency programmable interconnect fabric and a programmable data and control plane for both network and storage traffic. Leveraging AMD’s programmable server hardware, Verizon developed unique software to guarantee and deliver reliability, unprecedented performance and SLAs for enterprise cloud computing services.

“Verizon has a clear vision for the future of public cloud services—services that are more flexible, more reliable and guaranteed,” said Andrew Feldman, corporate vice president and general manager, Server, AMD. “The technology we developed turns the cloud paradigm upside down by creating a service that an enterprise can configure and control as if the equipment were in its own data center. With this innovation in cloud services, I expect enterprises to migrate their core IT services and mission critical applications to Verizon’s cloud services.”

“The rapid, reliable and scalable delivery of cloud compute and storage services is the key to competing successfully in any cloud market from infrastructure, to platform, to application; and enterprises are constantly asking for more as they alter their business models to thrive in a mobile and analytic world,” said Richard Villars, vice president, Datacenter & Cloud at IDC. “Next generation integrated IT solutions like AMD’s SeaMicro SM15000 provide a flexible yet high-performance platform that companies like Verizon can use to build the next generation of cloud service offerings.”

Innovative Verizon Cloud Capabilities on AMD’s SeaMicro SM15000 Server: Industry Firsts

Verizon leveraged the SeaMicro SM15000 server’s ability to disaggregate server resources to create a cloud optimized for computing and storage services. Verizon and AMD’s SeaMicro engineers worked for over two years to create a revolutionary public cloud platform with enterprise class capabilities.

These new capabilities include:

  • Virtual machine server provisioning in seconds, a fraction of the time of a legacy public cloud;
  • Fine-grained server configuration options that match real-life requirements, not just small, medium, large sizing, including processor speed (500 MHz to 2,000 MHz) and DRAM (0.5 GB increments) options;
  • Shared disks across multiple server instances versus requiring each virtual machine to have its own dedicated drive;
  • Defined storage quality of service by specifying performance up to 5,000 IOPS to meet the demands of the application being deployed, compared to best-effort performance;
  • Consistent network security policies and procedures across the enterprise network and the public cloud;
  • Strict traffic isolation, data encryption, and data inspection with full featured firewalls that achieve Department of Defense and PCI compliance levels;
  • Guaranteed network performance for every virtual machine with reserved network performance up to 500 Mbps compared to no guarantees in many other public clouds.
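The fine-grained options listed above amount to a small request-validation contract. Here is a minimal sketch of that contract in Python; the function and parameter names are invented for illustration and are not the actual Verizon Cloud API:

```python
# Hypothetical sketch of the fine-grained VM options described above.
# Parameter names and limits mirror the bullet list; this is illustrative,
# not the actual Verizon Cloud API.

def validate_vm_request(cpu_mhz, dram_gb, storage_iops, network_mbps):
    """Check a requested configuration against the limits quoted above."""
    if not 500 <= cpu_mhz <= 2000:
        raise ValueError("processor speed must be 500-2,000 MHz")
    if dram_gb <= 0 or (dram_gb * 2) != int(dram_gb * 2):
        raise ValueError("DRAM must be requested in 0.5 GB increments")
    if not 0 < storage_iops <= 5000:
        raise ValueError("storage QoS can be specified up to 5,000 IOPS")
    if not 0 < network_mbps <= 500:
        raise ValueError("reserved network performance goes up to 500 Mbps")
    return {"cpu_mhz": cpu_mhz, "dram_gb": dram_gb,
            "storage_iops": storage_iops, "network_mbps": network_mbps}

# A configuration between "medium" and "large", which fixed sizing can't express:
request = validate_vm_request(cpu_mhz=1500, dram_gb=3.5,
                              storage_iops=4000, network_mbps=250)
```

The point of the sketch is the contrast with fixed small/medium/large sizing: any value inside the quoted ranges is a valid instance shape.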

The public beta for Verizon Cloud will launch in the fourth quarter. Companies interested in becoming a beta customer can sign up through the Verizon Enterprise Solutions website: www.verizonenterprise.com/verizoncloud.

AMD’s SeaMicro SM15000 Server

AMD’s SeaMicro SM15000 system is the highest-density, most energy-efficient server in the market. In 10 rack units, it links 512 compute cores, 160 gigabits of I/O networking and more than five petabytes of storage with a 1.28 terabits-per-second high-performance supercompute fabric, called Freedom™ Fabric. The SM15000 server eliminates top-of-rack switches, terminal servers, hundreds of cables and thousands of unnecessary components for a more efficient and simple operational environment.
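A quick back-of-the-envelope check of these density figures, assuming a standard 42U data center rack (an assumption, not stated in the release), is consistent with the "2,048 cores" figure quoted in the newsletter items elsewhere in this post:

```python
# Back-of-the-envelope check of the density figures quoted above:
# 512 cores in 10 rack units per SM15000 system.

cores_per_system = 512
rack_units_per_system = 10
rack_height_ru = 42          # assumption: a standard 42U data center rack

systems_per_rack = rack_height_ru // rack_units_per_system   # whole systems
cores_per_rack = systems_per_rack * cores_per_system

print(f"{systems_per_rack} systems per rack, {cores_per_rack} cores")
```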

AMD’s SeaMicro server product family currently supports the next generation AMD Opteron™ (“Piledriver”) processor, Intel® Xeon® E3-1260L (“Sandy Bridge”) and E3-1265Lv2 (“Ivy Bridge”) and Intel® Atom™ N570 processors. The SeaMicro SM15000 server also supports the Freedom Fabric Storage products, enabling a single system to connect with more than five petabytes of storage capacity in two racks. This approach delivers the benefits of expensive and complex solutions such as network attached storage (NAS) and storage area networking (SAN) with the simplicity and low cost of direct attached storage.

For more information on the Verizon Cloud implementation, please visit: www.seamicro.com/vzcloud.

About AMD

AMD (NYSE: AMD) designs and integrates technology that powers millions of intelligent devices, including personal computers, tablets, game consoles and cloud servers that define the new era of surround computing. AMD solutions enable people everywhere to realize the full potential of their favorite devices and applications to push the boundaries of what is possible. For more information, visit www.amd.com.

4:01 PM – 10 Dec 13:

AMD SeaMicro @SeaMicroInc

correction…Verizon is not using OpenStack, but they are using our hardware. @cloud_attitude


2. OpenStack

OpenStack 101 – What Is OpenStack? [Rackspace YouTube channel, Jan 14, 2013]

OpenStack is an open source cloud operating system and community founded by Rackspace and NASA in July 2010. Here is a brief look at what OpenStack is, how it works and what people are doing with it. See: http://www.openstack.org/

OpenStack: The Open Source Cloud Operating System

Why OpenStack? [The Wave Newsletter from AMD, December 2013]

OpenStack continues to gain momentum in the market as more, and larger, established technology and service companies move from evaluation to deployment. But why has OpenStack become so popular? In this issue, we discuss the business drivers behind the widespread adoption and why AMD’s SeaMicro SM15000 server is the industry’s best choice for a successful OpenStack deployment. If you’re considering OpenStack, learn about the options and hear winning strategies from experts featured in our most recent OpenStack webcasts. And in case you missed it, read about AMD’s exciting collaboration with Verizon enabling them to offer enterprise-caliber cloud services. more >

OpenStack the SeaMicro SM15000 – From Zero to 2,048 Cores in Less than One Hour [The Wave Newsletter from AMD, March 2013]

The SeaMicro SM15000 is optimized for OpenStack, a solution that is being adopted by both public and private cloud operators. Red 5 Studios recently deployed OpenStack on a 48-foot bus to power their new massively multiplayer online game Firefall. The SM15000 uniquely excels at object storage, providing more than 5 petabytes of direct attached storage in two data center racks.  more >

State of the Stack [OpenStack Foundation YouTube channel, recorded on Nov 8 under official title “Stack Debate: Understanding OpenStack’s Future”, published on Nov 9, 2013]

OpenStack in three short years has become one of the most successful, most talked about and most community-driven Open Source projects in history. In this joint presentation Randy Bias (Cloudscaling) and Scott Sanchez (Rackspace) will examine the progress from Grizzly to Havana and delve into new areas like refstack, TripleO, baremetal/Ironic, the move from "projects" to "programs", and AWS compatibility. They will show updated statistics on project momentum and a deep dive on OpenStack Orchestrate (Heat), which has the opportunity to change the game for OpenStack in the greater private cloud game. The duo will also highlight the challenges ahead of the project and what should be done to avoid failure. Joint presenters: Scott Sanchez, Randy Bias

The biggest issue with the OpenStack project, which "started without a benevolent dictator and/or architect", was summed up there (watch from [6:40]) roughly as: "The worst architectural decision you can make is to stay with default networking for a production system, because the default networking model in OpenStack is broken for use at scale."

Then Randy Bias summarized that particular issue later in Neutron in Production: Work in Progress or Ready for Prime Time? [Cloudscaling blog, Dec 6, 2013] as:

Ultimately, it’s unclear whether all networking functions ever will be modeled behind the Neutron API with a bunch of plug-ins. That’s part of the ongoing dialogue we’re having in the community about what makes the most sense for the project’s future.

The bottom-line consensus is that Neutron is a work in progress. Vanilla Neutron is not ready for production, so you should get a vendor if you need to move into production soon.

AMD’s SeaMicro SM15000 Is the First Server to Provide Bare Metal Provisioning to Scale Massive OpenStack Compute Deployments [press release, Nov 5, 2013]

Provides Foundation to Leverage OpenStack Compute for Large Networks of Virtualized and Bare Metal Servers

SUNNYVALE, Calif. and Hong Kong, OpenStack Summit —11/5/2013

AMD (NYSE: AMD) today announced that the SeaMicro SM15000™ server supports bare metal features in OpenStack® Compute. AMD’s SeaMicro SM15000 server is ideally suited for massive OpenStack deployments by integrating compute, storage and networking into a 10 rack unit system. The system is built around the Freedom™ fabric, the industry’s premier supercomputing fabric for scale out data center applications. The Freedom fabric disaggregates compute, storage and network I/O to provide the most flexible, scalable and resilient data center infrastructure in the industry. This allows customers to match the compute performance, storage capacity and networking I/O to their application needs. The result is an adaptive data center where any server can be mapped to any hard disk/SSD or network I/O to expand capacity or recover from a component failure.
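The "any server can be mapped to any hard disk/SSD or network I/O" behavior described above can be pictured as a shared resource pool behind the fabric. The following is a purely illustrative Python model: the class and method names are invented and this is not SeaMicro's management interface:

```python
# Purely illustrative model of Freedom-fabric disaggregation: storage and
# network I/O sit in a shared pool, and any server can be (re)bound to any
# free disk or NIC, e.g. to recover from a component failure.

class FabricPool:
    def __init__(self, disks, nics):
        self.free_disks = set(disks)
        self.free_nics = set(nics)
        self.bindings = {}                     # server -> (disk, nic)

    def attach(self, server):
        """Bind a server to an arbitrary free disk and NIC."""
        binding = (self.free_disks.pop(), self.free_nics.pop())
        self.bindings[server] = binding
        return binding

    def recover(self, server):
        """On component failure, drop the old binding and remap to spares."""
        self.bindings.pop(server)              # failed parts are not reused
        return self.attach(server)

pool = FabricPool(disks={"disk1", "disk2"}, nics={"nic1", "nic2"})
original = pool.attach("server-a")
replacement = pool.recover("server-a")         # server-a now holds spares
```

The design point being illustrated: because bindings live in the fabric rather than in the chassis wiring, recovery is a remap operation instead of a physical swap.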

“OpenStack Compute’s bare metal capabilities provide the scalability and flexibility to build and manage large-scale public and private clouds with virtualized and dedicated servers,” said Dhiraj Mallick, corporate vice president and general manager, Data Center Server Solutions, at AMD. “The SeaMicro SM15000 server’s bare metal provisioning capabilities should simplify enterprise adoption of OpenStack and accelerate mass deployments since not all workloads are optimized for virtualized environments.”

Bare metal computing provides more predictable performance than a shared server environment using virtual servers. In a bare metal environment there are no delays caused by different virtual machines contending for shared resources, since the entire server’s resources are dedicated to a single user instance. In addition, the performance penalty imposed by the hypervisor is eliminated, allowing the application software to make full use of the processor’s capabilities.

In addition to leading in bare metal provisioning, AMD’s SeaMicro SM15000 server provides the ability to boot and install a base server image from a central server for massive OpenStack deployments. A cloud image containing KVM, the OpenStack Compute image and other applications can be configured by the central server. The coordination and scheduling of this workflow can be managed by Heat, the orchestration application that manages the entire lifecycle of an OpenStack cloud for bare metal and virtual machines.
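The central-image workflow in the paragraph above can be sketched as follows. This is an illustration of the described flow only; it does not use the real Heat or OpenStack interfaces, and all names are hypothetical:

```python
# Sketch of the central-image workflow described above: one base image
# (KVM plus the OpenStack Compute image and applications) is configured
# centrally and installed on each bare-metal node. In the real system this
# coordination would be handled by Heat; names here are hypothetical.

BASE_IMAGE = {"hypervisor": "KVM", "compute": "OpenStack Compute", "apps": ()}

def provision(nodes, image=BASE_IMAGE):
    """Install the same base image on every enrolled bare-metal node."""
    deployed = {}
    for node in nodes:
        deployed[node] = dict(image, state="active")   # per-node copy
    return deployed

cluster = provision(["cartridge-01", "cartridge-02", "cartridge-03"])
```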

Supporting Resources

Scalable Fabric-based Object Storage with the SM15000 [The Wave Newsletter from AMD, March 2013]

The SeaMicro SM15000 is changing the economics of deploying object storage, delivering the storage of unprecedented amounts of data while using 1/2 the power and 1/3 the space of traditional servers. more >

SwiftStack with OpenStack Swift Overview [SwiftStack YouTube channel, Oct 4, 2012]

SwiftStack manages and operates OpenStack Swift. SwiftStack is built from the ground up for web, mobile and as-a-service applications. Designed to store and serve content for many concurrent users, SwiftStack contains everything you need to set up, integrate and operate a private storage cloud on hardware that you control.

AMD’s SeaMicro SM15000 Server Achieves Certification for Rackspace Private Cloud, Validated for OpenStack [press release, Jan 30, 2013]

Providing unprecedented computing efficiency for “Nova in a Box” and object storage capacity for “Swift in a Rack”


3. Red Hat

OpenStack + SM15000 Server = 1,000 Virtual Machines for Red Hat [The Wave Newsletter from AMD, June 2013]

Red Hat deploys one SM15000 server to quickly and cost effectively build out a high capacity server cluster to meet the growing demands for OpenShift demonstrations and to accelerate sales. Red Hat OpenShift, which runs on Red Hat OpenStack, is Red Hat’s cloud computing Platform-as-a-Service (PaaS) offering. The service provides built-in support for nearly every open source programming language, including Node.js, Ruby, Python, PHP, Perl, and Java. OpenShift can also be expanded with customizable modules that allow developers to add other languages.
more >

Red Hat Enterprise Linux OpenStack Platform: Community-invented, Red Hat-hardened [RedHatCloud YouTube channel, Aug 5, 2013]

Learn how Red Hat Enterprise Linux OpenStack Platform allows you to deploy a supported version of OpenStack on an enterprise-hardened Linux platform to build a massively scalable public-cloud-like platform for managing and deploying cloud-enabled workloads. With Red Hat Enterprise Linux OpenStack Platform, you can focus resources on building applications that add value to your organization, while Red Hat provides support for OpenStack and the Linux platform it runs on.

AMD’s SeaMicro SM15000 Server Achieves Certification for Red Hat OpenStack [press release, June 12, 2013]

BOSTON – Red Hat Summit —6/12/2013

AMD (NYSE: AMD) today announced that its SeaMicro SM15000™ server is certified for Red Hat® OpenStack, and that the company has joined the Red Hat OpenStack Cloud Infrastructure Partner Network. The certification ensures that the SeaMicro SM15000 server provides a rigorously tested platform for organizations building private or public cloud Infrastructure as a Service (IaaS), based on the security, stability and support available with Red Hat OpenStack. AMD’s SeaMicro solutions for OpenStack include “Nova in a Box” and “Swift in a Rack” reference architectures that have been validated to ensure consistent performance, supportability and compatibility.

The SeaMicro SM15000 server integrates compute, storage and networking into a compact, 10 RU (17.5 inches) form factor with a 1.28 Tbps supercompute fabric. The technology enables users to install and configure thousands of computing cores more efficiently than any other server. Complex time-consuming tasks are completed within minutes due to the integration of compute, storage and networking. Operational fire drills, such as setting up servers on short notice, manually configuring hundreds of machines and re-provisioning the network to optimize traffic are all handled through a single, easy-to-use management interface.

“AMD has shown leadership in providing a uniquely differentiated server for OpenStack deployments, and we are excited to have them as a seminal member of the Red Hat OpenStack Cloud Infrastructure Partner Network,” said Mike Werner, senior director, ISV and Developer Ecosystems at Red Hat. “The SeaMicro server is an example of incredible innovation, and I am pleased that our customers will have the SM15000 system as an option for energy-efficient, dense computing as part of the Red Hat Certified Solution Marketplace.”

AMD’s SeaMicro SM15000 system is the highest-density, most energy-efficient server in the market. In 10 rack units, it links 512 compute cores, 160 gigabits of I/O networking and more than five petabytes of storage with a 1.28 Terabits-per-second high-performance supercompute fabric, called Freedom™ Fabric. The SM15000 server eliminates top-of-rack switches, terminal servers, hundreds of cables and thousands of unnecessary components for a more efficient and simple operational environment.

“We are excited to be a part of the Red Hat OpenStack Cloud Infrastructure Partner Network because the company has a strong track record of bridging the communities that create open source software and the enterprises that use it,” said Dhiraj Mallick, corporate vice president and general manager, Data Center Server Solutions, AMD. “As cloud deployments accelerate, AMD’s certified SeaMicro solutions ensure enterprises are able to realize the benefits of increased efficiency and simplified operations, providing them with a competitive edge and the lowest total cost of ownership.”

AMD’s SeaMicro server product family currently supports the next-generation AMD Opteron™ (“Piledriver”) processor, Intel® Xeon® E3-1260L (“Sandy Bridge”) and E3-1265Lv2 (“Ivy Bridge”) and Intel® Atom™ N570 processors. The SeaMicro SM15000 server also supports the Freedom Fabric Storage products, enabling a single system to connect with more than five petabytes of storage capacity in two racks. This approach delivers the benefits of expensive and complex solutions such as network attached storage (NAS) and storage area networking (SAN) with the simplicity and low cost of direct attached storage.


4. Ubuntu

Ubuntu Server certified hardware SeaMicro [one of Ubuntu certification pages]

Canonical works closely with SeaMicro to certify Ubuntu on a range of their hardware.

The following are all Certified. More and more devices are being added with each release, so don’t forget to check this page regularly.

Ubuntu on SeaMicro SM15000-OP | Ubuntu [Sept 1, 2013]

Ubuntu on SeaMicro SM15000-XN | Ubuntu [Oct 1, 2013]

Ubuntu on SeaMicro SM15000-XH | Ubuntu [Dec 18, 2013]

Ubuntu OIL announced for broadest set of cloud infrastructure options [Ubuntu Insights, Nov 5, 2013]

Today at the OpenStack Design Summit in Hong Kong, we announced the Ubuntu OpenStack Interoperability Lab (Ubuntu OIL). The programme will test and validate the interoperability of hardware and software in a purpose-built lab, giving Ubuntu OpenStack users the reassurance and flexibility of choice.
We’re launching the programme with many significant partners on board, such as: Dell, EMC, Emulex, Fusion-io, HP, IBM, Inktank/Ceph, Intel, LSI, Open Compute, SeaMicro and VMware.
The OpenStack ecosystem has grown rapidly giving businesses access to a huge selection of components for their cloud environments. Most will expect that, whatever choices they make or however complex their requirements, the environment should ‘just work’, where any and all components are interoperable. That’s why we created the Ubuntu OpenStack Interoperability Lab.
Ubuntu OIL is designed to offer integration and interoperability testing as well as validation to customers, ISVs and hardware manufacturers. Ecosystem partners can test their technologies’ interoperability with Ubuntu OpenStack and a range of software and hardware, ensuring they work together seamlessly as well as with existing processes and systems. It means that manufacturers can get to market faster and with less cost, while users can minimise integration efforts required to connect Ubuntu OpenStack with their infrastructure.
Ubuntu is about giving customers choice. Over the last few releases, we’ve introduced new hypervisors, software-defined networking (SDN) stacks, and capabilities for workloads running on different types of public cloud options. Ubuntu OIL will test all of these options as well as other technologies to ensure Ubuntu OpenStack offers the broadest set of validated and supported technology options compatible with user deployments. Ubuntu OIL will test and validate all supported and future releases of Ubuntu, Ubuntu LTS and OpenStack.
Involvement in the lab is through our Canonical Partner Programme. New partners can sign up here.
Learn more about Ubuntu OIL


5. Big Data, Hadoop

Storing Big Data – The Rise of the Storage Cloud [The Wave Newsletter from AMD, December 2012]

Data is everywhere and growing at unprecedented rates. Each year, there are over one hundred million new Internet users generating thousands of terabytes of data every day. Where will all this data be stored? more >

AMD’s SeaMicro SM15000 Achieves Certification for CDH4, Cloudera’s Distribution Including Apache Hadoop Version 4 [press release, March 20, 2013]

Hadoop-in-a-Box” package accelerates deployments by providing 512 cores and over five petabytes in two racks

The Hidden Truth: Hadoop is a Hardware Investment [The Wave Newsletter from AMD, September 2013]

Apache Hadoop is a leading software application for analyzing big data, but its performance and reliability are tied to a company’s underlying server architecture. Learn how AMD’s SeaMicro SM15000™ server compares with other minimum scale deployments. more >

Software defined server without Microsoft: HP Moonshot

Updates as of Dec 6, 2013 (8 months after the original post):

[slide image]

Martin Fink, CTO and Director of HP Labs, Hewlett-Packard Company [Oct 29, 2013]:

This Cloud, Social, Big Data and Mobile we are referring to as this “New Style of IT” [when talking about the slide shown above]

Through the Telescope: 3 Minutes on HP Moonshot [HewlettPackardVideos YouTube channel, July 24, 2013]

Steven Hagler (Senior Director, HP Americas Moonshot) provides insight on Moonshot, why it’s right for the market, and what it means for your business. http://hp.com/go/moonshot

HIGHLY RECOMMENDED READING:
HP Offers Exclusive Peek Inside Impending Moonshot Servers [Enterprise Tech, Nov 26, 2013]: “The company is getting ready to launch a bunch of new server nodes for Moonshot in a few weeks”.
– So far, the simplest and most understandable information is provided in the Visual Configuration Moonshot diagram set: http://www.goldeneggs.fi/documents/GE-HP-MOONSHOT-A.pdf  The site also includes full visualisations of all x86 rack, desktop and blade servers.

From HP Launches Investment Solutions to Ease Organizations’ Transitions to “New Style of IT” [press release, Dec 6, 2013]

The HP accelerated migration program for cloud—helps …

The HP Pre-Provisioning Solution—lets …

New investment solutions for HP Moonshot servers and HP Converged Systems—provide customers and channel partners with quick access to the latest HP products through a simple, scalable and predictable monthly payment that aligns technology and financial requirements to business needs.   

Access the world’s first software defined server [HP offering, Nov 27, 2013]
With predictable and scalable monthly payments

HP Moonshot Financing
Cloud, Mobility, Security and Big Data require a different level of technology efficiency and scalability. Traditional systems may no longer be able to handle the increasing internet workloads with optimal performance. Having an investment strategy that gives you access to newer technology such as HP Moonshot allows you to meet the requirements for the New Style of IT.
A simple and flexible payment structure can help you access the latest technology on your terms.
Why leverage a predictable monthly payment?
• Provides financial flexibility to scale up your business
• May help mitigate the financial risk of your IT transformation
• Enables IT refresh cycles to keep up with the latest technology
• May help improve your cash flow
• Offers predictable monthly payments which can help you stay within budget
How does it work?
• Talk to your HP Sales Rep about acquiring HP Moonshot using a predictable monthly payment
• Expand your capacity easily with a simple add-on payment
• Add spare capacity needed for even greater agility
• Set your payment terms based on your business needs
• After an agreed term, you’ll be able to refresh your technology

From The HP Moonshot team provides answers to your questions about the datacenter of the future [The HP Blog Hub, as of Aug 29, 2013]

Q: WHAT IS THE FUNDAMENTAL IDEA BEHIND THE HP MOONSHOT SYSTEM?

A: The idea is simple—use energy-efficient CPUs attuned to a particular application to achieve radical power, space and cost savings. Stated another way: creating software defined servers for specific applications that run at scale.

Q: WHAT IS INNOVATIVE ABOUT THE HP MOONSHOT ARCHITECTURE?

A: The most innovative characteristic of HP Moonshot is the architecture. Everything that is a common resource in a traditional server has been converged into the chassis. The power, cooling, management, fabric, switches and uplinks are all shared across 45 hot-pluggable cartridges in a 4.3U chassis.

Q: EXPLAIN WHAT IS MEANT BY “SOFTWARE DEFINED” SERVER

A: Software defined servers achieve optimal useful work per watt by specializing for a given workload: matching a software application with available technology that can provide the most optimal performance. For example, the first Moonshot server is tuned for the web front-end LAMP (Linux/Apache/MySQL/PHP) stack. In the most extreme case of a future FPGA (Field Programmable Gate Array) cartridge, the hardware truly reflects the exact algorithm required.

Q: DESCRIBE THE FABRIC THAT HAS BEEN INTEGRATED INTO THE CHASSIS

A: The HP Moonshot 1500 Chassis has been built for future SOC designs that will require a range of network capabilities including cartridge to cartridge interconnect. Additionally, different workloads will have a range of storage needs. 

There are four separate and independent fabrics that support a range of current and future capabilities: eight lanes of Ethernet; a storage fabric (6Gb SATA) that enables shared storage amongst cartridges or storage expansion to a single cartridge; a dedicated iLO management network to manage all the servers as one; and a cluster fabric with point-to-point connectivity and low-latency interconnect between servers.
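Captured as plain data, the shared-chassis design in this answer looks roughly like the following sketch. The structure itself is illustrative; the figures come from the Q&A text above:

```python
# The four independent Moonshot 1500 fabrics and the shared chassis
# resources from the Q&A above, captured as plain data so a capacity or
# topology tool could reason about them. The structure is illustrative.

MOONSHOT_1500_FABRICS = {
    "ethernet": {"lanes": 8},
    "storage": {"link": "6Gb SATA",
                "role": "shared storage amongst cartridges or expansion"},
    "ilo_management": {"role": "manage all the servers as one"},
    "cluster": {"topology": "point-to-point",
                "role": "low-latency server-to-server interconnect"},
}

MOONSHOT_1500_CHASSIS = {
    "height_ru": 4.3,
    "cartridges": 45,      # hot-pluggable, sharing power, cooling, uplinks
    "fabrics": MOONSHOT_1500_FABRICS,
}
```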

[slide image]

Martin Fink, CTO and Director of HP Labs, Hewlett-Packard Company [Oct 29, 2013]:

We’ve actually announced three ARM-based cartridges. These are available in our Discovery Labs now, and they’ll be shipping next year with new processor technology. [When talking about the slide shown above.]

Calxeda Midway in HP Moonshot [Janet Bartleson YouTube channel, Oct 28, 2013]

HP’s Paul Santeler encourages you to test Calxeda’s Midway-based Moonshot server cartridges in the HP Discovery Labs. http://www.hp.com/go/moonshot http://www.calxeda.com

For details about the latest and future Calxeda SoCs, see the closing part of this Dec 7 update

@SC13: HP Moonshot ProLiant m800 Server Cartridge with Texas Instruments [Janet Bartleson YouTube channel, Nov 26, 2013]

@SC13, Texas Instruments’ Arnon Friedmann shows the HP ProLiant m800 Server Cartridge with four 66AK2H12 Keystone II SoCs, each with four ARM Cortex-A15 cores and eight C66x DSP cores, altogether providing 500 gigaflops of DSP performance and 8 gigabytes of memory on the server cartridge. It’s lower power, lower cost than traditional servers.

For details about the latest Texas Instruments DSP+ARM SoCs, see the section after the Calxeda material in the closing part of this Dec 7 update

The New Style of IT & HP Moonshot: Keynote by HP’s Martin Fink at ARM TechCon ’13 [ARMflix YouTube channel, recorded on Oct 29, published on Nov 11, 2013]

Keynote Presentation: The New Style of IT Speaker: Martin Fink, CTO and Director of HP Labs, Hewlett-Packard Company It’s an exciting time to be in technology. The IT industry is at a major inflection point driven by four generation-defining trends: the cloud, social, Big Data, and mobile. These trends are forever changing how consumers and businesses communicate, collaborate, and access information. And to accommodate these changes, enterprises, governments and fast growing companies desperately need a “New Style of IT.” Shaping the future of IT starts with a radically different approach to how we think about compute — for example, in servers, HP has a game-changing new category that requires 80% less space, uses 89% less energy, costs 77% less–and is 97% less complex. There’s never been a better time to be part of the ecosystem and usher in the next-generation of innovation.

From Big Data and the future of computing – A conversation with John Sontag [HP Enterprise 20/20 Blog, October 28, 2013]

20/20 Team: Where is HP today in terms of helping everyone become a data scientist?
John Sontag: For that to happen we need a set of tools that allow us to be data scientists in more than the ad hoc way I just described. These tools should let us operate productively and repeatably, using vocabulary that we can share – so that each of us doesn’t have to learn the same lessons over and over again. Currently at HP, we’re building a software tool set that is helping people find value in the data they’re already surrounded by. We have HAVEn for data management, which includes the Vertica data store, and Autonomy for analysis. For enterprise security we have ArcSight and ThreatCentral. We have our work around StoreOnce to compress things, and Express Query to allow us to consume data in huge volumes. Then we have hardware initiatives like Moonshot, which is bringing different kinds of accelerators to bear so we can actually change how fast – and how effectively – we can chew on data.
20/20 Team: And how is HP Labs helping shape where we are going?
John Sontag: One thing we’re doing on the software front is creating new ways to interrogate data in real time through an interface that doesn’t require you to be a computer scientist.  We’re also looking at how we present the answers you get in a way that brings attention to the things you most need to be aware of. And then we’re thinking about how to let people who don’t have massive compute resources at their disposal also become data scientists.
20/20 Team: What’s the answer to that?
John Sontag: For that, we need to rethink the nature of the computer itself. If Moonshot is helping us make computers smaller and less energy-hungry, then our work on memristors will allow us to collapse the old processor/memory/storage hierarchy, and put processing right next to the data. Next, our work on photonics will help collapse the communication fabric and bring these very large scales into closer proximity. That lets us combine systems in new and interesting ways. And then we’re thinking about how to package these re-imagined computers into boxes of different sizes that match the needs of everyone from the individual to the massive, multinational entity. On top of all that, we need to reduce costs – if we tried to process all the data that we’re predicting we’ll want to at today’s prices, we’d collapse the world economy – and we need to think about how we secure and manage that data, and how we deliver algorithms that let us transform it fast enough so that you, your colleagues, and partners across the world can conduct experiments on this data literally as fast as we can think them up.
About John Sontag:
John Sontag is vice president and director of systems research at HP Labs. The systems research organization is responsible for research in memristor, photonics, physical and system architectures, storing data at high volume, velocity and variety, and operating systems. Together with HP business units and partners, the team reaches from basic research to advanced development of key technologies.
With more than 30 years of experience at HP in systems and operating system design and research, Sontag has had a variety of leadership roles in the development of HP-UX on PA-RISC and IPF, including 64-bit systems, support for multiple input/output systems, multi-system availability and Symmetric Multi-Processing scaling for OLTP and web servers.
Sontag received a bachelor of science degree in electrical engineering from Carnegie Mellon University.

Meet the Innovators [HewlettPackardVideos YouTube channel, May 23, 2013]

Meet those behind the innovative technology that is HP Project Moonshot http://www.hp.com/go/moonshot

From Meet the innovators behind the design and development of Project Moonshot [The HP Blog Hub, June 6, 2013]

This video introduces key HP team members behind the innovative technology that fundamentally changes how hyperscale servers are built and operated, including:
• Chandrakant Patel – HP Senior Fellow and HP Labs Chief Engineer
• Paul Santeler  – Senior Vice President and General Manager of the HyperScale Business Unit
• Kelly Pracht – Moonshot Hardware Platform Manager, HyperScale Business Unit
• Dwight Barron – HP Fellow, Chief Technologist, HyperScale Business Unit

From Six IT technologies to watch [HP Enterprise 20/20 Blog, Sept 5, 2013]

1. Software-defined everything
Over the last couple of years we have heard a lot about software-defined networking (SDN) and, more recently, the software-defined data center (SDDC). There are fundamentally two ways to implement a cloud. Either you take the approach of the major public cloud providers, combining low-cost skinless servers with commodity storage, linked through cheap networking, and establish racks and racks of them. It is probably the cheapest solution, but you have to implement all the management and optimization yourself. You can use software tools to do so, but you will have to develop the policies, the workflows and the automation.
Alternatively, you can use what is becoming known as “converged infrastructure,” a term originally coined by HP but now used by all our competitors. Servers, storage and networking are integrated in a single rack, or a series of interconnected ones, and the management and orchestration software included in the offering provides optimal use of the environment. You get increased flexibility and are able to respond faster to requests and opportunities.
We all know that different workloads require different characteristics. Infrastructures are typically implemented using general purpose configurations that have been optimized to address a very large variety of workloads. So, they do an average job for each. What if we could change the configuration automatically whenever the workload changes to ensure optimal usage of the infrastructure for each workload? This is precisely the concept of software defined environments. Configurations are no longer stored in the hardware, but adapted as and when required. Obviously this requires more advanced software that is capable of reconfiguring the resources.
A software-defined data center is a data center where the infrastructure is virtualized and also delivered as a service. Control of the data center is automated by software, meaning hardware configuration is maintained through intelligent software systems. Three core components comprise the SDDC: server virtualization, network virtualization and storage virtualization. That said, some workloads still require physical systems (often referred to as bare metal), hence the importance of projects such as OpenStack’s Ironic, which could be described as a hypervisor for physical environments.
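To make the idea concrete, here is a minimal sketch of per-workload reconfiguration; the workload names and resource figures are hypothetical, purely for illustration, and a real SDDC controller would drive hypervisor, switch and storage APIs rather than return a dictionary:

```python
# Hypothetical sketch of "software-defined" configuration: the setup is not
# baked into the hardware, a controller picks a resource profile per workload.
PROFILES = {
    "oltp":      {"vcpus": 8,  "ram_gb": 64,  "storage": "ssd", "net_gbps": 10},
    "analytics": {"vcpus": 32, "ram_gb": 256, "storage": "hdd", "net_gbps": 40},
    "web":       {"vcpus": 4,  "ram_gb": 16,  "storage": "ssd", "net_gbps": 10},
}

def configure(workload: str) -> dict:
    """Return the resource profile for a workload, falling back to a
    general-purpose configuration when no optimized one exists."""
    default = {"vcpus": 8, "ram_gb": 32, "storage": "hdd", "net_gbps": 10}
    return PROFILES.get(workload, default)

print(configure("analytics")["vcpus"])   # 32
print(configure("unknown")["storage"])   # hdd
```

The point of the sketch is only the principle: configuration lives in software and changes with the workload, instead of being fixed in the hardware.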

2. Specialized servers

As I mentioned, not all workloads are equal, yet they run on the same general-purpose servers (typically x86). What if we created servers that are optimized for specific workloads? In particular, when developing cloud environments delivering multi-tenant SaaS services, one could well envisage servers specialized for a specific task, for example video manipulation or dynamic web service management. Developing efficient, low-energy specialized servers that can be configured through software is what HP’s Project Moonshot is all about. It is still in its infancy, and there is much more to come. Imagine about 45 server/storage cartridges linked through three fabrics (for networking, storage and high-speed cartridge-to-cartridge interconnections), sharing common elements such as network controllers, management functions and power management. If you then build the cartridges using low-energy servers, you reduce energy consumption by nearly 90%. If you build SaaS-type environments using multi-tenant application modules, do you still need virtualization? This simplifies the environment, reduces the cost of running it and optimizes the use of server technology for every workload.
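A quick back-of-envelope check of that “nearly 90%” claim, using assumed wattages (the figures below are illustrative, not HP numbers):

```python
# Illustrative comparison: a conventional general-purpose 1U x86 server
# versus a low-energy Moonshot-style server cartridge.
conventional_w = 250      # assumed watts per conventional 1U server
cartridge_w = 30          # assumed watts per low-energy cartridge
reduction = 1 - cartridge_w / conventional_w
print(f"{reduction:.0%}")  # 88%
```

With those assumptions a cartridge saves roughly nine-tenths of the power per server, which is the order of magnitude the article claims.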

Particularly for environments that constantly run certain types of workloads, such as analyzing social or sensor data, the use of specialized servers can make the difference. This is definitely an evolution to watch.

3. Photonics

Now let’s complement those specialized servers with photonics-based connections enabling flat, hyper-efficient networks with boosted bandwidth, and we have an environment that is optimized to deliver the complex tasks of analyzing and acting upon signals provided by the environment in its largest sense.

But technology is going even further. I talked about the three fabrics; over time, why not use photonics to improve the speed of the fabrics themselves, increasing the overall compute speed? We are not there yet, but early experiments with photonic backplanes for blade systems have shown overall compute speed increased by up to a factor of seven. That should be the second step.

The third step takes things further. The specialized servers I talked about are typically system-on-a-chip (SoC) servers, in other words complete computers on a single chip. Why not use photonics to link those chips with the outside world? On-chip lasers have been developed in prototypes, so we are not that far out. We could even go one step further and use photonics within the chip itself, but that is still a little further out. I can’t tell you the increase in compute power that such evolutions will provide, but I would expect it to be huge.

4. Storage
Storage is at a crossroads. On the one hand, hard disk drives (HDDs) have improved drastically over the last 20 years, both in read speed and in density. I still remember the 20MB hard disk drive of the early ’80s, weighing 125 kg. When I compare that with the 3TB drive I bought a couple of months ago for my home PC, I can easily appreciate this evolution. But then the SSD (solid state disk) appeared. Where an HDD read will take you 4 ms, an SSD read is down at 0.05 ms.
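Those two latency figures translate into a large gap in random-read capability:

```python
# Per-read latency figures quoted in the text above.
hdd_read_ms = 4.0
ssd_read_ms = 0.05

speedup = hdd_read_ms / ssd_read_ms      # how much faster one SSD read is
iops_hdd = 1000 / hdd_read_ms            # serial random reads per second
iops_ssd = 1000 / ssd_read_ms

print(speedup)             # 80.0
print(iops_hdd, iops_ssd)  # 250.0 20000.0
```

An 80x gap per read is why SSDs changed storage design even before any density argument enters the picture.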

Using nanotechnologies, HP Labs has developed prototypes of the memristor, a new approach to data storage that is faster than flash memory and consumes far less energy. Such a device could store up to 1 petabit of information per square centimeter and could replace both memory and storage, speeding up access to data and allowing an order-of-magnitude increase in the amount of data stored. Since then, HP has been busy preparing production of these devices. First production units should be available towards the end of 2013 or early in 2014. It will transform our storage approaches completely.


Details about the latest and future Calxeda SoCs:

Calxeda EnergyCore ECX-2000 family – ARM TechCon ’13 [ARMflix YouTube channel, recorded on Oct 30, 2013]

Calxeda tells us about their new EnergyCore ECX-2000 product line based on ARM Cortex-A15. http://www.calxeda.com/ecx-2000-family/

From ECX-2000 Product Brief [October, 2013]

The Calxeda EnergyCore ECX-2000 Series is a family of SoC (Server-on-Chip) products that delivers the power efficiency of ARM® processors, and the OpenStack, Linux, and virtualization software needed for modern cloud infrastructures. Using the ARM Cortex A15 quad-core processor, the ECX-2000 delivers roughly twice the performance, three times the memory bandwidth, and four times the memory capacity of the ground-breaking ECX-1000. It is extremely scalable due to the integrated Fleet Fabric Switch, while the embedded Fleet Engine simultaneously provides out-of-band control and intelligence for autonomic operation.

In addition to enhanced performance, the ECX-2000 provides hardware virtualization support via KVM and Xen hypervisors. Coupled with certified support for Ubuntu 13.10 and the Havana Openstack release, this marks the first time an ARM SoC is ready for Cloud computing. The Fleet Fabric enables the highest network and interconnect bandwidth in the MicroServer space, making this an ideal platform for streaming media and network-intensive applications.

The net result of the EnergyCore SoC architecture is a dramatic reduction in power and space requirements, allowing rapidly growing data centers to quickly realize operating and capital cost savings.

image

Scalability you can grow into. An integrated EnergyCore Fabric Switch within every SoC provides up to five 10 Gigabit lanes for connecting thousands of ECX-2000 server nodes into clusters capable of handling distributed applications at extreme scale. Completely topology agnostic, each SoC can be deployed to work in a variety of mesh, grid, or tree network structures, providing opportunities to find the right balance of network throughput and fault resiliency for any given workload.

Fleet Fabric Switch
• Integrated 80Gb (8×8) crossbar switch with through-traffic support
• Five (5) 10Gb external channels, three (3) 10Gb internal channels
• Configurable topology capable of connecting up to 4096 nodes
• Dynamic Link Speed Control from 1Gb to 10Gb to minimize power and maximize performance
• Network Proxy Support maintains network presence even with node powered off
• In-order flow delivery
• MAC learning provider support for virtualization

ARM Servers and Xen — Hypervisor Support at Hyperscale – Larry Wikelius, [Co-Founder of] Calxeda [TheLinuxFoundation YouTube channel, Oct 1, 2013]

[Xen User Summit 2013] The emergence of power optimized hyperscale servers is leading to a revolution in Data Center design. The intersection of this revolution with the growth of Cloud Computing, Big Data and Scale Out Storage solutions is resulting in innovation at rate and pace in the Server Industry that has not been seen for years. One particular example of this innovation is the deployment of ARM based servers in the Data Center and the impact these servers have on Power, Density and Scale. In this presentation we will look at the role that Xen is playing in the Revolution of ARM based server design and deployment and the impact on applications, systems management and provisioning.

Calxeda Launches Midway ARM Server Chips, Extends Roadmap [EnterpriseTech, Oct 28, 2013]

ARM server chip supplier Calxeda is just about to ship its second generation of EnergyCore processors for hyperscale systems and most of its competitors are still working on their first products. Calxeda is also tweaking its roadmap to add a new chip to its lineup, which will bridge between the current 32-bit ARM chips and its future 64-bit processors.
There is going to be a lot of talk about server-class ARM processors this week, particularly with ARM Holdings hosting its TechCon conference in Santa Clara.
A month ago, EnterpriseTech told you about the “Midway” chip that Calxeda had in the works and as well as its roadmap to get beefier 64-bit cores and extend its Fleet Services fabric to allow for more than 100,000 nodes to be linked together.
The details were a little thin on the Midway chip, but we now know that it will be commercialized as the ECX-2000, and that Calxeda is sending out samples to server makers right now. The plan is to have the ECX-2000 generally available by the end of the year, and that is why company is ready to talk about some feeds and speeds. Karl Freund, vice president of marketing at Calxeda, walked EnterpriseTech through the details.

image

The Midway chip is fabricated in the same 40 nanometer process as the existing “High Bank” ECX-1000 chip that Calxeda first put into the field in November 2011 in the experimental “Redstone” hyperscale servers from Hewlett-Packard. That 32-bit chip, based on the ARM Cortex-A9 core, was subsequently adopted in systems from Penguin Computing, Boston, and a number of other hyperscale datacenter operators who did proofs of concept with the chips. The ECX-1000 has four cores and was somewhat limited in its performance and was definitely limited in its main memory, which topped out at 4 GB across the four-core processor. But the ECX-2000 addresses these issues.
The ECX-2000 is based on ARM Holding’s Cortex-A15 core and has the 40-bit physical memory extensions, which allows for up to 16 GB of memory to be physically attached to each socket. With the 40-bit physical addressing added with the Cortex-A15, the memory controller can, in theory, address up to 1 TB of main memory; this is called Large Physical Address Extension (LPAE) in the ARM lingo, and it maps the 32-bit physical addressing on the core to a 40-bit virtual address space. Each core on the ECX-2000 has 32 KB of L1 instruction cache and 32 KB of L1 data cache, and ARM licensees are allowed to scale the L2 cache as they see fit. The ECX-2000 has 4 MB of L2 cache shared across the four cores on the die. These are exactly the same L1 and L2 cache sizes as used in the prior ECX-1000 chips.
The Cortex-A15 design was created to scale to 2.5 GHz, but as you crank up the clocks on any chip, the amount of energy consumed and heat radiated grows progressively larger as clock speeds go up. At a certain point, it just doesn’t make sense to push clock speeds. Moreover, every drop in clock speed gives a proportionately larger increase in thermal efficiency, and this is why, says Freund, Calxeda is making its implementation of the Cortex-A15 top out at 1.8 GHz. The company will offer lower-speed parts running at 1.1 GHz and 1.4 GHz for customers that need an even better thermal profile or a cheaper part where low cost is more important than raw performance or thermals.
What Calxeda and its server and storage array customers are focused on is the fact that the Midway chip running at 1.8 GHz has twice the integer, floating point, and Java performance of a 1.1 GHz High Bank chip. That is possible, in part, because the new chip has four times the main memory and three times the memory bandwidth as the old chip in addition to a 64 percent boost in clock speed. Calxeda is not yet done benchmarking systems using the chips to get a measure of their thermal efficiency, but is saying that there is as much as a 33 percent boost in performance per watt comparing old to new ECX chips.
The new ECX-2000 chip has a dual-core Cortex-A7 chip on the die that is used as a controller for the system BIOS as well as a baseboard management controller and a power management controller for the servers that use them. These Fleet Engines, as Calxeda calls them, eliminate yet another set of components, and therefore their cost, in the system. These engines also control the topology of the Fleet Services fabric, which can be set up in 2D torus, mesh, butterfly tree, and fat tree network configurations.
The Fleet Services fabric has 80 Gb/sec of aggregate bandwidth and offers multiple 10 Gb/sec Ethernet links coming off the die to interconnect server nodes on a single card, multiple cards in an enclosure, multiple enclosures in a rack, and multiple racks in a data center. The Ethernet links are also used to allow users to get to applications running on the machines.
Freund says that the ECX-2000 chip is aimed at distributed, stateless server workloads, such as web server front ends, caching servers, and content distribution. It is also suitable for analytics workloads like Hadoop and distributed NoSQL data stores like Cassandra, all of which tend to run on Linux. Both Red Hat and Canonical are cooking up commercial-grade Linuxes for the Calxeda chips, and SUSE Linux is probably not going to be far behind. The new chips are also expected to see action in scale-out storage systems such as OpenStack Swift object storage or the more elaborate Gluster and Ceph clustered file systems. The OpenStack cloud controller embedded in the just-announced Ubuntu Server 13.10 is also certified to run on the Midway chip.
Hewlett-Packard has confirmed that it is creating a quad-node server cartridge for its “Moonshot” hyperscale servers, which should ship to customers sometime in the first or second quarter of 2014. (It all depends on how long HP takes to certify the system board.) Penguin Computing, Foxconn, Aaeon, and Boston are expected to get beta systems out the door this year using the Midway chip and will have them in production in the first half of next year. Yes, that’s pretty vague, but that is the server business, and vagueness is to be expected in such a young market as the ARM server market is.
Looking ahead, Calxeda is adding a new processor to its roadmap, code-named “Sarita.” Here’s what the latest system-on-chip roadmap looks like now:

image

The future “Lago” chip is the first 64-bit chip that will come out of Calxeda, and it is based on the Cortex-A57 design from ARM Holdings –one of several ARMv8 designs, in fact. (The existing Calxeda chips are based on the ARMv7 architecture.)
Both Sarita and Lago will be implemented in TSMC’s 28 nanometer processes, and that shrink from the current 40 nanometer to 28 nanometer processes is going to allow for a lot more cores and other features to be added to the die and also likely a decent jump in clock speed, too. Freund is not saying at the moment which way it will go.
But what Freund will confirm is that Sarita will be pin-compatible with the existing Midway chip, meaning that server makers who adopt Midway will have a processor bump they can offer in a relatively easy fashion. It will also be based on the Cortex-A57 cores from ARM Holdings, and will sport four cores on a die that deliver about a 50 percent performance increase compared to the Midway chips.
The Lago chips, we now know, will scale to eight cores on a die and deliver about twice the performance of the Midway chips. Both Lago and Sarita are on the same schedule, in fact, and they are expected to tape out this quarter. Calxeda expects to start sampling them to customers in the second quarter of 2014, with production quantities being available at the end of 2014.
Not Just Compute, But Networking, Too
As important as the processing is to a system, the Fleet Services fabric interconnect is perhaps the key differentiator in its design. The current iteration of that interconnect, which is a distributed Layer 2 switch fabric that is spread across each chip in a cluster, can scale across 4,096 nodes without requiring top-of-rack and aggregation switches.

image

Both of the Lago and Sarita chips will be using the Fleet Services 2.0 intehttp://www.ti.com/product/66ak2h12rconnect that is now being launched with Midway. This iteration of the interconnect has all kinds of tweaks and nips and tucks but no scalability enhancements beyond the 4,096 nodes in the original fabric.
Freund says that the Fleet Services 3.0 fabric, which allows the distributed switch architecture to scale above 100,000 nodes in a flat network, will probably now come with the “Ratamosa” chips in 2015. It was originally – and loosely – scheduled for Lago next year. The circuits that do the fabric interconnect is not substantially different, says Freund, but the scalability is enabled through software. It could be that customers are not going to need such scalability as rapidly as Calxeda originally thought.
The “Navarro” kicker to the Ratamosa chip is presumably based on the ARMv9 architecture, and Calxeda is not saying anything about when we might see that and what properties it might have. All that it has said thus far is that it is aimed at the “enterprise server era.”


Details about the latest Texas Instruments DSP+ARM SoCs:

A Better Way to Cloud [MultiVuOnlineVideo YouTube channel, Nov 13, 2012]

To most technologists, cloud computing is about applications, servers, storage and connectivity. To Texas Instruments Incorporated (TI) (NASDAQ: TXN) it means much more. Today, TI is unveiling a BETTER way to cloud with six new multicore System-on-Chips (SoCs). Based on its award winning KeyStone architecture, TI’s SoCs are designed to revitalize cloud computing, inject new verve and excitement into pivotal infrastructure systems and, despite their feature rich specifications and superior performance, actually reduce energy consumption. To view Multimedia News Release, go to http://www.multivu.com/mnr/54044-texas-instruments-keystone-multicore-socs-revitalize-cloud-applications

Infinite Scalability in Multicore Processors [Texas Instruments YouTube channel, Aug 27, 2012]

Over the years, our industry has preached how different types of end equipments and applications are best served by distinctive multicore architectures tailored to each. There are even those applications, such as high performance computing, which can be addressed by more than one type of multicore architecture. Yet most multicore devices today tend to be suited for a specific approach or a particular set of markets. This keynote address, from the 2012 Multicore Developer’s Conferece, touches upon why the market needs an “infinitely scalable” multicore architecture which is both scalable and flexible enough to support disparate markets and the varied ways in which certain applications are addressed. The speaker presents examples of how a single multicore architecture can be scalable enough to address the needs of various high performance markets, including cloud RAN, networking, imaging and high performance computing. Ramesh Kumar manages the worldwide business for TI’s multicore growth markets organization. The organization develops multicore processors and software that are targeted for the communication infrastructure space, including multimedia and networking infrastructure equipment, as well as end equipment that requires multicore processors like public safety, medical imaging, high performance computing and test and measurement. Ramesh is a graduate of Northeastern University, where he obtained an executive MBA, and Purdue University where he received a master of science in electrical engineering.

From Imagine the impact…TI’s KeyStone SoC + HP Moonshot [TI’s Multicore Mix Blog, April 19, 2013]

TI’s participation in HP’s Pathfinder Innovation Ecosystem is the first step towards arming HP’s customers with optimized server systems that are ideally suited for workloads such as oil and gas exploration, Cloud Radio Access Networks (C-RAN), voice over LTE and video transcoding. This collaboration between TI and HP is a bold step forward, enabling flexible, optimized servers to bring differentiated technologies, such as TI’s DSPs, to a broader set of application providers. TI’s KeyStone II-based SoCs, which integrate fixed- and floating- point DSP cores with multiple ARM® Cortex™A-15 MPCore processors, packet and security processing, and high speed interconnect, give customers the performance, scalability and programmability needed to build software-defined servers. HP’s Moonshot system integrates storage, networking and compute cards with a flexible interconnect, allowing customers to choose the optimized ratio enabling the industry’s first software-defined server platform. Bringing TI’s KeyStone II-based SoCs into HP’s Moonshot system opens up several tantalizing possibilities for the future. Let’s look at a few examples:
Think about the number of voice conversations happening over mobile devices every day. These conversations are independent of each other, and each will need transcoding from one voice format to another as voice travels from one mobile device, through the network infrastructure and to the other mobile device. The sheer number of such conversations demand that the servers used for voice transcoding be optimized for this function. Voice is just one example. Now think about video and music, and you can imagine the vast amount of processing required. Using TI’s KeyStone II-based SoCs with DSP technology provides optimized server architecture for these applications because our SoCs are specifically tuned for signal processing workloads.
Another example can be with C-RAN. We have seen a huge push for mobile operators to move most of the mobile radio processing to the data center. There are several approaches to achieve this goal, and each has pros and cons associated with them. But one thing is certain – each approach has to do wireless symbol processing to achieve optimum 3G or 4G communications with smart mobile devices. TI’s KeyStone II-based SoCs are leading the wireless communication infrastructure market and combine key accelerators such as BCP (Bit Rate Co-Processor), VCP (Viturbi Co-Processor) and others to enable 3G/4G standards compliant for wireless processing. These key accelerators offload standard-based wireless processing from the ARM and/or DSP cores, freeing the cores for value-added processing. The combination of ARM/DSP with these accelerators provides an optimum SoC for 3G/4G wireless processing. By combining TI’s KeyStone II-based SoC with HP’s Moonshot system, operators and network equipment providers can now build customized servers for C-RAN to achieve higher performance systems at lower cost and ultimately provide better experiences to their customers.

A better way to cloud: TI’s new KeyStone multicore SoCs [embeddednewstv YouTube channel, published on Jan 12,2013 (YouTube: Oct 21, 2013)]

Brian Glinsman, vice president of multicore processors at Texas Instruments, discusses TI’s new KeyStone multicore SoCs for cloud infrastructure applications. TI announced six new SoCs, based on their 28-nm KeyStone architecture, featuring the Industry’s first implementation of quad ARM Cortex-A15 MPCore processors and TMS320C66x DSPs for purpose built servers, networking, high performance computing, gaming and media processing applications.

Texas Instruments Offers System on a Chip for HPC Applications [RichReport YouTube channel, Nov 20, 2012]

In this video from SC12, Arnon Friedmann from Texas Instruments describes the company’s new multicore System-on-Chips (SoCs). Based on its award winning KeyStone architecture, TI’s SoCs are designed to revitalize cloud computing, inject new verve and excitement into pivotal infrastructure systems and, despite their feature rich specifications and superior performance, actually reduce energy consumption. “Using multicore DSPs in a cloud environment enables significant performance and operational advantages with accelerated compute intensive cloud applications,” said Rob Sherrard, VP of Service Delivery, Nimbix. “When selecting DSP technology for our accelerated cloud compute environment, TI’s KeyStone multicore SoCs were the obvious choice. TI’s multicore software enables easy integration for a variety of high performance cloud workloads like video, imaging, analytics and computing and we look forward to working with TI to help bring significant OPEX savings to high performance compute users.”

A better way to cloud: TI’s new KeyStone multicore SoCs revitalize cloud applications, enabling new capabilities and a quantum leap in performance at significantly reduced power consumption

    • Industry’s first implementation of quad ARM® Cortex™-A15 MPCore™ processors in infrastructure-class embedded SoC offers developers exceptional capacity & performance at significantly reduced power for networking, high performance computing and more
    • Unmatched combination of Cortex-A15 processors, C66x DSPs, packet processing, security processing and Ethernet switching, transforms the real-time cloud into an optimized high performance, power efficient processing platform
    • Scalable KeyStone architecture now features 20+ software compatible devices, enabling customers to more easily design integrated, power and cost-efficient products for high-performance markets from a range of devices

ELECTRONICA – MUNICH (Nov.13, 2012) /PRNewswire/ — To most technologists, cloud computing is about applications, servers, storage and connectivity. To Texas Instruments Incorporated (TI) (NASDAQ: TXN) it means much more. Today, TI is unveiling a BETTER way to cloud with six new multicore System-on-Chips (SoCs). Based on its award winning KeyStone architecture, TI’s SoCs are designed to revitalize cloud computing, inject new verve and excitement into pivotal infrastructure systems and, despite their feature rich specifications and superior performance, actually reduce energy consumption.

To TI, a BETTER way to cloud means:

    • Safer communities thanks to enhanced weather modeling;
    • Higher returns from time sensitive financial analysis;
    • Improved productivity and safety in energy exploration;
    • Faster commuting on safer highways in safer cars;
    • Exceptional video on any screen, anywhere, any time;
    • More productive and environmentally friendly factories; and
    • An overall reduction in energy consumption for a greener planet.
    TI’s new KeyStone multicore SoCs are enabling this – and much more. These 28-nm devices integrate TI’s fixed-and floating-point TMS320C66x digital signal processor (DSP) generation cores – yielding the best performance per watt ratio in the DSP industry – with multiple ARM® Cortex™-A15 MPCore™ processors – delivering unprecedented processing capability combined with low power consumption – facilitating the development of a wide-range of infrastructure applications that can enable more efficient cloud experiences. The unique combination of Cortex-A15 processors and C66x DSPcores, with built-in packet processing and Ethernet switching, is designed to efficiently offload and enhance the cloud’s first generation general purpose servers; servers that struggle with big data applications like high performance computing and video processing.
    “Using multicore DSPs in a cloud environment enables significant performance and operational advantages with accelerated compute intensive cloud applications,” said Rob Sherrard, VP of Service Delivery, Nimbix. “When selecting DSP technology for our accelerated cloud compute environment, TI’s KeyStone multicore SoCs were the obvious choice. TI’s multicore software enables easy integration for a variety of high performance cloud workloads like video, imaging, analytics and computing and we look forward to working with TI to help bring significant OPEX savings to high performance compute users.”
    TI’s six new high-performance SoCs include the 66AK2E02, 66AK2E05, 66AK2H06, 66AK2H12, AM5K2E02 and AM5K2E04, all based on the KeyStone multicore architecture. With KeyStone’s low latency high bandwidth multicore shared memory controller (MSMC), these new SoCs yield 50 percent higher memory throughput when compared to other RISC-based SoCs. Together, these processing elements, with the integration of security processing, networking and switching, reduce system cost and power consumption, allowing developers to support the development of more cost-efficient, green applications and workloads, including high performance computing, video delivery and media and image processing. With the matchless combination TI has integrated into its newest multicore SoCs, developers of media and image processing applications will also create highly dense media solutions.

    image

    “Visionary and innovative are two words that come to mind when working with TI’s KeyStone devices,” said Joe Ye, CEO, CyWee. “Our goal is to offer solutions that merge the digital and physical worlds, and with TI’s new SoCs we are one step closer to making this a reality by pushing state-of-the-art video to virtualized server environments. Our collaboration with TI should enable developers to deliver richer multimedia experiences in a variety of cloud-based markets, including cloud gaming, virtual office, video conferencing and remote education.”
    Simplified development with complete tools and support
    TI continues to ease development with its scalable KeyStone architecture, comprehensive software platform and low-cost tools. In the past two years, TI has developed over 20 software compatible multicore devices, including variations of DSP-based solutions, ARM-based solutions and hybrid solutions with both DSP and ARM-based processing, all based on two generations of the KeyStone architecture. With compatible platforms across TI’s multicore DSPs and SoCs, customers can more easily design integrated, power and cost-efficient products for high-performance markets from a range of devices, starting at just $30 and operating at a clock rate of 850MHz all the way to 15GHz of total processing power.
    TI is also making it easier for developers to quickly get started with its KeyStone multicore solutions by offering easy-to-use evaluation modules (EVMs) for less than $1K, reducing developers’ programming burdens and speeding development time with a robust ecosystem of multicore tools and software.
    In addition, TI’s Design Network features a worldwide community of respected and well established companies offering products and services that support TI multicore solutions. Companies offering supporting solutions to TI’s newest KeyStone-based multicore SoCs include 3L Ltd., 6WIND, Advantech, Aricent, Azcom Technology, Canonical, CriticalBlue Enea, Ittiam Systems, Mentor Graphics, mimoOn, MontaVista Software, Nash Technologies, PolyCore Software and Wind River.
    Availability and pricing
    TI’s 66AK2Hx SoCs are currently available for sampling, with broader device availability in 1Q13 and EVM availability in 2Q13. AM5K2Ex and 66AK2Ex samples and EVMs will be available in the second half of 2013. Pricing for these devices will start at $49 for 1 KU.

    66AK2H14 (ACTIVE) Multicore DSP+ARM KeyStone II System-on-Chip (SoC) [TI.com, Nov 10, 2013]
    The same as the 66AK2H12 SoC below, with the addition of:

    More Literature:

    From that, the excerpt below is essential to understanding the added value over the 66AK2H12 SoC:

    image

    Figure 1. TI’s KeyStone™ 66AK2H14 SoC

    The 66AK2H14 SoC shown in Figure 1, with the raw computing power of eight C66x processors and quad ARM Cortex-A15s at over 1GHz performance, enables applications such as very large fast Fourier transforms (FFTs) in radar and multiple-camera image analytics, where a 10Gbit/s networking connection is needed. There are, and have been, several sophisticated technologies that have offered the bandwidth and additional features to fill this role. Some, such as Serial RapidIO® and InfiniBand, have been successful in application domains that Gigabit Ethernet could not address, and continue to make sense, but 10Gbit/s Ethernet will challenge their existence.
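    To make the "very large FFTs in radar" workload concrete, here is an illustrative sketch (my own Python using NumPy's FFT, not TI DSP library code; the chirp parameters and sizes are invented for the example) of FFT-based pulse compression, which recovers the delay, and hence the range, of a radar echo:

```python
import numpy as np

N = 2**16                                  # large FFT size typical of radar range processing
t = np.arange(N) / N
chirp = np.exp(1j * np.pi * 5000 * t**2)   # transmitted linear-FM chirp (toy parameters)

delay = 1000                               # echo comes back 1000 samples later
echo = np.roll(chirp, delay)

# Matched filtering in the frequency domain: two forward FFTs,
# a pointwise multiply, and one inverse FFT.
correlation = np.fft.ifft(np.fft.fft(echo) * np.conj(np.fft.fft(chirp)))
detected_delay = int(np.argmax(np.abs(correlation)))
print(detected_delay)                      # peak of the matched filter marks the range bin
```

    On a device like the 66AK2H14, this per-channel FFT/multiply/IFFT pipeline is the kind of pattern that would be spread across the eight C66x cores.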

    66AK2H12 (ACTIVE) Multicore DSP+ARM KeyStone II System-on-Chip (SoC) [TI.com, created on Nov 8, 2012]

    Datasheet manual [351 pages]:

    More Literature:

    Description

    The 66AK2Hx platform is TI’s first to combine quad ARM® Cortex™-A15 MPCore™ processors with up to eight TMS320C66x high-performance DSPs using the KeyStone II architecture. Unlike previous ARM Cortex-A15 devices that were designed for consumer products, the 66AK2Hx platform provides up to 5.6 GHz of ARM and 11.2 GHz of DSP processing, coupled with security and packet processing and Ethernet switching, all at lower power than multi-chip solutions, making it optimal for embedded infrastructure applications like cloud computing, media processing, high-performance computing, transcoding, security, gaming, analytics and virtual desktop. Using TI’s heterogeneous programming runtime software and tools, customers can easily develop differentiated products with 66AK2Hx SoCs.
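    The "5.6 GHz of ARM and 11.2 GHz of DSP" figures are cumulative clock rates across cores; assuming the top 1.4 GHz speed grade (my assumption, consistent with the quoted totals), the arithmetic checks out:

```python
# Sketch (my own check, not data-sheet code) of the cumulative-GHz figures:
arm_cores, dsp_cores, clock_ghz = 4, 8, 1.4

arm_cumulative = arm_cores * clock_ghz   # 4 x 1.4 GHz Cortex-A15 cores
dsp_cumulative = dsp_cores * clock_ghz   # 8 x 1.4 GHz C66x DSP cores
print(arm_cumulative, dsp_cumulative)    # matches the 5.6 / 11.2 GHz quoted
```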

    image

    Taking Multicore to the Next Level: KeyStone II Architecture [Texas Instruments YouTube channel, Feb 26, 2012]

    TI’s scalable KeyStone II multicore architecture includes support for both TMS320C66x DSP cores and multiple cache coherent quad ARM Cortex™-A15 clusters, for a mixture of up to 32 DSP and RISC cores. With significant updates to its award-winning KeyStone architecture, TI is now paving the way for a new era of high performance 28-nm devices that meld signal processing, networking, security and control functionality, with KeyStone II. Ideal for applications that demand superior performance and low power, devices based on the KeyStone architecture are optimized for high performance markets including communications infrastructure, mission critical, test and automation, medical imaging and high performance and cloud computing. For more information, please visit http://www.ti.com/multicore.

    Introducing the EVMK2H [Texas Instruments YouTube channel, Nov 15, 2013]

    Introducing the EVMK2H evaluation module, the cost-efficient development tool from Texas Instruments that enables developers to quickly get started working on designs for the 66AK2H06, 66AK2H12, and 66AK2H14 multicore DSP + ARM devices based on the KeyStone architecture.

    Kick start development of high performance compute systems with TI’s new KeyStone™ SoC and evaluation module [TI press release, Nov 14, 2013]

    Combination of DSP + ARM® cores and high-speed peripherals offer developers an optimal compute solution at low power consumption

    DALLAS, Nov. 14, 2013 /PRNewswire/ — Further easing the development of processing-intensive applications, Texas Instruments (TI) (NASDAQ: TXN) is unveiling a new system-on-chip (SoC), the 66AK2H14, and an evaluation module (EVM) for its KeyStone™-based 66AK2Hx family of SoCs. With the new 66AK2H14 device, developers designing high-performance compute systems now have access to a 10Gbps Ethernet switch-on-chip. The inclusion of the 10GigE switch, along with the other high-speed, on-chip interfaces, saves overall board space, reduces chip count and ultimately lowers system cost and power. The EVM enables developers to evaluate and benchmark faster and more easily. The 66AK2H14 SoC provides industry-leading computational DSP performance at 307 GMACS/153 GFLOPS and 19600 DMIPS of ARM performance, making it ideal for a wide variety of applications such as video surveillance, radar processing, medical imaging, machine vision and geological exploration.
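    The 307 GMACS/153 GFLOPS figures can be sanity-checked from the C66x core's published per-cycle throughput (the per-cycle numbers and the 1.2 GHz clock are my assumptions from TI's C66x documentation, not stated in this press release):

```python
# Hedged derivation of the headline DSP figures for the 66AK2H14:
cores, clock_ghz = 8, 1.2             # eight C66x cores at 1.2 GHz (assumed grade)
macs_per_cycle = 32                   # 16-bit MACs per cycle per C66x core
flops_per_cycle = 16                  # single-precision FLOPs per cycle per core

gmacs = cores * clock_ghz * macs_per_cycle    # ~307 GMACS as quoted
gflops = cores * clock_ghz * flops_per_cycle  # ~153 GFLOPS as quoted
print(gmacs, gflops)
```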

    “Customers today require increased performance to process compute-intensive workloads using less energy in a smaller footprint,” said Paul Santeler, vice president and general manager, Hyperscale Business, HP. “As a partner in HP’s Moonshot ecosystem dedicated to the rapid development of new Moonshot servers, we believe TI’s KeyStone design will provide new capabilities across multiple disciplines to accelerate the pace of telecommunication innovations and geological exploration.”

    Meet TI’s new 10Gbps Ethernet DSP + ARM SoC
    TI’s newest silicon variant, the 66AK2H14, is the latest addition to its high-performance 66AK2Hx SoC family which integrates multiple ARM Cortex™-A15 MPCore™ processors and TI’s fixed- and floating-point TMS320C66x digital signal processor (DSP) generation cores. The 66AK2H14 offers developers exceptional capacity and performance (up to 9.6 GHz of cumulative DSP processing) at industry-leading size, weight, and power. In addition, the new SoC features a wide array of unique high-speed interfaces, including PCIe, RapidIO, Hyperlink, 1Gbps and 10Gbps Ethernet, achieving total I/O throughput of up to 154Gbps. These interfaces are all distinct and not multiplexed, allowing designers tremendous flexibility with uncompromising performance in their designs.
    Ease development and debugging with TI’s tools and software
    TI helps simplify the design process by offering developers highly optimized software for embedded HPC systems along with development and debugging tools for the EVMK2H – all for under $1,000. The EVMK2H features a single 66AK2H14 SoC, a status LCD, two 1Gbps Ethernet RJ-45 interfaces and on-board emulation. An optional EVM breakout card (available separately) also provides two 10Gbps Ethernet optical interfaces for 20Gbps backplane connectivity and optional wire rate switching in high density systems.
    The EVMK2H is bundled with TI’s Multicore Software Development Kit (MCSDK), enabling faster development with production ready foundational software. The MCSDK eases development and reduces time to market by providing highly-optimized bundles of foundational, platform-specific drivers, optimized libraries and demos.
    Complementary analog products to increase system performance
    TI offers a wide range of power management and analog signal chain components to increase the system performance of 66AK2H14 SoC-based designs. For example, the TPS53xx integrated FET DC/DC converters provide the highest level of power conversion efficiency even at light loads, while the LM10011 VID converter with dynamic voltage control helps reduce system power consumption. The CDCM6208 low-jitter clock generator also eliminates the need for external buffers, jitter cleaners and level translators.
    Availability and pricing
    TI’s EVMK2H is available now through TI distribution partners or TI.com for $995. In addition to TI’s Linux distribution provided in the MCSDK, Wind River® Linux is available now for the 66AK2Hx family of SoCs. Green Hills® INTEGRITY® RTOS and Wind River VxWorks® RTOS support will each be available before the end of the year. Pricing for the 66AK2H14 SoC will start at $330 for 1 KU. The 10Gbps Ethernet breakout card will be available from Mistral.

    Ask the Expert: How can developers accelerate scientific computing with TI’s multicore DSPs? [Texas Instruments YouTube channel, Feb 7, 2012]

    Dr. Arnon Friedmann is the business manager for TI’s high performance computing products in the multicore and media infrastructure business. In this video, he explains how TI’s multicore DSPs are well suited for computing applications in oil and gas exploration, financial modeling and molecular dynamics, where ultra-high performance, low power and easy programmability are critical requirements.

    Ask the Expert: Arnon Friedmann [Texas Instruments YouTube channel, Sept 6, 2012]

    How are TI’s latest multicore devices a fit for video surveillance and smart analytic camera applications? Dr. Arnon Friedmann, PhD, is a business manager for multicore processors at Texas Instruments. In this role, he is responsible for growing TI’s business in high performance computing, mission critical, test and measurement and imaging markets. Prior to his current role, Dr. Friedmann served as the marketing director for TI’s wireless base station infrastructure group, where he was responsible for all marketing and design activities. Throughout his 14 years of experience in digital communications research and development, Dr. Friedmann has accumulated patents in the areas of disk drive systems, ADSL modems and 3G/4G wireless communications. He holds a PhD in electrical engineering and bachelor of science in engineering physics, both from the University of California, San Diego.

    End of Updates as of Dec 6, 2013


    The original post (8 months ago):

    HP Moonshot: Designed for the Data Center, Built for the Planet [HP press kit, April 8, 2013]

    On April 8, 2013, HP unveiled the world’s first commercially available HP Moonshot system, delivering compelling new infrastructure economics by using up to 89 percent less energy, 80 percent less space and costing 77 percent less, compared to traditional servers. Today’s mega data centers are nearing a breaking point where further growth is restricted due to the current economics of traditional infrastructure. HP Moonshot servers are a first step organizations can take to address these constraints.
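    Normalizing HP's claimed savings against a traditional-server baseline (the normalization is mine; the percentages are HP's own comparison) shows what fraction of the baseline a Moonshot deployment would still consume:

```python
# Relative-economics sketch of the launch claims, baseline = 1.0:
reduction = {"energy": 0.89, "space": 0.80, "cost": 0.77}

# Fraction of the traditional baseline that Moonshot would still use:
remaining = {k: round(1.0 - v, 2) for k, v in reduction.items()}
print(remaining)   # e.g. 11% of the energy, 20% of the space, 23% of the cost
```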

    For more details on the disruptive potential of HP Moonshot, visit TheDisruption.com

    Introducing HP Moonshot [HewlettPackardVideos April 11, 2013]

    See how HP is defining disruption with the introduction of HP Moonshot.

    HP’s Cutting Edge Data Center Innovation [Ramón Baez, Senior Vice President and Chief Information Officer (CIO) of HP, HP Next [launched on April 2], April 10, 2013]

    This is an exciting time to be in the IT industry right now. For those of you who have been around for a while — as I have — there have been dramatic shifts that have changed how businesses operate.
    From the early days of the mainframes, to the explosion of the Internet and now social networks, every so often very important game-changing innovation comes along. We’re in the midst of another sea change in technology.
    Inside HP IT, we are testing the company’s Moonshot servers. Because these servers run the same chips found in smartphones and tablets, they use far less power, require considerably less cooling and have a smaller footprint.

    We currently are running some of our intensive hp.com applications on Moonshot and are seeing very encouraging results. Over half a billion people will visit hp.com this year, and the new Moonshot technology will run at a fraction of the space, power and cost – basically we expect to run HP.com off of the same amount of energy needed for a dozen 60-watt light bulbs.
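    The "dozen 60-watt light bulbs" figure, spelled out (trivial arithmetic, but it makes the claimed power budget concrete):

```python
# The claimed energy budget for serving hp.com on Moonshot:
bulbs, watts_each = 12, 60
total_watts = bulbs * watts_each
print(total_watts)   # 720 W, per the claim
```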

    This technology will revolutionize data centers.
    Within HP IT, we are fortunate in that over the past several years we have built a solid data center foundation to run our company. Like many companies, we were a victim of IT sprawl — with more than 85 data centers in 29 countries. We decided to make a change and took on a total network redesign, cutting our principal worldwide data centers down to six and housing all of them in the United States.
    With the addition of four new EcoPODs to our infrastructure and these new Moonshot servers, we are in the perfect position to build out our private cloud and provide our businesses with the speed and quality of innovation they need.
    Moonshot is just the beginning. The product roadmap for Moonshot is extremely promising and I am excited to see what we can do with it within HP IT, and what benefits our customers will see.

    What Calxeda is saying about HP Moonshot [HewlettPackardVideos YouTube channel, April 8, 2013] which is best to start with for its simple and efficient message, as well as what Intel targeting ARM based microservers: the Calxeda case [‘Experiencing the Cloud’ blog, Dec 14, 2012] already contained on this blog earlier:

    Calxeda discusses HP’s Project Moonshot and the cost, space, and efficiency innovations being enabled through the Pathfinder Innovation Ecosystem. http://hp.com/go/moonshot

    Then we can turn to the Moonshot product launch by HP 2 days ago:

    Note that the first three videos following here were released 3 days later, so don’t be surprised by the YouTube dates; in fact, the same 3 videos (as well as the “Introducing HP Moonshot” embedded above) were delivered during the April 8 live webcast. See the first 18 minutes of that, and then follow HP’s flow of the presentation if you like. I would certainly recommend my own presentation as compiled here.

    HP president and CEO Meg Whitman on the emergence of a new style of IT [HewlettPackardVideos YouTube channel, April 11, 2013]

    HP president and CEO Meg Whitman outlines the four megatrends causing strain on current infrastructure and how HP Project Moonshot servers are built to withstand data center challenges.

    EVP and GM of HP’s Enterprise Group Dave Donatelli discusses HP Moonshot [HewlettPackardVideos YouTube channel, April 11, 2013]

    EVP and GM of HP’s Enterprise Group Dave Donatelli details how HP Moonshot redefines the server market.

    Tour the Houston Discovery Lab — where the next generation of innovation is created [HewlettPackardVideos YouTube channel, April 11, 2013]

    SVP and GM of HP’s Industry Standard Servers and Software Mark Potter and VP and GM of HP’s Hyperscale Business Unit Paul Santeler tour HP’s Discovery Lab in Houston, Texas. HP’s Discovery Lab allows customers to test, tune and port their applications on HP Moonshot servers in-person and remotely.

    A new era of accelerated innovation [HP Moonshot minisite, April 8, 2013]

    Cloud, Mobility, Security, and Big Data are transforming what the business expects from IT resulting in a “New Style of IT.” The result of alternative thinking from a proven industry leader, HP Moonshot is the world’s first software defined server that will accelerate innovation while delivering breakthrough efficiency and scale.

    Watch the unveiling [link to HP Moonshot – The Disruption, HP Event registration page at ‘thedisruption.com’]

    image

    On the right is the Moonshot System with the very first Moonshot servers (“microservers” or “server appliances,” as the industry calls them), based on Intel® Atom S1200 processors and supporting web-hosting workloads (see also the right part of the image below). Currently there is also a storage cartridge (on the left of the image below) and a multinode for highly dense computing solutions (shown in the presenter’s hands in the image below). Many more are to come later on.

    image

    image

    With up to 180 servers inside the box (45 now), it was necessary to integrate network switching. There are two sockets (see left) for the network switch, so you can configure for redundancy. The downlink module, which talks to the cartridges, is on the left of the image below. It is paired with an uplink module (shown in the middle of the image below as taken out, and then together with the uplink module on the right) that sits in the back of the server. More options will become available.

    image

    More information:
    Enterprise Information Library for Moonshot
    HP Moonshot System [Technical white paper from HP, April 5, 2013] from which I will include here the following excerpts for more information:

    HP Moonshot 1500 Chassis

    The HP Moonshot 1500 Chassis is a 4.3U form factor and slides out of the rack on a set of rails like a file cabinet drawer. It supports 45 HP ProLiant Moonshot Servers and an HP Moonshot-45G Switch Module that are serviceable from the top.
    It is a modern architecture engineered for the new style of IT that can support server cartridges, server and storage cartridges, storage-only cartridges and a range of x86, ARM or accelerator-based processor technologies.
    As an initial offering, the HP Moonshot 1500 Chassis is fully populated with 45 HP ProLiant Moonshot Servers and one HP Moonshot-45G Switch Module; a second HP Moonshot-45G Switch Module can be purchased as an option. Future offerings will include quad server cartridges and will result in up to 180 servers per chassis. The 4.3U form factor allows for 10 chassis per rack, which with the quad server cartridge amounts to 1800 servers in a single rack.
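    The density math above, as a quick sketch (all numbers are from the white paper itself):

```python
# Chassis and rack density with the future quad server cartridges:
cartridges_per_chassis = 45
servers_per_cartridge = 4        # quad server cartridge
chassis_per_rack = 10            # 4.3U chassis in a standard rack

servers_per_chassis = cartridges_per_chassis * servers_per_cartridge
servers_per_rack = servers_per_chassis * chassis_per_rack
print(servers_per_chassis, servers_per_rack)   # 180 per chassis, 1800 per rack
```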
    The Moonshot 1500 Chassis simplifies management with four iLO processors that share management responsibility for the 45 servers, power, cooling, and switches.

    Highly flexible fabric

    Built into the HP Moonshot 1500 Chassis architecture are four separate and independent fabrics that support a range of current and future capabilities:
    • Network fabric
    • Storage fabric
    • Management fabric
    • Integrated cluster fabric
    Network fabric
    The Network fabric provides the primary external communication path for the HP Moonshot 1500 Chassis.
    For communication within the chassis, the network switch has four communication channels to each of the 45 servers. Each channel supports a 1-GbE or 10-GbE interface. Each HP Moonshot-45G Switch Module supports 6 channels of 10GbE interface to the HP Moonshot-6SFP network uplink modules located in the rear of the chassis.
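    A hedged sketch of the per-switch-module bandwidth budget just described (the oversubscription figure is my own derivation under the assumption that all four channels per server run at 1GbE; it is not an HP number):

```python
# Per HP Moonshot-45G Switch Module, from the fabric description above:
servers, channels_per_server = 45, 4
downlink_gbps = 1                     # each channel supports 1-GbE or 10-GbE
uplinks, uplink_gbps = 6, 10          # six 10GbE channels to the uplink module

downlink_total = servers * channels_per_server * downlink_gbps   # 180 Gbps
uplink_total = uplinks * uplink_gbps                             # 60 Gbps
oversubscription = downlink_total / uplink_total                 # 3:1 at 1 GbE
print(downlink_total, uplink_total, oversubscription)
```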
    Storage fabric
    The Storage fabric provides dedicated SAS lanes between server and storage cartridges. We utilize HP Smart Storage firmware found in the ProLiant family of servers to enable multiple core-to-spindle ratios for specific solutions. A hard drive can be shared among multiple server cartridges to enable low-cost boot and logging, or attached to a node to provide storage expansion.
    The current HP Moonshot System configuration targets light scale-out applications. To provide the best operating environment for these applications, it includes HP ProLiant Moonshot Servers with a hard disk drive (HDD) as part of the server architecture. Shared storage is not an advantage for these environments. Future releases of the servers that target different solutions will take advantage of the storage fabric.
    Management fabric
    We utilize the Integrated Lights-Out (iLO) application-specific integrated circuit (ASIC) standard in the HP ProLiant family of servers to provide the innovative management features in the HP Moonshot System. To handle the range of extreme low-energy processors, we provide a device-neutral approach to management, which can be easily consumed by data center operators to deploy at scale.
    The Management fabric enables management of the HP Moonshot System components as one platform with a dedicated iLO network. Benefits of the management fabric include:
    • The iLO Chassis Manager aggregates data to a common set of management interfaces.
    • The HP Moonshot 1500 Chassis has a single Ethernet port gateway that is the single point of access for the Moonshot Chassis manager.
    • Intelligent Platform Management Interface (IPMI) and Serial Console for each server
    • True out-of-band firmware update services
    • SL-APM Rack Management spans rack or multiple racks
    Integrated Cluster fabric
    The Integrated Cluster fabric provides a high-speed interface among future server cartridge technologies that will benefit from high bandwidth node-to-node communication. North, south, east, and west lanes are provided between individual server cartridges.
    The current HP ProLiant Moonshot Server targets light scale-out applications. These applications do not benefit from node-to-node communications, so the Integrated Cluster fabric is not utilized. Future releases of cartridges that target different workloads requiring low-latency interconnects will take advantage of the Integrated Cluster fabric.

    HP ProLiant Moonshot Server

    HP will bring a growing library of cartridges, utilizing cutting-edge technology from industry leading partners. Each server will target specific solutions that support emerging Web, Cloud, and Massive-Scale Environments, as well as Analytics and Telecommunications. We are continuing server development for other applications, including Big Data, High-Performance Computing, Gaming, Financial Services, Genomics, Facial Recognition, Video Analysis, and more.
    Figure 4. Cartridges target specific solutions

    image

    The first server cartridge now available is the HP ProLiant Moonshot Server, which includes the Intel® Atom Processor S1260. This is a low-power processor that is right-sized for light workloads. It has dedicated memory and storage, with discrete resources. This server design is ideal for light scale-out applications. Light scale-out applications require relatively little processing but moderately high I/O and include environments that perform the following functions:
    • Dedicated web hosting
    • Simple content delivery
    The HP ProLiant Moonshot Server can hot plug in the HP Moonshot 1500 Chassis. If service is necessary, it can be removed without affecting the other servers in the chassis. Table 1 defines the HP ProLiant Moonshot Server specifications.
    Table 1. HP ProLiant Moonshot Server specifications

    Processor
    One Intel® Atom Processor S1260
    Memory
    8 GB DDR3 ECC 1333 MHz
    Networking
    Integrated dual-port 1Gb Ethernet NIC
    Storage
    500 GB or 1 TB HDD or SSD, non-hot-plug, small form factor
    Operating systems
    Canonical Ubuntu 12.04
    Red Hat Enterprise Linux 6.4
    SUSE Linux Enterprise Server 11 SP2

    image

    With that, see HP CEO Seeks Turnaround Unveiling ‘Moonshot’ Super-Server: Tech [Bloomberg, April 2013] as well as HP Moonshot: Say Goodbye to the Vanilla Server [Forbes, April 8, 2013]. HP, however, has its eye much more on the ARM-based Moonshot servers expected to come later, because of the trends reflected on the left (source: HP). The software-defined server concept is very general.

    image

    There are a number of quite different server cartridges expected to come, each specialised by the server software installed on it. Typical specialised servers, for example, are the ones CyWee from Taiwan is working on with Texas Instruments’ new KeyStone II architecture, which features both ARM Cortex-A15 CPU cores and TI’s own C66x DSP cores for a mixture of up to 32 DSP and RISC cores in TI’s new 66AK2Hx family of SoCs, the first of which is the TMS320TCI6636 implemented in 28nm foundry technology. Based on that, CyWee will deliver multimedia Moonshot server cartridges for cloud gaming, virtual office, video conferencing and remote education (see even the first KeyStone announcement). This CyWee involvement in the HP Moonshot effort is part of HP’s Pathfinder Partner Program, which Texas Instruments also joined recently to exploit a larger opportunity as:

    TI’s 66AK2Hx family and its integrated c66x multicore DSPs are applicable for workloads ranging from high performance computing, media processing, video conferencing, off-line image processing & analytics, video recorders (DVR/NVR), gaming, virtual desktop infrastructure and medical imaging.

    But Intel was able to win the central piece of the Moonshot System launch (originally initiated by HP as the “Moonshot Project” in November 2011 for disruption in terms of power and TCO for servers, actually with a Calxeda board used for research and development with other partners), at least as it was productized just two days ago:
    Raejeanne Skillern from Intel – HP Moonshot 2013 – theCUBE [siliconangle YouTube channel]

    Raejeanne Skillern, Intel Director of Marketing for Cloud Computing, at HP Moonshot 2013 with John Furrier and Dave Vellante

    However ARM was not left out either just relegated in the beginning to highly advanced and/or specialised server roles with its SoC partners, and coming later in the year:

    • Applied Micro, with a networking and connectivity background, now also has the X-Gene ARM 64-bit Server on a Chip platform, which features 8 high-performance ARM 64-bit cores developed from scratch under an architecture license (i.e. not ARM’s own Cortex-A50 series cores), clocked at up to 2.4GHz, plus 4 smaller cores for network and storage offloads (see AppliedMicro on the X-Gene ARM Server Platform and HP Moonshot [SiliconANGLE blog, April 9, 2013]). Sample reference boards were shipped to key customers in March (see Applied Micro’s cloud chip is an ARM-based, switch-killing machine [GigaOM, April 3, 2013]). The latest X-Gene Arrives in Silicon [Open Compute Summit Winter 2013 presentation, Jan 16, 2013] video gives the most recent strategic details (up to 2014, with a FinFET implementation of “software defined X-Gene based data center components”, presumably at 16nm). Here I will include a more product-oriented AppliedMicro Shows ARM 64-bit X-Gene Server on a Chip Hardware and Software [Charbax YouTube channel, Nov 3, 2012] overview video:
      Vinay Ravuri, Vice President and General Manager, Server Products at AppliedMicro gives an update on the 64bit ARM X-Gene Server Platform. At ARM Techcon 2012, AppliedMicro, ARM and several open-source software providers gave updates on their support of the ARM 64-bit X-Gene Server on a Chip Platform.

      More information: A 2013 Resolution for the Data Center [Applied Micro on Smart Connected Devices blog from ARM, Feb 4, 2013] about “plans from Oracle, Red Hat, Citrix and Cloudera to support this revolutionary architecture … Dell’s “Iron” server concept with X-Gene … an X-Gene based ARM server managed by the Dell DCS Software suite …” etc.

    • Texas Instruments with digital signal processing (DSP) background, as it was already presented above. 
    • Calxeda with integration of storage fabric and Internet switching background, with details coming later, etc.:

    This is what is emphasized by Lakshmi Mandyam from ARM – HP Moonshot 2013 – theCUBE [siliconangle YouTube channel, April 8, 2013]

    Lakshmi Mandyam, Director of Server Systems and Ecosystems, ARM, at HP Moonshot 2013, with John Furrier and Dave Vellante

    She is also mentioning in the talk the achievements which could put ARM and its SoC partners into a role which Intel now has with its general Atom S1200 based server cartridge product fitting into the Moonshot system. Perspective information on that is already available on my ‘Experiencing the Cloud’ blog here:
    The state of big.LITTLE processing [April 7, 2013]
    The future of mobile gaming at GDC 2013 and elsewhere [April 6, 2013]
    TSMC’s 16nm FinFET process to be further optimised with Imagination’s PowerVR Series6 GPUs and Cadence design infrastructure [April 8, 2013]
    With 28nm non-exclusive in 2013 TSMC tested first tape-out of an ARM Cortex™-A57 processor on 16nm FinFET process technology [April 3, 2013]

    The absence of Microsoft is even more interesting as AMD is also on this Moonshot bandwagon: Suresh Gopalakrishnan from AMD – HP Moonshot 2013 – theCUBE [siliconangle YouTube channel, April 8, 2013]

    Suresh Gopalakrishnan, Vice President and General Manager, Server Business, AMD, at HP Moonshot 2013, with John Furrier and Dave Vellante

    already showing a Moonshot-fitting server cartridge with AMD’s four next-generation SoCs (while Intel’s already-productized cartridge is not yet at an SoC level). We know from CES 2013 that AMD Unveils Innovative New APUs and SoCs that Give Consumers a More Exciting and Immersive Experience [press release, Jan 7, 2013] with the:

    “Temash” … elite low-power mobility processor for Windows 8 tablets and hybrids … to be the highest-performance SoC for tablets in the market, with 100 percent more graphics processing performance than its predecessor (codenamed “Hondo.”)
    “Kabini” [SoC which] targets ultrathin notebooks with exceptional battery life and offers impressive levels of performance in both dual- and quad-core options. “Kabini” is expected to deliver an increase of more than 50 percent in performance over the previous generation of AMD essential computing APUs (codenamed “Brazos 2.0.”)
    Both APUs are scheduled to ship in the first half of 2013

    so AMD is really close to a server SoC to be delivered soon as well.

    The “more information” sections which follow here are:

    1. The Announcement
    2. Software Partners
    3. Hardware Partners


    1. The Announcement

    HP Moonshot [MultiVuOnlineVideo YouTube channel, April 8, 2013]

    HP today unveiled the world’s first commercially available HP Moonshot system, delivering compelling new infrastructure economics by using up to 89 percent less energy, 80 percent less space and costing 77 percent less, compared to traditional servers. Today’s mega data centers are nearing a breaking point where further growth is restricted due to the current economics of traditional infrastructure. HP Moonshot servers are a first step organizations can take to address these constraints.

    HP Launches New Class of Server for Social, Mobile, Cloud and Big Data [press release, April 8, 2013]

    Software defined servers designed for the data center and built for the planet
    … Built from HP’s industry-leading server intellectual property (IP) and 10 years of extensive research from HP Labs, the company’s central research arm, HP Moonshot delivers a significant improvement in energy, space, cost and simplicity. …
    The HP Moonshot system consists of the HP Moonshot 1500 enclosure and application-optimized HP ProLiant Moonshot servers. These servers will offer processors from multiple HP partners, each targeting a specific workload.
    With support for up to 1,800 servers per rack, HP Moonshot servers occupy one-eighth of the space required by traditional servers. This offers a compelling solution to the problem of physical data center space.(3) Each chassis shares traditional components including the fabric, HP Integrated Lights-Out (iLO) management, power supply and cooling fans. These shared components reduce complexity as well as add to the reduction in energy use and space.
    The first HP ProLiant Moonshot server is available with the Intel® Atom S1200 processor and supports web-hosting workloads. HP Moonshot 1500, a 4.3u server enclosure, is fully equipped with 45 Intel-based servers, one network switch and supporting components.
    HP also announced a comprehensive roadmap of workload-optimized HP ProLiant Moonshot servers incorporating processors from a broad ecosystem of HP partners including AMD, AppliedMicro, Calxeda, Intel and Texas Instruments Incorporated.

    Scheduled to be released in the second half of 2013, the new HP ProLiant Moonshot servers will support emerging web, cloud and massive scale environments, as well as analytics and telecommunications. Future servers will be delivered for big data, high-performance computing, gaming, financial services, genomics, facial recognition, video analysis and other applications.

    The HP Moonshot system is immediately available in the United States and Canada and will be available in Europe, Asia and Latin America beginning next month.
    Pricing begins at $61,875 for the enclosure, 45 HP ProLiant Moonshot servers and an integrated switch.(4)
    (4) Estimated U.S. street prices. Actual prices may vary.

    More information:
    HP Moonshot System [Family data sheet, April 8, 2013]
    HP Moonshot – The Disruption [HP Event registration page at ‘thedisruption.com’ with embedded video gallery, press kit and more, originally created on April 12, 2010, obviously updated for the April 8, 2013 event]

    Moonshot 101 [HewlettPackardVideos YouTube channel, April 8, 2013]

Paul Santeler, Vice President & GM of Hyperscale Business Unit at HP, discusses how HP Project Moonshot creates the new style of IT. http://hp.com/go/moonshot

    Alert for Microsoft:

[4:42] We defined the industry standard server market [reference to HP’s Compaq heritage] and we’ve been the leader for years. With Moonshot we’re redefining the market and taking it to the next level. [4:53]

    People Behind HP Moonshot [HP YouTube channel, April 10, 2013]

    HP Moonshot is a groundbreaking new class of server that requires less energy, less space and less cost. Built from HP’s industry-leading server IP and 10 years of research from HP Labs, HP Moonshot is an example of the best of HP working together. In the video: Gerald Kleyn, Director of Platform Research and Development, Hyperscale Business Unit, Industry Standard Servers; Scott Herbel, Worldwide Product Marketing Manager, Hyperscale Business Unit, Industry Standard Servers; Ron Mann, Director of Engineering, Industry Standard Servers; Kelly Pracht, Hardware Platform Manager R&D, Hyperscale Business Unit, Industry Standard Servers; Mike Sabotta, Distinguished Technologist, Hyperscale Business Unit, Industry Standard Servers; Dwight Barron, HP Fellow, Chief Technologist, Hyperscale Business Unit, Industry Standard Servers. For more information, visit http://www.hpnext.com.

    HP Moonshot System Tour [HewlettPackardVideos YouTube channel, April 8, 2013]

Kelly Pracht, Moonshot Hardware Platform Program Manager, HP, takes you on a private tour of the HP Moonshot System and introduces the foundational HW components of HP Project Moonshot. This video guides you around the entire system highlighting the cartridges and switches. http://hp.com/go/moonshot

    HP Moonshot System is Hot Pluggable [HewlettPackardVideos YouTube channel, April 8, 2013]

“Show me around the HP Moonshot System!” Vicki Doehring, Moonshot Hardware Engineer, HP, shows us just how simple and intuitive it is to remove components in the HP Moonshot System. This video explains how HP’s hot pluggable technology works with the HP Moonshot System. http://hp.com/go/moonshot

Alert for Microsoft: how and when will you have a system like this, with all the bells and whistles presented above, as well as the rich ecosystem of hardware and software partners given below?

    HP Pathfinder Innovation Ecosystem [HewlettPackardVideos YouTube channel, April 8, 2013]

A key element of HP Moonshot, the HP Pathfinder Innovation Ecosystem brings together industry-leading software and hardware partners to accelerate the development of workload optimized applications. http://hp.com/go/moonshot

    Software partners:

    What Linaro is saying about HP Moonshot [HewlettPackardVideos YouTube channel, April 8, 2013]

    Linaro discusses HP’s Project Moonshot and the cost, space, and efficiency innovations being enabled through the Pathfinder Innovation Ecosystem. http://hp.com/go/moonshot

    Alert for Microsoft:

[0:11] The HP approach to Linaro is about forming an enterprise group. What they were hoping for, and what has happened, is to get a bunch of companies who are interested in taking the ARM architecture into the server space. [0:26]

    Canonical joins Linaro Enterprise Group (LEG) and commits Ubuntu Hyperscale Availability for ARM V8 in 2013 [press release, Nov 1, 2012]

      • Canonical continues its leadership of commercial deployment for ARM-based servers through membership of Linaro Enterprise Group (LEG)
      • Ubuntu, the only commercially supported OS for ARM v7 today, commits to support ARM v8 server next year
  • Ubuntu extends its position as the natural choice for hyperscale server computing with long term support

    … “Canonical has been supporting our work optimising and consolidating the Linux kernel since our founding in June 2010”, said George Grey, CEO of Linaro. “We’re very happy to welcome them as a member of the Linaro Enterprise Group, building on our relationship to help accelerate development of the ARM server software ecosystem.” …

    … “Calxeda has been thrilled with Canonical’s leadership in developing the ARM ecosystem”,  said Karl Freund, VP marketing at Calxeda. “These guys get it. They are driving hard and fast, already delivering enterprise-class code and support for Calxeda’s 32-bit product today to our mutual clients.  Working together in LEG will enable us to continue to build on the momentum we have already created.” …

    What Canonical is saying about HP Moonshot [HewlettPackardVideos YouTube channel, April 8, 2013]

    HP Moonshot and Ubuntu work together [Ubuntu partner site, April 9, 2013]

    … Ubuntu, as the lead operating system platform for x86 and ARM-based HP Moonshot Systems, featured extensively at the launch of the program in April 2013. …
    Ubuntu Server is the only OS fully operational today across HP Moonshot x86 and ARM servers, launched in April 2013.
    Ubuntu is recognised as the leader in scale out and Hyperscale. Together, Canonical and HP are delivering massive reductions in data-center energy, space and costs. …

“Canonical has been working with HP for the past two years on HP Moonshot, and with Ubuntu, customers can achieve higher performance with greater manageability across both x86 and ARM chip sets.” Paul Santeler, VP & GM, Hyperscale Business Unit, HP

    Ubuntu & HP’s project Moonshot [Canonical blog, Nov 2, 2011]

Today HP announced Project Moonshot – a programme to accelerate the use of low power processors in the data centre.
    The three elements of the announcement are the launch of Redstone – a development platform that harnesses low-power processors (both ARM & x86),  the opening of the HP Discovery lab in Houston and the Pathfinder partnership programme.
    Canonical is delighted to be involved in all three elements of HP’s Moonshot programme to reduce both power and complexity in data centres.
The HP Redstone platform unveiled in Palo Alto showcases HP’s thinking around highly federated environments and Calxeda’s EnergyCore ARM processors. The Calxeda system on chip (SoC) design is powered by Calxeda’s own ARM based processor and combines mobile-phone-like power consumption with the attributes required to run a tangible proportion of hyperscale data centre workloads.
The promise of server-grade SoCs running at less than 5W and achieving per-rack density of 2800+ nodes is impressive, but what about the software stacks that are used to run the web and analyse big data – when will they be ready for this new architecture?
    Ubuntu Server is increasingly the operating system of choice for web, big data and cloud infrastructure workloads. Films like Avatar are rendered on Ubuntu, Hadoop is run on it and companies like Rackspace and HP are using Ubuntu Server as the foundation of their public cloud offerings.
    The good news is that Canonical has been working with ARM and Calxeda for several years now and we released the first version of Ubuntu Server ported for ARM Cortex A9 class  processors last month.
The Ubuntu 11.10 release (download) is a functioning port, and over the next six months we will be working hard to benchmark and optimize Ubuntu Server and the workloads that our users prioritize on ARM. This work, by us and by upstream open source projects, is going to be accelerated by today’s announcement and access to hardware in the HP Discovery lab.
As HP stated today, this is the beginning of a journey to re-inventing a power efficient and less complex data center. We look forward to working with HP and Calxeda on that journey.

The biggest enterprise alert for Microsoft, because of what was discussed in Will Microsoft Stand Out In the Big Data Fray? [Redmondmag.com, March 22, 2013], is What NuoDB is saying about HP Moonshot [HewlettPackardVideos YouTube channel, April 9, 2013], especially as it is a brand new offering; see NuoDB Announces General Availability of Industry’s First & Only Cloud Data Management System at Live-Streamed Event [press release, Jan 15, 2013], now available in archive at this link: http://go.nuodb.com/cdms-2013-register-e.html

    Barry Morris, founder and CEO of NuoDB discusses HP’s Project Moonshot and the database innovations delivered by the combined offering

    Extreme density on HP’s Project Moonshot [NuoDB Techblog, April 9, 2013]

    A few months ago HP came to us with something very cool. It’s called Project Moonshot, and it’s a new way of thinking about how you design infrastructure. Essentially, it’s a composable system that gives you serious flexibility and density.

    A single Moonshot System is 4.3u tall and holds 45 independent servers connected to each other via 1-Gig Ethernet. There’s a 10-Gig Ethernet interface to the system as a whole, and management interfaces for the system and each individual server. The long-term design is to have servers that provide specific capabilities (compute, storage, memory, etc.) and can scale to up to 180 nodes in a single 4.3u chassis.
    The initial system, announced this week, comes with a single server configuration: an Intel Atom S1260 processor, 8 Gigabytes of memory and either a 200GB SSD or a 500GB HDD. On its own, that’s not a powerful server, but when you put 45 of these into a 4.3 rack-unit space you get something in aggregate that has a lot of capacity while still drawing very little power (see below). The challenge, then, is how to really take advantage of this collection of servers.

    NuoDB on Project Moonshot: Density and Efficiency

We’ve shown how NuoDB can scale a single database to large transaction rates. For this new system, however, we decided to try a different approach. Rather than make a single database scale to large volume, we decided to see how many individual, smaller databases we could support at the same time. Essentially, could we take a fully-configured HP Project Moonshot System and turn it into a high-density, low-power, easy-to-manage hosting appliance?

To put this in context, think about a web site that hosts blogs. Typically, each blog is going to have a single database supporting it (just like this blog you’re reading). The problem is that while a few blogs will be active all the time, most of them see relatively light traffic. This is known as a long-tail pattern. Still, because the blogs always need to be available, the backing databases always need to be running.

    This leads to a design trade-off. Do you map the blogs to a single database (breaking isolation and making management harder) or somehow try to juggle multiple database instances (which is hard to automate, expensive in resource-usage and makes migration difficult)? And what happens when a blog suddenly takes off in popularity? In other words, how do you make it easy to manage the databases and make resource-utilization as efficient as possible so you don’t over-spend on hardware?

    As I’ve discussed on this blog NuoDB is a multi-tenant system that manages individual databases dynamically and efficiently. That should mean that we’re a perfect fit for this very cool (pun intended) new system from HP.

    The Design

    After some initial profiling on a single server, we came up with a goal: support 7,200 active databases. You can read all about how we did the math, but essentially this was a balance between available CPU, Memory, Disk and bandwidth. In this case a “database” is a single Transaction Engine and Storage Manager pair, running on one of the 45 available servers.
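The shape of that balance can be sketched as a min() over per-resource budgets. The budgets below are hypothetical placeholders, not the figures from NuoDB's linked capacity write-up; they only illustrate how per-server limits roll up to a system-wide goal:

```python
# Hypothetical per-server budgets -- NOT NuoDB's actual figures, just
# placeholders showing the shape of the capacity calculation.
SERVERS = 45
ram_mb, ram_per_db_mb = 8 * 1024, 50      # 8 GB per server, ~50 MB per small database (assumed)
disk_gb, disk_per_db_gb = 200, 1          # 200 GB SSD, ~1 GB per blog database (assumed)
link_mbps, mbps_per_db = 1000, 6          # 1-Gig link, ~6 Mb/s per active database (assumed)

# The tightest resource sets the per-server ceiling.
per_server = min(ram_mb // ram_per_db_mb,
                 disk_gb // disk_per_db_gb,
                 link_mbps // mbps_per_db)
print(per_server * SERVERS)               # with these assumptions, in the ballpark of the 7,200 goal
```

With different (real) budgets the bottleneck resource would change, but the structure of the estimate stays the same.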

    When we need to start a database, we pick the server that’s least-utilized. We choose this based on local monitoring at each server that is rolled up through the management tier to the Connection Brokers. It’s simple to do given all that NuoDB already provides, and because we know what each server supports it lets us calculate a single capacity percentage.
    It gets better. Because a NuoDB database is made of an agile collection of processes, it’s very inexpensive to start or stop a database. So, in addition to monitoring for server capacity we also watch what’s going on inside each database, and if we think it’s been idle long enough that something else could use the associated resources more effectively we shut it down. In other words, if a database isn’t doing anything active we stop it to make room for other databases.
When an SQL client needs to access that database, we simply re-start it where there are available resources. We call this mechanism hibernating and waking a database. This on-demand resource management means that while there are some number of databases actively running, we can support a much larger number in total (remember, we’re talking about applications that exhibit a long-tail access pattern). With this capability, our original goal of 7,200 active databases translates into 72,000 total supported databases. On a single 4.3u System.
    The final piece we added is what we call database bursting. If a single database gets really popular it will start to take up too many resources on a single server. If you provision another server, separate from the Moonshot System, then we’ll temporarily “burst” a high-activity database to that new host until activity dies down. It’s automatic, quick and gives you on-demand capacity support when something gets suddenly hot.
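A minimal sketch of the three mechanisms just described (least-utilized placement, hibernation of idle databases, and bursting a hot database to an overflow host). All names and thresholds here are invented for illustration; this is not NuoDB's API:

```python
# Illustrative sketch of capacity-based placement, hibernation and
# bursting -- hypothetical names/thresholds, not NuoDB's actual code.
import time

class Server:
    def __init__(self, name, capacity=160):
        self.name, self.capacity = name, capacity
        self.running = {}                    # db name -> last-active timestamp

    def utilization(self):
        return len(self.running) / self.capacity

class DensityManager:
    IDLE_SECS = 300                          # assumed hibernation threshold
    BURST_UTIL = 0.9                         # assumed trigger for bursting

    def __init__(self, servers, overflow):
        self.servers, self.overflow = servers, overflow

    def wake(self, db):
        # Start (or re-start) a database on the least-utilized server.
        target = min(self.servers, key=Server.utilization)
        if target.utilization() >= self.BURST_UTIL:
            target = self.overflow           # "burst" to the separate host
        target.running[db] = time.time()
        return target

    def hibernate_idle(self, now=None):
        # Stop databases that have been idle too long, freeing capacity.
        now = now or time.time()
        for s in self.servers:
            for db, last in list(s.running.items()):
                if now - last > self.IDLE_SECS:
                    del s.running[db]
```

Waking a database on SQL access and hibernating it again later is what lets a fixed pool of active slots front a much larger total population of databases.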
    The Tests
I’m not going to repeat too much here about how we drove our tests. That’s already covered in the discussion on how we’re trying to design a new kind of benchmark focused on density and efficiency. You should go check that out … it’s pretty neat. Suffice it to say, the really critical thing to us in all of this was that we were demonstrating something that solves a real-world problem under real-world load.
You should also go read about how we set up and ran on a Moonshot System. The bottom line is that the system worked just like you’d expect, and gave us the kinds of management and monitoring features to go beyond basic load testing.
    The Results
We were really lucky to be given access to a full Moonshot System. It gave us a chance to test out our ideas, and we actually did better than our target. You can see this in the view from our management interface running against a real system under our benchmark load: when we hit 7,200 active databases we were only at about 70% utilization, so there was a lot more room to grow. Huge thanks to HP for giving us time on a real Moonshot System to see all those ideas work!

    Something that’s easy to lose track of in all this discussion is the question of power. Part of the value proposition from Project Moonshot is in energy efficiency, and we saw that in spades. Under load a single server only draws 18 Watts, and the system infrastructure is closer to 250 Watts. Taken together, that’s a seriously dense system that is using very little energy for each database.

    Bottom Line
    We were psyched to have the chance to test on a Moonshot System. It gave us the chance to prove out ideas around automation and efficiency that we’ll be folding into NuoDB over the next few releases. It also gave us the perfect platform to put our architecture through its paces and validate a lot about the flexibility of our core architecture.
We’re also seriously impressed by what we experienced from Project Moonshot itself. We were able to create something self-contained and easy to manage that solves a real-world problem. Couple that with the fact that a Moonshot System draws so little power, and the Total Cost of Ownership is impressively low. That’s probably the last point to make about all this: the combination of our two technologies gave us something where we could talk concretely about capacity and TCO, something that’s usually hard to do in such clear terms.
    In case it’s not obvious, we’re excited. We’ve already been posting this week about some ideas that came out of this work, and we’ll keep posting as the week goes on. Look for the moonshot tag and please follow-up with comments if you’re curious about anything specific and would like to hear more!

    Project Moonshot by the Numbers [NuoDB Techblog, April 9, 2013]

To really understand the value from HP Project Moonshot you need to think beyond the list price of one system and focus instead on the Total Cost of Ownership. Figuring out the TCO for a server running arbitrary software is often a hard (and thankless?) task, so one of the things we’ve tried to do is not just demonstrate great technology but build something that naturally lets you think about TCO in a simple way. We think the final metrics are pretty simple, but getting there requires a little math.

    Executive Summary

    If you’re a CIO, and just want to know the bottom line, then we’ll ruin the suspense and cut to the chase. It will cost you about $70,500 up-front, $1,800 in your first year’s electricity bills and take 8.3 rack-units to support the web-front end and database back-end for 72,000 blogs under real-world load.

    Cost of a Single Database
Recall that we set the goal at 72,000 databases within a single system. At launch the list price for a fully-configured Moonshot System is around $60,000, so we start out at 83 cents per database. In practice we’re seeing much higher capacity in our tests, but let’s start with this conservative number.
    Now consider the power used by the system. From what we’ve measured through the iLO interfaces a single server draws no more than 18 Watts at peak load (measured against CPU and IO activity). The System itself (fans, switches etc.) draws around 250 Watts in our tests. That means that under full load each database is drawing about .015 Watts.
    NuoDB is a commercial software offering, which means that you pay up-front to deploy the software (and get support as part of that fee). For anyone who wants to run a Moonshot System in production as a super-dense NuoDB appliance we’ll offer you a flat-rate license.
    Put together, we can say that the cost per database-watt is 1.22 cents. That’s on a 4.3 rack-unit system. Awesome.
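The per-database unit economics above can be reproduced from the stated figures (45 servers at 18 W peak plus roughly 250 W of shared chassis draw):

```python
# Reproducing the per-database unit economics quoted above.
LIST_PRICE = 60_000            # USD, fully configured Moonshot System at launch
DATABASES = 72_000             # hibernation-backed total from the design above
SYSTEM_WATTS = 45 * 18 + 250   # measured per-server peak + shared chassis draw

cost_per_db = LIST_PRICE / DATABASES        # ~$0.83 per database
watts_per_db = SYSTEM_WATTS / DATABASES     # ~0.015 W per database
cents_per_db_watt = cost_per_db * watts_per_db * 100
print(round(cents_per_db_watt, 2))          # ~1.22-1.23 cents, depending on rounding
```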
    Quantify the Supported Load
    As we discussed in our post on benchmarking, we’re trying to test under real-world load. As a simple starting-point we chose a profile based on WordPress because it’s fairly ubiquitous and has somewhat serious transactional requirements. In our benchmarking discussion we explain that a typical application action (post, read, comment) does around 20 SQL operations.
    Given 72,000 databases most of these are fairly inactive, so on average we’ll say that each database gets about 250 hits a day (generous by most reports I’ve seen). That’s 18,000,000 hits a day or 208 hits per-second. 4,166 SQL statements a second isn’t much for a single database, but it’s pretty significant given that we’re spreading it across many databases some of which might have to be “woken” on-demand.
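Step by step, the load arithmetic above works out as follows (20 SQL operations per hit comes from the WordPress-style profile described earlier):

```python
# The load figures above, step by step.
DATABASES = 72_000
HITS_PER_DAY = 250         # generous average per blog
SQL_OPS_PER_HIT = 20       # from the WordPress-style application profile

total_hits_per_day = DATABASES * HITS_PER_DAY      # 18,000,000 hits/day
hits_per_sec = total_hits_per_day / 86_400         # ~208 hits/second
sql_per_sec = hits_per_sec * SQL_OPS_PER_HIT       # ~4,166 SQL statements/second
print(total_hits_per_day, int(hits_per_sec), int(sql_per_sec))
```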
    HP was generous enough not only to give us time on a Moonshot System but also access to some co-located servers for driving our load tests. In this case, 16 lower-powered ARM-based Calxeda systems that all went through the same 1-Gig ethernet connection to our Moonshot System. These came from HP’s Discovery Lab; check out our post about working with the Moonshot System for more details.
From these load-drivers we were able to run our benchmark application with up to 16 threads per server, simulating 128 simultaneous clients. In this case a typical “client” would be a web server trying to respond to a web client request. We averaged around 320 hits per-second, well above the target of 208. From what we could observe, we expect that given more capable network and client drivers we would be able to get 3 or 4 times that rate easily.
    Tangible Cost
    We have the cost of the Moonshot System itself. We also know that it can support expected load from a fairly small collection of low-end servers. In our own labs we use systems that cost around $10,000, fit in 3 rack-units and would be able to drive at least the same kind of load we’re citing here. Add a single switch at around $500 and you have a full system ready to serve blogs. That’s $70,500 total in 8.3 rack units, still under $1 per database.
    I don’t know what power costs you have in your data center, but I’ve seen numbers ranging from 2.5 to 25 cents per Kilowatt-Hour. In our tests, where we saw .015 Watts per-database, if you assume an average rate of 13.75 cents per KwH that comes out to .00020625 cents per-hour per-database in energy costs. In one year, with no down-time, that would cost you $1,276.77 in total electricity fees.
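The first-year electricity figure follows directly from the system's total draw and the assumed mid-range rate:

```python
# Reproducing the first-year electricity estimate above.
SYSTEM_WATTS = 45 * 18 + 250      # 1,060 W for the fully loaded system
CENTS_PER_KWH = 13.75             # assumed mid-range rate from the text
HOURS_PER_YEAR = 8_760            # running with no down-time

kwh_per_year = SYSTEM_WATTS / 1000 * HOURS_PER_YEAR    # ~9,285.6 kWh
dollars = kwh_per_year * CENTS_PER_KWH / 100
print(round(dollars, 2))          # ~1276.77, the total electricity figure above
```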
Just as an aside, according to the New York Times, Facebook draws around 60,000,000 Watts!
    One of the great things about a Moonshot System is that the 45 servers are already being switched inside the chassis. This means that you don’t need to buy switches & cabling, and you don’t need to allocate all the associated space in your racks. For our systems administrator that alone would make him very happy.
    Intangible Cost
    What I haven’t been talking about in all of this are the intangible costs. This is where figuring out TCO becomes harder.
    For instance, one of the value-propositions here is that the Moonshot System is a self-contained, automated component. That means that systems administrators are freed up from the tasks of figuring out how to allocate and monitor databases, and how to size the data-center for growth. Database developers can focus more easily on their target applications. CIOs can spend less time staring at spreadsheets … or, at least, can allocate more time to spreadsheets on different topics.
    Providing a single number in terms of capacity makes it easy to figure out what you need in your datacenter. When a single server within a Moonshot System fails you can simply replace it, and in the meantime you know that the system will still run smoothly just with slightly lower capacity. From a provisioning point of view, all you need to figure out is where your ceiling is and how much stand-by capacity you need to have at the ready.
    NuoDB by its nature is dynamic, even when you’re doing upgrades. This means that you can roll through a running Moonshot System applying patches or new versions with no down-time. I don’t know how you calculate the value in saved cost here, but you probably do!
    Comparisons and Planned Optimizations
    It’s hard to do an “apples-to-apples” comparison against other database software here. Mostly, this is because other databases aren’t designed to be dynamic enough to support hibernation, bursting and capacity-based automated balancing. So, you can’t really get the same levels of density, and a lot of the “intangible” cost benefits would go away.
    Still, to be fair, we tried running MySQL on the same system and under the same benchmarks. We could indeed run 7200 instances, although that was already hitting the upper-bounds of memory/swap. In order to get the same density you would need 10 Moonshot Systems, or you would need larger-powered expensive servers. Either way, the power, density, automation and efficiency savings go out the window, and obviously there’s no support for bursting to more capable systems on-demand.
    Unsurprisingly, the response time was faster on-average (about half the time) from MySQL instances. I say “unsurprisingly” for two reasons. First, we tried to use schema/queries directly from WordPress to be fair in our comparison, and these are doing things that are still known to be less-optimized in NuoDB. They’re also in the path of what we’re currently optimizing and expect to be much faster in the near-term.
    The second is that NuoDB clients were originally designed assuming longer-running connections (or pooled connections) to databases that always run with security & encryption enabled. We ran all of our tests in our default modes to be fair. That means we’re spending more time on each action setting up & tearing down a connection. We’ve already been working on optimizations here that would shrink the gap pretty substantially.
    In the end, however, our response time is still on the order of a few hundred milliseconds worst-case, and is less important than the overall density and efficiency metrics that we proved out. We think the value in terms of ease of use, density, flexibility on load spikes and low-cost speaks for itself. This setup is inexpensive by comparison to deploying multiple servers and supports what we believe is real-world load. Just wait until the next generation of HP Project Moonshot servers roll out and we can start scaling out individual databases at the same time!

    More information:
    Benchmarking Density & Efficiency [NuoDB Techblog, April 9, 2013]
    Database Hibernation and Bursting [NuoDB Techblog, April 8, 2013]
An Enterprise Management UI for Project Moonshot [NuoDB Techblog, April 9, 2013]
Regarding the cloud based version of NuoDB see:
    NuoDB Partners with Amazon [press release, March 26, 2013]
    NuoDB Extends Database Leadership in Scalability & Performance on a Private Cloud [press release, March 14, 2013] “… the industry’s first and only patented, elastically scalable Cloud Data Management System (CDMS), announced performance of 1.84 million transactions per second (TPS) running on 32 machines. … With NuoDB Starlings release 1.0.1, available as of March 1, 2013, the company has made advancements in performance and scalability and customers can now experience 26% improvement in TPS per machine.
    Google Compute Engine: interview with NuoDB [GoogleDevelopers YouTube channel, March 21, 2013]

    Meet engineers from NuoDB: an elastically scalable SQL database built for the cloud. We will learn about their approach to distributed SQL databases and get a live demo. We’ll cover the steps they took to get NuoDB running on Google Compute Engine, talk about how they evaluate infrastructure (both physical hardware and cloud), and reveal the results of their evaluation of Compute Engine performance.

Actually, Calxeda was best at explaining the preeminence of software over the SoC itself:
    Karl Freund from Calxeda – HP Moonshot 2013 – theCUBE [siliconangle YouTube channel, April 8, 2013], see also HP Moonshot: It’s a lot closer than it looks! [Calxeda’s ‘ARM Servers, Now!’ blog, April 8, 2013]

    Karl Freund, VP of Marketing, Calxeda, at HP Moonshot 2013 with John Furrier and Dave Vellante.

as well as ending with Calxeda’s very practical, gradual approach to the ARM-based server market with things like:

[16:03] Our 2nd generation platform called Midway, which will be out later this year [in the 2nd half of the year], that’s probably the target for Big Data. Our current product is great for web serving, it’s great for media serving, it’s great for storage. It doesn’t have enough memory for Big Data … in a large. So we’ll be getting that 2nd generation product out, and that should be a really good Big Data platform. Why? Because it’s low power, it’s low cost, but it’s also got a lot of I/O. Big Data is all about moving a lot of data around. And if you do that more cost effectively you save a lot of money. [16:38]

    mentioning also that their strategy is using standard ARM cores like the Cortex-A57 for their H1 2014 product, and focus on things like the fabric and the management, which actually allows them to work with a streamlined staff of around 150 people.

    Detailed background about Calxeda in a concise form:
    Redefining Datacenter Efficiency: An Overview of Calxeda’s architecture and early performance measurements [Karl Freund, Nov 12, 2012] from where the core info is:

      • Founded in 2008   
      • $103M Funding       
      • 1st Product Announced with HP,  Nov  2011   
      • Initial Shipments in Q2 2012   
      • Volume production in Q4 2012


* The power consumed under normal operating conditions under full application load (i.e., 100% CPU utilization)

A small Calxeda Cluster: a Simple Example
    • Start with four ServerNodes
    • Consumes only 20W total power   
    • Connected via distributed fabric switches   
    • Connect up to 4 SATA drives per node   
    • Then scale this to thousands of ServerNodes

    EnergyCard: a Quad-Node Reference Design

      • Four-node reference platform from Calxeda
      • Available as product and/or design
  • Plugs into OEM system board with passive fabric, no additional switch HW
  • EnergyCard delivers 80Gb bandwidth to the system board (8 x 10Gb links)


It is also important to have a look at the Open Source Software Packages for Initial Calxeda Shipments [Calxeda’s ‘ARM Servers, Now!’ blog, May 24, 2012]:

    We are often asked what open-source software packages are available for initial shipments of Calxeda-based servers.

    Here’s the current list (changing frequently).  Let us know what else you need!

[image: table of the currently supported open-source packages]

Then Perspectives From Linaro Connect [Calxeda’s ‘ARM Servers, Now!’ blog, March 20, 2013] sheds more light on the recent software alliances which help Calxeda deliver:

– From Larry Wikelius, Co-Founder and VP Ecosystems, Calxeda:

    The most recent Linaro Connect (Linaro Connect Asia 2013 – LCA), held in Hong Kong the first week of March, really put a spotlight on the incredible momentum around ARM based technology and products moving into the Data Center.  Yes – you read that correctly – the DATA CENTER!

When Linaro was originally launched almost three years ago the focus was exclusively on the mobile and client market – where ARM has been and continues to be dominant.  However, as Calxeda has demonstrated, the opportunity for the ARM architecture goes well beyond devices that you carry in your pocket.  Calxeda was a key driver in the formation of the Linaro Enterprise Group (LEG), which was publicly launched at the previous Linaro Connect event in Copenhagen in early November, 2012.

LEG has been an exciting development for Linaro and now has 13 member companies that include server vendors such as Calxeda, Linux distribution companies Red Hat and Canonical, OEM representation from HP and even Hyperscale Data Center end user Facebook.  There were many sessions throughout the week that focused on server-specific topics such as UEFI, ACPI, Virtualization, Hyperscale Testing with LAVA and Distributed Storage.  Calxeda was very active throughout the week, with the team participating directly in a number of roadmap definition sessions, presenting on Server RAS and providing guidance in key areas such as application optimization and compiler focus for Servers.

    Linaro Connect is proving to be a tremendous catalyst for the growing eco-system around the ARM software community as a whole and the server segment in particular.  A great example of this was the keynote presentation given jointly by Mark Heath and Lars Kurth from Citrix on Tuesday morning.  Mark is the VP of XenServer at Citrix and Lars is well known in the Open Source community for his work with Xen.  The most exciting announcement coming out of Mark’s presentation is that Citrix will be joining Linaro as a member of LEG.  Citrix will certainly prove to be another valuable member of the Linaro team, and during the week attendees were able to appreciate how serious Citrix is about supporting ARM servers.  The Xen team has not only added full support for ARM v7 systems in the Xen 4.3 release but has also accomplished some very impressive optimizations for the ARM platform.  The Xen team has leveraged Device Tree for optimal device discovery.  Combined with a number of other code optimizations, they showed a dramatically smaller code base for the ARM platform.  We at Calxeda are thrilled to welcome Citrix into LEG!

    As an indication of the draw that the Linaro Connect conference is already having on the broader industry, the Open Compute Project (OCP) held its first International Event coincident with LCA at the same venue.  The synergy between Linaro and OCP is significant, with the emphasis of both organizations on Open Source development (one software and one hardware) along with the dramatically changing design points for today’s Hyperscale Data Center.  In fact the keynote at LCA on Wednesday morning really put a spotlight on how significant this is likely to be.  Jason Taylor, Director of Capacity Engineering and Analysis at Facebook, presented on Facebook’s approach to ARM based servers.  Facebook’s consumption of Data Center equipment is quite stunning – Jason quoted from Facebook’s 10-Q filed in October 2012, which stated that “The first nine months of 2012 … $1.0 billion for capital expenditures” related to data center equipment and infrastructure.  Clearly with this level of investment Facebook is extremely motivated to optimize where possible.  Jason focused on the strategic opportunity for ARM based servers in a disaggregated Data Center of the future to provide lower cost computing capabilities with much greater flexibility.

    Calxeda has been very active in building the Server Eco-System for ARM based servers.  This week in Hong Kong really underscored how important that investment has become – not just for Calxeda but for the industry as a whole. Our commitment to Open Source software development in general and Linaro in particular has resulted in a thriving Linux infrastructure for ARM servers that allows Calxeda to leverage it and focus on key differentiation for our end users.  The Open Compute Project, which we are an active member of and have contributed to key projects such as the Knockout Storage design as well as the Open Slot Specification, demonstrates how the combination of an Open Source approach for both Software and Hardware can complement each other and drive Data Center innovation.  We are early in this journey but it is very exciting!

    Calxeda will continue to invest aggressively in forums and industry groups such as these to drive the ARM based server market.  We look forward to continuing to work with the incredibly innovative partners that are members of these groups, and we are confident that more will join this exciting revolution.  If you are interested in more information on these events and activities, please reach out to us directly at info@calxeda.com.

    The next Linaro Connect is scheduled for early July in Dublin. We expect more exciting sessions and topics, and hope to see you there!

    On their blog they also refer to Mobile, cloud computing spur tripling of micro server shipments this year [IHS iSuppli press release, Feb 6, 2013], which shows the general market situation well into the future:

    Driven by booming demand for new data center services for mobile platforms and cloud computing, shipments of micro servers are expected to more than triple this year, according to an IHS iSuppli Compute Platforms Topical Report from information and analytics provider IHS (NYSE: IHS).
    Shipments this year of micro servers are forecast to reach 291,000 units, up 230 percent from 88,000 units in 2012. Shipments of micro servers commenced in 2011 with just 19,000 units. However, shipments by the end of 2016 will rise to some 1.2 million units, as shown in the attached figure.

    image

    The penetration of micro servers compared to total server shipments amounted to a negligible 0.2 percent in 2011. But by 2016, the machines will claim a penetration rate of more than 10 percent—a stunning fiftyfold jump.
    Micro servers are general-purpose computers, housing single or multiple low-power microprocessors and usually consuming less than 45 watts in a single motherboard. The machines employ shared infrastructure such as power, cooling and cabling with other similar devices, allowing for an extremely dense configuration when micro servers are cascaded together.
    “Micro servers provide a solution to the challenge of increasing data-center usage driven by mobile platforms,” said Peter Lin, senior analyst for compute platforms at IHS. “With cloud computing and data centers in high demand in order to serve more smartphones, tablets and mobile PCs online, specific aspects of server design are becoming increasingly important, including maintenance, expandability, energy efficiency and low cost. Such factors are among the advantages delivered by micro servers compared to higher-end machines like mainframes, supercomputers and enterprise servers—all of which emphasize performance and reliability instead.”
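    The density advantage described above can be made concrete with a quick back-of-the-envelope sketch. Only the sub-45-watt figure comes from the report; the rack power budget and the draw of a conventional 1U server below are illustrative assumptions of mine, not IHS figures.

    ```python
    # Back-of-the-envelope rack density under a fixed power budget.
    # Only the 45 W micro server figure comes from the IHS definition above;
    # the other two numbers are illustrative assumptions.
    RACK_POWER_BUDGET_W = 7_000    # assumed usable power for one rack
    MICRO_SERVER_W = 45            # "usually consuming less than 45 watts"
    TRADITIONAL_1U_W = 350         # assumed draw of a conventional 1U x86 server

    micro_nodes = RACK_POWER_BUDGET_W // MICRO_SERVER_W
    traditional_nodes = RACK_POWER_BUDGET_W // TRADITIONAL_1U_W

    print(f"{micro_nodes} micro server nodes vs {traditional_nodes} 1U servers "
          f"within the same {RACK_POWER_BUDGET_W} W budget")
    ```

    Under these assumed numbers the same power envelope holds roughly 155 micro server nodes versus 20 conventional servers, which is the kind of density gap the report is pointing at.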
    Server Salad Days
    Micro servers are not the only type of server that will experience rapid expansion in 2013 and the years to come. Other high-growth segments of the server market are cloud servers, blade servers and virtualization servers.
    The distinction of fastest-growing server segment, however, belongs solely to micro servers.
    The compound annual growth rate for micro servers from 2011 to 2016 stands at a remarkable 130 percent—higher than that of the entire server market by a factor of 26. Shipments will rise by double- and even triple-digit percentages for each year during the period.
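    The quoted growth figures are internally consistent; a few lines of Python reproduce them from the shipment numbers given above.

    ```python
    # Reproducing the IHS iSuppli growth figures from the shipment numbers quoted above.
    shipments = {2011: 19_000, 2012: 88_000, 2013: 291_000, 2016: 1_200_000}

    # "up 230 percent from 88,000 units in 2012"
    yoy_2013 = shipments[2013] / shipments[2012] - 1

    # "compound annual growth rate ... stands at a remarkable 130 percent"
    cagr = (shipments[2016] / shipments[2011]) ** (1 / (2016 - 2011)) - 1

    # penetration rising from 0.2% to "more than 10 percent" -- the "fiftyfold jump"
    penetration_jump = 0.10 / 0.002

    print(f"2013 YoY growth: {yoy_2013:.0%}, 2011-2016 CAGR: {cagr:.0%}, "
          f"penetration jump: {penetration_jump:.0f}x")
    ```

    The computed values (about 231% year-over-year growth, a 129% CAGR and a 50x penetration jump) match the rounded figures in the press release.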
    Key Players Stand to Benefit
    Given the dazzling outlook for micro servers, makers with strong product portfolios of the machines will be well-positioned during the next five years—as will their component suppliers and contract manufacturers.
    A slew of hardware providers are in line to reap benefits, including microprocessor vendors like Intel, ARM and AMD; server original equipment manufacturers such as Dell and Hewlett-Packard; and server original development manufacturers including Taiwanese firms Quanta Computer and Wistron.
    Among software providers, the list of potential beneficiaries from the micro server boom extends to Microsoft, Red Hat, Citrix and Oracle. For the group of application or service providers that offer micro servers to the public, entities like Amazon, eBay, Google and Yahoo are foremost.
    The most aggressive bid for the micro server space comes from Intel and ARM.
    Intel first unveiled the micro server concept and reference design in 2009, ostensibly to block rival ARM from entering the field.
    ARM, the leader for many years in the mobile world with smartphone and tablet chips because of the low-power design of its central processing units, has been just as eager to enter the server arena—dominated by x86 chip architecture from the likes of Intel and a third chip player, AMD. ARM faces an uphill battle, as the majority of server software is written for x86 architecture. Shifting from x86 to ARM will also be difficult for legacy products.
    ARM, however, is gaining greater support from software and OS vendors, which could potentially put pressure on Intel in the coming years.
    Read More > Micro Servers: When Small is the Next Big Thing

    Then there are a number of Intel competitive posts on Calxeda’s ‘ARM Servers, Now!’ blog:
    What is a “Server-Class” SOC? [Dec 12, 2012]
    Comparing Calxeda ECX1000 to Intel’s new S1200 Centerton chip [Dec 11, 2012]
    which you can also find in my Intel targeting ARM based microservers: the Calxeda case [‘Experiencing the Cloud’ blog, Dec 14, 2012] with significantly wider additional information, up to binary translation from x86 to ARM with Linux

    See also:
    ARM Powered Servers: 2013 is off to a great start & it is only March! [Smart Connected Devices blog of ARM, March 6, 2013]
    Moonshot – a shot in the ARM for the 21st century data center [Smart Connected Devices blog of ARM, April 9, 2013]
    Are you running out of data center space? It may be time for a new server architecture: HP Moonshot [Hyperscale Computing Blog of HP, April 8, 2013]
    HP Moonshot: the HP Labs team that did some of the groundbreaking research [Innovation @ HP Labs blog of HP, April 9, 2013]
    HP Moonshot: An Accelerator for Hyperscale Workloads [Moor Insights White Paper, April 8, 2013]
    Comparing Pattern Mining on a Billion Records with HP Vertica and Hadoop [HP Vertica blog, April 9, 2013] in which a team of HP Labs researchers shows how the Vertica Analytics Platform can be used to find patterns from a billion records in a couple of minutes, about 9x faster than Hadoop.
    PCs and cloud clients are not parts of Hewlett-Packard’s strategy anymore [‘Experiencing the Cloud’, Aug 11, 2011 – Jan 17, 2012] see the Autonomy IDOL related content there
    ENCO Systems Selects HP Autonomy for Audio and Video Processing [HP Autonomy press release, April 8, 2013]

    HP Autonomy today announced that ENCO Systems, a global provider of radio automation and live television audio solutions, has selected Autonomy’s Intelligent Data Operating Layer (IDOL) to upgrade ENCO’s latest-generation enCaption product.

    ENCO Systems provides live automated captioning solutions to the broadcast industry, leveraging technology to deliver closed captioning by taking live audio data and turning it into text. ENCO Systems is capitalizing on IDOL’s unique ability to understand meaning, concepts and patterns within massive volumes of spoken and visual content to deliver more accurate speech analytics as part of enCaption3.

    “Many television stations count on ENCO to provide real-time closed captioning so that all of their viewers get news and information as it happens, regardless of their auditory limitations,” said Ken Frommert, director, Marketing, ENCO Systems. “Autonomy IDOL helps us provide industry-leading automated closed captioning for a fraction of the cost of traditional services.”
    enCaption3 is the only fully automated speech recognition-based closed captioning system for live television that does not require speaker training. It gives broadcasters the ability to caption their programming, including breaking news and weather, any time, day or night, since it is always on and always available. enCaption3 provides captioning in near real time – with only a 3 to 6 second delay – in nearly 30 languages.
    “Television networks are under increasing pressure to provide real-time closed captioning services – they face fines if they don’t, and their growing and diverse viewers demand it,” said Rohit de Souza, general manager, Power, HP Autonomy. “This is another example of a technology company integrating Autonomy IDOL to create a stronger, faster and more accurate product offering, and demonstrates yet another powerful way in which IDOL can be applied to help organizations succeed in the human information era.”

    Using Big Data to change the game in the Energy industry [Enterprise Services Blog of HP, Oct 24, 2012]

    … Tools like HP’s Autonomy that analyzes the unstructured data found in call recordings, survey responses, chat logs, e-mails, social media posts and more. Autonomy’s Intelligent Data Operating Layer (IDOL) technology uses sophisticated pattern-matching techniques and probabilistic modeling to interpret information in much the same way that humans do. …

    Stouffer Egan turns the tables on computers in keynote address at HP Discover [Enterprise Services Blog of HP, June 8, 2012]

    For decades now, the human mind has adjusted itself to computers by providing and retrieving structured data in two-dimensional worksheets with constraints on format, data types, list of values, etc. But this is not the way the human mind has been architected to work. Our minds have the uncanny ability to capture the essence of what is being conveyed in a facial expression in a photograph, the tone of voice or inflection in an audio and the body language in a video. At the HP Discover conference, Autonomy VP for the United States, Stouffer Egan, showed the audience how software can begin to do what the human mind has been doing since the dawn of time. In a demonstration where Iron Man came alive out of a two-dimensional photograph, Egan turned the tables on computers. It is about time computers started thinking like us rather than forcing us to think like them.
    Egan states that the “I” in IT is where the change is happening. We have a newfound wealth of data through various channels including video, social, click stream, audio, etc. However, data unprocessed without any analysis is just that — raw data. For enterprises to realize business value from this unstructured data, we need tools that can process it across multiple media. Imagine software that recognizes the picture in a photograph and searches for a video matching the person in the picture. The cover page of a newspaper showing a basketball star doing a slam dunk suddenly turns live pulling up the video of this superstar’s winning shot in last night’s game. …


    2. Software Partners

    image
    HP Moonshot is setting the roadmap for next generation data centers by changing the model for density, power, cost and innovation. Ubuntu has been designed to meet the needs of Hyperscale customers and, combined with its management tools, is ideally suited to be the operating system platform for HP Moonshot. Canonical has been working with HP since the beginning of the Moonshot Project, and Ubuntu is the only OS integrated and fully operational across the complete Moonshot System covering x86 and ARM chip technologies.
    What Canonical is saying about HP Moonshot
    image
    As mobile workstyles become the norm, the scalability needs of today’s applications and devices are increasingly challenging what traditional infrastructures can support. With HP’s Moonshot System, customers will be able to rapidly deploy, scale, and manage any workload with dramatically lower space and energy constraints. The HP Pathfinder Innovation Ecosystem is a prime opportunity for Citrix to help accelerate the development of innovative solutions that will benefit our enterprise cloud, virtualization and mobility customers.
    image
    We’re committed to helping enterprises achieve the most from their Big Data initiatives. Our partnership with HP enables joint customers to keep and query their data at scale so they can ask bigger questions and get bigger answers. By using HP’s Moonshot System, our customers can benefit from the improved resource utilization of next generation data center solutions that are workload optimized for specific applications.
     
    image
    Today’s interactive applications are accessed 24×365 by millions of web and mobile users, and the volume and velocity of data they generate is growing at an unprecedented rate. Traditional technologies are hard pressed to keep up with the scalability and performance demands of these new applications. Couchbase NoSQL database technology combined with HP’s Moonshot System is a powerful offering for customers who want to easily develop interactive web and mobile applications and run them reliably at scale.
    image
    Our partnership with HP facilitates CyWee’s goal of offering solutions that merge the digital and physical worlds. With TI’s new SoCs, we are one step closer to making this a reality by pushing state-of-the-art video to specialized server environments. Together, CyWee and HP will deliver richer multimedia experiences in a variety of cloud-based markets, including cloud gaming, virtual office, video conferencing and remote education.
    image
    HP’s new Moonshot System will enable organizations to increase the energy efficiency of their data centers while reducing costs. Our Cassandra-based database platform provides the massive scalability and multi-datacenter capabilities that are a perfect complement to this initiative, and we are excited to be working with HP to bring this solution to a wide range of customers.
    image
    Big data comes in a wide range of formats and types and is a result of the connected-everything world we live in. Through Project Moonshot, HP has enabled a new class of infrastructure to run more efficient workloads, like Apache Hadoop, and meet the market demand of more performance for less.
    image
    The unprecedented volume and variety of data introduces unique challenges to organizations today… By combining the HP Moonshot system with Autonomy IDOL’s unique ability to understand concepts in information, organizations can dramatically reduce the cost, space, and energy requirements for their big data initiatives, and at the same time gain insights that grow revenue, reduce risk, and increase their overall Return on Information.
    image
    Big Data is not just for Big Companies – or Big Servers – anymore – it’s affecting all sectors of the market. At HP Vertica we’re very excited about the work we’ve been doing with the Moonshot team on innovative configurations and types of analytic appliances which will allow us to bring the benefits of real-time Big Data analytics to new segments of the market. The combination of the HP Vertica Analytics Platform and Moonshot is going to be a game-changer for many.
    image
    HP worked closely with Linaro to establish the Linaro Enterprise Group (LEG). This will help accelerate the development of the software ecosystem around ARM Powered servers. HP’s Moonshot System is a great platform for innovation – encouraging a wide range of silicon vendors to offer competing ‘plug-and-play’ server solutions, which will give end users maximum choice for all their different workloads.
    What Linaro is saying about HP Moonshot [HewlettPackardVideos YouTube channel, April 8, 2013]
    image
    Organizations are looking for ways to rapidly deploy, scale, and manage their infrastructure, with an architecture that is optimized for today’s application workloads. HP Moonshot System is an energy efficient, space saving, workload-optimized solution to meet these needs, and HP has partnered with MapR Technologies, a Hadoop technology leader, to accelerate innovation and deployment of Big Data solutions.
    image
    NuoDB and HP are shattering the scalability and density barriers of a traditional database server. NuoDB on the HP Moonshot System delivers unparalleled database density, where customers can now run their applications across thousands of databases on a single box, significantly reducing the total cost across hardware, software, and power consumption. The flexible architecture of HP Moonshot coupled with NuoDB’s hyper-pluggable database design and its innovative “database hibernation” technology makes it possible to bring this unprecedented hardware and software combination to market.
    What NuoDB is saying about HP Moonshot [HewlettPackardVideos YouTube channel, April 9, 2013]
    image
    As the leading solution provider for the hosting market, Parallels is excited to be collaborating in the HP Pathfinder Innovation Ecosystem. The HP Moonshot System in concert with Parallels Plesk Panel and Parallels Containers provides a flexible and efficient solution for cloud computing and hosting.
    image
    Red Hat Enterprise Linux on HP’s converged infrastructure means predictability, consistency and stability. Companies around the globe rely on these attributes when deploying applications every day, and our value proposition is just as important in the Hyperscale segment. When customers require a standard operating environment based on Red Hat Enterprise Linux, I believe they will look to the HP Moonshot System as a strong platform for high-density Hyperscale implementations.
    What Red Hat is saying about HP Moonshot [HewlettPackardVideos YouTube channel, April 8, 2013]
    image
    HP Project Moonshot’s promise of extreme low-energy servers is a game changer, and SUSE is pleased to partner with HP to bring this new innovation to market. For more than twenty years, SUSE has adapted its enterprise-grade Linux operating system to achieve ever-increasing performance needs that succeed both today and tomorrow in areas such as Big Data and cloud computing.
    What SUSE is saying about HP Moonshot [HewlettPackardVideos YouTube channel, April 8, 2013]


    3. Hardware Partners

    image
    AMD is excited to continue our deep collaboration with HP to bring extreme low-energy, ultra dense, specialized server solutions to the market. Both companies share a passion to bring innovative workload optimized solutions to the market, enabling customers to scale-out to new levels within existing energy and space constraints. The new low-power x86 AMD Opteron™ APU is optimized in the HP Moonshot System to dramatically lower TCO in quickly emerging media oriented workloads.
    What AMD is saying about HP Moonshot
    image

    It is exciting to see HP take the lead in innovating low-energy servers for the cloud. Applied Micro’s ARM 64-bit X-Gene Server on a Chip will enable performance levels seen in today’s deployments while offering higher densities, greatly improved I/O, and substantial reductions in the total cost of ownership. Together, we will unleash innovation unlike anything we’ve seen in the server market for decades.

    What Applied Micro is saying about HP Moonshot

    image
    In the current economic and power realities, today’s server infrastructure cannot meet the needs of the next billion data users, or the evolving needs of currently supported users. Customers need innovative SoC solutions which deliver more integration and optimization than has historically been required by traditional enterprise workloads. HP’s Moonshot System is a departure from the one size fits all approach of traditional enterprise and embraces a range of ARM partner solutions that address different performance, workloads and cost points.
    What ARM is saying about HP Moonshot
    image
    Calxeda and HP’s new Moonshot System are a powerful combination, and set a new standard for ultra-efficient web and application serving. Fulfilling a journey started together in November 2011, Project Moonshot creates the foundation for the new age of application-specific computing.
    What Calxeda is saying about HP Moonshot
    image
    HP Moonshot System is a game changer for delivering optimized server solutions. It beautifully balances the need for mixing different processor solutions optimized for different workloads under a standard hardware and software framework. Cavium’s Project Thunder will provide a family of 64-bit ARM v8 processors with dense and scalable server class performance at extremely attractive power and cost metrics. We are doing this by blending performance and power efficient compute, high performance memory and networking into a single, highly integrated SoC.
    What Cavium is saying about HP Moonshot
    image
    Intel is proud to deliver the only server class, 64-bit SoC technology that powers the first and only production shipping HP ProLiant Moonshot Server today. The 64-bit Intel Atom processor S1200 family features extreme low power combined with required datacenter class capabilities for lightweight web scale workloads, such as low end dedicated hosting and static web serving. In collaboration with HP, we have a strong roadmap of additional server solutions shipping later this year, including Intel’s 2nd generation 64-bit SoC, “Avoton,” based on leading 22nm manufacturing technology, that will deliver best in class energy efficiency and density for HP Moonshot System.
    What Intel is saying about HP Moonshot
    image
    What Marvell is saying about HP Moonshot
    image
    HP Moonshot System’s high density packaging coupled with integrated network capability provides the perfect platform to enable HP Pathfinder Innovation Ecosystem partners to deliver cutting edge technology to the hyper-scale market. SRC Computers is excited to bring its history of delivering paradigm shifting high-performance, low-power, reconfigurable processors to HP Project Moonshot’s vision of optimizing hardware for maximum application performance at lowest TCO.
    What SRC Computers is saying about HP Moonshot
    image
    The scalability and high performance at low power offered through HP’s Moonshot System gives customers an unmatched ability to adapt their solutions to the ever-changing and demanding market needs in the high performance computing, cloud computing and communications infrastructure markets. The strong collaboration efforts between HP and TI through the HP Pathfinder Innovation Ecosystem ensure that customers understand and get the most benefit from the processors at a system-level.
    What TI is saying about HP Moonshot

    AMD 2012-13: a new Windows 8 strategy expanded with ultra low-power APUs for the tablets and fanless clients

    AMD Strategy Transformation Brings Agile Delivery of Industry-Leading IP to the Market [AMD press release, Feb 2, 2012]

    At its annual Financial Analyst Day, AMD (NYSE: AMD) detailed a new “ambidextrous” strategy that builds on the company’s long history of x86 and graphics innovation while embracing other technologies and intellectual property to deliver differentiated products.

    AMD is adopting an SoC-centric roadmap designed to speed time-to-market, drive sustained execution, and enable the development of more tailored customer solutions. SoC design methodology is advantageous because it is a modular approach to processor design, leveraging best practice tools and microprocessor design flows with the ability to easily re-use IP and design blocks across a range of products.

    image
    “AMD’s strategy capitalizes on the convergence of technologies and devices that will define the next era of the industry,” said Rory Read, president and CEO, AMD. “The trends around consumerization, the Cloud and convergence will only grow stronger in the coming years. AMD has a unique opportunity to take advantage of this key industry inflection point.  We remain focused on continuing the work we began last year to re-position AMD.  Our new strategy will help AMD embrace the shifts occurring in the industry, marrying market needs with innovative technologies and become a consistent growth engine.”

    Roadmap Updates Focus on Customer Needs

    Additionally, AMD today announced updates to its product roadmaps for AMD Central Processing Unit (CPU) and Accelerated Processing Unit (APU) products it plans to introduce in 2012 and 2013. The roadmap modifications address key customer priorities across form factors including ultrathin notebooks, tablets, all-in-ones, desktops and servers with a clear focus on low power, emerging markets and the Cloud.

    AMD’s updated product roadmap features second generation mainstream (“Trinity”) and low-power (“Brazos 2.0”) APUs for notebooks and desktops; “Hondo,” an APU specifically designed for tablets; new CPU cores in 2012 and 2013 with “Piledriver” and its successor “Steamroller,” as well as “Jaguar,” which is the successor to AMD’s popular “Bobcat” core. In 2012, AMD plans to introduce four new AMD Opteron™ processors. For a more in-depth look at AMD’s updated product roadmap, please visit http://blogs.amd.com.

    Next-generation Architecture Standardizes and Facilitates Software Development

    AMD also provided further details on its Heterogeneous System Architecture (HSA), which enables software developers to easily program APUs by combining scalar processing on the CPU with parallel processing on the Graphics Processing Unit (GPU), all while providing high bandwidth access to memory at low power. AMD is proactively working to make HSA an open industry standard for the developer community. The company plans to hold its 2nd annual AMD Fusion Developer Summit in June 2012.
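    As a loose illustration of the programming model HSA aims at (plain Python, not HSA code; all names here are mine), the key idea is that control-heavy scalar code and a bulk data-parallel kernel operate on one shared buffer rather than copying data between separate CPU and GPU address spaces:

    ```python
    # Loose analogy only -- not HSA code. HSA's premise is that scalar CPU code
    # and a data-parallel GPU kernel share one memory space. Here an explicit
    # loop plays the "scalar CPU" role and map() over the same buffer plays the
    # "GPU kernel" role.
    data = [float(i) for i in range(8)]   # one shared buffer, no host/device copies

    def kernel(x):
        # per-element work a GPU kernel would perform in parallel
        return x * 2.0 + 1.0

    scalar_out = []
    for x in data:                        # "CPU" path: element at a time
        scalar_out.append(kernel(x))

    parallel_out = list(map(kernel, data))   # "GPU" path: same kernel applied in bulk

    assert scalar_out == parallel_out        # both paths see identical data
    ```

    The point of the analogy is only the shared-memory shape of the model; actual HSA dispatch, queues and coherence are of course far richer than this sketch.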

    New Company Structure Strengthens Execution

    In conjunction with announcing its restructuring plan in November 2011, AMD has strengthened its leadership team with the additions of Mark Papermaster as senior vice president and chief technology officer, Rajan Naik as senior vice president and chief strategy officer, and Lisa Su as senior vice president and general manager, Global Business Units. These executives will help ensure that sustainable, dependable execution becomes a hallmark of AMD.

    Supporting Resources

    AMD started talking about ‘Trinity’ and ‘Hondo’ last summer. See in Acer repositioning for the post Wintel era starting with AMD Fusion APUs [June 17, 2011]


    What AMD could definitely be proud of for 2011 is A “Brazos” Story: The Little Chip That Could (And Then Just Kept On Going) [AMD Fusion blog, Feb 1, 2012]:

    In late 2010, AMD shipped its first-ever Accelerated Processing Units (APUs), internally codenamed “Brazos,” which combined the tremendous processing power of graphics and x86 on a single chip.

    We had high expectations for the low-voltage “Brazos” APU: great computing, HD, long battery life and DirectX 11 capable graphics, all on a single chip. Yet still we were blown away by the initial industry reception. It was only a year ago we left CES with seven highly sought-after innovation and technology awards for the little product we ultimately named the C- and E-Series APUs, including:

    After CES we should have re-nicknamed “Brazos” the “Little Chip That Could.” And all throughout 2011, “Brazos” kept on chugging. We added the “Best in Show” Award at Embedded Systems Conference and the “2011 Best Choice of Computex TAIPEI Award” to the list of accolades. In the second quarter we sold more than five million C- and E-Series APUs. What a tremendous start to a new way of processing for AMD and the industry.

    But “Brazos” kept on impressing, showing up in a variety of form factors – notebooks, netbooks, small desktops and all-in-ones – from top global OEM partners.

    So it was no surprise or mistake that we ended 2011 with more than 30 million APUs shipped. It all started with little “Brazos,” which has now earned its place in history as AMD’s fastest ramping platform ever.

    John Taylor, Director of Worldwide Product Marketing at AMD

    CES 2012 Consumer Showcase Tour [amd, Jan 11, 2012]

    Leslie Sobon of AMD talks about how APUs help enhance your digital lifestyle in any room of your home.
    AMD Codename Decoder – November 9, 2010 [AMD Business blog]
    APU
    An APU is an accelerated processing unit, a new generation of processors that combine either low-power or high-performance x86 CPU cores with the latest GPU technology (such as DirectX® 11) on a single die.
    Planned for introduction: Q1 2011
    “Bobcat”
    Market: Multiple devices, including notebooks, ultrathins, HD netbooks and small form factor desktops.
    What is it? A sub-one watt capable x86 CPU core that first comes to market in the “Ontario” and “Zacate” Accelerated Processing Units (APU) for mainstream, ultrathin, value, and netbook form factors as well as small form factor desktop solutions. “Bobcat” is designed to be an extremely small, highly flexible, out-of-order execution x86 core that easily can be scaled up and combined with other IP in SoC configurations.
    Planned for introduction: Q1 2011
    “Brazos”
    Markets: Value Mainstream Notebooks, HD Netbooks and Small Form Factor Desktops
    What is it? “Brazos” is AMD’s 2011 low-power platform, available with two APUs: “Zacate” – currently planned to be marketed as the E Series – is an 18-watt TDP APU for ultrathin, mainstream and value notebooks as well as desktops and all-in-ones. “Ontario” – currently planned to be marketed as the C Series – is a 9-watt APU for netbooks and small form factor desktops and devices. Both “Brazos” platform APUs include a DirectX® 11-capable GPU.
    Planned for introduction: Q1 2011
    “Bulldozer”
    Market: Server and Client
    What is it? A multi-threaded high-performance x86 CPU core contained in the “Zambezi” processor for client PCs and the “Interlagos” and “Valencia” processors for servers. Included in the “Scorpius” enthusiast desktop PC platform and the “Maranello,” “Adelaide,” and “San Marino” server platforms, “Bulldozer” is designed to be a completely new, high-performance architecture that employs a new approach to multithreaded compute performance for achieving advanced efficiency and throughput. “Bulldozer” gives AMD an exceptional CPU option for combining with GPUs in highly scalable, single-chip APU configurations, beginning in 2012 APU designs.
    Planned for introduction: Client (1H 2011); Server (2H 2011)
    “Llano”
    Market: Notebooks and Desktops
    What is it? Part of the “Sabine” platform, “Llano” is a 32nm APU including up to four x86 cores and a DirectX® 11-capable GPU, primarily intended for performance and mainstream notebooks and mainstream desktops. “Llano” is engineered to deliver impressive visual computing experiences, outstanding performance with low power and long battery life.
    Planned for introduction: Mid-2011
    “Ontario”
    Market: Primarily ultrathin notebooks and HD netbooks
    What is it? A 9W APU featuring dual or single “Bobcat” x86 cores currently planned to be marketed as the C Series, and primarily intended to serve the low power and highly portable PC markets for netbooks and small form factor desktops and devices.
    Planned for introduction: Q1 2011
    “Zacate”
    Market: Notebook/Desktop
    What is it? “Zacate” is AMD’s 18W APU designed for the mainstream notebook and desktop market. Zacate will feature low-power “Bobcat” CPU cores and support DirectX 11 technology.
    Planned for introduction: Q1 2011

    More information about the 2011 AMD APU story in past posts on this blog:
    Acer repositioning for the post Wintel era starting with AMD Fusion APUs [June 17, 2011]
    Supply chain battles for much improved levels of price/performance competitiveness [Aug 16, 2011]
    Acer & Asus: Compensating lower PC sales by tablet PC push [March 29 – Aug 2, 2011]
    CES 2011 presence with Microsoft moving to SoC & screen level slot management that is not understood by analysts/observers at all [Jan 7, 2011]
    Changing purchasing attitudes for consumer computing are leading to a new ICT paradigm [Jan 5, 2011]


    AMD 2012 APU, code name “Trinity” [amd, Jan 11, 2012]

    From the Technology Showcase at CES, John Taylor discusses the next-generation AMD APU, code name “Trinity”, and its benefits.

    AMD started talking about ‘Trinity’ last summer. See in Acer repositioning for the post Wintel era starting with AMD Fusion APUs [June 17, 2011]

    Advanced Micro Devices’ CEO Discusses Q4 2011 Results – Earnings Call Transcript [Seeking Alpha, Jan 24, 2012]

    We are seeing particularly strong customer interest in our expanded low-power APUs for 2012. The low-power versions of our next-generation chip, the Trinity APU, deliver mainstream performance while using half the power of our traditional notebook processor. This processor fits into an ultrathin notebook design, as thin as 17 millimeters, providing industry-leading visual performance and battery life at very attractive price points. Trinity remains on track to launch by midyear.

    We achieved record quarter client revenue driven by an increase in supply of Llano APUs. And in Q4 of 2011, APUs accounted for nearly 100% of mobile microprocessors shipped and more than 60% of the total client microprocessors shipped. Microprocessor ASP increased sequentially due to an increase in mobile microprocessor ASP and an increase in server units shipped.

    Question-and-Answer Session

    There is no doubt that the customer acceptance of our APU architecture is quite strong. We’ve now shipped over 30 million of these APUs to date. And we’re seeing a strong uptake in terms of that architecture, what it means to the customer. They are looking for a better experience, and I think that’s a key reason why we’ve seen the momentum in our business and the ability to deliver on that. Our focus on execution around the APUs and around Llano is definitely paying off. And I think as we move forward, we should be able to continue to build on that momentum.

    We’ve actually increased our Llano 32-nanometer product delivery by 80% from the third quarter, and now Llano makes up almost 60% of the mobile microprocessing revenue. … We’re going to continue to build on the strong relationships that we’ve been developing with GLOBALFOUNDRIES as we move forward.

    The movement to thin and light is nothing new. Customers want mobility. And the idea of ultrathin is something that we’re very focused on. And if you think about it with our APU strategy that I mentioned, with the next-generation product, Trinity APU, we already are well ahead of the pace last year when we set a record-setting year for design wins with the Trinity product in 2012. With that product, we can deliver ultrathin in the range of 17 millimeters. And what’s really important and I think we have to all focus on is ultrathin and mobility, the ability for computing to reach customers across the planet. … And I’ll add that the improvements that we’ve made in Trinity in both our CPU and the GPU are really delivering outstanding results in performance per watt. So as well for the ultrathins being able to hit the 17-millimeter low-profile, we’re also getting a doubling of the performance per watt. So it’s an exciting application of our APU technology.

    … as you think of the industry trends around consumerization, cloud and convergence, there’s no doubt, as we’ve seen these kinds of inflection points in the industry, there’s always a significant downward pressure in terms of the price points. So if you’re dragging huge asset base along with you and there comes pressure into the market around those price points, that could put pressure into their [Intel’s] — into a business model. … We think the emerging market and the entry — and the high-growth markets around entry and mainstream will be the hottest segment, and I think that’s playing to our hand. We’re going to emphasize this strategy. We want to embrace this inflection point that’s emerging. We want to accelerate it, because shift happens when there’s these inflecting points.

    Of course, we see the investment of our competitor, but the fabless ecosystem is not sitting still. And if you look at the investments that are done at TSMC, at GLOBALFOUNDRIES and at an alliance level, then the numbers are very comparable. GLOBALFOUNDRIES and their partnership models invest about $9 billion this year. TSMC spends around $6 billion, if I recall the number correctly. So this, in terms of scale and absolute numbers, is very comparable to what Intel is putting on the table.

    … I feel pretty good about where we are in terms of the transition around 32 nm. … And I want to emphasize, we’ve made real progress, but we’re not finished with that. And we need to continue to work every day with those tiger teams we’ve put in place. We’re tracking the test vehicles through the lines to make sure that we’re getting that consistent improvement, because that will reduce our consumption of wafers and give us far more flexibility in our supply chain. So while we have improved by 80% from the third quarter, we’re not all the way there yet … there’s more yield improvement possible on that 32-nanometer line. … And those same techniques and practices that the tiger teams applied on 32-nanometer, that momentum continues on 28-nanometer. And so that positions us well going into 2012.

    … I think it’s fair to say from the improvements we have seen and the — and our foundry partners that we are not going to be supply-constrained in the first quarter. … I think the progress we have seen on Trinity has impressed us. And of course, all the learnings that have been done on 32-nanometer with the Llano product will be transferred to Trinity. So the start-off pace with Trinity is going to be significantly better from a yield perspective compared to where we were at Llano launch. So that makes us quite optimistic looking forward.

    Here are also a couple of illustrations highlighting that 2011 APU success with the details of new APU strategy additions from Lisa Su’s (Senior Vice President and General Manager, Global Business Units) presentation for the 2012 Financial Analyst Day held on February 2, 2012 (see her full presentation in PDF):

    APUs BRING LEADERSHIP GRAPHICS/COMPUTE IP TO MAINSTREAM [#10]

    image
    2011: AMD first to introduce heterogeneous computing to mainstream applications

    “Llano” APU offers nearly 3X the performance in the same power envelope over conventional CPUs (2)

    Fully leverages the growing ecosystem of GPU-accelerated apps

    Source: AMD Performance Labs
    (1) Testing performed by AMD Performance Labs. Calculated compute performance or Theoretical Maximum GFLOPS score for 2013 Kaveri (4C, 8CU) 100w APU, use standard formula of (CPU Cores x freq x 8 FLOPS) + (GPU Cores x freq x 2 FLOPS). The calculated GFLOPS for the 2013 Kaveri (4C, 8CU) 100w APU was 1050. GFLOPs scores for 2011 A-Series “Llano” was 580 and the 2013 [2012] A-Series “Trinity” was 819. Scores rounded to the nearest whole number.
    (2) Testing performed by AMD Performance Labs. Calculated compute performance or Theoretical Maximum GFLOPS score (use standard formula of CPU Cores x freq x 8 FLOPS) for conventional CPU alone in 2011 was 210 GFLOPs while the calculated GFLOPs for the 1st Gen APU using standard formula (CPU Cores x freq x 8 FLOPS) + (GPU Cores x freq x 2 FLOPS) was 580 or 2.8 times greater compute performance.

    Related new codenames (from the AMD provided At-a-Glance Codename Decoder [Feb 2, 2012]):

    “Trinity” APU (Traditional Notebooks, Ultrathin Notebooks and Desktops)

    • “Trinity” is AMD’s second generation APU and improves the power and performance of AMD’s A-Series APU lineup for mainstream and high-performance notebooks and desktops. “Trinity” will feature next-generation “Piledriver” CPU cores and new, DirectX® 11-capable, second generation AMD Radeon™ HD 7000 series graphics.
    • New for 2012, AMD will offer a BGA or pin-less format, low power “Trinity” APU specifically designed for ultrathin notebooks.
    • Planned for introduction: Mid-2012

    “Piledriver” Core Micro Architecture

    • “Piledriver” is the next evolution of AMD’s revolutionary “Bulldozer” core architecture.
    • The “Trinity” line-up of APUs will be the first introduction of “Piledriver.”

    “Kaveri” APU (Notebooks and Desktops)

    • “Kaveri” is AMD’s third generation APU for mainstream desktop and notebooks.
    • These APUs will include “Steamroller” cores, and new HSA-enabling features for easier programming of accelerated processing capabilities.
    • Planned for introduction: 2013

    “Steamroller” Core Micro Architecture

    • “Steamroller” is the evolution of AMD’s “Piledriver” core architecture.

    AMD OPTERON™ FUTURE TECHNOLOGY [#26]

    image

    Additional new codename (from the AMD provided At-a-Glance Codename Decoder):

    “Excavator” Core Micro Architecture

    • “Excavator” is the evolution of AMD’s “Steamroller” core architecture.

    APU ADOPTION: RECORD DESIGN WINS, STRONG END-USER DEMAND [#11]

    image

    Shipped > 30m APUs to date

    11 of the world’s top 12 OEMs shipping AMD APU-based platforms

    “Brazos” APUs shipped more units in its first year than any previous mobile platform in AMD history

    “Llano” APUs ramped to represent nearly 60% of mobile processor revenue by Q4 2011

    image

    Additional new codenames (from the AMD provided At-a-Glance Codename Decoder):

    “Southern Islands” Discrete Graphics

    • Internal codename for the entire family of desktop graphics ASICs based on Graphics Core Next architecture and utilizing 28nm process technology.
    • “Southern Islands” products include “Tahiti” (AMD Radeon™ HD 7900 series), “Pitcairn,” “Cape Verde” and “New Zealand.”

    “Brazos 2.0” APU (Essential Desktop and Notebook, Netbook, All-In-One and Small Desktop)

    • The “Brazos 2.0” family of APUs will follow “Brazos”, AMD’s fastest ramping platform ever.
    • In addition to increased CPU and GPU frequencies, “Brazos 2.0” will offer additional features and functionality as compared to “Brazos”.
    • Planned for introduction: H1 2012

    “Hondo” APU (Tablet)

    • “Hondo” is AMD’s sub-5W APU designed for tablets. “Hondo” will feature low-power “Bobcat” CPU cores and support DirectX® 11 technology in a BGA or pin-less format.
    • Planned for introduction: H2 2012

    AMD started talking about ‘Hondo’ (as well as ‘Trinity’) last summer. See in Acer repositioning for the post Wintel era starting with AMD Fusion APUs [June 17, 2011]

    image
    (3) Projections and testing developed by AMD Performance Labs. Projected score for 2012 AMD Mainstream Notebook Platform “Comal” on the “Pumori” reference design for PC Mark Vantage Productivity benchmark is projected to increase by up to 25% over actual scores from the 2011 AMD Mainstream Notebook Platform “Sabine”. Projections were based on AMD A8/A6/A4 35w APUs for both platforms.
    (4) Projections and testing developed by AMD Performance Labs. Projected score for the 2012 AMD Mainstream Notebook Platform “Comal” on the “Pumori” reference design for the 3D Mark Vantage Performance benchmark is projected to increase by up to 50% over actual scores from the 2011 AMD Mainstream Notebook Platform “Sabine”. Projections were based on AMD A8/A6/A4 35w APUs for both platforms.
    (5) Testing performed by AMD Performance Labs. Battery life calculations using the “Pumori” reference design are based on average power draw across multiple benchmarks and usage scenarios. For Windows Idle, calculations indicate 732 minutes (12:12 hours) as a resting metric; 421 minutes (7:01 hours) of DVD playback of a Hollywood movie, 236 minutes (3:56 hours) of Blu-ray playback of a Hollywood movie, and 205 minutes (3:25 hours) using 3D Mark ‘06 as an active metric.
    Projections for the 2012 AMD Mainstream Platform Codename “Comal” assume a configuration of “Pumori” reference board, Trinity A8 35W 4C – highest performance GPU, AMD A70M FCH, 2 x 2G DDR3 1600, 1366 x 768 eDP Panel / LED Backlight, HDD (SATA) – 250GB 5400rpm, 62Whr Battery Pack and Windows 7 Home Premium.

    image

    image

    image

    Additional new codenames (from the AMD provided At-a-Glance Codename Decoder):

    “Sea Islands” Graphics Architecture

    • New GPU Architecture and HSA Features
    • Planned for introduction: 2013

    “Kabini” APU (Essential Desktop and Notebook, Netbook, All-In-One and Small Desktop)

    • The “Kabini” APU is AMD’s second generation low-power APU and follow-on to “Brazos 2.0.”
    • In addition to new “Jaguar” cores, these APUs will be enhanced with new Heterogeneous Systems Architecture (HSA), enabling features for easier programming of accelerated processing capabilities.
    • Planned for introduction: 2013

    “Temash” APU (Tablet and Fanless Client)

    • The “Temash” APU is AMD’s second generation tablet APU and follow-on to “Hondo.”
    • In addition to new “Jaguar” cores, these APUs will be enhanced with new Heterogeneous Systems Architecture-enabling features for easier programming of accelerated processing capabilities.
    • Planned for introduction: 2013

    “Jaguar” Core Micro Architecture

    • “Jaguar” is the evolution of AMD’s “Bobcat” core architecture for low-power APUs.

    MOBILE MARKET PROJECTIONS [#29]

    AMD Direction:

    image
    Focus on true productivity and user experience in ultra-low power devices

    Leadership graphics, web applications and video processing leveraging APUs

    Agile, flexible SoC designs

    Ambidextrous solutions across ISAs and ecosystems

    Fanless, sealed designs


    These APU related strategic moves have been summarized by the same John Taylor as Strengthening our Client Roadmap [AMD Fusion blog, Feb 2, 2012]:

    Roadmaps signify our plans to customers and business partners, outlining the new products and technologies that we are bringing online. In an ideal world plans would never change. But in reality, change is a certainty in the tech industry – new form factors emerge, technologies and applications shift, and consumer tastes remake technology plans.

    Like any technology company, AMD desires to anticipate change in the industry. So we course-correct as we work with customers to ensure that we create products that address the optimal blend of timing, features and performance, cost and form factors.

    Today at our Financial Analyst Day in Sunnyvale, AMD senior staff detailed how AMD will focus its investments in R&D and marketing going forward, including roadmaps for 2012-2013. As Phil Hughes summarized, the announced roadmaps are designed to extend platform longevity, accelerate time to market and enhance performance and features. These roadmaps strengthen AMD’s ability to make the most of shifting market dynamics, all the while delivering a stand-out experience across device categories through our graphics and video IP. This blog provides some insight into our 2012 and 2013 roadmaps – the words in quotes are the codenames for the particular AMD processor offerings discussed today.

    2012 Client Roadmap

    AMD’s “Brazos 2.0” Accelerated Processor Unit (APU) family will be used for essential desktop and notebook, netbook, tablet, all-in-one and small desktop form factors. This allows us to address a fast-growing segment of the PC market where we have proven success with the original “Brazos” line-up – the C-Series, E-Series and Z-Series APUs. We will add plenty of new features to the “Brazos 2.0” APU family, including increased CPU and GPU performance, longer battery life, a bevy of integrated I/O options and improvements to AMD Steady Video technology. “Brazos 2.0” is scheduled to hit the market in the first half of 2012.

    As we demoed at CES, AMD’s “Trinity” APU for desktop and notebook remains on track for introduction in mid-2012, with plans to pack up to four “Piledriver” CPU cores and next-generation DirectX® 11-capable graphics technology, together delivering up to 50% more compute performance than our “Llano” offerings, including superior entertainment potential, longer battery-life and an even more incredibly brilliant HD visual experience.

    New for 2012, AMD will introduce a low voltage “Trinity” APU that will be ideal for the next-generation of ultrathin notebook. This “Trinity” APU matches the experience enabled by the AMD 2011 APU in up to half the TDP. As we said, “Trinity” is on track for introduction in mid-2012.

    In 2012 we will also introduce the ultra-low voltage “Hondo” APU for tablets. These low-power (power maxes out at 5W TDP) APUs will have “Bobcat” CPU cores and support DirectX 11 technology in a BGA or pin-less, thin processor package. Look for these in the second half of 2012 – more details to come later.

    On the desktop platform side of things, the “Vishera” CPU will replace the “Komodo” CPU for desktop. This change enables accelerated time to market for improved performance and next-generation CPU features while maintaining the existing AM3+ motherboards. The “Vishera” CPU ushers in many exciting updates, including 8 “Piledriver” cores, and when compared with the previous generation, provides higher frequencies, improved instruction-per-clock performance, advanced instruction sets (thus increasing application performance), additional DDR3 memory support and next-generation AMD Turbo Core Technology. We plan to launch “Vishera” in the second half of 2012.

    2013 Client Roadmap

    2013 brings major evolution to the client roadmaps as the vision presented by Rory, Mark and Lisa today begins to manifest – including moving our low power APUs to a system on a chip (SoC) design with the AMD Fusion Controller Hub integrated right into a single chip design.

    In the performance APU category our third-generation APU, “Kaveri,” will employ “Steamroller” (the evolution of AMD’s “Piledriver” core architecture) x86 cores for enhanced instructions per clock and power advantages. Applications that take advantage of GPU acceleration will give users an amazing experience thanks to our Graphics Core Next and new Heterogeneous Systems Architecture (HSA) enabling features for easier programming of accelerated processing capabilities.

    In the low power category, the “Kabini” SoC APU takes over for “Brazos 2.0.” This second generation low power APU integrates “Jaguar” x86 cores for augmented performance and energy efficiency. These APUs will also benefit from select HSA features and functionality.

    We keep on innovating for the ultra-low power space in 2013. Our second generation, ultra-low-power “Temash” SoC APU will follow “Hondo” for tablet and other fanless form factors. This APU will also leverage the “Jaguar” low-power x86 cores and HSA features.

    We at AMD strongly believe these roadmap updates help us time new product introductions with customer design phases to hit key sales cycles across a range of form factors and experiences. We are moving with the market and on the path to deliver exceptional productivity and user experience in a wide array of form factors.

    John Taylor, Director of Worldwide Product Marketing at AMD

    He also provided the following answers to questions regarding how AMD spells out Windows 8 tablet strategy [CNET, Feb 2, 2012]:

    Q: Before, we go to Windows 8, what is your smartphone strategy, if any?
    Taylor: The smartphone market is eight, nine, ten, maybe a dozen players. [They have] lower ASPs (average selling price), lower [profit] margins, different competitive dynamic. So, there is no shift on the smartphone strategy.

    And Window 8?
    Taylor: But you will see much more focus on tablets, the convertible or hybrid devices that fit between tablets and notebooks, very thin [designs].

    What chips exactly will get you there?
    Taylor: For tablets, it will decidedly be the Hondo chip. We’re acknowledging that we still have a couple of watts to shave off to really be a more ideal tablet platform (to achieve optimal power efficiency). But we think that Temash gets us much, much closer to that in 2013.

    And Windows 8 convertibles?
    Taylor: A 17-watt [power consumption] is the lowest that we’ll offer. That’s called Trinity. It will be unmatched in that [17-watt design] space. Discrete graphics-like performance. All types of dedicated video processing capabilities, better battery life than the competition. And all of these ways that we’re driving the new generation of accelerated applications. If you think about the Web apps that are being built for Win 8, using HTML5 and the graphics engine that drives that higher-level experience.

    I will add to that the following two illustrations from the AMD Product and Technology Roadmaps [AMD FAD, Feb 2, 2012]:
    image

    “Vishera” CPU (Desktop)

    • The “Vishera” desktop CPU incorporates up to eight “Piledriver” cores, advanced instruction sets and other performance enhancing additions
    • This next-generation CPU will maintain the AM3+ infrastructure.
    • Planned for introduction: H2 2012

    image


    In addition to the above-described expansion of the original APU strategy for clients, there is also a naming change: AMD Fusion System Architecture is now Heterogeneous Systems Architecture [AMD Fusion blog, Jan 18, 2012]

    Since its introduction to the public in June 2011 at the AMD Fusion11 Developer Summit, the AMD Fusion System Architecture (FSA) has received widespread support and interest from our business partners and technology industry leaders. FSA was the blueprint for AMD’s overarching design for utilizing CPU and GPU processor cores as a unified processing engine, which we are making into an open platform standard. This architecture enables many benefits, including high application performance and low power consumption.

    Our software partners are already taking advantage of the power and performance advantage of APU and GPU acceleration, with more than 200 accelerated applications shipped to date. The combination of industry standards like OpenCL and C++ AMP, alongside FSA, is ushering in the era of heterogeneous computing.

    Together with these software partners, we have built a heterogeneous compute ecosystem that is built on industry standards. As such, we believe it’s only fitting that the name of this evolving architecture and platform be representative of the entire technical community that is leading the way in this very important area of technology and programming development.

    FSA will now be known as Heterogeneous Systems Architecture or HSA. The HSA platform will continue to be rooted in industry standards and will include some of the best innovations that the technology community has to offer.

    Manju Hegde and I will be hosting a breakout session on HSA at AMD’s Financial Analyst Day on February 2nd 2012, which will be webcast live here.  More information on the latest advances in HSA design will be released at a future date.

    Also, if you haven’t already made plans to attend the AMD Fusion12 Developer Summit in June 2012 in Bellevue, Washington, I encourage you to save the date. Leaders from the technology and programming development communities will converge at the summit to discuss Heterogeneous Computing and the next-generation user experiences that are enabled by this platform.

    Phil Rogers, corporate fellow at AMD.

    From the Analyst Day breakout session presentation I will include the following illustrations here as food for thought and further interest:

    image

    image

    image

    image

    For Windows 8 related HSA, “C++ AMP” (indicated on the last illustration) is worth expanding on via Introducing C++ Accelerated Massive Parallelism (C++ AMP) [MSDN Blogs, June 15, 2011]

    A few months ago, Herb Sutter wrote about a keynote he was to deliver today at the AMD Fusion Developer Summit (happening these days). He said at the time:

    “Parallelism is not just in full bloom, but increasingly in full variety. We know that getting full computational performance out of most machines—nearly all desktops and laptops, most game consoles, and the newest smartphones—already means harnessing local parallel hardware, mainly in the form of multicore CPU processing. (…) More and more, however, getting that full performance can also mean using gradually ever-more-heterogeneous processing, from local GPGPU and Accelerated Processing Unit (APU) flavors to “often-on” remote parallel computing power in the form of elastic compute clouds. (…)”

    In that sense, S. Somasegar, Senior Vice President of the Developer Division, made the following announcement this morning:

    “I’m excited to announce that we are introducing a new technology that helps C++ developers use the GPU for parallel programming. Today at the AMD Fusion Developer Summit, we announced C++ Accelerated Massive Parallelism (C++ AMP). (…) By building on the Windows DirectX platform, our implementation of C++ AMP allows you to target hardware from all the major hardware vendors. (…)”

    C++ AMP, as Soma tells in his post, is actually an open specification. Microsoft will deliver an implementation based on its Windows DirectX platform (DirectCompute, as Daniel Moth specifies in a later post published a few minutes ago).

    Daniel added that C++ AMP will lower the barrier to entry for heterogeneous hardware programmability, bringing performance to the mainstream. Developers will get an STL-like library as part of the existing concurrency namespace (whose Parallel Patterns Library – PPL – and Concurrency Runtime – ConcRT – are also being enhanced in the next version of Visual C++; check references at the end of this post for further details), so developers won’t need to learn a different syntax or use a different compiler.
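    The STL-like flavor Daniel describes is visible in the canonical C++ AMP vector-add sample from Microsoft's introductory material. The sketch below is illustrative only: it requires Visual C++'s <amp.h> header (the DirectX-based implementation announced above) and will not build with other toolchains:

```cpp
#include <amp.h>        // C++ AMP header, shipped with Visual C++ (Visual Studio 11)
using namespace concurrency;

// Element-wise vector add, offloaded to an accelerator (GPU/APU).
void amp_add(int n, const float* a, const float* b, float* sum) {
    array_view<const float, 1> av(n, a);   // wraps host data for accelerator reads
    array_view<const float, 1> bv(n, b);
    array_view<float, 1> sv(n, sum);
    sv.discard_data();                     // no need to copy stale results over

    // restrict(amp) marks the lambda as compilable for the accelerator.
    parallel_for_each(sv.extent, [=](index<1> i) restrict(amp) {
        sv[i] = av[i] + bv[i];
    });
    sv.synchronize();                      // copy results back to host memory
}
```

    Note that there is no explicit device-memory management: array_view handles the host/accelerator copies implicitly, which is exactly the "no different syntax, no different compiler" point made above.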

    Update (6/16/2011): “Heterogeneous Parallelism at Microsoft”, the keynote where Herb Sutter and Daniel Moth introduced this technology with code and graphic demos, is available for on-demand watching.

    Update (6/17/2011): Daniel Moth’s session “Blazing-fast Code Using GPUs and More, with C++ AMP” is available as well! Besides, Dana Groff tells what’s new in Visual Studio 11 for PPL and ConcRT.

    Pedal to the metal, let’s go native at full speed!

    References:

    1. S. Somasegar’s announcement: http://blogs.msdn.com/b/somasegar/archive/2011/06/15/targeting-heterogeneity-with-c-amp-and-ppl.aspx
    2. Daniel Moth’s blog post: http://www.danielmoth.com/Blog/C-Accelerated-Massive-Parallelism.aspx
    3. Herb Sutter’s keynote at the AMD Fusion Developer Summit: http://channel9.msdn.com/Events/AMD-Fusion-Developer-Summit/AMD-Fusion-Developer-Summit-11/KEYNOTE
    4. Daniel Moth: Blazing-fast Code Using GPUs and More, with C++ AMP (session presented at AMD Fusion Developer Summit): http://channel9.msdn.com/Events/AMD-Fusion-Developer-Summit/AMD-Fusion-Developer-Summit-11/DanielMothAMP
    5. Announcing the PPL, Agents and ConcRT efforts for Visual Studio 11, by Dana Groff: http://blogs.msdn.com/b/nativeconcurrency/archive/2011/06/16/announcing-the-ppl-agents-and-concrt-efforts-for-v-next.aspx
    6. AMD Fusion Developer Summit Webcasts: http://developer.amd.com/afds/pages/webcast.aspx

    With that in mind the upcoming 2012 AMD Fusion Developer Summit will definitely bring quite important updates as promised by the last breakout session illustration:

    image
    More on that: Adobe and Cloudera among Keynotes at AMD Fusion12 Developers Summit [AMD Fusion blog, Feb 3, 2012]


    Finally, regarding the ‘ambidextrous’ strategy mentioned in the first sentence of the press release:

    1. ‘ambidextrous’ generally means ‘very skillful and versatile’, coming from ‘able to use the right and the left hand with equal skill’
    2. it is described in the press release as: adopting an SoC-centric roadmap designed to speed time-to-market, drive sustained execution, and enable the development of more tailored customer solutions. SoC design methodology is advantageous because it is a modular approach to processor design, leveraging best practice tools and microprocessor design flows with the ability to easily re-use IP and design blocks across a range of products. …

    3. and detailed in Mark Papermaster’s (Senior Vice President and Chief Technology Officer) presentation for the 2012 Financial Analyst Day held on February 2, 2012 (see his full presentation in PDF) via the following illustrations:

    image
    as the Go-to-market approach together with ODM / OEM relationships

    image
    specifically highlighting the differentiation with it for the datacenter
    image
    related to MDC [Multi-DataCenter] workloads and HSA.

    But also mentioning it in more generic terms as:
    image
    ”Flexible around ISA [Instruction Set Architecture]” and
    “Flexible around combination of AMD IP and third party IP”

    This probably generated the greatest interest, and the most questions, among the participating analysts, prompting even The Wall Street Journal to report AMD Will Incorporate Others’ Technology in Its Chips [Feb 3, 2012]:

    Advanced Micro Devices Inc., the microprocessor maker whose fortunes have long been closely tied to the same technology as bigger rival Intel Corp., is planning a more flexible future.

    The company on Thursday said it may pursue what it calls an “ambidextrous” strategy that would allow it to offer chips that include circuitry developed by other companies as well as its own. One obvious option would be low-power microprocessor technology from ARM Holdings PLC that now dominates chip markets for cellphones and tablet computers.

    AMD Chief Executive Rory Read, at a meeting with analysts here and in a subsequent interview, stopped short of saying that AMD would definitely add ARM-based technology to its chips in the future. But he noted that the company is laying the technical groundwork for modular chips that could accept blocks of circuitry developed by ARM as well as other companies.

    “We have a relationship with ARM, and we will continue to build on it,” Mr. Read said in an interview. “We will continue to evolve that relationship as the market continues to evolve.”

    Such possibilities are a sign of how the exploding market for mobile devices is causing many companies to alter their strategies. The x86 design used by AMD and Intel is the foundation of virtually all personal and most server computers.

    But the two companies have struggled to make headway in the mobile-device market, in large part because of the lower power consumption of ARM-based designs. Meanwhile, ARM licensees—which include Qualcomm Inc., Texas Instruments Inc. and Nvidia Corp.—are adding to the pressures by edging toward the PC market, as Microsoft Corp. finishes development of a new operating system that supports ARM and x86 chips.

    AMD’s management team, in a meeting with analysts here, took pains to dispute the notion that AMD may become marginalized as ARM-powered competitors enter the PC market. Rather, they argued, AMD’s strength in graphics and microprocessors—and a strategy of customizing chips for large customers—will expand AMD’s opportunities.

    Indeed, Mr. Read argued, it is Intel’s outsize influence on the tech industry that will tend to decline. “We will see the breakdown of proprietary control points,” Mr. Read said.

    Though Mr. Read didn’t commit to embracing ARM’s designs, others who heard his presentation said the direction is clear. “AMD was very deliberate today about their goal to integrate more third-party intellectual property,” said Patrick Moorhead, a former AMD vice president and now principal analyst at Moor Insights & Strategy. “Nothing they communicated excluded the potential for ARM.”

    AMD’s remarks also underscore an industry shift—driven largely by the mobile market—away from separate chips and toward multi-function products that the industry calls SoCs, for systems on a chip, which save space and power in mobile devices and other hardware.

    Intel and AMD have begun offering SoCs for laptop computers. But AMD discussed extensive plans to create more such products at a faster rate, using a flexible design scheme that can accommodate technology submitted by other companies.

    Mr. Read, who previously served as a senior executive at PC maker Lenovo Group Ltd., has recruited others who also worked at IBM and have experience with chip technologies other than x86.

    One is Mark Papermaster, AMD’s senior vice president and chief technology officer, who worked at Apple Inc. and Cisco Systems Inc. after leaving IBM in 2008. Another is Lisa Su, a senior vice president and general manager of AMD’s global business units, who most recently worked at Freescale Semiconductor Holdings Ltd., an ARM user.

    Ms. Su gave an updated road map for a series of future chips, including products that AMD expects to be used in tablets that are powered by Microsoft’s forthcoming Windows 8 operating system. But Mr. Read said AMD would likely stay away from trying to sell chips for smartphones soon, characterizing the market as too crowded with competitors.

    Kindle Fire with its $200 price pushing everybody up, down or out of the Android tablet market

    Suggested preliminary reading: $199 Kindle Fire: Android 2.3 with specific UI layer and cloud services [Sept 29 – Nov 13, 2011]

    Update (when neither up nor down the market is an option for the company):
    Acer Likely to Withdraw From Tablet PC Market [Dec 28, 2011]

    Routed by Apple Inc. in the tablet PC competition, Taiwan-based Acer Inc., one of the world’s top five PC suppliers by market share, intends to disband its touch business group in January 2012, indicating its withdrawal from the competitive landscape, following in the footsteps of HP and Research In Motion.

    Headed by Acer’s corporate president Jim Wong, the touch business group was set up in April 2011 to develop and promote tablet PCs and smartphones, and was then regarded as the company’s most promising business unit.

    However, that early promise has not translated into the expected success: after struggling with sluggish tablet PC sales in recent months, the company has decided to dissolve the unit starting in January 2012. Of the touch business unit’s more than 300 workers, 150, mostly R&D engineers, will be transferred to other business divisions, and only 100 will be retained, with the remainder likely to be laid off, according to industry insiders.

    Although the disbandment has yet to be publicized, Acer directors have confirmed that the company has recently merged its Android tablet business, which originally belonged to the touch business group, into its global logistics center management, saying that the once-promising division now exists in name only.

    With the touch division to be streamlined, market observers believe that Acer, which just halved its tablet PC sales projection to only 2.5-3 million units from the 5 million units optimistically set right after the division was established, is likely to leave the challenging market that Apple has dominated with its iPad.

    Although global PC makers have eagerly ventured into the tablet PC business in the wake of the iPad’s success over the past year, many of them have proven unable to match Apple, with HP and RIM already out of the market. Taiwanese contract manufacturers, such as Quanta Computer Inc. and Inventec Corp., have also been hurt by customers’ withdrawal from the segment and have been forced to cut staff as a result.

    The Kindle Fire Is On Fire: Amazon Expected To Ship 3.9 Million This Quarter [Seeking Alpha, Dec 2, 2011]

    The Kindle Fire looks like a bona fide hit right out of the gate. New estimates from IHS iSuppli have Amazon.com (AMZN) shipping 3.9 million Kindle Fires this quarter, which would make it the No. 2 tablet after the iPad 2 (with an estimated 18.6 million shipments). The Kindle Fire will become the No. 1 Android tablet by a wide margin (the Samsung (SSNLF.PK) Galaxy Tab is the next biggest, with an estimated 1.4 million shipments).

    To put this 3.9 million number in context, just remember that in the very first quarter Apple sold the iPad, the September quarter of 2010, it sold 3.3 million units. So the Kindle Fire sold more in its first quarter than the iPad did in its first quarter on the market. Of course, Apple sold 7.3 million iPads in its second quarter on the market, the 2010 holiday quarter.
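A quick back-of-the-envelope comparison of the shipment figures quoted above (a Python sketch; all numbers come from the article, none are my own estimates):

```python
# First-quarter shipment comparison, in millions of units, using the
# estimates quoted above (IHS iSuppli for the Kindle Fire; Apple's
# reported iPad figures).
shipments_millions = {
    "Kindle Fire, first quarter (Q4 2011, estimate)": 3.9,
    "iPad, first quarter (Sept. quarter 2010)": 3.3,
    "iPad, second quarter (holiday 2010)": 7.3,
}

fire_q1 = shipments_millions["Kindle Fire, first quarter (Q4 2011, estimate)"]
ipad_q1 = shipments_millions["iPad, first quarter (Sept. quarter 2010)"]

# The Fire's estimated debut quarter beats the iPad's actual debut
# quarter by roughly 18%.
lead_pct = (fire_q1 - ipad_q1) / ipad_q1 * 100
print(f"Kindle Fire first-quarter lead over iPad: {lead_pct:.0f}%")
```

The caveat, as the article itself notes, is that the two debut quarters fall in different seasons: the Fire launched straight into the holiday quarter, the iPad did not.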

    Quanta shipments of Kindle Fire reach 3-4 million units [Dec 2, 2011]

    Shipments of 7-inch Kindle Fire tablet PCs from Quanta Computer to Amazon have reached 3-4 million units, according to industry watchers. However, Quanta declined to comment.

    The sources said Amazon has continued to increase its orders for Kindle Fire and aims to see total OEM Kindle Fire shipments reach five million units by the end of December or early January.

    Wintek, a major supplier of touch panels for Kindle Fire, has recently raised its internal forecast of shipments to Amazon. Industry sources have estimated that Wintek will ship about 3-3.5 million touch panels for Kindle Fire before January.

    However, some makers in the supply chain have been building up inventory of needed parts and components steadily, and OEM Quanta has also kept its shipments regular, to avoid overstocking inventory in case order visibility reverses, the sources pointed out.

    The out-of-the-market case #1: White-box players in China quitting tablet PC market [Dec 8, 2011]

    As non-Apple tablet PC players drop their prices to compete against the Kindle Fire, white-box players in China are starting to quit the tablet PC market, left waiting for the next innovative device to appear.

    Since China-based Lenovo is offering its tablet PCs at a price of CNY1,000 (US$158), several large white-box players have quickly dropped their tablet PC prices to clear inventory, while several white-box players that offer tablet PCs below CNY800 are even preparing to sell their devices at cost and then quit the market.

    With the launch of Android 4.0 and Nvidia’s Tegra 3, first-tier brand vendors have been dropping their tablet PC prices to compete for market share, especially Lenovo, which has recently dropped its 7-inch 16GB LePad A1 from the original CNY2,500 (US$393) to less than CNY1,400 (US$220), while its entry-level 2GB model is offered at CNY1,000 (US$157), cheaper than most of the large white-box players’ models.

    Since Lenovo is stronger in the retail channel, offers warranties, and its products have solid basic quality, these advantages all put strong pressure on the white-box players.

    Some China-based ODMs pointed out that their orders from white-box players have dropped sharply, by about 30-50%, with several clients clearing their inventory through price cuts; however, since they still cannot outmatch the first-tier players, some have already decided to temporarily quit the tablet PC market.

    As the situation may become worse, the ODMs expect that more than 70% of the existing white-box players could quit the market by the first quarter of 2012.

    Note: White-box is a term often used to describe computer makers who are not the well-known name brands, but rather B- or C-tier players.
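As a side note, the CNY prices and US$ equivalents quoted in the article above are mutually consistent; a minimal sketch, assuming the late-2011 exchange rate of roughly CNY6.36 per US dollar (the article itself does not state the rate it used):

```python
# Assumed late-2011 exchange rate; the article's bracketed US$ figures
# are reproduced by this rate, but the rate itself is my assumption.
CNY_PER_USD = 6.36

def cny_to_usd(cny):
    """Convert a CNY price to whole US dollars at the assumed rate."""
    return round(cny / CNY_PER_USD)

# Lenovo price points quoted in the article:
print(cny_to_usd(2500))  # original 7-inch 16GB price, ~US$393
print(cny_to_usd(1400))  # discounted price, ~US$220
print(cny_to_usd(1000))  # entry-level 2GB model, ~US$157
```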

    The down-the-market case #1: Players drop tablet PC prices to compete against Kindle Fire [Nov 24, 2011]

    Several tablet PC players including RIM, High Tech Computer (HTC), Lenovo, and ViewSonic, have dropped their 7-inch tablet PC prices to compete against Amazon’s Kindle Fire, priced at US$199, according to sources from channel retailers.

    The sources pointed out that RIM has recently cooperated with Best Buy to offer its 7-inch 16GB PlayBook at a price of US$199, down from US$499 originally. Meanwhile, the price of HTC’s 7-inch Android 2.3-based Flyer tablet PC has dropped to US$299, Lenovo’s 7-inch A1 tablet PC to US$199, and ViewSonic’s 7-inch Viewbook 730 to US$169.

    Meanwhile, several China-based white-box players are also offering their 7-inch tablet at prices as low as US$75.

    In addition to the 16GB model, RIM also dropped its 32GB model from US$599 to US$299.

    Since part of the reason consumers buy the Kindle Fire is its strong content support, other brand vendors may not achieve the same sales results as Amazon even if they try to attract consumers by lowering their prices.

    The sources also revealed that several vendors are already in talks with upstream suppliers hoping to develop a tablet PC that costs less than US$199, but since there is not yet a suitable solution to accomplish such a goal, most of the brand vendors are halting their 7-inch tablet PC projects.

    The out-of-the-market case #2: Dell kills off its last Android tablet in the US [Dec 6, 2011]

    Dell has taken its 7-inch Streak Android tablet out of commission, according to its website. While some retail sites still have stock, the company no longer offers the Streak for sale from its own website and will no longer produce it. The Dell Android tablet species is officially extinct in the US.

    The fadeout of the 7-inch Streak follows the disappearance of the 5-inch Streak in August, when it failed to corner (read: create) the 5-inch tablet market. The 7-inch Streak went on sale in January and was priced at $200 with a T-Mobile contract, but has failed to generate any significant interest in the last year. The only Dell tablet still in production is the 10-inch Streak, sold in China.

    From here, Dell will move on to making Windows 8 tablets when the operating system launches next year. Speaking at the Dell World 2011 conference, Michael Dell, the company’s CEO, said that “the Android market has not developed the expectations [Dell] would have had.”

    Lenovo Reaffirms Android Commitment In Wake Of Dell Streak 7 Demise [Dec 7, 2011]

    Lenovo is reaffirming its commitment to its Android-based tablets – at least for now – in the wake of the demise of Dell (NSDQ:DELL)’s Streak 7 Android tablet. Dell nixed the 7-inch tablet on Tuesday, posting a note on the Streak 7’s landing page saying that the product, unfortunately, is “no longer available for sale.”

    Dell declined to comment on exactly why it discontinued the tablet, which was its last Android-based device on the U.S. market.

    Many reports, however, are suggesting that Dell pulled the reins on the Streak 7 to start transitioning from Android-based tablets to Windows 8-based tablets, upon the new OS’ release next year. Dell declined to confirm the move, but other PC makers, such as Lenovo, have expressed their commitment to Google’s OS – even if just for now.

    “Our tablet strategy today is an Android operating system,” said Chris Frey, vice president of North America Commercial Channels at Lenovo, in an interview with CRN. “As operating systems evolve next year and new operating systems become available, we’ll make decisions on the hardware and the operating system that will go on that hardware as we get closer. Right now [Android] is the operating system we have and are driving in the market.”

    Lenovo’s ThinkPad Tablet: An Android Business Slate [Review] [Dec 7, 2011]

    Conclusion

    Lenovo designed the ThinkPad Tablet with business users in mind. The optional pen accessory and the preloaded software are options business users may appreciate. During our tests, we felt the ThinkPad tablet was great for taking notes, surfing the web, checking email, and many other daily tasks that are typical of a business user.

    Battery life with the ThinkPad Tablet is a bit of a mixed bag. Although the tablet is rated at up to five days of use, this longevity is dependent upon the user putting the tablet into suspend mode each time he or she is finished using the tablet. Even then, battery life is sure to vary greatly depending on how much you use the tablet. We would expect that many users may place the tablet on their desk to take a phone call or deal with another interruption and forget to press the power button. In doing so, you’ll suffer a considerable hit in terms of battery life.

    In terms of connectivity, the ThinkPad Tablet has a lot going for it. Not only does the ThinkPad Tablet have a full-size USB port, but it also offers a card reader, microUSB port, mini HDMI port, a ThinkPad Tablet dock connector, and headphone jack. Most tablets on the market today offer considerably fewer ports, so this is an area where the ThinkPad Tablet really shines.

    IT departments will also appreciate the encryption and remote wipe capabilities of the ThinkPad Tablet. The optional pen accessory is definitely a nice add-on that gives the tablet some additional functionality, and we found ourselves using it often during our evaluation process. The biggest drawback to this tablet is its battery management. Assuming you’re religious about pressing the power button each time you’re finished using the tablet, it won’t be a problem. If you’re like us and tend to forget, however, you’ll want to keep a charging cord nearby at all times. Regardless, we feel the ThinkPad Tablet is a great tablet for business users who want some of the added capabilities and software that Lenovo includes. It’s a full-featured device that offers a tablet experience not found in many others on the market right now.

    Hot

    • NVIDIA Tegra 2 dual-core 1GHz ARM SoC w/ NVIDIA graphics
    • 1GB of RAM, 16 – 64GB Storage
    • Lots of ports: mini HDMI, USB 2.0, micro USB, dock connector
    • Full size media card reader

    Not

    • Relatively short battery life in idle mode
    • Pen is not included (costs $30)

    [Price: 16GB: $499, 32GB: $569, 64GB: $669]

    Apple iPad Sales Slowing as Amazon Lights Kindle Fire [Dec 7, 2011]

    Since launching in 2010, Apple’s iPad has been the global leader in tablets. But since Amazon’s first tablet, the all-new low-priced Kindle Fire, came out in November, Apple’s dominance may be sagging. In a new analyst note, Shaw Wu of the brokerage firm Sterne Agee sees iPad sales as a “little light” in the current quarter.

    Wu assigns the blame for light iPad sales to stiff competition, namely from Amazon’s Kindle Fire, priced at $199 while the starting price for the Apple iPad is $499. He also notes that some Apple customers are buying the MacBook Air instead of an iPad, but in lowering his estimate for iPad sales in the quarter to 13.5 million units from 15 million units, it’s clear the Kindle Fire is the leading culprit.

    [from: Apple’s iPad sales look light amid Kindle Fire, MacBook Air popularity [Dec 7, 2011]

    Wu wrote in a research note:

    In the Mac business, we are seeing particular strength in the MacBook Air, arguably the best ultra-mobile PC on the market. Last but not least, iPads appear a little light of expectations due in part to competition from Amazon’s Kindle Fire but also as some users opt for a more full-featured MacBook Air.]

    IHS iSuppli estimates Amazon will sell nearly four million Kindle Fire tablets by the end of the year — not bad for a product that didn’t ship until mid-November. Reviewers note that the Kindle Fire isn’t the Apple iPad — it is short on apps and isn’t known for content-creation abilities. Yet, at a low price, it seems to deliver what most tablet buyers want — a handy device good for watching videos, Web browsing and reading content on the go.

    It’s not like Apple’s iPad dominance is going away, either. If the company sells 13.5 million tablets in the quarter as Wu estimates, the Cupertino, Calif.-based company still has a global leader on its hands. But the Kindle Fire has shown out of the gate that a device can ably compete with the iPad after others like the HP TouchPad and the BlackBerry PlayBook failed.

    Wu isn’t the only analyst who thinks the Amazon Kindle Fire is dipping into Apple iPad dominance, either. Another new report from Michael Walkley of Canaccord Genuity sees the same trend.

    “With our expectations for a new iPad launch during the March quarter leading to potentially lower inventory levels combined with increased competition from the $200 Kindle Fire,” Walkley said in a note, “we have slightly lowered our December quarter iPad estimates from 14M to 13M units.”

    But it’s interesting to note that some analysts don’t think Apple is overly concerned with the low-priced Kindle.

    “If anything, we believe that Apple is not too concerned about the low-priced entrants,” wrote Mark Moskowitz, an analyst with J.P. Morgan, in a Dec. 2 research note. “Recall, it has been our view that low-priced, reduced feature-set entrants, such as the Kindle Fire, are soap box derby devices stuck between a tablet and an e-reader.”

    iPad feeling some heat from Amazon’s Kindle Fire [Dec 1, 2011]

    Apple’s iPad seems to have run into the one Android tablet that could knock it down a peg or two.

    After hitting retailers on November 15 at $199, Amazon’s Kindle Fire tablet is already outselling the iPad at Best Buy. Sorting tablets by top sellers on the Best Buy Web site shows the Fire in first place, followed by the 16GB Wi-Fi-only iPad 2 at $499 in second. A range of other iPad flavors from different carriers are scattered throughout the top 40 tablets.

    Amazon itself shows the Kindle Fire as the top-selling tablet on its site, with the 16GB iPad further down the list. But that seems a less accurate gauge of popularity, since Fire buyers may be more likely to pick up the tablet directly from Amazon.

    Even before the Fire launched a little more than two weeks ago, the tablet was proving to be a big seller, racking up a huge number of preorders. Pegging the Fire as one of the hottest consumer devices among holiday buyers, research firm DisplaySearch recently increased its shipment projections for the current quarter.

    DisplaySearch analyst Richard Shim now expects Amazon to ship up to 6 million Fire tablets this season, up from 4 million previously.

    Another analyst also sees the Fire giving the iPad some competition, but to a lesser degree.

    In an investor note out today, J.P. Morgan analyst Mark Moskowitz said he’d trimmed his fourth-quarter sales estimates for Apple’s tablet to 13 million from 13.3 million previously. Moskowitz attributed the lower forecast mostly to more limited growth in production but also pointed to the Fire.

    “To a lesser extent, the Amazon Kindle Fire’s better-than-expected momentum with more price sensitive consumers is a factor, too,” the analyst wrote.

    Of course, Apple is certainly in no danger of losing its current dominance in the tablet market. Moskowitz believes that over time the iPad will actually gain more traction in the business and educational markets. And despite the hot holiday demand for the Fire, the analyst doesn’t see Amazon’s current version of its tablet as a strong enough competitor over the long haul.

    “We think that for any vendor to wrestle momentum longer-term from Apple, a fully loaded offering is a must, and here, the current revision of the Kindle Fire falls short,” Moskowitz wrote. “We think that, over time, consumers may come away disappointed with the Kindle Fire’s lack of functionality and smaller screen size. In our view, the Kindle Fire is the current Netbook of the media tablet market. The bigger question is whether the Fire evolves into a bona fide tablet in its next-generation release.”
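The analyst revisions quoted in the articles above can be lined up as simple percentage cuts (a Python sketch using only the figures reported above):

```python
# Analyst cuts to December-2011-quarter iPad estimates, in millions of
# units, as quoted above: (prior estimate, revised estimate).
revisions = {
    "Shaw Wu (Sterne Agee)":        (15.0, 13.5),
    "Michael Walkley (Canaccord)":  (14.0, 13.0),
    "Mark Moskowitz (J.P. Morgan)": (13.3, 13.0),
}

# Express each revision as a percentage cut from the prior estimate.
cuts = {}
for analyst, (old, new) in revisions.items():
    cuts[analyst] = (old - new) / old * 100

for analyst, pct in cuts.items():
    print(f"{analyst}: -{pct:.1f}%")
```

The spread is telling: Wu's 10% cut leans heavily on the Kindle Fire, while Moskowitz's roughly 2% trim matches his view that the Fire is only a minor factor.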

    As a consequence of the above two articles one observer dares to note that:
    Not even Apple understands the tablet market [Dec 7, 2011]

    Just last quarter, iPhone sales took a big dip. Apple (AAPL) was fine as iPads saved the day. This quarter could turn out to be the complete opposite.

    If Sterne Agee analyst Shaw Wu is right, iPad sales will be lower than expected because of the popularity of both Amazon’s (AMZN) Kindle Fire tablet and Apple’s own MacBook Air, as ZDNet’s Larry Dignan notes. It’s a competition sandwich that underscores how little anyone in the tablet market, including Apple, still understands the dynamics and what people ultimately want to do with the devices.

    Initial trials are over

    Not that the iPad — or other tablets — will whimper and crawl to a corner. Far from it. But given the products Wu thinks are drawing attention, the Kindle Fire and the MacBook Air, you have to question whether anyone yet knows what consumers want from tablets, particularly as we’ve yet to see any solid numbers for Kindle sales (and are unlikely to).

    The presumption is that Kindle Fire snags the price-sensitive and Amazon fans. The MacBook Air switch is by people who need a lot more than what the iPad can deliver. That throws open a lot of assumptions. What percentage of buyers expected a tablet to be a media access device only? How many realized that they needed more than an on-screen keyboard? What price points will maximize sales?

    For most of the Android tablet vendors, the answer to “What do consumers want?” has been, “Something other than what you sell.” Maybe Apple has all the answers, but even that seems pretty unlikely. Last quarter, unit sales were up. This month, maybe down. Steve Jobs was certain that a 7-inch tablet couldn’t see any success, but Amazon seems to be disproving that.

    It’s time for everyone to take a step back and reconsider the basic questions. Maybe talk to a lot of customers, do some usability studies, and follow individuals around (with their permission) to better understand how they use the devices. Only some determined research is going to get beyond the seat-of-the-pants navigation that the tech industry seems to heartily embrace so often.

    Evercore: Amazon will own 50% of Android tablet market in ’12 [Dec 5, 2011]

    The Kindle Fire may “vaporize” the market for every for-profit tablet maker except Apple

    In a note to clients Monday about Apple (AAPL), Evercore Partners’ Robert Cihra summarizes the impact of Amazon’s (AMZN) Kindle Fire on the tablet market in stark terms:

    While Amazon’s Kindle Fire has come out of the gates strong, as expected, we see Apple maintaining its competitive lead, if anything accentuated by what now looks like the only tablet to so far mount any credible iPad challenge apparently needing to do so by selling at cost; not to mention Amazon’s success may just vaporize other “for profit” Android tablet OEM roadmaps (e.g., we est Amazon 50% of all Android tablets in CY12). Meanwhile Apple goes on as the only vendor able to cream off the most profitable segment of each market it targets, whether tablet, smartphone or PC. (emphasis ours)

    The up-the-market case #1: Asustek sets shipment goal for 2012 [Dec 6]

    Asustek Computer, at its global sales meeting on December 5, set the 2012 shipment goals for its four major product lines: notebooks and netbooks together are to surpass 22 million units (the company internally expects 23.8 million), while tablet PCs are to reach at least three million units (internally, six million), surpassing Samsung Electronics.
    image

    For tablet PCs, Asustek expects its shipments to reach about 1.8 million units in 2011.

    As for the recent report that Asustek was not invited into the Windows on ARM (WOA) development project, Asustek noted that it has the strongest R&D capabilities among notebook vendors and is Nvidia’s largest client; therefore, the company will continue its tight partnership with ARM-based processor makers on development of the WOA platform.

    See also: NVIDIA Tegra 3 and ASUS Eee Pad Transformer Prime [Nov 10 – Dec 2, 2011]
    for all related information + Asus Eee Pad Transformer Prime: The Rolls-Royce of Android tablets [Dec 2, 2011] as one of the first reviews

    Note: Wistron Enters Asustek’s Tablet PC Supply Chain [Dec 8, 2011]

    Aiming to become the largest brand for Android- and Windows 8-enabled tablet PCs, Asustek has set a goal of six million tablet PCs in 2012, three times this year’s 1.8 million units.

    Asustek Unveils Transformer Prime Amid Aggressive Goal for Tablet Market [Dec 5, 2011]

    Asustek Chief Executive Officer (CEO) Jerry Shen … vowed that his company will become one of the top tablet brands, next only to Apple (iPad) and Amazon (Kindle Fire). His pledge is considered by some industry executives as a challenge against Samsung, which is now the most popular brand name supplier of tablets only trailing Apple and Amazon.

    Demo: Ice Cream Sandwich on Asus Transformer Prime [nvidia, Nov 17, 2011]

    A quick demo showing Ice Cream Sandwich (Android 4.0 OS) running on an Asus Eee Pad Transformer Prime.

    The up-the-market case #2: Acer, Lenovo to launch quad-core tablet PCs [Nov 29, 2011]

    Acer and Lenovo are set to launch quad-core tablet PCs featuring Google’s Android 4.0 (Ice Cream Sandwich) and Nvidia’s Tegra 3 in the first quarter to compete against Asustek Computer, which has already launched its latest Eee Pad Transformer Prime with Tegra 3, and Samsung Electronics, according to sources from notebook players.

    The sources pointed out that competition among the quad-core tablet PCs will be difficult, as these devices will only see improvements in performance and design while still following the same concept as their dual-core predecessors.

    Therefore, these players may need to battle it out before being able to enter competition against players such as Amazon or Apple, the sources noted.

    The sources noted that although these players’ performance in the dual-core tablet PC competition was not as good as expected, they will continue to advance and launch new quad-core devices to defend their brands.

    The new quad-core tablet PCs from Acer and Lenovo are expected to be priced between US$459-599.

    Since non-Apple players’ machines have no advantage against Amazon’s or Apple’s tablet PC devices, the sources believe the non-Apple players will together account for only 10-15% of the total tablet PC market.

    The real up-the-market case: Amazing Screen Technology: Samsung Flexible AMOLED [Dec 4, 2011]

    This is CF [Concept Formation?] of Samsung Mobile Display & AMOLED. It’s amazing and wonderful technology!!!

    Some time earlier this year there were concept drawings of a Samsung phone with a flexible OLED display. This was a rather intriguing concept that we didn’t think would be happening anytime soon, but we were then proved wrong as Samsung stepped forward and said that flexible display smartphones were in the works and would be introduced some time in 2012.

    Now Samsung’s Mobile Display Division has released a new concept video of what a transparent and flexible tablet of the future could look like and what it could accomplish. We’re guessing that Samsung’s flexible smartphone for 2012 won’t be anything like the concept video, but we definitely like where Samsung’s ideas are headed.

    It showcases a tablet that can be shrunk and expanded according to our needs, augmented reality translation, and what appears to be 3D imagery as well that seems to literally leap off your screen.

    From: Samsung shows off flexible display concept tablet in video [Dec 5, 2011]

    In its quarterly earnings call, Samsung’s vice president of investor relations, Robert Yi, told investors, analysts and press, “The flexible display we are looking to introduce sometime in 2012, hopefully the earlier part. The application probably will start from the handset side.”

    After flexible-screen mobile phones roll out, the company plans to introduce the same technology for tablets and other devices.

    In January 2011, Samsung purchased Liquavista, a strategic acquisition that will allow it to produce the kinds of displays that were announced today. Liquavista made electrowetting display technology, which is used to create mobile and other consumer electronic displays that are bright, low-power, flexible and transparent.

    Flexible screen technology was also a focus of Samsung’s in March, when Yongsuk Choi, director of Samsung Mobile Display, gave an overview of the company’s future mobile device plans. At that time, Choi said most of the flexible-display technology Samsung was working on was still in very early stages.

    From:  Samsung’s new phones will have flexible screens [Oct 28, 2011]

    See also on Samsung Mobile Display site:
    Future Display Used : Flexible Display – Foldable Display – Dual Display – 3D Display – Paper Thin Display: “Flexible Display: AMOLED products that are still fully functional when they are folded or rolled can be expanded and applied to full-color and mobile market as digital signage and e-book markets and technologies are developed.” …
    SMD History: … “Nov 2010: Developed WVGA [Wide VGA 800×480 resolution] Flexible AMOLED for the first time in the world” … “May 2009: Developed the world’s biggest 6.5” of Flexible AMOLED” …

    HP, Dell, Acer to expand R&D investments [Nov 24, 2011]

    Seeing that the PC industry is going through a slowdown, PC players Hewlett-Packard (HP), Dell and Acer have all expanded their investments in R&D. As the PC industry enters a multi-platform environment in 2012, each vendor's R&D, branding and marketing abilities will become important drivers of its future competitiveness, according to sources from PC players.

    HP is set to increase its investment in R&D and to strengthen the related resources. The company has also changed its reporting structure so that Prith Banerjee, senior vice president of research and director of HP Labs, reports directly to CEO Meg Whitman.

    Meanwhile, Dell is set to expand its R&D funding to US$1 billion a year, up 51.28% from the US$661 million reported a year ago. Dell also noted that it will continue to acquire companies and will need more funding to integrate the acquired firms.

    Furthermore, Acer's first R&D center is expected to grow from 600 engineers in the middle of the year to 1,000 by year-end, with executives of brand vendors and ODMs all major headhunting targets.

    An Acer executive also pointed out that the PC industry is experiencing a significant change, transitioning from Wintel dominance to competition between several different platforms. Therefore, the ability to develop devices based on Google's Android system or on ARM will become important.

    AMD helping Android fans port to x86 [Dec 6, 2011]

    A team of developers working privately to port the next version of Android to the x86 platform has been receiving a lot of support from AMD, but less from other key players.

    The project is seeking to port the Ice Cream Sandwich (ICS) android-4.0.1_r1 release build to the x86 platform, and Chih-Wei Huang, one of the enthusiasts involved, told The Register that AMD had not only donated two tablets to the cause, but also has a couple of engineers helping out. As a result, the porting to AMD’s Brazos platform is now largely complete and the source code has been made available.

    The first port of Android to the x86 platform was actually done by Google engineers, but Huang explained that the Google team has not continued the project since Android version 1.5, aka Cupcake. While the developers submit patches to Google, they seldom hear back, although some Google engineers are helping out privately with the project. Intel, too, hasn't been keen.

    “Generally speaking, Google didn’t care for the x86, at least before ICS,” he told The Register in an email conversation. “Intel doesn’t care, either. They don’t want to help us. I’ve tried to contact Intel in different ways, but the replies were negative.”

    Intel's position has caused the team considerable problems, not least in getting Android to work with the video chipsets, and particularly with the hardware acceleration added to Chipzilla's kit. Work is continuing, but since this is a voluntary project by people with day jobs, Android users may have to wait a while before they can plaster an Intel Inside sticker on their systems.

    Chih-Wei Huang, an open source advocate based in Taiwan, started the project with a former colleague in June 2009, and it has morphed to the point where the scheme has 2,600 subscribers to the project forum. He said that while he tried to keep the porting process up to date, it was a lot of work and some people weren’t sharing data.

    “Now ICS is more mature for x86 tablet or netbook, so there are more practical reasons to do that,” he said. “Actually, I know some vendors like Bluestack, Viewsonic, and Insyde have already shipped Android-x86.org based products. However, they never contribute back. That usually makes us feel bad and angry.”

    Supplementary information: Android: A visual history [Dec 7, 2011]

    Supply chain battles for much improved levels of price/performance competitiveness

    Current snapshot:

    Intel rejects 50% Ultrabook CPU price cut demand from notebook players [Aug 16, 2011]

    Intel's Oak Trail platform for tablet PCs, which pairs the Atom Z670 CPU (US$75) with the SM35 chipset (US$20), is priced at US$95, already accounting for about 40% of the total cost of a tablet PC. Even with a 70-80% discount, the platform would still be far less attractive than Nvidia's Tegra 2 at around US$20. Although players such as Asustek Computer and Acer have launched models with the platform for the enterprise market, their machines' high prices still significantly limit sales, the sources noted.

    As for Ultrabook CPUs, Intel is only willing to provide marketing subsidies and a 20% discount to first-tier players, reducing the Core i7-2677 to US$317, the Core i7-2637 to US$289 and the Core i5-2557 to US$250.

    As for Intel's insistence, the sources believe Intel is concerned that once it agrees to reduce prices, it may have difficulty maintaining gross margins in the 60% range, and that even after the crisis passes it may have difficulty restoring its pricing. Even though Intel can sustain a high gross margin through its server platform, expecting it to drop CPU prices may be unrealistic, the sources added.
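    The cost figures above invite a quick back-of-envelope check. The sketch below (the US$95 Oak Trail price and the ~40% BOM share are from the article; the implied totals are derived here, not reported anywhere) shows why even the 50% cut the notebook players demanded would have left Intel far above Tegra 2:

```python
# Back-of-envelope check of the platform-cost figures quoted above.
# Reported inputs: Atom Z670 at US$75, SM35 chipset at US$20, and the
# claim that the pair makes up about 40% of a tablet's total cost.

oak_trail = 75 + 20          # Oak Trail platform price in US$
bom_share = 0.40             # reported share of total tablet cost

implied_bom = oak_trail / bom_share   # total tablet cost implied by the 40% figure
halved = oak_trail * 0.50             # price after the demanded 50% cut

print(f"Oak Trail platform:       US${oak_trail}")
print(f"Implied total tablet BOM: US${implied_bom:.2f}")
print(f"After a 50% cut:          US${halved:.2f} vs ~US$20 for Tegra 2")
```

    In other words, the 40% share implies a tablet BOM of roughly US$237, and even a halved Oak Trail would still cost more than twice Nvidia's part.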

    Update: Asustek seems to be maneuvering by far the best among them (a special early ultrabook engagement with Intel, with a possible higher discount, in addition to best exploiting the Tegra 2 opportunity via the Eee Pad Transformer, the only successful such tablet so far):
    Asustek expects better business performance in 2H11 [Aug 17, 2011]

    Asustek Computer expects its performance in the second half of 2011 to be better than that of fellow Taiwan-based companies, according to CFO David Chang.

    Asustek is likely to hit record quarterly revenues in the third quarter and is optimistic about business in the fourth, mainly due to the launch of second-generation Eee Pad Transformer tablets and ultrabook notebooks, Chang said.

    Asustek aims at a 14% market share for notebooks in China, and became the largest vendor in Eastern Europe's notebook market in the second quarter. In addition, Asustek is poised to make forays into Latin America, especially Brazil and Mexico.

    Asustek expects to ship 14 million notebooks and 4.5-5 million Eee PCs in 2011, Chang indicated. Asustek shipped 11.4 million motherboards in the first half and expects to ship 22.5-23 million for the year.

    Tablet players expected to cut price to digest inventory overstock [Aug 16, 2011]

    Non-Apple tablet PC players, whose devices are selling more weakly than their order volumes while demand from the retail channel has been quickly shrinking, are expected to start cutting tablet prices by the end of September to digest inventory and minimize losses. Those decisions are expected to trigger a new price war within the tablet industry, according to sources from notebook players.

    The sources pointed out that most non-Apple tablet players had weaker-than-expected performances and Asustek, which had a rather better performance, had shipments of 700,000 tablets from May to July with actual sales only reaching 500,000 units.

    RIM and High Tech Computer (HTC) are already placing their hopes on 2012, Samsung and Motorola are both seeing weaker-than-expected tablet demand, and some other players such as Acer are gradually reducing their orders.

    Motorola, Hewlett-Packard (HP), Asustek and Acer have all recently reduced their tablet prices, with the lowest currently at US$370. However, with their inventory becoming harder to digest, the sources believe there will be at least two waves of price cuts from the end of September to the year-end holidays, reducing the average tablet price to US$350 and possibly even to US$300 in the future.

    More: Acer & Asus: Compensating lower PC sales by tablet PC push [March 29, 2011 with updates up to Aug 2, 2011]

    AMD’s Bright Outlook Likely to Boost Taiwan’s Supply Chain [Aug 16, 2011]

    Taiwan’s IC supply chain is expected to benefit from good business performance of Advanced Micro Devices Inc. (AMD), which is projected to outperform archrival Intel Corp. in the third quarter with increased shipment of accelerated processing units (APUs).

    The Taiwan supply chain is mainly composed of manufacturers including foundry Taiwan Semiconductor Manufacturing Co. (TSMC), packager Siliconware Precision Industries Co., Ltd., tester STATS ChipPAC Taiwan Semiconductor Corp., and substrate maker Nanya Printed Circuit Board Corp.

    AMD estimates its third-quarter revenue will rise 8-12% from the second quarter, compared with Intel's projected 8% revenue growth. According to AMD, it has enjoyed robust APU shipments since the second quarter, with both its PC and laptop APU shipments hitting new highs.

    AMD has contracted TSMC, currently the world’s No.1 pure foundry, to make its Ontario [C-series], Zacate [E-series], and Desna [Z-series, specific for tablet PCs, a power optimized version of C-series, which are also for ultra-thin notebooks: Z-01 of 5.9W vs. C-50 9W in both cases with two 1 GHz “Bobcat” CPU cores + 6250 GPU] processors using 40-nanometer process technology as well as its Hudson chips using 65nm process technology.

    While increasing foundry outsourcing to TSMC, AMD has augmented packaging and testing contracts to Taiwan’s providers as well. Nanya is also expected to land contracts via Japanese partner NGK Spark Plug, which has directly received substrate contracts from AMD.

    In the second quarter, AMD saw its revenue slightly dip 2% from the first quarter to US$1.57 billion, while its gross margin was 46%, up from 45% recorded in the first quarter this year.

    AMD Llano processor shipments reach 1.3-1.5 million units in July [Aug 4, 2011]

    AMD shipped about one million Llano [A-series, for mainstream notebooks, all-in-one PCs and desktop PCs: with up to four up to 2.9 GHz x86 CPU cores and with an integrated DirectX 11-capable discrete-level graphics unit that features up to 400 Radeon cores along with dedicated HD video processing on a single chip] APUs in June and 1.3-1.5 million units in July, and with the appearance of the company’s new Llano APUs in the fourth quarter, annual shipments of Llano in 2011 should reach 7.5-8 million units, according to sources from motherboard players.

    The sources pointed out that AMD is pushing its 40nm-based C series (Ontario) and E series (Zacate) APUs for the entry-level market, while it is pushing 32nm-based Llano-based APUs for the mid-range to performance and mainstream markets, and is pushing 32nm AM3+ FX series (Zambezi) processors for the high-end market in the fourth quarter.

    In 2012, AMD will launch a new APU series codenamed Krishna, using a 28nm process from Taiwan Semiconductor Manufacturing Company (TSMC) and targeting mini PCs and all-in-one PCs, as well as an APU series codenamed Trinity to replace Llano for the mainstream market, adopting a 32nm process from Globalfoundries. For the high-end market, AMD will launch an APU series codenamed Komodo.

    AMD shipping Llano APUs; prices leaked [May 23, 2011]

    AMD has started shipping its Llano APUs to notebook clients and will begin to market the APUs to channels in July 2011, according to sources from notebook makers.

    AMD targets to ship one million notebook-use Llano APUs in June, 1.5 million in July, and a total of 8-9 million for the whole of 2011, revealed the sources, citing AMD’s internal estimates.

    If the shipment goals are realized, AMD will be able to boost its share in the notebook CPU segment to 15% by the end of the year, the sources commented.

    Additionally, AMD will also launch six Llano and four Bulldozer APUs for desktops.

    AMD: Llano and Bulldozer APU prices (per 1,000 units)

    Model (core configuration)      | Price  | Competing Intel model
    A8-3550P (Llano, quad-core)     | US$170 | Core i5-2300
    A8-3550 (Llano, quad-core)      | US$150 |
    A6-3450P (Llano, quad-core)     | US$130 | Core i3-2120/2010
    A6-3450 (Llano, quad-core)      | US$110 |
    A4-3350P (Llano, dual-core)     | US$80  | Pentium G6960/6950 and Sandy Bridge G800/600
    E2-3250 (Llano, dual-core)      | US$70  | Pentium G620
    FX-8130P (Bulldozer, octo-core) | US$320 | Core i7 2600K/2600
    FX-8130 (Bulldozer, octo-core)  | US$290 |
    FX-6110 (Bulldozer, six-core)   | US$240 | Core i5 2500K/2500
    FX-4110 (Bulldozer, quad-core)  | US$220 |

    More: Acer repositioning for the post Wintel era starting with AMD Fusion APUs [June 17, 2011]

    Apple cancels supply schedule of iPad 3 for 2H11 [Aug 16, 2011]

    US-based tablet PC player Apple has recently canceled its iPad 3 supply schedule for the second half of 2011, forcing other tablet PC brand vendors that were set to launch competing products at the same level to follow suit and delay their launches. However, supply of the iPad 2 in the second half will still be maintained at 28-30 million units, according to sources from the upstream supply chain.

    Apple was originally set to launch its iPad 3 in the second half of 2011 with a supply volume of 1.5-2 million units in the third quarter and 5-6 million in the fourth quarter, but Apple’s supply chain partners have recently discovered that the related figures have all already been deleted, the sources pointed out.

    The sources believe the yield rate of the 9.7-inch panel, which features a resolution of 2,048 by 1,536, may be the major reason for the supply delay, since such panels are mainly supplied by Japan-based Sharp at a high price, and Apple's other supply partners, Samsung Electronics and LG Display, are both unable to reach a good yield. Since Apple is unable to secure a certain level of supply volume, the iPad 3 is unlikely to be mass produced as scheduled, the sources added.

    Sources from panel players also pointed out that the 9.7-inch panel with high resolution requires a much larger backlight source, and a single edge light bar can hardly reach satisfactory levels. The iPad 3's requirements for physical thinness, rich color support and toughness all conflict with the panel technology's restrictions; therefore, this could cause a delay in the launch.

    In June, LG Display supplied three million panels for the iPad 2 with Samsung supplying 1-1.5 million units and Chimei Innolux (CMI) 10,000-20,000 units. In July, LG’s supply volume dropped to 2.8 million units with Samsung maintaining its same levels, and CMI’s volume increased to 450,000-500,000 units.

    Update: CMI fails to become iPad 3 panel supplier, say sources [Aug 19, 2011]

    Chimei Innolux (CMI) has failed to become a LCD panel supplier for the Apple iPad 3 due to technological hurdles, according to industry sources.

    CMI has cut into the supply chain of iPad 2, which uses IPS panels, but the new Apple tablet is more demanding in terms of resolution, the sources said. The iPad 3 will feature a 9.7-inch panel with resolution of 2,048×1,536 compared to the iPad 2’s 1,024×768.
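    A quick pixel-count comparison puts the makers' transmittance and yield struggles in perspective. The arithmetic below is derived here; only the two quoted resolutions and the 9.7-inch diagonal come from the article:

```python
import math

# Compare the rumored iPad 3 panel against the iPad 2 panel.
ipad3_res = (2048, 1536)
ipad2_res = (1024, 768)

pixels_ipad3 = ipad3_res[0] * ipad3_res[1]   # total pixels on the new panel
pixels_ipad2 = ipad2_res[0] * ipad2_res[1]   # total pixels on the iPad 2 panel
ratio = pixels_ipad3 / pixels_ipad2          # how many times denser the new panel is

diagonal_px = math.hypot(*ipad3_res)         # pixels along the 9.7-inch diagonal
ppi = diagonal_px / 9.7                      # resulting pixel density

print(f"iPad 3 panel: {pixels_ipad3:,} pixels "
      f"({ratio:.0f}x the iPad 2), ~{ppi:.0f} ppi")
```

    Quadrupling the pixel count on the same 9.7-inch diagonal shrinks each sub-pixel's aperture, which is exactly why transmittance drops and a stronger backlight is needed, as the panel sources above describe.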

    CMI has been developing panels trying to meet the iPad 3 specifications, but problems with transmittance and yield rates of the panels have resulted in its failure to receive certification for the iPad, the sources said.

    CMI began developing IPS panels last year after receiving a license from Hitachi in July 2010. The license covers IPS, Super-IPS, Advanced-Super IPS, IPS-Pro, and IPS-Pro-Prolleza.

    CMI previously scheduled mass production of IPS panels to begin as early as the end of 2010 or early 2011. But low yield rates delayed the mass production until recent months. The maker’s IPS panel monthly output in July 2011 reached nearly 500,000 units. It is looking forward to an output of one million units in August 2011, the sources said.

    The sources noted that the iPad 3's resolution requirement of 2,048×1,536 pixels is also a challenge even for regular iPad panel suppliers such as LG Display (LGD) and Samsung Electronics. Apart from the two Korea-based makers, Japan's Sharp has also been selected to supply panels for the iPad 3, the sources said.

    They noted that CMI still stands a chance of becoming a regular supplier for iPad 3 if it can improve its panel quality to meet Apple’s requirements. The maker recently invested NT$800 million to NT$1 billion [US$28 million to US$35 million] to improve manufacturing facilities, the sources said.

    Chimei Innolux Continues Suffering Loss in Q2 [Aug 16, 2011]

    Chimei Innolux Corp., the largest maker of thin film transistor-liquid crystal display (TFT-LCD) panels in Taiwan, reported a loss of NT$13 billion (US$448.3 million) in the second quarter, deeper than institutional investors' forecasts.

    Industry sources said that the four major makers of large-sized TFT-LCD panels, i.e. AU Optronics Corp. (AUO), Chimei Innolux, Chunghwa Picture Tubes, Ltd. (CPT) and HannStar Display Corp., together reported a total loss of about NT$120 billion (US$4.15 billion) over roughly the past year.
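    The NT$-to-US$ figures scattered through these panel reports are easy to sanity-check. A small helper, assuming an exchange rate of roughly NT$29 per US$ (the rate is my assumption based on mid-2011 levels, not a figure from the article), reproduces both the NT$13 billion quarterly loss and the NT$120 billion combined figure:

```python
# Sanity-check the NT$ -> US$ conversions in the panel-maker reports.
# Assumed mid-2011 exchange rate; not quoted in the article.
NTD_PER_USD = 29.0

def ntd_to_usd_billions(ntd_billions: float) -> float:
    """Convert an NT$ amount (in billions) to US$ billions."""
    return ntd_billions / NTD_PER_USD

quarterly_loss = ntd_to_usd_billions(13)    # Chimei Innolux's Q2 loss
combined_loss = ntd_to_usd_billions(120)    # the four makers' combined loss

print(f"NT$13 billion  is about US${quarterly_loss * 1000:.0f} million")
print(f"NT$120 billion is about US${combined_loss:.2f} billion")
```

    At this rate NT$120 billion works out to about US$4.1 billion, an order of magnitude above the US$413.8 million that some reports mistakenly printed for the combined loss.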

    Some institutional investors said that panel prices across all sizes are expected to fall slightly, implying that the makers' losses in the third quarter will not be smaller than the second quarter's.

    At its recent half-year online shareholder meeting, Chimei adjusted its capital spending down to NT$50-60 billion (US$1.7-2.1 billion), from the previously lowered NT$70-75 billion (US$2.4-2.6 billion) and the NT$100 billion (US$3.4 billion) announced earlier this year. Chimei said that this year the company would focus mainly on high-level equipment and R&D projects for touch-panel technology.

    AUO, Chimei Innolux's major rival and the No. 2 panel maker in Taiwan, recently also lowered its capital spending goal to under NT$70 billion (US$2.4 billion) from NT$90-95 billion (US$3.1-3.3 billion).

    Chimei Innolux, formed in the second quarter of 2010 through the merger of Chi Mei Optoelectronics Corp. (CMO), Innolux Display Corp., and TPO Displays Corp. (TPO), began reporting losses in the third quarter of last year and has now posted four consecutive quarterly losses.

    AUO reported an accumulated loss of NT$36 billion (US$1.24 billion) in the past three quarters.

    Eddie Chen, Chimei Innolux's chief financial officer, said that the company focused on shipments in its core businesses and cut much of its system-assembly work in the second quarter. The company's second-quarter shipments of large-sized panels increased about 10% quarter-on-quarter (QoQ), but its revenue from small/medium-sized panels fell 18.4% QoQ due to falling panel prices. J.C. Wang, president of Chimei Innolux's Southern Taiwan Science Park (STSP) branch, pointed out that the company decided to cut the system-assembly business because it is too labor-intensive and is not part of the company's core competitiveness.

    Wang said that the third quarter is traditionally a high season, but the market now seems weaker than it should be. Chimei Innolux's capacity utilization rate was about 80% in the second quarter, and the company said it would adjust according to market conditions.

    LCD maker CPT still deep in red in second quarter [July 30, 2011]

    LCD panel maker Chunghwa Picture Tubes Ltd (CPT, 中華映管) yesterday reported its 12th consecutive quarterly loss as prices for slim-screen panels for televisions and computers dropped on sluggish end demand.

    The company added that outlook for the third quarter remained sluggish, with demand expected to fall below the seasonal norm.

    However, Chunghwa Picture said it has no plans to cut its capital spending this year of between NT$2 billion (US$69 million) and NT$2.5 billion, which would be used to improve its equipment to produce high-definition flat panels used in tablet devices and smartphones.

    Earlier this week, its bigger local rival, AU Optronics Corp (友達光電), said it planned to slash capital spending by 30 percent.

    In the quarter ending June 30, Chunghwa Picture’s losses widened to NT$3.13 billion [US$108 million] from losses of NT$2.33 billion [US$80 million] in the first quarter. The Taoyuan-based company posted losses of NT$1.5 billion in the second quarter of last year.

    “Market demand, especially for TVs and IT products [computers], slumped in the first half. Oversupply caused panel prices to drop further,” company president Lin Sheng-chang (林盛昌) said during a teleconference with investors.

    “As the visibility for IT panels is unclear, we will make inventory management our priority,” Lin said.

    Days of inventory increased to 37 days last quarter from 31 days in the first quarter, the company said.

    The fragile economic recovery in the US and Europe is expected to curtail demand for consumer electronics, while demand for notebook computers should pick up slightly after new models hit the shelves, Chunghwa Picture said.

    To combat these difficult times, Lin said the company would have to accelerate its shift to high-margin products, such as tablet panels, touch sensors and smartphone screens, in the second half.

    Its newly formed strategic partnership with the world’s biggest e-paper display supplier, E Ink Holdings Inc (元太科技), will help it reach this goal, Lin said.

    Last week, E Ink agreed to spend NT$1.5 billion [US$52 million] to subscribe to Chunghwa Picture bonds. Chunghwa Picture agreed to supply LCD panels to E Ink.

    Besides e-paper displays, E Ink also supplies high-definition flat panels to LG Display and tablet device makers.

    Shipments of LCD panels used in smartphones, tablets and consumer electronics should grow by 20 percent to 25 percent in the second half, from 200 million units shipped in the first half, Lin said.

    Last quarter, revenues from small-and-medium LCD panels used in tablets and smartphones accounted for a larger share, 42 percent, of Chunghwa Picture’s total revenues of NT$15.93 billion, from 37 percent in the prior quarter, according to the company’s financial statement.

    Chunghwa Picture also said it would terminate its money-losing cathode-ray-tube (CRT) business. The company plans to revamp its CRT factories in Malaysia and in Fuzhou, Fujian Province, and shift to touch panel assembly.

    HannStar posts operating loss [Aug 15, 2011]

    HannStar Display has announced unconsolidated results for second-quarter 2011, with total sales rising 10% sequentially to NT$11.5 billion (US$387.4 million). But it recorded an operating loss of NT$1.04 billion and a net loss of NT$1.57 billion (US$54 million), which translated into a loss per share of NT$0.27.

    Gross, operating, and net margins in the second quarter were 7%, -9%, and -14% respectively. The margin for earnings before interest, taxes, depreciation and amortization (EBITDA) was 1%.

    HannStar said the operating loss in the second quarter resulted from an effort to enlarge its manufacturing capacity in Nanjing, China, which added an extra NT$1.88 billion (US$65 million) in operating costs.

    Capacity utilization at HannStar was nearly full in second-quarter 2011. Small- to medium-size panels under 10 inches accounted for about 45% of its total revenues, notebook panels for 10% and monitor panels for 45%.

    HannStar is expected to enhance notebook panels’ share to 15% and small- to medium-size panels to 55% in third-quarter 2011. Monitor panels’ share will be lowered to around 30%.

    HannStar expects small- to medium-size panels’ share to reach 60% by end of 2011 and notebook panels to grow to 20%.

    Explanatory excerpts from Pixel Qi’s first big name device manufacturing partner is the extremely ambitious ZTE [Feb 15, 2011, with updates up to June 3, 2011]

    “… to engage some of the largest factories that have ever been made, and for that to work their economics need very high volumes. We need to have customers who really commit to large purchase orders almost before we start to design.”

    The display business can be considered the world's biggest non-profit industry: the five biggest LCD makers, who produce 90% of the world's LCDs, turn out $120 billion worth of screens every year but can extract only small profit margins because of the strong competition and the large volumes shipped. The companies that produce the world's LCD screens have very high costs, very high risks and little flexibility.