AMD’s dense server strategy of mixing next-gen x86 Opterons with 64-bit ARM Cortex-A57 based Opterons on the SeaMicro Freedom™ fabric to disrupt the 2014 datacenter market using open source software (so far)

… so far, because Microsoft was in a “shut up and ship” mode of operation during 2013 and could deliver its revolutionary Cloud OS, with its even more disruptive Big Data solution, for x86 only (that is likely to change once 64-bit ARM servers are delivered in H2 CY14).

Update: Disruptive Technologies for the Datacenter – Andrew Feldman, GM and CVP, AMD [Open Compute Project, Jan 28, 2014]

OCP Summit V – January 28, 2014, San Jose Convention Center, San Jose, California. Disruptive Technologies for the Datacenter – Andrew Feldman, GM and CVP, AMD

image

image

image
Note from the press release given below that “The AMD Opteron A-Series development kit is packaged in a Micro-ATX form factor”. Also take note of the topmost message: “Optimized for dense compute – High-density, power-sensitive scale-out workloads: web hosting, data analytics, caching, storage”.

image

image

image

image

AMD to Accelerate the ARM Server Ecosystem with the First ARM-based CPU and Development Platform from a Server Processor Vendor [press release, Jan 28, 2014]

AMD also announced the imminent sampling of the ARM-based processor, named the AMD Opteron™ A1100 Series, and a development platform, which includes an evaluation board and a comprehensive software suite.

image
This should be the evaluation board for the development platform with imminent sampling.

In addition, AMD announced that it would be contributing to the Open Compute Project a new micro-server design using the AMD Opteron A-Series, as part of the common slot architecture specification for motherboards dubbed “Group Hug.”

From OCP Summit IV: Breaking Up the Monolith [blog of the Open Compute Project, Jan 16, 2013]
…  “Group Hug” board: Facebook is contributing a new common slot architecture specification for motherboards. This specification — which we’ve nicknamed “Group Hug” — can be used to produce boards that are completely vendor-neutral and will last through multiple processor generations. The specification uses a simple PCIe x8 connector to link the SOCs to the board. …

How does AMD support the Open Compute common slot architecture? [AMD YouTube channel, Oct 3, 2013]

Learn more about AMD Open Compute: http://bit.ly/AMD_OpenCompute
Dense computing is the latest trend in datacenter technology, and the Open Compute Project is driving a standard codenamed Common Slot. In this video, AMD explains Common Slot and how the AMD APU and ARM offerings will power next-generation data centers.

See also: Facebook Saved Over A Billion Dollars By Building Open Sourced Servers [TechCrunch, Jan 28, 2014]
image
from which the above image showing the “Group Hug” motherboards was copied.
Below you can see an excerpt from Andrew Feldman’s presentation showing such a motherboard with Opteron™ A1100 Series SoCs (even further down there is an image of Feldman showing that motherboard to the audience during his talk):

image

The AMD Opteron A-Series processor, codenamed “Seattle,” will sample this quarter along with a development platform that will make software design on the industry’s premier ARM–based server CPU quick and easy. AMD is collaborating with industry leaders to enable a robust 64-bit software ecosystem for ARM-based designs from compilers and simulators to hypervisors, operating systems and application software, in order to address key workloads in Web-tier and storage data center environments. The AMD Opteron A-Series development platform will be supported by a broad set of tools and software including a standard UEFI boot and Linux environment based on the Fedora Project, a Red Hat-sponsored, community-driven Linux distribution.

image
AMD continues to drive the evolution of the open-source data center from vision to reality and to bring choice among processor architectures. It is contributing to the Open Compute Project the new AMD Open CS 1.0 Common Slot design, which is based on the AMD Opteron A-Series processor and compliant with the new Common Slot specification, also announced today.

AMD announces plans to sample 64-bit ARM Opteron A “Seattle” processors [AMD Blogs > AMD Business, Jan 28, 2014]

AMD’s rich history in server-class silicon includes a number of notable firsts, including the first 64-bit x86 architecture and the first true multi-core x86 processors. AMD adds to that history by announcing that its revolutionary AMD Opteron™ A-Series 64-bit ARM processors, codenamed “Seattle,” will be sampling this quarter.

AMD Opteron A-Series processors combine AMD’s expertise in delivering server-class silicon with ARM’s trademark low-power architecture, while contributing to the open-source software ecosystem that is rapidly growing around the ARM 64-bit architecture. AMD Opteron A-Series processors make use of ARM’s 64-bit ARMv8 architecture to provide true server-class features in a power-efficient solution.

AMD plans for the AMD Opteron™ A1100 processors to be available in the second half of 2014 with four or eight ARM Cortex A57 cores, up to 4MB of shared Level 2 cache and 8MB of shared Level 3 cache. The AMD Opteron A-Series processor supports up to 128GB of DDR3 or DDR4 ECC memory as unbuffered DIMMs, registered DIMMs or SODIMMs.

The ARMv8 architecture is the first from ARM to have 64-bit support, something that AMD brought to the x86 market in 2003 with the AMD Opteron processor. Not only can the ARMv8-based Cortex-A57 architecture address large pools of memory, it has been designed from the ground up to provide the optimal balance of performance and power efficiency to address the broad spectrum of scale-out data center workloads.

With more than a decade of experience in designing server-class silicon, AMD took the ARM Cortex-A57 core, added a server-class memory controller, and included features resulting in a processor that meets the demands of scale-out workloads. A requirement of scale-out workloads is high-performance connectivity, and the AMD Opteron A1100 processor has extensive integrated I/O, including eight PCI Express Gen 3 lanes, two 10 Gb/s Ethernet ports and eight SATA 3 ports.
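To keep these published numbers in one place, here is a small illustrative sketch that collects the Opteron A1100 figures quoted above into a Python data structure. The class and field names are my own invention; only the values come from the announcement.

# A minimal sketch: the AMD Opteron A1100 ("Seattle") figures quoted above,
# collected into a plain Python data structure. Field names are invented for
# illustration; the values are the ones stated in AMD's announcement.
from dataclasses import dataclass

@dataclass
class OpteronA1100Spec:
    cpu_core_options: tuple = (4, 8)     # ARM Cortex-A57 core-count options
    l2_cache_mb: int = 4                 # shared Level 2 cache
    l3_cache_mb: int = 8                 # shared Level 3 cache
    max_memory_gb: int = 128             # DDR3 or DDR4 with ECC
    dimm_types: tuple = ("unbuffered", "registered", "SODIMM")
    pcie_gen3_lanes: int = 8
    ten_gbe_ports: int = 2               # integrated 10 Gb/s Ethernet
    sata3_ports: int = 8

print(OpteronA1100Spec())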

Scale-out workloads are becoming critical building blocks in today’s data centers. These workloads scale over hundreds or thousands of servers, making power efficient performance critical in keeping total cost of ownership (TCO) low. The AMD Opteron A-Series meets the demand of these workloads through intelligent silicon design and by supporting a number of operating system and software projects.

As part of delivering a server-class solution, AMD has invested in the software ecosystem that will support AMD Opteron A-Series processors. AMD is a gold member of the Linux Foundation, the organisation that oversees the development of the Linux kernel, and is a member of Linaro, a significant contributor to the Linux kernel. Alongside collaboration with the Linux Foundation and Linaro, AMD itself is listed as a top 20 contributor to the Linux kernel. A number of operating system vendors have stated they will support the 64-bit ARM ecosystem, including Canonical, Red Hat and SUSE, while virtualization will be enabled through KVM and Xen.

Operating system support is supplemented with programming language support, with Oracle and the community-driven OpenJDK porting versions of Java onto the 64-bit ARM architecture. Other popular languages that will run on AMD Opteron A-Series processors include Perl, PHP, Python and Ruby. The extremely popular GNU C compiler and the critical GNU C Library have already been ported to the 64-bit ARM architecture.

Through the combination of kernel support and development tools such as libraries, compilers and debuggers, the foundation has been set for developers to port applications to a rapidly growing ecosystem.

As AMD Opteron A-Series processors are well suited to web hosting and big data workloads, AMD is a gold sponsor of the Apache Foundation, the organisation that manages the Hadoop and HTTP Server projects. Up and down the software stack, the ecosystem is ready for the data center revolution that will take place when AMD Opteron A-Series processors are deployed.
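Since Hadoop is singled out above as a target workload, here is a minimal, architecture-neutral sketch of a Hadoop Streaming style word count. Hadoop Streaming feeds mappers and reducers input on stdin and collects tab-separated key/value pairs from stdout, so ordinary Python like this would run unchanged on x86 or 64-bit ARM worker nodes. The script is illustrative only; the actual job submission command and cluster paths depend on the deployment.

#!/usr/bin/env python
# A sketch of a Hadoop Streaming-style word count: lines in on stdin,
# key<TAB>value pairs out on stdout, independent of the worker CPU architecture.
import sys

def mapper(lines):
    # Emit (word, 1) for every word in the input lines.
    for line in lines:
        for word in line.split():
            yield word, 1

def reducer(pairs):
    # Sum counts per word; assumes pairs arrive sorted by word (as Hadoop does).
    current, total = None, 0
    for word, count in pairs:
        if current is not None and word != current:
            yield current, total
            total = 0
        current = word
        total += count
    if current is not None:
        yield current, total

if __name__ == "__main__":
    mode = sys.argv[1] if len(sys.argv) > 1 else "map"
    if mode == "map":
        for word, count in mapper(sys.stdin):
            print("%s\t%d" % (word, count))
    else:  # "reduce"
        parsed = ((w, int(c)) for w, c in
                  (line.rstrip("\n").split("\t", 1) for line in sys.stdin))
        for word, total in reducer(parsed):
            print("%s\t%d" % (word, total))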

Soon, AMD’s partners will start to realise what a true server-class 64-bit ARM processor can do. By using AMD’s Opteron A-Series Development Kit, developers can contribute to the fast growing software ecosystem that already includes operating systems, compilers, hypervisors and applications. Combining AMD’s rich history in designing server-class solutions with ARM’s legendary low-power architecture, the Opteron A-Series ushers in the era of personalised performance.

Introducing the industry’s only 64-bit ARM-based server SoC from AMD [AMD YouTube channel, Jan 21, 2014]

Hear from AMD & ARM executives on why AMD is well-suited to bring ARM to the datacenter. AMD is introducing “Seattle,” a 64-bit ARM-based server SoC built on the same technology that powers billions of today’s most popular mobile devices. By fusing AMD’s deep expertise in the server processor space along with ARM’s low-power, parallel processing capabilities, Seattle makes it possible for servers to be tuned for targeted workloads such as web/cloud hosting, multi-media delivery, and data analytics to enable optimized performance at low power thresholds. Subscribe: http://bit.ly/Subscribe_to_AMD

It Begins: AMD Announces Its First ARM Based Server SoC, 64-bit/8-core Opteron A1100 [AnandTech, Jan 28, 2014]

… AMD will be making a reference board available to interested parties starting in March, with server and OEM announcements to come in Q4 of this year

It’s still too early to talk about performance or TDPs, but AMD did indicate better overall performance than its Opteron X2150 (4-core 1.9GHz Jaguar) at a comparable TDP:

image

AMD alluded to substantial cost savings over competing Intel solutions with support for similar memory capacities. AMD tells me we should expect a total “solution” price somewhere around 1/10th that of a competing high-end Xeon box, but it isn’t offering specifics beyond that just yet. Given the Opteron X2150 performance/TDP comparison, I’m guessing we’re looking at a similar ~$100 price point for the SoC. There’s also no word on whether or not the SoC will leverage any of AMD’s graphics IP. …

End of Update

AMD is also in a quite unique market position now, as its only real competitor, Calxeda, shut down its operations on December 19, 2013 and went into restructuring. The reason was a lack of further funding from venture capitalists, attributed mainly to Calxeda’s initial 32-bit Cortex-A15 based approach and to the unwillingness of customers and software partners to port their already 64-bit x86 software back to 32-bit.

With the only remaining competitor in the 64-bit ARM server SoC race so far*, Applied Micro’s X-Gene SoC, being built on a purpose-built core of its own (see also my Software defined server without Microsoft: HP Moonshot [‘Experiencing the Cloud’, April 10, Dec 6, 2013] post), i.e. with only an architecture license taken from ARM Holdings, the volume 64-bit ARM server SoC market starting in 2014 already belongs to AMD. I would base that prediction on AppliedMicro’s X-Gene: 2013 Year in Review [Dec 20, 2013] post, which states that the first-generation X-Gene product is only now nearing volume production, and that a pilot X-Gene solution is planned for delivery by Dell only in early 2014.

* There is also Cavium, which likewise holds only an ARMv8 architecture license (obtained in August 2012), but the latest information about its product (as of Oct 30, 2013) was that: “In terms of the specific announcement of the product, we want to do it fairly close to silicon. We believe that this is a very differentiated product, and we would like to kind of keep it under the covers as long as we can. Obviously our customers have all the details of the products, and they’re working with them, but on a general basis for competitive reasons, we are kind of keeping this a little bit more quieter than we normally do.”

Meanwhile the 64-bit x86 based SeaMicro solution has been on the market since July 30, 2010, after 3 years in development. At the time of the SeaMicro acquisition by AMD (Feb 29, 2012) it already represented a quite well-thought-out and well-engineered solution, as one can easily grasp from the information included below:

image

1. IOVT: I/O-Virtualization Technology
2. TIO: Turn It Off

image

3. Freedom™ Supercomputer Fabric: 3D torus network fabric
– 8 x 8 x 8 fabric nodes
– Diameter (max hop count): 4 + 4 + 4 = 12
– Theoretical cross-section bandwidth = 2 (periodic) x 8 x 8 (section) x 2 (bidirectional) x 2.0 Gb/s per link = 512 Gb/s (see the sketch below)
– Compute, storage, and management cards are plugged into the network fabric
– Support for hot-plugged compute cards
The first three—IOVT, TIO, and the Freedom™ Supercomputer Fabric—live in SeaMicro’s Freedom™ ASIC. Freedom™ ASICs are paired with each CPU and with DRAM, forming the foundational building block of a SeaMicro system.
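The diameter and cross-section bandwidth figures in the bullets above follow from a few lines of arithmetic. The sketch below assumes nothing beyond what is stated there: an 8 x 8 x 8 torus with bidirectional 2.0 Gb/s links.

# A minimal sketch reproducing the 3D torus figures quoted above.
def torus_diameter(dims):
    # Maximum hop count in a torus: sum of floor(k/2) over the dimensions.
    return sum(k // 2 for k in dims)

def cross_section_bandwidth_gbps(dims, link_gbps=2.0):
    # Bandwidth across a cut through the middle of one dimension:
    # 2 link groups (torus wrap-around) x links in the cut plane x 2 directions.
    section = dims[1] * dims[2]          # links crossing the cut plane
    return 2 * section * 2 * link_gbps   # periodic wrap x bidirectional

dims = (8, 8, 8)
print("diameter (max hops):", torus_diameter(dims))              # 4 + 4 + 4 = 12
print("cross-section bandwidth:",
      cross_section_bandwidth_gbps(dims), "Gb/s")                # 512 Gb/s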
4. DCAT: Dynamic Computation-Allocation Technology™
– CPU management and load balancing
– Dynamic workload allocation to specific CPUs on the basis of power-usage metrics
– Users can create pools of compute for a given application
– Compute resources can be dynamically added to the pool based on predefined utilization thresholds
The DCAT technology resides in the SeaMicro system software and custom-designed FPGAs/NPUs, which control and direct the I/O traffic.
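As a purely conceptual illustration of the DCAT idea described above (pools of compute, allocation driven by utilization metrics, resources added at predefined thresholds), here is a toy Python scheduler. The names, numbers and policy are invented for illustration; this is not SeaMicro’s actual implementation.

# A conceptual sketch (not SeaMicro's code) of threshold-driven pool allocation:
# workloads go to the least-loaded node, and a spare card joins the pool once
# average utilization crosses a predefined threshold.
class ComputePool:
    def __init__(self, nodes, spares, scale_up_threshold=0.75):
        self.nodes = dict.fromkeys(nodes, 0.0)   # node -> utilization (0..1)
        self.spares = list(spares)
        self.scale_up_threshold = scale_up_threshold

    def assign(self, load):
        # Place a workload of relative size `load` on the least-loaded node.
        node = min(self.nodes, key=self.nodes.get)
        self.nodes[node] += load
        self._maybe_scale_up()
        return node

    def _maybe_scale_up(self):
        avg = sum(self.nodes.values()) / len(self.nodes)
        if avg > self.scale_up_threshold and self.spares:
            self.nodes[self.spares.pop(0)] = 0.0   # bring a spare card online

pool = ComputePool(nodes=["card-1", "card-2"], spares=["card-3", "card-4"])
for job_load in [0.4, 0.4, 0.5, 0.3]:
    print(job_load, "->", pool.assign(job_load), pool.nodes)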
More information:
SeaMicro SM10000-64 Server [SeaMicro presentation at Hot Chips 23, Aug 19, 2011] for the slides in PDF format, while the presentation itself is the first one in the following recorded video (just the first 20 minutes, plus 7 minutes of quite valuable Q&A following that):
Session 7, Hot Chips 23 (2011), Friday, August 19, 2011:
– SeaMicro SM10000-64 Server: Building Data Center Servers Using “Cell Phone” Chips – Ashutosh Dhodapkar, Gary Lauterbach, Sean Lie, Dhiraj Mallick, Jim Bauman, Sundar Kanthadai, Toru Kuzuhara, Gene Shen, Min Xu, and Chris Zhang, SeaMicro
– Poulson: An 8-Core, 32nm, Next-Generation Intel Itanium Processor – Stephen Undy, Intel
– T4: A Highly Threaded Server-on-a-Chip with Native Support for Heterogeneous Computing – Robert Golla and Paul Jordan, Oracle
SeaMicro Technology Overview [Anil Rao from SeaMicro, January 2012]
System Overview for the SM10000 Family [Anil Rao from SeaMicro, January 2012]
Note that the above is just for the 1st generation: after the AMD acquisition (Feb 29, 2012) a second-generation solution came out with the SM15000 enclosure (Sept 10, 2012, with more info in the details section later), and there will certainly be a 3rd-generation solution with the fabric integrated into each of the x86 and 64-bit ARM based SoCs coming in 2014.

With the “only production ready, production tested supercompute fabric” (as touted by Rory Read, CEO of AMD, more than a year ago), the SeaMicro Freedom™ fabric will now be integrated into the upcoming 64-bit ARM Cortex-A57 based “Seattle” chips from AMD, sampling in the first quarter of 2014. Consequently I would argue that even the high-end market will be captured by the company. Moreover, I think this will happen not only in the SoC realm but in the enclosure space as well (although that 3rd-generation type of enclosure is still to come), to the detriment of HP’s highly marketed Moonshot and CloudSystem initiatives.

Here are two recent quotes from AMD’s top executive duo showing how important they themselves consider their upcoming solution to be:

Rory Read – AMD’s President and CEO [Oct 17, 2013]:

In the server market, the industry is at the initial stages of a multiyear transition that will fundamentally change the competitive dynamic. Cloud providers are placing a growing importance on how they get better performance from their datacenters while also reducing the physical footprint and power consumption of their server solution.

image

Lisa Su – AMD’s Senior Vice President and General Manager, Global Business Units [Oct 17, 2013]:

We are fully top to bottom in 28 nanometer now across all of our products, and we are transitioning to both 20 nanometer and to FinFETs over the next couple of quarters in terms of designs. … [Regarding] the SeaMicro business, we are very pleased with the pipeline that we have there. Verizon was the first major datacenter win that we can talk about publicly. We have been working that relationship for the last two years. …

We’re very excited about the server space. It’s a very good market. It’s a market where there is a lot of innovation and change. In terms of 64-bit ARM, you will see us sampling that product in the first quarter of 2014. That development is on schedule and we’re excited about that. All of the customer discussions have been very positive and then we will combine both the [?x86 and the?]64-bit ARM chip with our SeaMicro servers that will have full solution as well. You will see SeaMicro plus ARM in 2014.

So I think we view this combination of IP as really beneficial to accelerating the dense server market both on the chip side and then also on the solution side with the customer set.

AMD SeaMicro has been working extensively with key platform software vendors, especially in the open source space:

image

The current state of that collaboration is reflected in the correspondingly numbered sections that follow the detailed discussion given below:

  1. Verizon (as its first big name cloud customer, actually not using OpenStack)
  2. OpenStack (inc. Rackspace, excl. Red Hat)
  3. Red Hat
  4. Ubuntu
  5. Big Data, Hadoop


So let’s take a detailed look at the major topic:

AMD in the Demo Theater [OpenStack Foundation YouTube channel, May 8, 2013]

AMD presented its demo at the April 2013 OpenStack Summit in Portland, OR. For more summit videos, visit: http://www.openstack.org/summit/portland-2013/session-videos/
Note that the OpenStack Quantum networking project was renamed Neutron after April, 2013. Details on the OpenStack effort will be provided later in the post.

Rory Read – AMD President and CEO [Oct 30, 2012]:

That SeaMicro Freedom™ fabric is ultimately very-very important. It is the only production ready, production tested supercompute fabric on the planet.

Lisa Su – AMD Senior Vice President and General Manager, Global Business Units [Oct 30, 2012]:

The biggest change in the datacenter is that there is no one size fits all. So we will offer ARM-based CPUs with our fabric. We will offer x86-based CPUs with our fabric. And we will also look at opportunities where we can merge the CPU technology together with graphics compute in an APU form-factor that will be very-very good for specific workloads in servers as well. So AMD will be the only company that’s able to offer the full range of compute horsepower with the right workloads in the datacenter.

AMD makes ARM Cortex-A57 64bit Server Processor [Charbax YouTube channel, Oct 30, 2012]

AMD has announced that they are launching a new ARM Cortex-A57 64-bit ARMv8 processor in 2014, targeted at the server market. This is an interview with Andrew Feldman, VP and GM of the Data Center Server Solutions Group at AMD and founder of SeaMicro, now acquired by AMD.

From AMD Changes Compute Landscape as the First to Bridge Both x86 and ARM Processors for the Data Center [press release, Oct 29, 2012]

This strategic partnership with ARM represents the next phase of AMD’s strategy to drive ambidextrous solutions in emerging mega data center solutions. In March, AMD announced the acquisition of SeaMicro, the leader in high-density, energy-efficient servers. With this announcement, AMD will integrate the AMD SeaMicro Freedom fabric across its leadership AMD Opteron x86- and ARM technology-based processors that will enable hundreds, or even thousands of processor clusters to be linked together to provide the most energy-efficient solutions.

AMD ARM Oct 29, 2012 Full length presentation [Manny Janny YouTube channel, Oct 30, 2012]

I do not have any affiliation with AMD or ARM. This video is posted to provide the general public with information and provide an area for comments
Rory Read – AMD President and CEO: [3:27] That SeaMicro Freedom™ fabric is ultimately very-very important in this announcement. It is the only production ready, production tested supercompute fabric on the planet. [3:41]
Lisa Su – Senior Vice President and General Manager, Global Business Units: [13:09] The biggest change in the datacenter is that there is no one size fits all. So we will offer ARM-based CPUs with our fabric. We will offer x86-based CPUs with our fabric. And we will also look at opportunities where we can merge the CPU technology together with graphics compute in an APU form-factor that will be very-very good for specific workloads in servers as well. So AMD will be the only company that’s able to offer the full range of compute horsepower with the right workloads in the datacenter [13:41]

From AMD to Acquire SeaMicro: Accelerates Disruptive Server Strategy [press release, Feb 29, 2012]

AMD (NYSE: AMD) today announced it has signed a definitive agreement to acquire SeaMicro, a pioneer in energy-efficient, high-bandwidth microservers, for approximately $334 million, of which approximately $281 million will be paid in cash. Through the acquisition of SeaMicro, AMD will be accelerating its strategy to deliver disruptive server technology to its OEM customers serving cloud-centric data centers. With SeaMicro’s fabric technology and system-level design capabilities, AMD will be uniquely positioned to offer industry-leading server building blocks tuned for the fastest-growing workloads such as dynamic web content, social networking, search and video. …
… “Cloud computing has brought a sea change to the data center–dramatically altering the economics of compute by changing the workload and optimal characteristics of a server,” said Andrew Feldman, SeaMicro CEO, who will become general manager of AMD’s newly created Data Center Server Solutions business. “SeaMicro was founded to dramatically reduce the power consumed by servers, while increasing compute density and bandwidth.  By becoming a part of AMD, we will have access to new markets, resources, technology, and scale that will provide us with the opportunity to work tightly with our OEM partners as we fundamentally change the server market.”

ARM TechCon 2012 SoC Partner Panel: Introducing the ARM Cortex-A50 Series [ARMflix YouTube channel, recorded on Oct 30, published on Nov 13, 2012]

Moderator: Simon Segars, EVP and GM, Processor and Physical IP Divisions, ARM
Panelists:
– Andrew Feldman, Corporate VP & GM, Data Center Server Solutions (need to confirm his title with AMD), AMD
– Martyn Humphries, VP & General Manager, Mobile Applications Group, Broadcom
– Karl Freund, VP, Marketing, Calxeda**
– John Kalkman, VP, Marketing, Samsung Semiconductor
– Bob Krysiak, EVP and President of the Americas Region, STMicroelectronics
** Note that nearly 14 months later, on Dec 19, 2013, Calxeda ran out of the ~$100M in venture capital it had accumulated earlier. As the company was not able to secure further funding, it shut down its operations by dismissing most of its employees (except 12 workers serving existing customers) and went into “restructuring”, posting on its company website only: “We will update you as we conclude our restructuring process”. This is despite the pioneering role the company had, especially with HP’s Moonshot and CloudSystem initiatives, and the relatively short-term promise of delivering its server cartridge for HP’s next-gen Moonshot enclosure, as was well reflected in my Software defined server without Microsoft: HP Moonshot [‘Experiencing the Cloud’, April 10, Dec 6, 2013] post. The major problem was that “it tried to get to market with 32-bit chip technology, at a time most x86 servers boast 64-bit technology … [and as] customers and software companies weren’t willing to port their software to run on 32-bit systems” – reported the Wall Street Journal. I would also say that AMD’s “only production ready, production tested supercompute fabric on the planet” (see Rory Read’s statement already given above), with its upcoming “Seattle” 64-bit ARM SoC on track for delivery in H2 CY14, was another major reason for the lack of additional venture funds for Calxeda.

AMD’s 64-bit “Seattle” ARM processor brings best of breed hardware and software to the data center [AMD Business blog, Dec 12, 2013]

Going into 2014, the server market is set to face the biggest disruption since AMD launched the 64-bit x86 AMD Opteron™ processor – the first 64-bit x86 processor – in 2003. Processors based on ARM’s 64-bit ARMv8 architecture will start to appear next year, and just like the x86 AMD Opteron™ processors a decade ago, AMD’s ARM 64-bit processors will offer enterprises a viable option for efficiently handling vast amounts of data.

image

From: AMD Unveils Server Strategy and Roadmap [press release June 18, 2013]

These forthcoming AMD Opteron™ processors bring important innovations to the rapidly changing compute market, including integrated CPU and GPU compute (APU); high core-count ARM servers for high-density compute in the data center; and substantial improvements in compute per-watt per-dollar and total cost of ownership.
“Our strategy is to differentiate ourselves by using our unique IP to build server processors that are particularly well matched to a target workload and thereby drive down the total cost of owning servers. This strategy unfolds across both the enterprise and data centers and includes leveraging our graphics processing capabilities and embracing both x86 and ARM instruction sets,” said Andrew Feldman, general manager of the Server Business Unit, AMD. “AMD led the world in the transition to multicore processors and 64-bit computing, and we intend to do it again with our next-generation AMD Opteron families.”
In 2014, AMD will set the bar in power-efficient server compute with the industry’s premier ARM server CPU. The 64-bit CPU, code named “Seattle,” is based on ARM Cortex-A57 cores and is expected to provide category-leading throughput as well as setting the bar in performance-per-watt. AMD will also deliver a best-in-class APU, code named “Berlin.” “Berlin” is an x86 CPU and APU, based on a new generation of cores named “Steamroller.” Designed to double the performance of the recently available “Kyoto” part, “Berlin” will offer extraordinary compute-per-watt that will enable massive rack density. The third processor announced today is code named “Warsaw,” AMD’s next-generation 2P/4P offering. It is optimized to handle the heavily virtualized workloads found in enterprise environments including the more complex compute needs of data analytics, xSQL and traditional databases. “Warsaw” will provide significantly improved performance-per-watt over today’s AMD Opteron 6300 family.
Seattle
“Seattle” will be the industry’s only 64-bit ARM-based server SoC from a proven server processor supplier.  “Seattle” is an 8- and then 16-core CPU based on the ARM Cortex-A57 core and is expected to run at or greater than 2 GHz.  The “Seattle” processor is expected to offer 2-4X the performance of AMD’s recently announced AMD Opteron X-Series processor with significant improvement in compute-per-watt.  It will deliver 128GB DRAM support, extensive offload engines for better power efficiency and reduced CPU loading, server caliber encryption, and compression and legacy networking including integrated 10GbE.  It will be the first processor from AMD to integrate AMD’s advanced Freedom™ Fabric for dense compute systems directly onto the chip. AMD plans to sample “Seattle” in the first quarter of 2014 with production in the second half of the year.
Berlin
“Berlin” is an x86-based processor that will be available both as a CPU and APU. The processor boasts four next-generation “Steamroller” cores and will offer almost 8X the gigaflops per-watt compared to the current AMD Opteron™ 6386SE processor. It will be the first server APU built on AMD’s revolutionary Heterogeneous System Architecture (HSA), which enables uniform memory access for the CPU and GPU and makes programming as easy as C++. “Berlin” will offer extraordinary compute per-watt that enables massive rack density. It is expected to be available in the first half of 2014.
Warsaw
“Warsaw” is an enterprise server CPU optimized to deliver unparalleled performance and total cost of ownership for two- and four-socket servers. Designed for enterprise workloads, it will offer improved performance-per-watt, which drives down the cost of owning a “Warsaw”-based server while enabling seamless migration from the AMD Opteron 6300 Series family. It is fully socket compatible, with identical software certifications, making it ideal for the AMD Open 3.0 Server – the industry’s most cost-effective Open Compute platform. It is expected to be available in the first quarter of 2014.

Note also the AMD Details Embedded Product Roadmap press release [Sept 9, 2013], in which there is also a:

“Hierofalcon” CPU SoC
“Hierofalcon” is the first 64-bit ARM-based platform from AMD targeting embedded data center applications, communications infrastructure and industrial solutions. It will include up to eight ARM Cortex™-A57 CPUs expected to run up to 2.0 GHz, and provides high-performance memory with two 64-bit DDR3/4 channels with error correction code (ECC) for high reliability applications. The highly integrated SoC includes 10 Gb KR Ethernet and PCI-Express Gen 3 for high-speed network connectivity, making it ideal for control plane applications. The “Hierofalcon” series also provides enhanced security with support for ARM TrustZone® technology and a dedicated cryptographic security co-processor, aligning to the increased need for networked, secure systems. “Hierofalcon” is expected to be sampling in the second quarter of 2014 with production in the second half of the year.

image

The AMD Opteron processor came at a time when x86 processors were seen by many as silicon that could only power personal computers, with specialized processors running on architectures such as SPARC™ and Power™ being the ones that were handling server workloads. Back in 2003, the AMD Opteron processor did more than just offer another option, it made the x86 architecture a viable contender in the server market – showing that processors based on x86 architectures could compete effectively against established architectures. Thanks in no small part to the AMD Opteron processor, today the majority of servers shipped run x86 processors.

In 2014, AMD will once again disrupt the datacenter as x86 processors will be joined by those that make use of ARM’s 64-bit architecture. Codenamed “Seattle,” AMD’s first ARM-based Opteron processor will use the ARMv8 architecture, offering low-power processing in the fast growing dense server space.

To appreciate what the first ARM-based AMD Opteron processor is designed to deliver to those wanting to deploy racks of servers, it is important to realize that the ARMv8 architecture offers a clean slate on which to build both hardware and software.

ARM’s ARMv8 architecture is much more than a doubling of word-length from previous generation ARMv7 architecture: it has been designed from the ground-up to provide higher performance while retaining the trademark power efficiencies that everyone has come to expect from the ARM architecture. AMD’s “Seattle” processors will have either four or eight cores, packing server-grade features such as support for up to 128 GB of ECC memory, and integrated 10Gb/sec of Ethernet connectivity with AMD’s revolutionary Freedom™ fabric, designed to cater for dense compute systems.

From: AMD Delivers a New Generation of AMD Opteron and Intel Xeon “Ivy Bridge” Processors in its New SeaMicro SM15000 Micro Server Chassis [press release, Sept 10, 2012]

With the new AMD Opteron processor, AMD’s SeaMicro SM15000 provides 512 cores in a ten rack unit system with more than four terabytes of DRAM and supports up to five petabytes of Freedom Fabric Storage. Since AMD’s SeaMicro SM15000 server is ten rack units tall, a one-rack, four-system cluster provides 2,048 cores, 16 terabytes of DRAM, and is capable of supporting 20 petabytes of storage. The new and previously unannounced AMD Opteron processor is a custom designed octal core 2.3 GHz part based on the new “Piledriver” core, and supports up to 64 gigabytes of DRAM per CPU. The SeaMicro SM15000 system with the new AMD Opteron processor sets the high watermark for core density for micro servers.
Configurations based on the AMD Opteron processor and Intel Xeon Processor E3-1265Lv2 (“Ivy Bridge” microarchitecture) will be available in November 2012. …
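For reference, the density figures in the excerpt above can be reproduced with a few lines of arithmetic, using only the numbers stated in the press release:

# Arithmetic behind the SM15000 density figures quoted above
# (all inputs are numbers stated in the press release).
servers_per_system = 64          # octal-core AMD Opteron CPUs per SM15000
cores_per_server = 8             # "Piledriver" cores per CPU
dram_per_server_gb = 64          # up to 64 GB DRAM per CPU
systems_per_rack = 4             # a one-rack, four-system cluster (10 RU each)

cores_per_system = servers_per_system * cores_per_server              # 512
dram_per_system_tb = servers_per_system * dram_per_server_gb / 1024   # 4 TB
print("per system:", cores_per_system, "cores,", dram_per_system_tb, "TB DRAM")
print("per rack  :", systems_per_rack * cores_per_system, "cores,",
      systems_per_rack * dram_per_system_tb, "TB DRAM")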

image

AMD off-chip interconnect fabric IP designed to enable significantly lower TCO

• Links hundreds to thousands of SoC modules

• Shares hundreds of TBs storage and virtualizes I/O

• 160Gbps Ethernet Uplink

• Instruction Set:
– x86
– ARM (coming in 2014 when the fabric will be integrated into the SoCs as well, including the x86 SoCs)

From: SM15000-OP: 64 Octal Core Servers
with AMD Opteron™ processors (2.0/2.3/2.8 GHz, 8 “Piledriver” cores)

image

Freedom™ ASIC 2.0 – Industry’s only Second Generation Fabric Technology
The Freedom™ ASIC is the building block of SeaMicro Fabric Compute Systems, enabling interconnection of energy efficient servers in a 3-dimensional Torus Fabric. The second generation Freedom ASIC includes high performance network interfaces, storage connectivity, and advanced server management, thereby eliminating the need for multiple sets of network adapters, HBAs, cables, and switches. This results in unmatched density, energy efficiency, and lowered TCO. Some of the key technologies in ASIC 2.0 include:
  • SeaMicro Input/Output Virtualization Technology (IOVT™) eliminates all but three components from SeaMicro’s motherboard—CPU, DRAM, and the ASIC itself—thereby shrinking the motherboard, while reducing power, cost and space.
  • SeaMicro’s new TIO™ (Turn It Off) technology enables SeaMicro to further power-optimize the mini motherboard by turning off unneeded CPU and chipset functions. Together, SeaMicro’s I/O Virtualization Technology and TIO technology produce the smallest and most power efficient server motherboards available.
  • SeaMicro Freedom Supercompute Fabric built of multiple Freedom ASICs working together, creating a 1.28 terabits per-second fabric that ties together 64 of the power-optimized mini-motherboards at low latency and low power with massive bandwidth.
  • SeaMicro Freedom Fabric Storage technology allows the Freedom supercompute fabric to extend out of the chassis and across the data center linking not just components inside the chassis, but also those outside as well.

image

Unified Management – Easily Provision and Manage Servers, Network, and Storage Resources on Demand
The SeaMicro SM15000 implements a rich management system providing unified management of servers, network, and storage. Resources can be rapidly deployed, managed, and repurposed remotely, enabling lights-off data center operations. It offers a broad set of management APIs, including an industry-standard CLI, SNMP, IPMI, syslog, and XEN APIs, allowing customers to seamlessly integrate the SeaMicro SM15000 into existing data center management environments.
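Because the interfaces listed above are industry standards, basic out-of-band monitoring needs nothing vendor-specific. The sketch below simply wraps the stock ipmitool utility from Python; the hostname and credentials are placeholders, and whether a given SM15000 deployment exposes IPMI over LAN exactly this way would need to be confirmed against its documentation.

# A minimal sketch of out-of-band monitoring over standard IPMI, shelling out
# to the stock `ipmitool` utility. Host, user and password are placeholders.
import subprocess

def ipmi(host, user, password, *args):
    # Run an ipmitool command against a remote management controller over LAN.
    cmd = ["ipmitool", "-I", "lanplus", "-H", host,
           "-U", user, "-P", password, *args]
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

if __name__ == "__main__":
    HOST, USER, PASSWORD = "sm15000-mgmt.example.com", "admin", "secret"
    print(ipmi(HOST, USER, PASSWORD, "chassis", "status"))   # power state, faults
    print(ipmi(HOST, USER, PASSWORD, "sensor", "list"))      # temperatures, fans, PSUs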
Redundancy and Availability – Engineered from the Ground Up to Eliminate Single Points of Failure
The SeaMicro SM15000 is designed for the most demanding environments, helping to ensure availability of compute, network, storage, and system management. At the heart of the system is the Freedom Fabric, interconnecting all resources in the system, with the ability to sustain multiple points of failure and allow live component servicing. All active components in the system can be configured redundantly and are hot-swappable, including server cards, network uplink cards, storage controller cards, system management cards, disks, fan trays, and power supplies. Key resources can also be configured to be protected in the following ways:
Compute – A shared spare server can be configured to act as a standby spare for multiple primary servers. In the event of failure, the primary server’s personality, including MAC address, assigned disks, and boot configuration can be migrated to the standby spare and brought back online – ensuring fast restoration of services from a remote location.
Network – The highly available fabric ensures network connectivity is maintained between servers and storage in the event of path failure. For uplink high-availability, the system can be configured with multiple uplink modules and port channels providing redundant active/active interfaces.
Storage – The highly available fabric ensures that servers can access fabric storage in the event of failures. The fabric storage system also provides an efficient, high utilization optional hardware RAID to protect data in case of disk failure.
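As a conceptual illustration only (not SeaMicro’s management code), the compute failover described above, in which a standby spare adopts a failed primary’s “personality”, can be pictured like this; all names and values below are invented:

# A conceptual sketch of the compute failover described above: a shared standby
# spare adopts the failed primary's personality (MAC address, assigned disks,
# boot configuration) and is brought back online.
from dataclasses import dataclass

@dataclass
class Personality:
    mac_address: str
    assigned_disks: list
    boot_config: str

@dataclass
class ServerCard:
    slot: str
    personality: Personality = None
    online: bool = False

def fail_over(primary, spare):
    # Migrate the primary's personality to the standby spare.
    primary.online = False
    spare.personality = primary.personality   # MAC, disks, boot config move over
    spare.online = True
    return spare

primary = ServerCard("slot-07",
                     Personality("00:22:99:aa:bb:07", ["vdisk-12", "vdisk-13"],
                                 "pxe:web-tier-image"), online=True)
spare = ServerCard("slot-63")
restored = fail_over(primary, spare)
print("service restored on", restored.slot,
      "with MAC", restored.personality.mac_address)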


The Industry’s First Data Center in a Box
AMD’s SeaMicro SM15000 family of Fabric Compute Systems provides the equivalent of 32 1RU dual socket servers, massive bandwidth, top of rack Ethernet switching, and high capacity shared storage, with centralized management in a small, compact 10RU form factor. In addition, it provides integrated server console management for unified management. The SeaMicro SM15000 dramatically reduces CAPEX and significantly reduces the ongoing OPEX of deploying discrete compute, networking, storage, and management systems.
More information:
An Overview of AMD|SeaMicro Technology [Anil Rao from AMD|SeaMicro, October 2012]
System Overview for the SM15000 Family [Anil Rao from AMD|SeaMicro, October 2012]
What a Difference 0.09 Percent Makes [The Wave Newsletter from AMD, September 2013]
Today’s cloud services have helped companies consolidate infrastructure and drive down costs; however, recent service interruptions point to a big downside of relying on public cloud services. Most are built using commodity, off-the-shelf servers to save costs and are standardized around the same computing and storage SLAs of 99.95 and 99.9 percent. This is significantly lower than the four-nines availability standard in the data networking world. Leading companies are realizing that the performance and reliability of their applications is inextricably linked to their underlying server architecture. In this issue, we discuss the strategic importance of selecting the right hardware. Whether building an enterprise-caliber cloud service or implementing Apache™ Hadoop® to process and analyze big data, hardware matters.
Where Does Software End and Hardware Begin? [The Wave Newsletter from AMD, September 2013]
Lines are blurring between software and hardware with some industry leaders choosing to own both. Software companies are realizing that the performance and value of their software depends on their hardware choices.
Improving Cloud Service Resiliency with AMD’s SeaMicro Freedom Fabric [The Wave Newsletter from AMD, December 2013]
Learn why AMD’s SeaMicro Freedom™ Fabric ASIC is the server industry’s first viable solution to cost-effectively improve the resiliency and availability of cloud-based services.

We realize that having an impressive set of hardware features in the first ARM-based Opteron processors is half of the story, and that is why we are hard at work on making sure the software ecosystem will support our cutting edge hardware. Work on software enablement has been happening throughout the stack – from the UEFI, to the operating system and onto application frameworks and developer tools such as compilers and debuggers. This ensures that the software will be ready for ARM-based servers.

AMD developing Linux on ARM at Linaro Connect 2013 [Charbax YouTube channel, March 11, 2013]

[Recorded at Linaro Connect Asia 2013, March 4-8, 2013] Dr. Leendert van Doorn, Corporate Fellow at AMD, talks about what AMD does with Linaro to optimize Linux on ARM. He talks about the expectations that AMD has for results to come from Linaro in terms of achieving a better and more fully featured Linux world on ARM, especially for the ARM Cortex-A57 ARMv8 processor that AMD has announced for the server market.

AMD’s participation in software projects is well documented, being a gold member of the Linux Foundation, the organization that manages the development of the Linux kernel, and a group member of Linaro. AMD is a gold sponsor of the Apache Foundation, which oversees projects such as Hadoop, HTTP Server and Samba among many others, and the company’s engineers are contributors to the OpenJDK project. This is just a small selection of the work AMD is taking part in, and these projects in particular highlight how important AMD feels that open source software is to the data center, and in particular micro servers, that make use of ARM-based processors.

And running ARM-based processors doesn’t mean giving up on the flexibility of virtual machines, with KVM already ported to the ARMv8 architecture. Another popular hypervisor, Xen, is already available for 32-bit ARM architectures with a 64-bit port planned, ensuring that two popular and highly capable hypervisors will be available.

The Linux kernel has supported the 64-bit ARMv8 architecture since Linux 3.7, and a number of popular Linux distributions have already signaled their support for the architecture, including Canonical’s Ubuntu and the Red Hat-sponsored Fedora distribution. In fact, there is a downloadable, bootable Ubuntu distribution available in anticipation of ARMv8-based processors.

It’s not just operating systems and applications that are available. Developer tools such as the extremely popular open source GCC compiler and the vital GNU C Library (Glibc) have already been ported to the ARMv8 architecture and are available for download. With GCC and Glibc good to go, a solid foundation for developers to target the ARMv8 architecture is forming.
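Once such a distribution is booted on ARMv8 hardware, the basics discussed in this post (a 64-bit AArch64 environment, KVM support in the kernel) can be confirmed from userspace with standard Linux interfaces; a small sketch:

# A small sketch checking, from userspace on Linux, the basics discussed above:
# a 64-bit AArch64 environment and the presence of KVM virtualization support.
import os
import platform
import struct

arch = platform.machine()                  # "aarch64" on a 64-bit ARMv8 kernel
word_bits = struct.calcsize("P") * 8       # pointer size of the running userland
has_kvm = os.path.exists("/dev/kvm")       # present when the KVM module is active

print("architecture :", arch)
print("word size    :", word_bits, "bit")
print("KVM available:", has_kvm)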

All of this work on both hardware and software should shed some light on just how big ARM processors will be in the data center. AMD, an established enterprise semiconductor vendor, is uniquely placed to ship both 64-bit ARMv8 and 64-bit x86 processors that enable “mixed rack” environments. And thanks to the army of software engineers at AMD, as well as others around the world who have committed significant time and effort, the software ecosystem will be there to support these revolutionary processors. 2014 is set to see the biggest disruption in the data center in over a decade, with AMD again at the center of it.

Lawrence Latif is a blogger and technical communications representative at AMD. His postings are his own opinions and may not represent AMD’s positions, strategies or opinions. Links to third party sites, and references to third party trademarks, are provided for convenience and illustrative purposes only. Unless explicitly stated, AMD is not responsible for the contents of such links, and no third party endorsement of AMD or any of its products is implied.

End of AMD’s 64-bit “Seattle” ARM processor brings best of breed hardware and software to the data center [AMD Business blog, Dec 12, 2013]

AMD at ARM Techcon 2013 [Charbax YouTube channel, recorded at the ARM Techcon 2013 (Oct 29-31), published on Dec 25, 2013]

AMD in 2014 will be delivering a 64bit ARM processor for servers. The ARM Architecture and Ecosystem enables servers to achieve greater performance per watt and greater performance per dollar. The code name for the product is Seattle. AMD Seattle is expected to reach mass market cloud servers in the second half of 2014.

From: Advanced Micro Devices’ CEO Discusses Q3 2013 Results – Earnings Call Transcript [Seeking Alpha, Oct 17, 2013]

Rory Read – President and CEO:

The three step turnaround plan we outlined a year ago to restructure, accelerate and ultimately transform AMD is clearly paying off. We completed the restructuring phase of our plan, maintaining cash at optimal levels and beating our $450 million quarterly operating expense goal in the third quarter. We are now in the second phase of our strategy – accelerating our performance by consistently executing our product roadmap while growing our new businesses to drive a return to profitability and positive free cash flow.
We are also laying the foundation for the third phase of our strategy, as we transform AMD to compete across a set of high growth markets. Our progress on this front was evident in the third quarter as we generated more than 30% of our revenue from our semi-custom and embedded businesses. Over the next two years we will continue to transform AMD to expand beyond a slowing, transitioning PC industry, as we create a more diverse company and look to generate approximately 50% of our revenue from these new high growth markets.

We have strategically targeted that semi-custom, ultra-low power client, embedded, dense server and the professional graphics market where we can offer differentiated products that leverage our APU and graphics IP. Our strategy allows us to continue to invest in the product that will drive growth, while effectively managing operating expenses. …

… Several of our growth businesses passed key milestones in the third quarter. Most significantly, our semi-custom business ramped in the quarter. We successfully shipped millions of units to support Sony and Microsoft, as they prepared to launch their next-generation game consoles. Our game console wins are generating a lot of customer interest, as we demonstrate our ability to design and reliably ramp production on two of the most complex SOCs ever built for high-volume consumer devices. We have several strong semi-custom design opportunities moving through the pipeline as customers look to tap into AMD’s IP, design and integration expertise to create differentiated winning solutions. … it’s our intention to win and mix in a whole set semicustom offerings as we build out this exciting and important new business.
We made good progress in our embedded business in the third quarter. We expanded our current embedded SOC offering and detailed our plans to be the only company to offer both 64-bit x86 and ARM solutions beginning in 2014. We have developed a strong embedded design pipeline which, we expect, will drive further growth for this business across 2014.
We also continue to make steady progress in another of our growth businesses in the third quarter, as we delivered our fifth consecutive quarter of revenue and share growth in the professional graphics area. We believe we can continue to gain share in this lucrative part of the GPU market, based on our product portfolio, design wins [in place] [ph] and enhanced channel programs.

In the server market, the industry is at the initial stages of a multiyear transition that will fundamentally change the competitive dynamic. Cloud providers are placing a growing importance on how they get better performance from their datacenters while also reducing the physical footprint and power consumption of their server solution.

This will become the defining metric of this industry and will be a key growth driver for the market and the new AMD. AMD is leading this emerging trend in the server market and we are committed to defining a leadership position.

Earlier this quarter, we had a significant public endorsement of our dense server strategy as Verizon announced a high performance public cloud that uses our SeaMicro technology and Opteron processor. We remain on track to introduce new, low-power X86 and 64-bit ARM processors next year and we believe we will offer the industry leading ARM-based servers. …

Two years ago we were 90% to 95% of our business centered over PCs and we’ve launched the clear strategy to diversify our portfolio taking our IT — leadership IT and Graphics and CPU and taking it into adjacent segment where there is high growth for three, five, seven years and stickier opportunities.
We see that as an opportunity to drive 50% or more of our business over that time horizon. And if you look at the results in the third quarter, we are already seeing the benefits of that opportunity with over 30% of our revenue now coming from semi-custom and our embedded businesses.
We see it is an important business in PC, but its time is changing and the go-go era is over. We need to move and attack the new opportunities where the market is going, and that’s what we are doing.

Lisa Su – Senior Vice President and General Manager, Global Business Units:

We are fully top to bottom in 28 nanometer now across all of our products, and we are transitioning to both 20 nanometer and to FinFETs over the next couple of quarters in terms of designs. We will do 20 nanometer first, and then we will go to FinFETs. …

game console semicustom product is a long life cycle product over five to seven years. Certainly when we look at cost reduction opportunities, one of the important ones is to move technology nodes. So we will in this timeframe certainly move from 28 nanometer to 20 nanometer and now the reason to do that is both for pure die cost savings as well as all the power savings that our customer benefits from. … so expect the cost to go down on a unit basis as we move to 20.

[Regarding] the SeaMicro business, we are very pleased with the pipeline that we have there. Verizon was the first major datacenter win that we can talk about publicly. We have been working that relationship for the last two years. So it’s actually nice to be able to talk about it. We do see it as a major opportunity that will give us revenue potential in 2014. And we continue to see a strong pipeline of opportunities with SeaMicro as more of the datacenter guys are looking at how to incorporate these dense servers into their new cloud infrastructures. …

… As I said the Verizon engagement has lasted over the past two years. So some of the initial deployments were with the Intel processors but we do have significant deployments with AMD Opteron as well. We do see the percentage of Opteron processors increasing because that’s what we’d like to do. …

We’re very excited about the server space. It’s a very good market. It’s a market where there is a lot of innovation and change. In terms of 64-bit ARM, you will see us sampling that product in the first quarter of 2014. That development is on schedule and we’re excited about that. All of the customer discussions have been very positive and then we will combine both the [?x86 and the?]64-bit ARM chip with our SeaMicro servers that will have full solution as well. You will see SeaMicro plus ARM in 2014.

So I think we view this combination of IP as really beneficial to accelerating the dense server market both on the chip side and then also on the solution side with the customer set.

Amazon’s James Hamilton: Why Innovation Wins [AMD SeaMicro YouTube channel, Nov 12, 2012], a video which was included in the Headline News and Events section of Volume 1 (December 2012) of The Wave Newsletter from AMD SeaMicro with the following intro:

James Hamilton, VP and Distinguished Engineer at Amazon called AMD’s co-announcement with ARM to develop 64-bit ARM technology-based processors “A great day for the server ecosystem.” Learn why and hear what James had to say about what this means for customers and the broader server industry.

James Hamilton of Amazon discusses the four basic tenets of why he thinks data center server innovation needs to go beyond just absolute performance. He believes server innovation delivering improved volume economics, storage performance, price/performance and power/performance will win in the end.

AMD Changes Compute Landscape as the First to Bridge Both x86 and ARM Processors for the Data Center [press release, Oct 29, 2012]

Company to Complement x86-based Offerings with New Processors Based on ARM 64-bit Technology, Starting with Server Market

SUNNYVALE, Calif. —10/29/2012

In a bold strategic move, AMD (NYSE: AMD) announced that it will design 64-bit ARM® technology-based processors in addition to its x86 processors for multiple markets, starting with cloud and data center servers. AMD’s first ARM technology-based processor will be a highly-integrated, 64-bit multicore System-on-a-Chip (SoC) optimized for the dense, energy-efficient servers that now dominate the largest data centers and power the modern computing experience. The first ARM technology-based AMD Opteron™ processor is targeted for production in 2014 and will integrate the AMD SeaMicro Freedom™ supercompute fabric, the industry’s premier high-performance fabric.

AMD’s new design initiative addresses the growing demand to deliver better performance-per-watt for dense cloud computing solutions. Just as AMD introduced the industry’s first mainstream 64-bit x86 server solution with the AMD Opteron processor in 2003, AMD will be the only processor provider bridging the x86 and 64-bit ARM ecosystems to enable new levels of flexibility and drive optimal performance and power-efficiency for a range of enterprise workloads.

“AMD led the data center transition to mainstream 64-bit computing with AMD64, and with our ambidextrous strategy we will again lead the next major industry inflection point by driving the widespread adoption of energy-efficient 64-bit server processors based on both the x86 and ARM architectures,” said Rory Read, president and chief executive officer, AMD. “Through our collaboration with ARM, we are building on AMD’s rich IP portfolio, including our deep 64-bit processor knowledge and industry-leading AMD SeaMicro Freedom supercompute fabric, to offer the most flexible and complete processing solutions for the modern data center.”

“The industry needs to continuously innovate across markets to meet customers’ ever-increasing demands, and ARM and our partners are enabling increasingly energy-efficient computing solutions to address these needs,” said Warren East, chief executive officer, ARM. “By collaborating with ARM, AMD is able to leverage its extraordinary portfolio of IP, including its AMD Freedom supercompute fabric, with ARM 64-bit processor cores to build solutions that deliver on this demand and transform the industry.”

The explosion of the data center has brought with it an opportunity to optimize compute with vastly different solutions. AMD is providing a compute ecosystem filled with choice, offering solutions based on AMD Opteron x86 CPUs, new server-class Accelerated Processing Units (APUs) that leverage Heterogeneous Systems Architecture (HSA), and new 64-bit ARM-based solutions.

This strategic partnership with ARM represents the next phase of AMD’s strategy to drive ambidextrous solutions in emerging mega data center solutions. In March, AMD announced the acquisition of SeaMicro, the leader in high-density, energy-efficient servers. With this announcement, AMD will integrate the AMD SeaMicro Freedom fabric across its leadership AMD Opteron x86- and ARM technology-based processors that will enable hundreds, or even thousands of processor clusters to be linked together to provide the most energy-efficient solutions.

“Over the past decade the computer industry has coalesced around two high-volume processor architectures – x86 for personal computers and servers, and ARM for mobile devices,” observed Nathan Brookwood, research fellow at Insight 64. “Over the next decade, the purveyors of these established architectures will each seek to extend their presence into market segments dominated by the other. The path on which AMD has now embarked will allow it to offer products based on both x86 and ARM architectures, a capability no other semiconductor manufacturer can likely match.”

At an event hosted by AMD in San Francisco, representatives from Amazon, Dell, Facebook and Red Hat participated in a panel discussion on opportunities created by ARM server solutions from AMD. A replay of the event can be found here as of 5 p.m. PDT, Oct. 29.

Supporting Resources

  • AMD bridges the x86 and ARM ecosystems for the data center announcement press resources
  • Follow AMD on Twitter at @AMD
  • Follow the AMD and ARM announcement on Twitter at #AMDARM
  • Like AMD on Facebook.

AMD SeaMicro SM15000 with Freedom Fabric Storage [AMD YouTube channel, Sept 11, 2012]

AMD Extends Leadership in Data Center Innovation – First to Optimize the Micro Server for Big Data [press release, Sept 10, 2012]

AMD’s SeaMicro SM15000™ Server Delivers Hyper-efficient Compute for Big Data and Cloud Supporting Five Petabytes of Storage; Available with AMD Opteron™ and Intel® Xeon® “Ivy Bridge”/”Sandy Bridge” Processors
SUNNYVALE, Calif. —9/10/2012
AMD (NYSE: AMD) today announced the SeaMicro SM15000™ server, another computing innovation from its Data Center Server Solutions (DCSS) group that cements its position as the technology leader in the micro server category. AMD’s SeaMicro SM15000 server revolutionizes computing with the invention of Freedom™ Fabric Storage, which extends its Freedom™ Fabric beyond the SeaMicro chassis to connect directly to massive disk arrays, enabling a single ten rack unit system to support more than five petabytes of low-cost, easy-to-install storage. The SM15000 server combines industry-leading density, power efficiency and bandwidth with a new generation of storage technology, enabling a single rack to contain thousands of cores, and petabytes of storage – ideal for big data applications like Apache™ Hadoop™ and Cassandra™ for public and private cloud deployments.
AMD’s SeaMicro SM15000 system is available today and currently supports the Intel® Xeon® Processor E3-1260L (“Sandy Bridge”). In November, it will support the next generation of AMD Opteron™ processors featuring the “Piledriver” core, as well as the newly announced Intel Xeon Processor E3-1265Lv2 (“Ivy Bridge”). In addition to these latest offerings, the AMD SeaMicro fabric technology continues to deliver a key building block for AMD’s server partners to build extremely energy efficient micro servers for their customers.
“Historically, server architecture has focused on the processor, while storage and networking were afterthoughts. But increasingly, cloud and big data customers have sought a solution in which storage, networking and compute are in balance and are shared. In a legacy server, storage is a captive resource for an individual processor, limiting the ability of disks to be shared across multiple processors, causing massive data replication and necessitating the purchase of expensive storage area networking or network attached storage equipment,” said Andrew Feldman, corporate vice president and general manager of the Data Center Server Solutions group at AMD. “AMD’s SeaMicro SM15000 server enables companies, for the first time, to share massive amounts of storage across hundreds of efficient computing nodes in an exceptionally dense form factor. We believe that this will transform the data center compute and storage landscape.”
AMD’s SeaMicro products transformed the data center with the first micro server to combine compute, storage and fabric-based networking in a single chassis. Micro servers deliver massive efficiencies in power, space and bandwidth, and AMD set the bar with its SeaMicro product that uses one-quarter the power, takes one-sixth the space and delivers 16 times the bandwidth of the best-in-class alternatives. With the SeaMicro SM15000 server, the innovative trajectory broadens the benefits of the micro server to storage, solving the most pressing needs of the data center.
Combining the Freedom™ Supercompute Fabric technology with the pioneering Freedom™ Fabric Storage technology enables data centers to provide more than five petabytes of storage with 64 servers in a single ten rack unit (17.5 inches tall) SM15000 system. Once these disks are interconnected with the fabric, they are seen and shared by all servers in the system. This approach provides the benefits typically provided by expensive and complex solutions such as network-attached storage and storage area networking with the simplicity and low cost of direct attached storage.
“AMD’s SeaMicro technology is leading innovation in micro servers and data center compute,” said Zeus Kerravala, founder and principal analyst of ZK Research. “The team invented the micro server category, was the first to bring small-core servers and large-core servers to market in the same system, the first to market with a second-generation fabric, and the first to build a fabric that supports multiple processors and instruction sets. It is not surprising that they have extended the technology to storage. The bringing together of compute and petabytes of storage demonstrates the flexibility of the Freedom Fabric. They are blurring the boundaries of compute, storage and networking, and they have once again challenged the industry with bold innovation.”
Leaders Across the Big Data Community Agree
Dr. Amr Awadallah, CTO and Founder at Cloudera, the category leader that is setting the standard for Hadoop in the enterprise, observes: “The big data community is hungry for innovations that simplify the infrastructure for big data analysis while reducing hardware costs. As we hear from our vast big data partner ecosystem and from customers using CDH and Cloudera Enterprise, companies that are seeking to gain insights across all their data want their hardware vendors to provide low cost, high density, standards-based compute that connects to massive arrays of low cost storage. AMD’s SeaMicro delivers on this promise.”
Eric Baldeschwieler, co-founder and CTO of Hortonworks and a pioneer in Hadoop technology, notes: “Petabytes of low cost storage, hyper-dense energy-efficient compute, connected with a supercompute-style fabric is an architecture particularly well suited for big data analytics and Hortonworks Data Platform. At Hortonworks, we seek to make Apache Hadoop easier to use, consume and deploy, which is in line with AMD’s goal to revolutionize and commoditize the storage and processing of big data. We are pleased to see leaders in the hardware community inventing technology that extends the reach of big data analysis.”
Matt Pfeil, co-founder and VP of customer solutions at DataStax, the leader in real-time mission-critical big data platforms, agrees: “At DataStax, we believe that extraordinary databases, such as Cassandra, running mission-critical applications, can be used by nearly every enterprise. To see AMD’s DCSS group bringing together efficient compute and petabytes of storage over a unified fabric in a single low-cost, energy-efficient solution is enormously exciting. The combination of the SM15000 server and best-in-class database, Cassandra, offer a powerful threat to the incumbent makers of both databases and the expensive hardware on which they reside.”
AMD’s SeaMicro SM15000™ Technology
AMD’s SeaMicro SM15000 server is built around the industry’s first and only second-generation fabric, the Freedom Fabric. It is the only fabric technology designed and optimized to work with Central Processing Units (CPUs) that have both large and small cores, as well as x86 and non-x86 CPUs. Freedom Fabric contains innovative technology including:
  • SeaMicro IOVT (Input/Output Virtualization Technology), which eliminates all but three components from the SeaMicro motherboard – CPU, DRAM, and the ASIC itself – thereby shrinking the motherboard, while reducing power, cost and space;
  • SeaMicro TIO™ (Turn It Off) technology, which enables further power optimization on the mini motherboard by turning off unneeded CPU and chipset functions. Together, SeaMicro IOVT and TIO technology produce the smallest and most power efficient motherboards available;
  • Freedom Supercompute Fabric creates a 1.28 terabits-per-second fabric that ties together 64 of the power-optimized mini-motherboards at low latency and low power with massive bandwidth;
  • SeaMicro Freedom Fabric Storage, which allows the Freedom Supercompute Fabric to extend out of the chassis and across the data center, linking not just components inside the chassis, but those outside as well.
AMD’s SeaMicro SM15000 Server Details
AMD’s SeaMicro SM15000 server will be available with 64 compute cards, each holding a new custom-designed single-socket octal core 2.0/2.3/2.8 GHz AMD Opteron processor based on the “Piledriver” core, for a total of 512 heavy-weight cores per system or 2,048 cores per rack. Each AMD Opteron processor can support 64 gigabytes of DRAM, enabling a single system to handle more than four terabytes of DRAM and over 16 terabytes of DRAM per rack. AMD’s SeaMicro SM15000 system will also be available with a quad core 2.5 GHz Intel Xeon Processor E3-1265Lv2 (“Ivy Bridge”) for 256 2.5 GHz cores in a ten rack unit system or 1,024 cores in a standard rack. Each processor supports up to 32 gigabytes of memory so a single SeaMicro SM15000 system can deliver up to two terabytes of DRAM and up to eight terabytes of DRAM per rack.
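As a sanity check on the figures above, here is a small Python sketch that reproduces the per-system and per-rack core and DRAM totals. The only assumption of mine is that a standard rack holds four of the 10 RU SM15000 systems, which is what the quoted per-rack numbers imply.

```python
# A quick arithmetic check of the SM15000 compute and memory figures quoted above.
# Assumption (mine, not from the press release): a standard rack holds four
# 10 RU SM15000 systems, which is what makes the quoted per-rack numbers work out.

CARDS_PER_SYSTEM = 64
SYSTEMS_PER_RACK = 4  # assumed: four 10 RU systems per standard rack

def totals(cores_per_card, dram_gb_per_card):
    cores_per_system = CARDS_PER_SYSTEM * cores_per_card
    dram_tb_per_system = CARDS_PER_SYSTEM * dram_gb_per_card / 1024
    return (cores_per_system,
            cores_per_system * SYSTEMS_PER_RACK,
            dram_tb_per_system,
            dram_tb_per_system * SYSTEMS_PER_RACK)

# Octal-core "Piledriver" Opteron card: 8 cores and up to 64 GB DRAM each
print(totals(8, 64))   # -> (512, 2048, 4.0, 16.0): cores/system, cores/rack, TB/system, TB/rack
# Quad-core Xeon E3-1265Lv2 card: 4 cores and up to 32 GB DRAM each
print(totals(4, 32))   # -> (256, 1024, 2.0, 8.0)
```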
AMD’s SeaMicro SM15000 server also contains 16 fabric extender slots, each of which can connect to three different Freedom Fabric Storage arrays with different capacities (a quick capacity check follows the list):
  • FS 5084-L is an ultra-dense capacity-optimized storage system. It supports up to 84 SAS/SATA 3.5 inch or 2.5 inch drives in 5 rack units for up to 336 terabytes of capacity per-array and over five petabytes per SeaMicro SM15000 system;
  • FS 2012-L is a capacity-optimized storage system. It supports up to 12 3.5 inch or 2.5 inch drives in 2 rack units for up to 48 terabytes of capacity per-array or up to 768 terabytes of capacity per SeaMicro SM15000 system;
  • FS 2024-S is a performance-optimized storage system. It supports up to 24 2.5 inch drives in 2 rack units for up to 24 terabytes of capacity per-array or up to 384 terabytes of capacity per SM15000 system.
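A back-of-the-envelope check of the storage figures in the list above; the per-drive sizes are inferred from the quoted per-array capacities, everything else comes from the press release.

```python
# Freedom Fabric Storage capacity check: 16 fabric extender slots per SM15000,
# with per-array capacities as quoted above. Drive sizes are inferred, not quoted.

FABRIC_EXTENDER_SLOTS = 16

arrays = {                    # name: (drives per array, TB per array)
    "FS 5084-L": (84, 336),   # implies ~4 TB per drive
    "FS 2012-L": (12, 48),    # implies ~4 TB per drive
    "FS 2024-S": (24, 24),    # implies ~1 TB per drive
}

for name, (drives, tb_per_array) in arrays.items():
    tb_per_system = tb_per_array * FABRIC_EXTENDER_SLOTS
    print("%s: %.0f TB/drive, %d TB (%.2f PB) per SM15000 system"
          % (name, tb_per_array / drives, tb_per_system, tb_per_system / 1000))

# FS 5084-L: 16 x 336 TB = 5,376 TB, i.e. the "more than five petabytes" quoted above.
```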

In summary, AMD’s SeaMicro SM15000 system:

  • Stands ten rack units or 17.5 inches tall;
  • Contains 64 slots for compute cards for AMD Opteron or Intel Xeon processors;
  • Provides up to ten gigabits per-second of bandwidth to each CPU;
  • Connects up to 1,408 solid state or hard drives with Freedom Fabric Storage;
  • Delivers up to 16 10 GbE uplinks or up to 64 1GbE uplinks;
  • Runs standard off-the-shelf operating systems, including Windows® and Linux (such as Red Hat), as well as VMware and Citrix XenServer hypervisors.
Availability
AMD’s SeaMicro SM15000 server with Intel’s Xeon Processor E3-1260L “Sandy Bridge” is now generally available in the U.S. and in select international regions. Configurations based on AMD Opteron processors and Intel Xeon Processor E3-1265Lv2 with the “Ivy Bridge” microarchitecture will be available in November 2012. More information on AMD’s revolutionary SeaMicro family of servers can be found at www.seamicro.com/products.


1. Verizon

Verizon Cloud on AMD’s SeaMicro SM15000 [AMD YouTube channel, Oct 7, 2013]

Find out more about SeaMicro and AMD at http://bit.ly/AMD_SeaMicro Verizon and AMD partner to create an enterprise-class cloud service that was not possible using off the shelf servers. Verizon Cloud is based on the SeaMicro SM15000, the industry’s first and only programmable server hardware. The new services redefine the benchmarks for public cloud computing and storage performance and reliability.

Verizon Cloud Compute and Verizon Cloud Storage [The Wave Newsletter from AMD, December 2013]

With enterprise adoption of public cloud services at 10 percent, Verizon identified a need for a cloud service that was secure, reliable and highly flexible with enterprise-grade performance guarantees. Large, global enterprises want to take advantage of the agility, flexibility and compelling economics of the public cloud, but the performance and reliability are not up to par for their needs. To fulfill this need, Verizon spent over two years identifying and developing software using AMD’s SeaMicro SM15000, the industry’s first and only programmable server hardware. The new services redefine the benchmarks for public cloud computing and storage performance and security.

Designed specifically for enterprise customers, the new services allow companies to use the same policies and procedures across the enterprise network and the public cloud. The close collaboration has resulted in cloud computing services with unheralded performance level guarantees that are offered with competitive pricing. The new cloud services are backed by the power of Verizon, including global data centers, global IP network and enterprise-grade managed security services. The performance and security innovations are expected to accelerate public cloud adoption by the enterprise for their mission critical applications. more >

Verizon Selects AMD’s SeaMicro SM15000 for Enterprise Class Services: Verizon Cloud Compute and Verizon Cloud Storage [AMD-Seamicro press release, Oct 7, 2013]

Verizon and AMD create technology that transforms the public cloud, delivering the industry’s most advanced cloud capabilities

SUNNYVALE, Calif. —10/7/2013

AMD (NYSE: AMD) today announced that Verizon is deploying SeaMicro SM15000™ servers for its new global cloud platform and cloud-based object storage service, whose public beta was recently announced. AMD’s SeaMicro SM15000 server links hundreds of cores together in a single system using a fraction of the power and space of traditional servers. To enable Verizon’s next generation solution, technology has been taken one step further: Verizon and AMD co-developed additional hardware and software technology on the SM15000 server that provides unprecedented performance and best-in-class reliability backed by enterprise-level service level agreements (SLAs). The combination of these technologies co-developed by AMD and Verizon ushers in a new era of enterprise-class cloud services by enabling a higher level of control over security and performance SLAs. With this technology underpinning the new Verizon Cloud Compute and Verizon Cloud Storage, enterprise customers can for the first time confidently deploy mission-critical systems in the public cloud.

“We reinvented the public cloud from the ground up to specifically address the needs of our enterprise clients,” said John Considine, chief technology officer at Verizon Terremark. “We wanted to give them back control of their infrastructure – providing the speed and flexibility of a generic public cloud with the performance and security they expect from an enterprise-grade cloud. Our collaboration with AMD enabled us to develop revolutionary technology, and it represents the backbone of our future plans.”

As part of its joint development, AMD and Verizon co-developed hardware and software to reserve, allocate and guarantee application SLAs. AMD’s SeaMicro Freedom™ fabric-based SM15000 server delivers the industry’s first and only programmable server hardware that includes a high bandwidth, low latency programmable interconnect fabric, and programmable data and control plane for both network and storage traffic. Leveraging AMD’s programmable server hardware, Verizon developed unique software to guarantee and deliver reliability, unheralded performance guarantees and SLAs for enterprise cloud computing services.

“Verizon has a clear vision for the future of the public cloud services—services that are more flexible, more reliable and guaranteed,” said Andrew Feldman, corporate vice president and general manager, Server, AMD. “The technology we developed turns the cloud paradigm upside down by creating a service that an enterprise can configure and control as if the equipment were in its own data center. With this innovation in cloud services, I expect enterprises to migrate their core IT services and mission critical applications to Verizon’s cloud services.”

“The rapid, reliable and scalable delivery of cloud compute and storage services is the key to competing successfully in any cloud market from infrastructure, to platform, to application; and enterprises are constantly asking for more as they alter their business models to thrive in a mobile and analytic world,” said Richard Villars, vice president, Datacenter & Cloud at IDC. “Next generation integrated IT solutions like AMD’s SeaMicro SM15000 provide a flexible yet high-performance platform upon which companies like Verizon can build the next generation of cloud service offerings.”

Innovative Verizon Cloud Capabilities on AMD’s SeaMicro SM15000 Server Industry Firsts

Verizon leveraged the SeaMicro SM15000 server’s ability to disaggregate server resources to create a cloud optimized for computing and storage services. Verizon and AMD’s SeaMicro engineers worked for over two years to create a revolutionary public cloud platform with enterprise class capabilities.

These new capabilities include:

  • Virtual machine server provisioning in seconds, a fraction of the time of a legacy public cloud;
  • Fine-grained server configuration options that match real-life requirements, not just small, medium and large sizing, including processor speed (500 MHz to 2,000 MHz) and DRAM (0.5 GB increments) options (see the configuration sketch after this list);
  • Shared disks across multiple server instances versus requiring each virtual machine to have its own dedicated drive;
  • Defined storage quality of service by specifying performance up to 5,000 IOPS to meet the demands of the application being deployed, compared to best-effort performance;
  • Consistent network security policies and procedures across the enterprise network and the public cloud;
  • Strict traffic isolation, data encryption, and data inspection with full featured firewalls that achieve Department of Defense and PCI compliance levels;
  • Guaranteed network performance for every virtual machine with reserved network performance up to 500 Mbps compared to no guarantees in many other public clouds.
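To make the fine-grained configuration options above concrete, here is a hypothetical Python sketch of a request validator using the limits quoted in the list (500 to 2,000 MHz processor speed, 0.5 GB DRAM increments, up to 5,000 IOPS, up to 500 Mbps reserved network). The function and its parameter names are invented for illustration and are not Verizon Cloud’s actual API.

```python
# Hypothetical illustration only: the limits come from the capability list above,
# but the function and its parameters are invented for this sketch and are not
# Verizon Cloud's actual API.

def validate_vm_request(cpu_mhz, dram_gb, storage_iops, network_mbps):
    errors = []
    if not 500 <= cpu_mhz <= 2000:
        errors.append("processor speed must be between 500 and 2,000 MHz")
    if dram_gb <= 0 or (dram_gb * 2) != int(dram_gb * 2):
        errors.append("DRAM must be requested in 0.5 GB increments")
    if not 0 < storage_iops <= 5000:
        errors.append("storage QoS can be specified up to 5,000 IOPS")
    if not 0 < network_mbps <= 500:
        errors.append("reserved network performance tops out at 500 Mbps")
    return errors

print(validate_vm_request(cpu_mhz=1500, dram_gb=2.5, storage_iops=3000, network_mbps=250))  # -> []
print(validate_vm_request(cpu_mhz=2400, dram_gb=1.3, storage_iops=8000, network_mbps=600))  # -> four errors
```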

The public beta for Verizon Cloud will launch in the fourth quarter. Companies interested in becoming a beta customer can sign up through the Verizon Enterprise Solutions website: www.verizonenterprise.com/verizoncloud.

AMD’s SeaMicro SM15000 Server

AMD’s SeaMicro SM15000 system is the highest-density, most energy-efficient server in the market. In 10 rack units, it links 512 compute cores, 160 gigabits of I/O networking and more than five petabytes of storage with a 1.28 terabits-per-second high-performance supercompute fabric, called Freedom™ Fabric. The SM15000 server eliminates top-of-rack switches, terminal servers, hundreds of cables and thousands of unnecessary components for a more efficient and simple operational environment.

AMD’s SeaMicro server product family currently supports the next generation AMD Opteron™ (“Piledriver”) processor, Intel® Xeon® E3-1260L (“Sandy Bridge”) and E3-1265Lv2 (“Ivy Bridge”) and Intel® Atom™ N570 processors. The SeaMicro SM15000 server also supports the Freedom Fabric Storage products, enabling a single system to connect with more than five petabytes of storage capacity in two racks. This approach delivers the benefits of expensive and complex solutions such as network attached storage (NAS) and storage area networking (SAN) with the simplicity and low cost of direct attached storage.

For more information on the Verizon Cloud implementation, please visit: www.seamicro.com/vzcloud.

About AMD

AMD (NYSE: AMD) designs and integrates technology that powers millions of intelligent devices, including personal computers, tablets, game consoles and cloud servers that define the new era of surround computing. AMD solutions enable people everywhere to realize the full potential of their favorite devices and applications to push the boundaries of what is possible. For more information, visit www.amd.com.

AMD SeaMicro (@SeaMicroInc), 4:01 PM – 10 Dec 2013:

correction…Verizon is not using OpenStack, but they are using our hardware. @cloud_attitude


2. OpenStack

OpenStack 101 – What Is OpenStack? [Rackspace YouTube channel, Jan 14, 2013]

OpenStack is an open source cloud operating system and community founded by Rackspace and NASA in July 2010. Here is a brief look at what OpenStack is, how it works and what people are doing with it. See: http://www.openstack.org/

OpenStack: The Open Source Cloud Operating System

Why OpenStack? [The Wave Newsletter from AMD, December 2013]

OpenStack continues to gain momentum in the market as more and more large, established technology and service companies move from evaluation to deployment. But why has OpenStack become so popular? In this issue, we discuss the business drivers behind the widespread adoption and why AMD’s SeaMicro SM15000 server is the industry’s best choice for a successful OpenStack deployment. If you’re considering OpenStack, learn about the options and hear winning strategies from experts featured in our most recent OpenStack webcasts. And in case you missed it, read about AMD’s exciting collaboration with Verizon enabling them to offer enterprise-caliber cloud services. more >

OpenStack the SeaMicro SM15000 – From Zero to 2,048 Cores in Less than One Hour [The Wave Newsletter from AMD, March 2013]

The SeaMicro SM15000 is optimized for OpenStack, a solution that is being adopted by both public and private cloud operators. Red 5 Studios recently deployed OpenStack on a 48 foot bus to power their new massive multiplayer online game Firefall. The SM15000 uniquely excels for object storage, providing more than 5 petabytes of direct attached storage in two data center racks.  more >

State of the Stack [OpenStack Foundation YouTube channel, recorded on Nov 8 under official title “Stack Debate: Understanding OpenStack’s Future”, published on Nov 9, 2013]

OpenStack in three short years has become one of the most successful, most talked about and most community-driven Open Source projects in history. In this joint presentation Randy Bias (Cloudscaling) and Scott Sanchez (Rackspace) will examine the progress from Grizzly to Havana and delve into new areas like refstack, tripleO, baremetal/Ironic, the move from “projects” to “programs”, and AWS compatibility. They will show updated statistics on project momentum and a deep dive on OpenStack Orchestrate (Heat), which has the opportunity to change the game for OpenStack in the greater private cloud game. The duo will also highlight the challenges ahead of the project and what should be done to avoid failure. Joint presenters: Scott Sanchez, Randy Bias

The biggest issue with the OpenStack project, which “started without a benevolent dictator and/or architect”, was mentioned there (watch from [6:40]) roughly as: “The worst architectural decision you can make is stay with default networking for a production system because the default networking model in OpenStack is broken for use at scale”.

Then Randy Bias summarized that particular issue later in Neutron in Production: Work in Progress or Ready for Prime Time? [Cloudscaling blog, Dec 6, 2013] as:

Ultimately, it’s unclear whether all networking functions ever will be modeled behind the Neutron API with a bunch of plug-ins. That’s part of the ongoing dialogue we’re having in the community about what makes the most sense for the project’s future.

The bottom-line consensus was that Neutron is a work in progress. Vanilla Neutron is not ready for production, so you should get a vendor if you need to move into production soon.

AMD’s SeaMicro SM15000 Is the First Server to Provide Bare Metal Provisioning to Scale Massive OpenStack Compute Deployments [press release, Nov 5, 2013]

Provides Foundation to Leverage OpenStack Compute for Large Networks of Virtualized and Bare Metal Servers

SUNNYVALE, Calif. and Hong Kong, OpenStack Summit —11/5/2013

AMD (NYSE: AMD) today announced that the SeaMicro SM15000™ server supports bare metal features in OpenStack® Compute. AMD’s SeaMicro SM15000 server is ideally suited for massive OpenStack deployments by integrating compute, storage and networking into a 10 rack unit system. The system is built around the Freedom™ fabric, the industry’s premier supercomputing fabric for scale out data center applications. The Freedom fabric disaggregates compute, storage and network I/O to provide the most flexible, scalable and resilient data center infrastructure in the industry. This allows customers to match the compute performance, storage capacity and networking I/O to their application needs. The result is an adaptive data center where any server can be mapped to any hard disk/SSD or network I/O to expand capacity or recover from a component failure.

“OpenStack Compute’s bare metal capabilities provide the scalability and flexibility to build and manage large-scale public and private clouds with virtualized and dedicated servers,” said Dhiraj Mallick, corporate vice president and general manager, Data Center Server Solutions, at AMD. “The SeaMicro SM15000 server’s bare metal provisioning capabilities should simplify enterprise adoption of OpenStack and accelerate mass deployments since not all work loads are optimized for virtualized environments.”

Bare metal computing provides more predictable performance than a shared server environment using virtual servers. In a bare metal environment there are no delays caused by different virtual machines contending for shared resources, since the entire server’s resources are dedicated to a single user instance. In addition, in a bare metal environment the performance penalty imposed by the hypervisor is eliminated, allowing the application software to make full use of the processor’s capabilities.

In addition to leading in bare metal provisioning, AMD’s SeaMicro SM15000 server provides the ability to boot and install a base server image from a central server for massive OpenStack deployments. A cloud image containing KVM, the OpenStack Compute image and other applications can be configured by the central server. The coordination and scheduling of this workflow can be managed by Heat, the orchestration application that manages the entire lifecycle of an OpenStack cloud for bare metal and virtual machines.
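For a sense of what driving such a deployment looks like from the OpenStack side, below is a minimal Python sketch of a “create server” call against the OpenStack Compute (Nova) v2 REST API. The endpoint, token and image/flavor identifiers are placeholders; whether the request lands on a KVM virtual machine or on a dedicated SM15000 bare-metal node depends on how the cloud’s flavors and bare-metal drivers are configured, which the sketch does not attempt to show.

```python
# Minimal sketch of an OpenStack Compute (Nova) v2 "create server" call.
# All identifiers below (endpoint, token, image/flavor IDs) are placeholders;
# in a real deployment they come from Keystone and from the cloud's catalog.
import json
import requests

COMPUTE_ENDPOINT = "http://nova.example.com:8774/v2/<tenant-id>"   # placeholder
AUTH_TOKEN = "<keystone-token>"                                    # placeholder

def create_server(name, image_ref, flavor_ref):
    body = {"server": {"name": name, "imageRef": image_ref, "flavorRef": flavor_ref}}
    resp = requests.post(
        COMPUTE_ENDPOINT + "/servers",
        headers={"X-Auth-Token": AUTH_TOKEN, "Content-Type": "application/json"},
        data=json.dumps(body),
    )
    resp.raise_for_status()
    return resp.json()["server"]["id"]

# A flavor that maps to a dedicated SM15000 compute card would make this a
# bare-metal provision; an ordinary flavor would land on a KVM virtual machine.
server_id = create_server("demo-node", "<image-uuid>", "<flavor-id>")
print("requested server", server_id)
```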

Supporting Resources

Scalable Fabric-based Object Storage with the SM15000 [The Wave Newsletter from AMD, March 2013]

The SeaMicro SM15000 is changing the economics of deploying object storage, delivering the storage of unprecedented amounts of data while using 1/2 the power and 1/3 the space of traditional servers. more >

SwiftStack with OpenStack Swift Overview [SwiftStack YouTube channel, Oct 4, 2012]

SwiftStack manages and operates OpenStack Swift. SwiftStack is built from the ground up for web, mobile and as-a-service applications. Designed to store and serve content for many concurrent users, SwiftStack contains everything you need to set up, integrate and operate a private storage cloud on hardware that you control.

AMD’s SeaMicro SM15000 Server Achieves Certification for Rackspace Private Cloud, Validated for OpenStack [press release, Jan 30, 2013]

Providing unprecedented computing efficiency for “Nova in a Box” and object storage capacity for “Swift in a Rack”


3. Red Hat

OpenStack + SM15000 Server = 1,000 Virtual Machines for Red Hat [The Wave Newsletter from AMD, June 2013]

Red Hat deploys one SM15000 server to quickly and cost effectively build out a high capacity server cluster to meet the growing demands for OpenShift demonstrations and to accelerate sales. Red Hat OpenShift, which runs on Red Hat OpenStack, is Red Hat’s cloud computing Platform-as-a-Service (PaaS) offering. The service provides built-in support for nearly every open source programming language, including Node.js, Ruby, Python, PHP, Perl, and Java. OpenShift can also be expanded with customizable modules that allow developers to add other languages.
more >

Red Hat Enterprise Linux OpenStack Platform: Community-invented, Red Hat-hardened [RedHatCloud YouTube channel, Aug 5, 2013]

Learn how Red Hat Enterprise Linux OpenStack Platform allows you to deploy a supported version of OpenStack on an enterprise-hardened Linux platform to build a massively scalable public-cloud-like platform for managing and deploying cloud-enabled workloads. With Red Hat Enterprise Linux OpenStack Platform, you can focus resources on building applications that add value to your organization, while Red Hat provides support for OpenStack and the Linux platform it runs on.

AMD’s SeaMicro SM15000 Server Achieves Certification for Red Hat OpenStack [press release, June 12, 2013]

BOSTON – Red Hat Summit —6/12/2013

AMD (NYSE: AMD) today announced that its SeaMicro SM15000™ server is certified for Red Hat® OpenStack, and that the company has joined the Red Hat OpenStack Cloud Infrastructure Partner Network. The certification ensures that the SeaMicro SM15000 server provides a rigorously tested platform for organizations building private or public cloud Infrastructure as a Service (IaaS), based on the security, stability and support available with Red Hat OpenStack. AMD’s SeaMicro solutions for OpenStack include “Nova in a Box” and “Swift in a Rack” reference architectures that have been validated to ensure consistent performance, supportability and compatibility.

The SeaMicro SM15000 server integrates compute, storage and networking into a compact, 10 RU (17.5 inches) form factor with 1.28 Tbps supercompute fabric. The technology enables users to install and configure thousands of computing cores more efficiently than any other server. Complex time-consuming tasks are completed within minutes due to the integration of compute, storage and networking. Operational fire drills, such as setting up servers on short notice, manually configuring hundreds of machines and re-provisioning the network to optimize traffic are all handled through a single, easy-to-use management interface.

“AMD has shown leadership in providing a uniquely differentiated server for OpenStack deployments, and we are excited to have them as a seminal member of the Red Hat OpenStack Cloud Infrastructure Partner Network,” said Mike Werner, senior director, ISV and Developer Ecosystems at Red Hat. “The SeaMicro server is an example of incredible innovation, and I am pleased that our customers will have the SM15000 system as an option for energy-efficient, dense computing as part of the Red Hat Certified Solution Marketplace.”

AMD’s SeaMicro SM15000 system is the highest-density, most energy-efficient server in the market. In 10 rack units, it links 512 compute cores, 160 gigabits of I/O networking and more than five petabytes of storage with a 1.28 Terabits-per-second high-performance supercompute fabric, called Freedom™ Fabric. The SM15000 server eliminates top-of-rack switches, terminal servers, hundreds of cables and thousands of unnecessary components for a more efficient and simple operational environment.

“We are excited to be a part of the Red Hat OpenStack Cloud Infrastructure Partner Network because the company has a strong track record of bridging the communities that create open source software and the enterprises that use it,” said Dhiraj Mallick, corporate vice president and general manager, Data Center Server Solutions, AMD. “As cloud deployments accelerate, AMD’s certified SeaMicro solutions ensure enterprises are able to realize the benefits of increased efficiency and simplified operations, providing them with a competitive edge and the lowest total cost of ownership.”

AMD’s SeaMicro server product family currently supports the next-generation AMD Opteron™ (“Piledriver”) processor, Intel® Xeon® E3-1260L (“Sandy Bridge”) and E3-1265Lv2 (“Ivy Bridge”) and Intel® Atom™ N570 processors. The SeaMicro SM15000 server also supports the Freedom Fabric Storage products, enabling a single system to connect with more than five petabytes of storage capacity in two racks. This approach delivers the benefits of expensive and complex solutions such as network attached storage (NAS) and storage area networking (SAN) with the simplicity and low cost of direct attached storage.


4. Ubuntu

Ubuntu Server certified hardware SeaMicro [one of Ubuntu certification pages]

Canonical works closely with SeaMicro to certify Ubuntu on a range of their hardware.

The following are all Certified. More and more devices are being added with each release, so don’t forget to check this page regularly.

Ubuntu on SeaMicro SM15000-OP | Ubuntu [Sept 1, 2013]

Ubuntu on SeaMicro SM15000-XN | Ubuntu [Oct 1, 2013]

Ubuntu on SeaMicro SM15000-XH | Ubuntu [Dec 18, 2013]

Ubuntu OIL announced for broadest set of cloud infrastructure options [Ubuntu Insights, Nov 5, 2013]

Today at the OpenStack Design Summit in Hong Kong, we announced the Ubuntu OpenStack Interoperability Lab (Ubuntu OIL). The programme will test and validate the interoperability of hardware and software in a purpose-built lab, giving Ubuntu OpenStack users the reassurance and flexibility of choice.
We’re launching the programme with many significant partners on board, such as Dell, EMC, Emulex, Fusion-io, HP, IBM, Inktank/Ceph, Intel, LSI, Open Compute, SeaMicro and VMware.
The OpenStack ecosystem has grown rapidly giving businesses access to a huge selection of components for their cloud environments. Most will expect that, whatever choices they make or however complex their requirements, the environment should ‘just work’, where any and all components are interoperable. That’s why we created the Ubuntu OpenStack Interoperability Lab.
Ubuntu OIL is designed to offer integration and interoperability testing as well as validation to customers, ISVs and hardware manufacturers. Ecosystem partners can test their technologies’ interoperability with Ubuntu OpenStack and a range of software and hardware, ensuring they work together seamlessly as well as with existing processes and systems. It means that manufacturers can get to market faster and with less cost, while users can minimise integration efforts required to connect Ubuntu OpenStack with their infrastructure.
Ubuntu is about giving customers choice. Over the last few releases, we’ve introduced new hypervisors, software-defined networking (SDN) stacks, and capabilities for workloads running on different types of public cloud options. Ubuntu OIL will test all of these options as well as other technologies to ensure Ubuntu OpenStack offers the broadest set of validated and supported technology options compatible with user deployments. Ubuntu OIL will test and validate for all supported and future releases of Ubuntu, Ubuntu LTS and OpenStack.
Involvement in the lab is through our Canonical Partner Programme. New partners can sign up here.
Learn more about Ubuntu OIL


5. Big Data, Hadoop

Storing Big Data – The Rise of the Storage Cloud [The Wave Newsletter from AMD, December 2012]

Data is everywhere and growing at unprecedented rates. Each year, there are over one hundred million new Internet users generating thousands of terabytes of data every day. Where will all this data be stored? more >

AMD’s SeaMicro SM15000 Achieves Certification for CDH4, Cloudera’s Distribution Including Apache Hadoop Version 4 [press release, March 20, 2013]

“Hadoop-in-a-Box” package accelerates deployments by providing 512 cores and over five petabytes in two racks

The Hidden Truth: Hadoop is a Hardware Investment [The Wave Newsletter from AMD, September 2013]

Apache Hadoop is a leading software application for analyzing big data, but its performance and reliability are tied to a company’s underlying server architecture. Learn how AMD’s SeaMicro SM15000™ server compares with other minimum scale deployments. more >

The Cortex-A53 as the Cortex-A7 replacement core is succeeding as a sweet-spot IP for various 64-bit high-volume market SoCs to be delivered from H2 CY14 on

… not surprisingly, as it is built on the same micro-architecture. Even Intel will manufacture Cortex-A53 based SoCs for Altera (Stratix 10 FPGA SoCs) in 2015 on its leading edge Tri-Gate (FinFET) 14nm process.

With MediaTek MT6592-based True Octa-core superphones already on the market to beat Qualcomm Snapdragon 800-based ones [‘Experiencing the Cloud’, Dec 21, 2013], MediaTek will follow up with a 4G LTE MT6595 version in January, and with a 64-bit version based on Cortex-A53 instead of Cortex-A7 in H2 CY14. In this way it will be able to compete head-on with the new Qualcomm Snapdragon 410 in the most lucrative high-volume market.

image
According to “Mainland China’s 4G launch: MediaTek on the fast attack” (大陸4G啟動 聯發科快攻) [Commercial Times, Dec 10, 2013]: “MediaTek’s first 4G modem chip, the MT6590, is expected to begin shipping next month. In addition, the 4G single-chip SoC MT6595 appeared earlier this month in customers’ specification sheets, with an 8-core design as its centerpiece; it is not difficult to see MediaTek’s ambition to expand into the high-end market.”

MediaTek delivering 4G LTE chips for verification, says paper [DIGITIMES, Dec 18, 2013]

MediaTek reportedly has delivered its first 4G LTE chip, the MT6590, to potential clients for verification. The chips are expected to begin generating revenues for the IC design house in the first quarter of 2014, according to a Chinese-language Liberty Times report. The MT6590 supports five modes and 10 frequency bands.

The news echoes earlier remarks by MediaTek president Hsieh Ching-chiang stating the company plans to launch 4G chips at year-end 2013 with end-market devices powered by the 4G chips to be available in the first quarter of 2014, the paper added.

Citing data from JPMorgan Chase, the paper said shipments of MediaTek’s first 8-core chip, the MT6592, are higher than expected and shipment momentum is likely to continue into the first quarter of 2014.

The latest news: Chipset vendors to showcase 64-bit smartphone solutions at CES 2014 [DIGITIMES, Dec 23, 2013]

Chipset players including Qualcomm, Nvidia, Marvell Technology and Broadcom all are expected to showcase 64-bit processors for smartphone applications at the upcoming CES 2014 trade show, a move which will add pressure on Taiwan-based MediaTek in its efforts to expand market share with its newly released 8-core CPUs, according to industry sources.

Qualcomm has already unveiled a 64-bit chip, the Snapdragon 410, and is expected to begin sampling in the first half of 2014, according to the company.

Nvidia, which is familiar with 64-bit computing architectures, is expected to start volume production of 64-bit chips for smartphones in the first half of 2014 at the earliest, said industry sources.

Marvell and Broadcom are also expected to highlight their 64-bit chips at CES 2014, kicking off competition in the 64-bit chipset segment, note the sources.

Meanwhile, the vendors, as well as China-based chipset suppliers Spreadtrum Communications and RDA Microelectronics, will also exert efforts to take market share from MediaTek in the entry-level to mid-range chipset segment in 2014, commented the sources.

From: 64-bit smartphones to be ushered in 2014, say sources [DIGITIMES, Dec 11, 2013]

… Qualcomm has also claimed that the Snapdragon 410 will support all major operating systems, including Android, Windows Phone and Firefox OS and that Qualcomm Reference Design versions of the processor will be available to enable rapid development time and reduce OEM R&D, designed to provide a comprehensive mobile device platform. However, the observers noted that the Snapdragon 410 chips are aiming at the mid-range LTE smartphone segment, particularly the sub-CNY1,000 (US$165) sector in China. The launch of the mid-range 64-bit Snapdragon chips also aims to widen its lead against Taiwan-based rival MediaTek in the China market, the sources added. Qualcomm said the Snapdragon 410 processor is expected to be in commercial devices in the second half of 2014. …

Samsung Electronics is also believed to be working on its own 64-bit CPUs in house and expected to launch 64-bit capable flagship models in the first half of 2014 at the earliest, said the observers.

The 64-bit versions of CPUs from MediaTek, Broadcom and Nvidia are likely to come in late 2014 or in 2015, added the sources.

Google is expected to accelerate the upgrading of its Android platform, providing an environment for software developers to work on related 64-bit applications, commented the sources.

Taiwan IC suppliers developing chips for MediaTek smartphone solutions [DIGITIMES, Dec 18, 2013]

MediaTek’s growing shipments of smartphone solutions, which are expected to top 200 million units in 2013 and 300 million units in 2014, have encouraged Taiwan-based suppliers of LCD driver ICs, power management ICs, ambient light sensors, gyroscopes, touchscreen controller ICs and MEMS microphones to develop chips that can be incorporated into these smartphone solutions, according to industry sources.

MediaTek has been focusing its R&D efforts on developments of 4- and 8-core and 4G CPUs as well as wireless chips in order to maintain its competitiveness, while relying on other IC vendors to complete its smartphone solution platforms, the sources noted.

With MediaTek’s smartphone solution shipments expected to reach 30 million units a month in 2014, any suppliers which can deliver IC parts for MediaTek’s smartphone platforms will see their revenues and profits grow substantially in 2014, the sources said.

Qualcomm Technologies Introduces Snapdragon 410 Chipset with Integrated 4G LTE World Mode for High-Volume Smartphones [press release, Dec 9, 2013]

4G LTE, 64-Bit Processing Expands Qualcomm Technologies’ Global Product Offerings and Reference Design Program

SAN DIEGO – December 09, 2013 – Qualcomm Incorporated (NASDAQ: QCOM) today announced that its wholly-owned subsidiary, Qualcomm Technologies, Inc., has introduced the Qualcomm® Snapdragon™ 410 chipset with integrated 4G LTE World Mode. The delivery of faster connections is important to the growth and adoption of smartphones in emerging regions, and Qualcomm Snapdragon chipsets are poised to address the needs of consumers as 4G LTE begins to ramp in China.

The new Snapdragon 410 chipsets are manufactured using 28nm process technology. They feature processors that are 64-bit capable along with superior graphics performance with the Adreno 306 GPU, 1080p video playback and up to a 13 Megapixel camera. Snapdragon 410 chipsets integrate 4G LTE and 3G cellular connectivity for all major modes and frequency bands across the globe and include support for Dual and Triple SIM. Together with Qualcomm RF360 Front End Solution, Snapdragon 410 chipsets will have multiband and multimode support. Snapdragon 410 chipsets also feature Qualcomm Technologies’ Wi-Fi, Bluetooth, FM and NFC functionality, and support all major navigation constellations: GPS, GLONASS, and China’s new BeiDou, which helps deliver enhanced accuracy and speed of Location data to Snapdragon-enabled handsets.

The chipset also supports all major operating systems, including the Android, Windows Phone and Firefox operating systems. Qualcomm Reference Design versions of the processor will be available to enable rapid development time and reduce OEM R&D, designed to provide a comprehensive mobile device platform. The Snapdragon 410 processor is anticipated to begin sampling in the first half of 2014 and expected to be in commercial devices in the second half of 2014.

Qualcomm Technologies also announced for the first time the intention to make 4G LTE available across all of the Snapdragon product tiers. The Snapdragon 410 processor gives the 400 product tier several 4G LTE options for high-volume mobile devices, as the third LTE-enabled solution in the product tier. By offering 4G LTE variants to its entry level smartphone lineup, Qualcomm Technologies ensures that emerging regions are equipped for this transition while also having every major 2G and 3G technology available to them. Qualcomm Technologies offers OEMs and operators differentiation through a rich feature set upon which to build innovative high-volume smartphones for budget-conscious consumers.

“We are excited to bring 4G LTE to highly affordable smartphones at a sub-$150 (~1,000 RMB) price point with the introduction of the Qualcomm Snapdragon 410 processor,” said Jeff Lorbeck, senior vice president and chief operating officer, Qualcomm Technologies, China. “The Snapdragon 410 chipset will also be the first of many 64-bit capable processors as Qualcomm Technologies helps lead the transition of the mobile ecosystem to 64-bit processing.”

Qualcomm Technologies will release the Qualcomm Reference Design (QRD) version of the Snapdragon 410 processor with support for Qualcomm RF360™ Front End Solution. The QRD program offers Qualcomm Technologies’ leading technical innovation, easy customization options, the QRD Global Enablement Solution which features regional software packages, modem configurations, testing and acceptance readiness for regional operator requirements, and access to a broad ecosystem of hardware component vendors and software application developers. Under the QRD program, customers can rapidly deliver differentiated smartphones to value-conscious consumers. There have been more than 350 public QRD-based product launches to date in collaboration with more than 40 OEMs in 18 countries.

Note that just 18 days before that there was the news that Qualcomm Technologies Announces Next Generation Qualcomm Snapdragon 805 “Ultra HD” Processor [press release, Nov 20, 2013]

Mobile Technology Leader Announces its Highest Performance Processor Designed to Deliver the Highest Quality Mobile Video, Camera and Graphics to Qualcomm Snapdragon 800 Tier
NEW YORK – November 20, 2013 – Qualcomm Incorporated (NASDAQ: QCOM) today announced that its subsidiary, Qualcomm Technologies, Inc., introduced the next generation mobile processor of the Qualcomm® Snapdragon™ 800 tier, the Qualcomm Snapdragon 805 processor, which is designed to deliver the highest-quality mobile video, imaging and graphics experiences at Ultra HD (4K) resolution, both on device and via Ultra HD TVs. Featuring the new Adreno 420 GPU, with up to 40 percent more graphics processing power than its predecessor, the Snapdragon 805 processor is the first mobile processor to offer system-level Ultra HD support, 4K video capture and playback and enhanced dual camera Image Signal Processors (ISPs), for superior performance, multitasking, power efficiency and mobile user experiences.
The Snapdragon 805 processor is Qualcomm Technologies’ newest and highest performing Snapdragon processor to date, featuring:
– Blazing fast apps and web browsing and outstanding performance: Krait 450 quad-core CPU, the first mobile CPU to run at speeds of up to 2.5 GHz per core, plus superior memory bandwidth support of up to 25.6 GB/second that is designed to provide unprecedented multimedia and web browsing performance.
– Smooth, sharp user interface and games support Ultra HD resolution: The mobile industry’s first end-to-end Ultra HD solution with on-device display concurrent with output to HDTV; features Qualcomm Technologies’ new Adreno 420 GPU, which introduces support for hardware tessellation and geometry shaders, for advanced 4K rendering, with even more realistic scenes and objects, visually stunning user interface, graphics and mobile gaming experiences at lower power.
– Fast, seamless connected mobile experiences: Custom, efficient integration with either the Qualcomm® Gobi™ MDM9x25 or the Gobi MDM9x35 modem, powering superior seamless connected mobile experiences. The Gobi MDM9x25 chipset announced in February 2013 has seen significant adoption as the first embedded, mobile computing solution to support LTE carrier aggregation and LTE Category 4 with superior peak data rates of up to 150Mbps. Additionally, Qualcomm’s most advanced Wi-Fi for mobile, 2-stream dual-band Qualcomm® VIVE™ 802.11ac, enables wireless 4K video streaming and other media-intensive applications. With a low-power PCIe interface to the QCA6174, tablets and high-end smartphones can take advantage of faster mobile Wi-Fi performance (over 600 Mbps), extended operating range and concurrent Bluetooth connections, with minimal impact on battery life.
– Ability to stream more video content at higher quality using less power: Support for Hollywood Quality Video (HQV) for video post processing, first to introduce hardware 4K HEVC (H.265) decode for mobile for extremely low-power HD video playback.
– Sharper, higher resolution photos in low light and advanced post-processing features: First Gpixel/s throughput camera support in a mobile processor designed for a significant increase in camera speed and imaging quality. Sensor processing with gyro integration enables image stabilization for sharper, crisper photos. Qualcomm Technologies is the first to announce a mobile processor with advanced, low-power, integrated sensor processing, enabled by its custom DSP, designed to deliver a wide range of sensor-enabled mobile experiences.
“Using a smartphone or tablet powered by Snapdragon 805 processor is like having an UltraHD home theater in your pocket, with 4K video, imaging and graphics, all built for mobile,” said Murthy Renduchintala, executive vice president, Qualcomm Technologies, Inc., and co-president, QCT. “We’re delivering the mobile industry’s first truly end-to-end Ultra HD solution, and coupled with our industry leading Gobi LTE modems and RF transceivers, streaming and watching content at 4K resolution will finally be possible.”
The Snapdragon 805 processor is sampling now and expected to be available in commercial devices by the first half of 2014.

The original value proposition was presented in the brief Brian Jeff highlights the ARM® Cortex™-A53 processor [ARMflix YouTube channel, Oct 30, 2012] video as follows:

Brian Jeff highlights the ARM® Cortex™-A53 processor, ARM’s most efficient application processor ever, delivering today’s mainstream smartphone experience in a quarter of the power in the respective process nodes.

The Top 5 Things to Know about Cortex-A53 [Brian Jeff on ‘ARM Connected Community’, Oct 28, 2013]

The Cortex-A53 was introduced to the market in October 2012, delivering the ARMv8 instruction set and significantly increased performance in a highly efficient power and area footprint. It is available for licensing now, and will be deployed in silicon in early 2014 by multiple ARM partners. There are a few key aspects of the Cortex-A53 that developers, OEMs, and SoC designers should know:

1. ARM low power / high efficiency heritage

The ARM9 is the most licensed processor in ARM’s history, with over 250 licenses sold. It identified a very important power/cost sweet spot. The Cortex-A5 (launched in 2009) was designed to fit in the same CPU power and area footprint,

image    ARM926-based feature phone (Nokia E60).

while delivering significantly higher performance and power-efficiency, and bringing it to the modern ARMv7 feature set – software compatibility with the high end of the processor roadmap (then the Cortex-A9).

image

The Cortex-A53 is built around a simple pipeline, 8 stages long with in-order execution like the Cortex-A7 and Cortex-A5 processors that preceded it. An instruction traversing a simple pipeline requires fewer registers and switches less logic to fetch, decode, issue, execute, and write back the results than a more complex pipeline microarchitecture. Simpler pipelines are smaller and lower power. The high efficiency Cortex-A CPU product line, consisting of Cortex-A5, Cortex-A7, and Cortex-A53, takes a design approach prioritizing efficiency first, then seeking as much performance as possible at the maximum efficiency. The added performance in each successive generation in this series comes from advances in the memory system, increasing dual-issue capability, expanded internal busses, and improved branch prediction.

2. ARM v8-A Architecture

The Cortex-A53 is fully compliant with the ARMv8-A architecture, which is the latest ARM architecture and introduces support for 64b operation while maintaining 100% backward compatibility with the broadly deployed ARMv7 architecture. The processor can switch between AArch32 and AArch64 modes of operation to allow 32bit apps and 64bit apps to run together on top of a 64bit operating system. This dual execution state support allows maximum flexibility for developers and SoC designers in managing the rollout of 64bit support in different markets. ARMv8-A brings additional features (more registers, new instructions) that bring increased performance and Cortex-A53 is able to take advantage of these.

3. Higher performance than Cortex-A9: smaller and more efficient too

The Cortex-A9 features an out-of-order pipeline, dual issue capability, and a longer pipeline than Cortex-A53 that enables 15% higher frequency operation. However, the Cortex-A53 achieves higher single thread performance by pushing a simpler design farther – some of the key factors enabling the performance of the Cortex-A53 include the integrated low latency level 2 cache, the larger 512 entry main TLB, and the complex branch predictor. The Cortex-A9 has set the bar for the high end of the smartphone market through 2012 – by matching and exceeding that level of performance in a smaller footprint and power budget, the Cortex-A53 delivers performance to entry level devices that was previously enjoyed by high-end flagship mobile devices, in a lower power budget and at lower cost. The graph below compares the single thread performance of the high efficiency Cortex-A processors with the Cortex-A9. At the same frequency, Cortex-A53 delivers more than 20% higher instruction throughput than the Cortex-A9 for representative workloads.

image

4. Supports big.LITTLE with Cortex-A57

The Cortex-A53 is architecturally identical to the higher performance Cortex-A57 processor, and can be integrated with it in a big.LITTLE processor subsystem. big.LITTLE enables peak performance and extreme efficiency by distributing work to the right-sized processor for the task at hand.

It is described in more detail here – Ten Things to Know About big.LITTLE

image

The diagram above shows Cortex-A53 combined with Cortex-A57 and a Mali-T628 graphics processor in an example system. The CCI-400 cache coherent interconnect allows the two CPU clusters to be combined in a seamless way that allows software to manage the task allocation in a highly transparent way, as described in <link – software>. The big.LITTLE system enables peak performance at low average power.
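As a rough illustration of “distributing work to the right-sized processor”, here is a toy Python sketch that places tasks on the Cortex-A53 (LITTLE) or Cortex-A57 (big) cluster based on an assumed load threshold. This is purely conceptual; the threshold, task model and policy are invented for the example and do not represent ARM’s actual big.LITTLE software (the Linux scheduler and switcher are far more sophisticated).

```python
# Toy illustration of the big.LITTLE idea: run light work on the efficient
# Cortex-A53 cluster and move heavy work to the Cortex-A57 cluster.
# The threshold and task model are invented for this sketch.

BIG_THRESHOLD = 0.6   # fraction of a LITTLE core's capacity (assumed value)

def place_tasks(tasks):
    """tasks: dict of task name -> estimated load (0.0 .. 1.0 of a LITTLE core)."""
    placement = {"Cortex-A53 (LITTLE)": [], "Cortex-A57 (big)": []}
    for name, load in tasks.items():
        cluster = "Cortex-A57 (big)" if load > BIG_THRESHOLD else "Cortex-A53 (LITTLE)"
        placement[cluster].append(name)
    return placement

example = {"audio playback": 0.1, "background sync": 0.2,
           "web page render": 0.8, "game physics": 0.9}
print(place_tasks(example))
# Light tasks stay on the LITTLE cluster; demanding ones go to big, which is
# what lets the system idle at low power yet still reach peak performance.
```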

Cortex-A53 is also ideal for standalone use, delivering excellent performance at very low power and area and enabling new features to be supported in the low-cost smartphone segments. Our new LITTLE processor packs a performance punch.

Read more about that in a somewhat humorous blog on Cortex-A53 from the product launch – ARM Cortex-A53 — Who You callin’ LITTLE?

5. Extensive feature set for broad application support

The Cortex-A53 includes a feature set that allows it to be configured and optimized through physical implementation tailored to mobile SoCs and to scalable enterprise systems:

Mobile Features

  • AMBA 4 ACE coherent bus
  • big.LITTLE processing (2 CPU clusters) with the CCI-400 interconnect
  • Small area, low power design, optimized for a <150 mW envelope
  • ECC and parity available, but configurable if not needed

Enterprise Features

  • AMBA 5 CHI coherent bus
  • Scalable to 4 or more coherent CPU clusters for low-cost servers or networking infrastructure devices – 16-core systems with CCN-504 or 32-core systems with CCN-508, all on a single silicon die
  • Small area, low power design; likely still optimized for 150 mW, though higher performance implementations can be used
  • ECC and parity protection required for enterprise applications

See also:

ARM Cortex-A53 — Who You callin’ LITTLE? [Brian Jeff on ‘ARM Connected Community’, Oct 30, 2013]

I may only weigh in at just over half a square millimeter on die, but I can handle a heavy workload and I pack quite a processing punch, and frankly I’m tired of the lack of respect I get as a “LITTLE” processor. I am the Cortex™-A53 processor from ARM, some of you may have previously known me by my code name “Apollo”. Despite being three times as efficient as my big brother, the Cortex-A57, and delivering more performance than today’s current heavyweight champ the Cortex-A9, I am often overlooked.

Processor designers and consumers alike look to the big core, the top end MHz figure, and the number of big processors in the system when they evaluate devices like premium smartphones and tablets. What they don’t realize is that I’m the one running during most of the time the mobile applications cluster is awake, and I’m the one that will enable improvements in battery life even as delivered peak performance increases dramatically. It is high time that the LITTLE processor gets the respect and appreciation that is due.

I’m speaking not just for myself here, but for my close cousin the Cortex-A7. We’re built from the same DNA, so to speak, sharing the same 8-stage pipeline and in-order structure. We both consume about the same level of power on our respective production process nodes, and although I bring added performance and support 64-bit, we are both quite alike. We are 100% code compatible for 32-bit code after all. And yet we don’t get the respect we deserve. It is an injustice, really.

In high-end mobile devices, my cousin the Cortex-A7 is always telling me how everyone wants to hear about how fast the Cortex-A15 is in the system, how many Cortex-A15 CPUs are in the system, and how many Mali™ GPU cores are built into the SoC. They don’t even notice if there are four Cortex-A7 cores in the design capable of delivering plenty of performance — more performance than a lot of smartphones in the market today. They just expect battery life to improve without giving any credit to the LITTLE processor that makes it possible.

Well they will soon see… big.LITTLE processors are coming into the market next year, nearly sampling already, and the capability of the LITTLE processor will be in full view, let me tell you.

Oh, and another thing — in the enterprise space, what they call “big Iron” — there is almost no recognition of the worth of small processors there. Sure, new designs are considering LITTLE processors in many-core topologies with ARM’s CoreLink™ Cache Coherent Network (CCN) interconnect, but look at the products that are deployed today — they are mostly based on big cores, the bigger the better. Nowhere is this more evident than in the server space, where IT managers brag about how big their server racks are. Just wait and see. New server processors are being developed based on ARM, where even my big brother the Cortex-A57 is about an order of magnitude smaller and lower power than the incumbent processors. I’m in a different weight class altogether, but I can hang with the big boys on total performance. Purpose-built servers using lots of Cortex-A53 cores can deliver even more aggregate performance in a given power and thermal envelope. But are we LITTLE cores getting much attention in servers today? No. Well just watch and see. In 2015 when the first Cortex-A50 series 64-bit processors are built for lower power servers, you won’t be able to help but notice that LITTLE processors can get key jobs done in a lot less energy.

So I may be the same size relative to my Cortex-A57 big brother as the Cortex-A7 is to the Cortex-A15, but OEMs and consumers better not underestimate me. I’ve been going through intensive work these past 2 years to build up my muscles in the places that count: my SIMD performance is way up thanks to the improved NEON™ architectural support in ARMv8 and a much wider NEON datapath. I can dual-issue almost anything. My memory system is also juiced up, as is my branch predictor capability. That’s how I can pack a bigger punch than Cortex-A9 at around a quarter the power in our respective process nodes.

That’s all I’m saying, man. You gotta respect the LITTLE processor.

Peace.

AnandTech Live with ARM’s Peter Greenhalgh [anandshimpi YouTube channel, Dec 20, 2013]

A live chat with ARM Fellow and Lead Architect on Cortex A53, Peter Greenhalgh

From the earlier: Answered by the Experts: ARM’s Cortex A53 Lead Architect, Peter Greenhalgh [AnandTech, Dec 17, 2013]

Cortex-A53 has been designed to be able to easily replace Cortex-A7. For example, Cortex-A53 supports the same bus-interface standards (and widths) as Cortex-A7, which allows a partner who has already built a Cortex-A7 platform to rapidly convert to Cortex-A53.

A Cortex-A53 cluster only supports up to 4-cores. If more than 4-cores are required in a platform then multiple clusters can be implemented and coherently connected using an interconnect such as CCI-400. The reason for not scaling to 8-cores per cluster is that the L2 micro-architecture would need to either compromise energy-efficiency in the 1-4 core range to achieve performance in the 4-8 core range, or compromise performance in the 4-8 core range to maximise energy-efficiency in the 1-4 core range.
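My own insert here: a quick sanity check (a sketch, nothing more) that the 16- and 32-core CCN configurations mentioned in the enterprise feature list above line up with the 4-core-per-cluster limit described here.

```python
import math

MAX_CORES_PER_CLUSTER = 4  # Cortex-A53 cluster limit stated above

def clusters_needed(total_cores: int) -> int:
    """Minimum number of coherently connected Cortex-A53 clusters for a core count."""
    return math.ceil(total_cores / MAX_CORES_PER_CLUSTER)

print(clusters_needed(16))  # 4 clusters, e.g. around a CCN-504
print(clusters_needed(32))  # 8 clusters, e.g. around a CCN-508
```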

We expect to see a range of platform configurations using Cortex-A53. A 4+4 Cortex-A53 platform configuration is fully supported and a logical progression from a 4+4 Cortex-A7 platform.

We’re pretty happy with the 8-stage (integer) Cortex-A53 pipeline and it has served us well across the Cortex-A53, Cortex-A7 and Cortex-A5 family. So far it’s scaled nicely from 65nm to 16nm and frequencies approaching 2GHz so there’s no reason to think this won’t hold true in the future.

Cortex-A53 has the same pipeline length as Cortex-A7 so I would expect to see similar frequencies when implemented on the same process geometry. Within the same pipeline length the design team focussed on increasing dual-issue, in-order performance as far as we possibly could. This involved symmetric dual-issue of most of the instruction set, more forwarding paths in the datapaths, reduced issue latency, larger & more associative TLB, vastly increased conditional and indirect branch prediction resources and expanded instruction and data prefetching. The result of all these changes is an increase in SPECInt-2000 performance from 0.35-SPEC/Mhz on Cortex-A7 to 0.50-SPEC/Mhz on Cortex-A53. This should provide a noticeable performance uplift on the next generation of smartphones using Cortex-A53.
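My own insert here: taking the quoted per-clock figures at face value, the uplift works out as below (simple arithmetic only, using the numbers in this answer and the 1.4GHz clock mentioned in the next one).

```python
spec_per_mhz_a7 = 0.35    # SPECint2000/MHz quoted for Cortex-A7
spec_per_mhz_a53 = 0.50   # SPECint2000/MHz quoted for Cortex-A53
freq_mhz = 1400           # example clock from the following answer

uplift = spec_per_mhz_a53 / spec_per_mhz_a7 - 1
print(f"Per-clock uplift: {uplift:.0%}")                                        # ~43%
print(f"Cortex-A7  at 1.4GHz: ~{spec_per_mhz_a7 * freq_mhz:.0f} SPECint2000")   # ~490
print(f"Cortex-A53 at 1.4GHz: ~{spec_per_mhz_a53 * freq_mhz:.0f} SPECint2000")  # ~700
```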

Due to the power-efficiency of Cortex-A53 on a 28nm platform, all 4 cores can comfortably be executing at 1.4GHz in less than 750mW which is easily sustainable in a current smartphone platform even while the GPU is in operation.
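My own insert here: the 750 mW figure is for the whole quad-core cluster, so per core it works out roughly as follows (simple division over the quoted numbers; the even per-core split is an assumption, since shared L2 and interconnect power is not broken out).

```python
cluster_power_mw = 750  # quoted: four Cortex-A53 cores executing at 1.4 GHz on a 28nm platform
cores = 4

per_core_mw = cluster_power_mw / cores
print(f"~{per_core_mw:.0f} mW per core at 1.4 GHz")  # ~188 mW upper bound, L2 etc. included
```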

The performance per watt (energy efficiency) of Cortex-A53 is very similar to Cortex-A7. Certainly within the variation you would expect with different implementations. Largely this is down to learning from Cortex-A7 which was applied to Cortex-A53 both in performance and power.

Intel to make ARM Processors, firstly 64bit 14nm ARM Cortex-A53 ARMv8 for Altera [Charbax YouTube channel, Oct 31, 2013]

Nathan Brookwood is an Analyst and Research Fellow at Insight 64; he is the source for the Forbes article http://www.forbes.com/sites/jeanbaptiste/2013/10/29/exclusive-intel-opens-fabs-to-arm-chips/ The new Intel CEO has changed Intel’s policy, deciding that it is now acceptable to manufacture ARM processors in Intel’s fabs. Intel could now go on to make ARM processors for Apple, Qualcomm, Nvidia, AMD or someone else, and possibly even for itself, releasing a whole range of Intel ARM processors if it wants some reach into smartphones, tablets, ARM laptops, smart TVs, ARM desktops and ARM servers; whether Intel contributes to each of those ARM categories itself, in addition to fabricating for chip makers, depends on what the new Intel CEO decides is the right thing for the company to do.

Altera Announces Quad-Core 64-bit ARM Cortex-A53 for Stratix 10 SoCs [press release, Oct 29, 2013]

Manufactured on Intel’s 14 nm Tri-Gate Process, Altera Stratix® 10 SoCs Will Deliver Industry’s Most Versatile Heterogeneous Computing Platform

image

Santa Clara, Calif., ARM TechCon, October 29, 2013 – Altera Corporation (NASDAQ: ALTR) today announced that its Stratix 10 SoC devices, manufactured on Intel’s 14 nm Tri-Gate process, will incorporate a high-performance, quad-core 64-bit ARM Cortex™-A53 processor system, complementing the device’s floating-point digital signal processing (DSP) blocks and high-performance FPGA fabric. Coupled with Altera’s advanced system-level design tools, including OpenCL, this versatile heterogeneous computing platform will offer exceptional adaptability, performance, power efficiency and design productivity for a broad range of applications, including data center computing acceleration, radar systems and communications infrastructure.

From: Intel fabs Altera’s Stratix 10 FPGA with four ARM A53 cores [SemiAccurate, Nov 5, 2013]: Altera representatives at Techcon said that the beast would tape out in Q4/2014 or about a year from now.

From: Pigs Fly. Altera Goes with ARM on Intel 14nm [SemiWiki.com, Oct 29, 2013]:

I asked Altera about the schedule for all of this. Currently they have over 100 customers using the beta release of their software to model their applications in the Stratix 10. They have taped out a test-chip that is currently in the Intel fab. In the first half of next year they will have a broader release of the software to everyone. They will tape out the actual designs late in 2014 and have volume production starting in early 2015.

Why did they pick this processor? It has the highest power efficiency of any 64-bit processor. Plus it is backwards compatible with previous Altera families which used (32-bit) ARM Cortex-A9. The A53 has a 32-bit mode that is completely binary compatible with the A9. As I reported last week from the Linley conference, ARM is on a roll into communications infrastructure, enterprise and datacenter so there is a huge overlap between the target markets for the A53 and the target markets for the Stratix 10 SoCs.

The ARM Cortex-A53 processor, the first 64-bit processor used on a SoC FPGA, is an ideal fit for use in Stratix 10 SoCs due to its performance, power efficiency, data throughput and advanced features. The Cortex-A53 is among the most power efficient of ARM’s application-class processors, and when delivered on the 14 nm Tri-Gate process will achieve more than six times the data throughput of today’s highest performing SoC FPGAs. The Cortex-A53 also delivers important features, such as virtualization support, 256TB memory reach and error correction code (ECC) on L1 and L2 caches. Furthermore, the Cortex-A53 core can run in 32-bit mode, which will run Cortex-A9 operating systems and code unmodified, allowing a smooth upgrade path from Altera’s 28 nm and 20 nm SoC FPGAs.

“ARM is pleased to see Altera adopting the lowest power 64-bit architecture as an ideal complement to DSP and FPGA processing elements to create a cutting-edge heterogeneous computing platform,” said Tom Cronk, executive vice president and general manager, Processor Division, ARM. “The Cortex-A53 processor delivers industry-leading power efficiency and outstanding performance levels, and it is supported by the ARM ecosystem and its innovative software community.”

Leveraging Intel’s 14 nm Tri-Gate process and an enhanced high-performance architecture, Altera Stratix 10 SoCs will have a programmable-logic performance level of more than 1GHz; two times the core performance of current high-end 28 nm FPGAs.

“High-end networking and communications infrastructure are rapidly migrating toward heterogeneous computing architectures to achieve maximum system performance and power efficiency,” said Linley Gwennap, principal analyst at The Linley Group, a leading embedded research firm. “What Altera is doing with its Stratix 10 SoC, both in terms of silicon convergence and high-level design tool support, puts the company at the forefront of delivering heterogeneous computing platforms and positions them well to capitalize on myriad opportunities.”

By standardizing on ARM processors across its three-generation SoC portfolio, Altera will offer software compatibility and a common ARM ecosystem of tools and operating system support. Embedded developers will be able to accelerate debug cycles with Altera’s SoC Embedded Design Suite (EDS) featuring the ARM Development Studio 5 (DS-5™) Altera® Edition toolkit, the industry’s only FPGA-adaptive debug tool, as well as use Altera’s software development kit (SDK) for OpenCL to create heterogeneous implementations using the OpenCL high-level design language.

“With Stratix 10 SoCs, designers will have a versatile and powerful heterogeneous compute platform enabling them to innovate and get to market faster,” said Danny Biran, senior vice president, corporate strategy and marketing at Altera. “This will be very exciting for customers as converged silicon continues to be the best solution for complex, high-performance applications.”

About Altera

Altera® programmable solutions enable designers of electronic systems to rapidly and cost effectively innovate, differentiate and win in their markets. Altera offers FPGAs, SoCs, CPLDs, ASICs and complementary technologies, such as power management, to provide high-value solutions to customers worldwide. Follow Altera via Facebook, Twitter, LinkedIn, Google+ and RSS, and subscribe to product update emails and newsletters. altera.com

My Altera will use Intel Custom Foundry’s 14 nm Tri-Gate (FinFET) process services to produce its new high-end SoC FPGA with 64-bit ARM Cortex-A53 IP [‘Experiencing the Cloud’, Nov 1, 2013] post was already answering in detail the following questions that arose from the above announcement:

  1. Why FPGAs? Why more FPGAs?
  2. Why SoC FPGAs?
  3. Why ARM with FPGA on the Intel Tri-Gate (FinFET) process, and why now?
  4. OpenCL for FPGAs
  5. Altera SoC FPGAs

MediaTek MT6592-based True Octa-core superphones are on the market to beat Qualcomm Snapdragon 800-based ones UPDATE: from $147+ in Q1 and $132+ in Q2

… prices are starting as low as $247 in China (ZOPO Black 2, sold outside as ZP998)
UPDATE: China market: Prices of octa-core smartphones drifting below CNY1,000 [US$165] [DIGITIMES, Jan 27, 2014]

The battle for the entry-level smartphone segment in China is intensifying, with Coolpad releasing an octa-core model priced below CNY1,000 (US$165), according to industry sources.

The Coolpad Great God F1, one of two 8-core smartphones released by Coolpad recently, comes with a MediaTek 1.7GHz 8-core MT6592 processor, a 5-inch display with 720p resolution and a 13-megapixel camera, and a price tag of only CNY888 (US$147).

China-based vendors including ZTE, Huawei, Lenovo, TCL and Gionee have launched 8-core smartphones with prices ranging from CNY1,699-1,999 (US$280-330).

My own insert here: Currently the cheapest one on the market outside China is the Ulefone U9592 : http://www.fastcardtech.com/Ulefone-U9592
image

Ulefone U9592
The cheapest MTK6592 Smart Phone so far With IPS screen & 8.0M camera

The Ulefone U9592 is the cheapest MTK6592 smartphone so far, but it has the best performance on the hardware, as you can see in the review. The quality of the display is really good, even better than 720p. 5.0-inch capacitive touch screen, 854×480; MTK6592 Cortex-A7 octa-core CPU, 1.7GHz; 2GB RAM + 16GB ROM; dual camera: 2.0MP front camera and 8.0MP back camera with flashlight; dual SIM card, dual standby.

This video is from another vendor,
GeekBuying selling it for $200 (with free shipping).

Coolpad’s aggressive pricing will force other vendors to slash their prices soon, commented the sources.

Xiaomi Technology also plans to launch an 8-core model in the second quarter of 2014, and market sources believe that Xiaomi is likely to tag the price of its 8-core model at CNY799 (US$132).

The keen competition in the 8-core segment could also affect pricing for the 4G LTE smartphone market, said the sources, adding that prices of mainstream LTE models will fall to around CNY1,500 (US$248) in the first half of 2014 and drop to below CNY1,000 (US$165) in the second half of the year.

Demand for low-cost entry-level LTE smartphones from China Mobile, and fierce competition among LTE chipset suppliers including Qualcomm, Marvell Technology, MediaTek and Spreadtrum Communications will also accelerate price erosion of LTE smartphones, added the sources.

And here is the case of a global brand: Alcatel One Touch Idol X+ 5″ 1080p with MT6592 Octa Core [Charbax YouTube channel, Jan 17, 2014], list price indication given to PCMag was: “Alcatel projected a ballpark price point of below $300.”

Based on the newest fastest yet Mediatek MT6592 Octa core ARM Cortex-A7, with a 5″ Full HD IPS LCD display, thin and light form factor, this is the highest yet performance from MediaTek, Alcatel One Touch is a very rapidly growing Smartphone brand.

END OF UPDATE

Detailed MT6592 SoC information is in Eight-core MT6592 for superphones and big.LITTLE MT8135 for tablets implemented in 28nm HKMG are coming from MediaTek to further disrupt the operations of Qualcomm and Samsung [‘Experiencing the Cloud’, July 20-29, 2013]. See also MediaTek True Octa-core [MediaTek technology page, July 22, 2013].

MT6592 True Octa-core: Performance Benchmark [mediateklab YouTube channel, Dec 20, 2013]; its Chinese version was made available on Youku on Nov 23, 2013. The competitor’s quad-core at 2.3GHz is obviously the Snapdragon 800.

MT6592 delivers a perfect balance of performance and power consumption. See how the performance of the eight-core MT6592 (2GHz) compares to a quad-core (2.3GHz) smartphone over a period of time in benchmark test.

MT6592 True Octa-Core: Thermal Benchmark [mediateklab YouTube channel, Dec 20, 2013]

See how the temperature of the eight-core MT6592 compares to a leading quad-core smartphone in our high-tech "hot chocolate" test.

MT6592 True Octa-Core : Low Power Benchmark

With its combination of performance-driven and energy-efficient cores, MT6592 makes much more effective use of battery power.

MediaTek Launches MT6592 True Octa-Core Mobile Platform [MediaTek press release, Nov 20, 2013]

The MT6592 is the world’s first heterogeneous computing SOC with scalable eight-core processing for superior multi-tasking, industry-leading multimedia and excellent performance-per-watt.
TAIWAN, Hsinchu – 20 November, 2013 – MediaTek Incorporated (2454:TT) today unveiled the MT6592, the world’s first true octa-core mobile platform. The MediaTek MT6592 System on a Chip (SOC) combines an advanced eight-core application processor with industry-leading multimedia capabilities and mobile connectivity for a perfect balance of performance and power consumption.
The greater computational capabilities of the MediaTek MT6592 deliver premium gaming performance, advanced multi-tasking and enhanced web browsing for high-end smartphones and tablets. The MT6592 builds on the success of existing MediaTek quad-core mobile platforms, which have revolutionized price-performance efficiency for mobile devices, and is expected to be available in devices running Android ‘Jelly Bean’ by the end of 2013. MT6592 enabled mobile devices running Android ‘Kit-Kat’ are expected in early 2014.
Building on the advanced 28nm HPM high-performance process, the MT6592 has eight CPU cores, each capable of clock speeds up to 2GHz. The true octa-core architecture is fully scalable, and the MT6592 runs both low-power and more demanding tasks equally effectively by harnessing the full capabilities of all eight cores in any combination. An advanced MediaTek scheduling algorithm also monitors temperature and power consumption to ensure optimum performance at all times.
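My own insert here: MediaTek has not published its scheduling algorithm, so the sketch below is only an illustration of the general idea in the paragraph above (scale the number of active cores with demand, and back off when temperature or power limits are hit); every threshold and name in it is hypothetical.

```python
# Illustrative load/thermal-aware core governor (NOT MediaTek's actual algorithm).
# All thresholds below are made up for illustration.

MAX_CORES = 8           # MT6592 has eight Cortex-A7 cores
TEMP_LIMIT_C = 85.0     # hypothetical thermal threshold
POWER_BUDGET_MW = 2500  # hypothetical sustained power budget

def active_cores(load: float, temp_c: float, power_mw: float) -> int:
    """Pick how many cores to keep online; load is utilisation of online cores in [0, 1]."""
    wanted = max(1, round(load * MAX_CORES))  # scale core count roughly with demand
    if temp_c > TEMP_LIMIT_C or power_mw > POWER_BUDGET_MW:
        wanted = max(1, wanted - 2)           # back off when over thermal/power limits
    return min(MAX_CORES, wanted)

print(active_cores(load=0.2, temp_c=55.0, power_mw=800))   # light use -> 2 cores
print(active_cores(load=1.0, temp_c=70.0, power_mw=2300))  # heavy use -> 8 cores
print(active_cores(load=1.0, temp_c=90.0, power_mw=2600))  # throttled -> 6 cores
```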
The MT6592 features a world-class multimedia subsystem with a quad-core graphics engine, an advanced video playback system supporting Ultra-HD 4Kx2K H.264 video playback and support for new video codecs such as H.265 and VP9, a 16-megapixel camera and a Full HD display. The SOC also features MediaTek ClearMotion™ technology for automatic frame-rate conversion of standard 24/30fps video to high-quality 60fps video for significantly smoother playback.
Enhancing mobile performance still further, the MT6592 incorporates the MediaTek advanced multi-mode cellular modem and a full connectivity capability for dual-band 802.11n Wi-Fi, Miracast screen-sharing as well as Bluetooth, GPS and an FM tuner.
In addition to MediaTek’s leadership in Heterogeneous Multi-Processing (HMP) in CPU, all of its mobile SOC’s including the MT6592 have been using a Heterogeneous Computing (HC) architecture, distributing the workload to different kinds of processors and other specialized computing engines to optimize performance.  These HC building blocks include the CPU, GPU, DSP, multiple connectivity engines, multiple multimedia engines, camera engines, display engines, navigation, and sensor cores. MediaTek is committed to apply the best-in-class technologies to each of these building blocks.
“We are thrilled to offer the new MT6592 to our customers as part of our ongoing commitment to providing inclusive mobile technology,” said Jeffrey Ju, MediaTek General Manager, Smartphone Business Unit. “The MT6592 delivers longer battery life, low-latency response times and the best possible mobile multimedia experience. Being the first to market with this advanced eight-core SOC is testament to the industry-leading position of MediaTek.”
“MediaTek has taken a pioneering position with the MT6592 by being the first to use the power-efficient ARM® Cortex®-A7 processor in an octa-core configuration with the ARM Mali™ GPU,” said Noel Hurley, ARM Vice President of Strategy and Marketing, Processor Division. “We are delighted that our partnership with MediaTek continues to deliver new and innovative mobile consumer products, extending our low-power and high-performance leadership in mobile devices.”
                                                                                               ###
About MediaTek Inc.
MediaTek Inc. is a leading fabless semiconductor company for wireless communications and digital multimedia solutions. The company is a market leader and pioneer in cutting-edge SOC system solutions for wireless communications, high-definition TV, optical storage, and DVD and Blu-ray products. Founded in 1997 and listed on Taiwan Stock Exchange under the “2454” code, MediaTek is headquartered in Taiwan and has sales or research subsidiaries in Mainland China, Singapore, India, United States, Japan, Korea, Denmark, England, Sweden and Dubai. For more information, visit MediaTek’s website at www.mediatek.com.

Gameloft Modern Combat 5 True Octa Core vs Quad Core Comparison [techand trickz YouTube channel, Nov 26, 2013]

Gameloft teams up with MediaTek to unleash stunning graphical gameplay for Modern Combat 5 [MediaTek press release, Nov 18, 2013]

Gameloft to use latest True Octa-Core MT6592 to bring mobile gaming to the next level
Paris – November 18, 2013 – Gameloft, a leading global publisher of digital and social games, and MediaTek, a leading fabless semiconductor company specializing in wireless communications and digital multimedia solutions, announce that the hotly anticipated Modern Combat 5 will be optimized on the new MT6592 octa-core smartphone chip, for Android smartphones. The MT6592, MediaTek’s latest innovation, is the first true octa-core processor in the world, and Gameloft’s next title, Modern Combat 5, will be the first game optimized for the new chip. As mobile gaming moves toward highly detailed and realistic gameplay, higher-performance chipsets are required. Specific features of the new Modern Combat 5 include definition levels not seen before, especially in the technically difficult areas of water distortion effects, reflections and shadowing.
Modern Combat 5 is a fast-moving, visually exciting action game played across various terrains and conditions. MT6592 allows for continuous scrolling in high definition with attention to detail from soft particle display to enhanced depth of field to create a more immersive experience.
“We’re thrilled to expand our collaboration with MediaTek,” said Ludovic Blondel, Vice President OEM at Gameloft. “This new octa-core system on a chip is focused on high performance and is one of the best mobile technologies on today’s market. We are delighted to showcase this innovative, high-end technology in Modern Combat 5, one of our most awaited games of 2014.”
“With the rapid development of mobile Internet applications and services, mobile gaming has become one of the leading value-added services for our customers and the best medium to experience the power of True Octa-Core with our MT6592 chip,” said Jeffrey Ju, General Manager of MediaTek Smartphone Business Unit. “Our partnership with Gameloft on Modern Combat 5 is a major breakthrough for the industry and gaming community, as we empower the ultimate gaming experience that can be enjoyed anywhere, anytime.”
Modern Combat 5 will be available on all smartphone models equipped with the MT6592 chip, and will be available for download from the Google Play Store in early 2014.
                                                                                                 ###
About Gameloft
A leading global publisher of digital and social games, Gameloft® has established itself as one of the top innovators in its field since 2000. Gameloft creates games for all digital platforms, including mobile phones, smartphones and tablets (including Apple® iOS and Android® devices), set-top boxes and connected TVs. Gameloft operates its own established franchises such as Asphalt®, Order & Chaos, Modern Combat, and Dungeon Hunter, and also partners with major rights holders including Universal®, Illumination Entertainment®, Disney®, Marvel®, Hasbro®, FOX®, Mattel® and Ferrari®. Gameloft is present on all continents, distributes its games in over 100 countries and employs over 5,000 developers. Gameloft is listed on NYSE Euronext Paris (NYSE Euronext: GFT.PA, Bloomberg: GFT FP, Reuters: GLFT.PA). Gameloft’s sponsored Level 1 ADR (ticker: GLOFY) is traded OTC in the US.

Current (Dec 22, 2013) MT6592-based smartphones in PDAdb.net:

Coolpad 9976A ???

  • Release Date: November, 2013
  • OS: Google Android 4.2.1 Chinese
  • CPU: 32bit MediaTek MT6592, 1638MHz
  • Memory: 2048MiB RAM, 7629MiB ROM
  • Display: 7″ 1200×1920 pixel
  • Cellular Phone: dual cellular operation (Dual standby)
  • Physical Attributes: 104.5 x 185 x 7.6 mm, 263 g

O2 Super K1  [RMB 2,199 – $362]

  • Release Date: November, 2013
  • OS: Google Android 4.3
  • CPU: 32bit MediaTek MT6592, 1700MHz
  • Memory: 2048MiB RAM, 30518MiB ROM
  • Display: 5.7″ 1080×1920 pixel color IPS TFT
  • Cellular Phone: dual cellular operation (Dual standby)
  • Physical Attributes: 79.5 x 155 x 8.2 mm, 175 g

THL W11 Monkey King II
[RMB 1,899 – $318]

  • Release Date: November, 2013
  • OS: Google Android 4.3
  • CPU: 32bit MediaTek MT6592, 2000MHz
  • Memory: 2048MiB RAM, 30518MiB ROM
  • Display: 5″ 1080×1920 pixel color IPS TFT
  • Cellular Phone: dual cellular operation (Single standby)
  • Physical Attributes: 71.2 x 144 x 8.6 mm, 155 g

Uniscope XC2S 
[RMB 1,699 – $280]

  • Release Date: December, 2013
  • OS: Google Android 4.2.2 Aliyun OS
  • CPU: 32bit MediaTek MT6592, 1664MHz
  • Memory: 2048MiB RAM, 30517MiB ROM
  • Display: 5″ 1080×1920 pixel color IPS TFT
  • Cellular Phone: dual cellular operation (Single standby)
  • Physical Attributes: 68.5 x 139.5 x 8.44 mm

THL T100s Ironman ???

  • Release Date: December, 2013
  • OS: Google Android 4.2.2
  • CPU: 32bit MediaTek MT6592, 1700MHz
  • Memory: 2048MiB RAM, 30518MiB ROM
  • Display: 5″ 1080×1920 pixel color IPS TFT
  • Cellular Phone: dual cellular operation (Dual standby)
  • Physical Attributes: 70.4 x 144.2 x 8.8 mm, 144 g

UMI X2S  ???

  • Release Date: December, 2013
  • OS: Google Android 4.2.2
  • CPU: 32bit MediaTek MT6592, 1664MHz
  • Memory: 2048MiB RAM, 30518MiB ROM
  • Display: 5″ 1080×1920 pixel
  • Cellular Phone: dual cellular operation (Dual standby)

Newman K18 16GB ???

  • Release Date: December, 2013
  • OS: Google Android 4.2.2
  • CPU: 32bit MediaTek MT6592, 1700MHz
  • Memory: 2048MiB RAM, 15259MiB ROM
  • Display: 5″ 1080×1920 pixel color IPS TFT
  • Cellular Phone: dual cellular operation (Dual standby)
  • Physical Attributes: 69.9 x 144 x 6.1 mm, 120 g

Newman K18 32GB  ???

  • Release Date: December, 2013
  • OS: Google Android 4.2.2
  • CPU: 32bit MediaTek MT6592, 1664MHz
  • Memory: 2048MiB RAM, 30518MiB ROM
  • Display: 5″ 1080×1920 pixel color IPS TFT
  • Cellular Phone: dual cellular operation (Dual standby)
  • Physical Attributes: 69.9 x 144 x 6.1 mm, 120 g

GiONEE Elife E7 mini 16GB ???

  • Release Date: December, 2013
  • OS: Google Android 4.3
  • CPU: 32bit MediaTek MT6592, 1700MHz
  • Memory: 1024MiB RAM, 15259MiB ROM
  • Display: 4.7″ 720×1280 pixel color IPS TFT
  • Cellular Phone: dual cellular operation (Dual standby)
  • Physical Attributes: 66 x 139.8 x 8.6 mm

Zopo ZP998  [internally as Zopo Black 2 for RMB 1,499 – $247]

  • Release Date: January, 2014
  • OS: Google Android 4.2.2
  • CPU: 32bit MediaTek MT6592, 1664MHz
  • Memory: 2048MiB RAM, 30518MiB ROM
  • Display: 5.5″ 1080×1920 pixel color IPS TFT
  • Cellular Phone: dual cellular operation (Dual standby)

Alcatel One Touch Idol X+ (TCL S960T)  [RMB 1,999 – $329]

  • Release Date: January, 2014
  • OS: Google Android 4.2.2
  • CPU: 32bit MediaTek MT6592, 2000MHz
  • Memory: 2048MiB RAM, MiB ROM
  • Display: 5″ 1080×1920 pixel color IPS TFT
  • Cellular Phone: GSM850, GSM900, GSM1800,..
  • Physical Attributes: 69.1 x 140.4 x 7.9 mm, 125 g

Huawei Ascend G750-T00 / Honor 3X / Glory 4 
[RMB 1,698 – $280]

  • Release Date: January, 2014
  • OS: Google Android 4.2.2 Chinese
  • CPU: 32bit MediaTek MT6592, 1664MHz
  • Memory: 2048MiB RAM, 7630MiB ROM
  • Display: 5.5″ 720×1280 pixel color IPS TFT
  • Cellular Phone: dual cellular operation (Dual standby)

The case of the most ambitious newcomer, ZOPO:

Next Step of ZOPO-Return Banquet of Partners of ZOPO Draws to a Successful Conclusion [ZOPOMOBILE YouTube channel, Aug 31, 2013]

Speech from Kevin Xu, CEO of ZOPO at 2013 ZOPO-Return Banquet of Partners of ZOPO Draws to a Successful Conclusion Edited by official authorized zopomobileshop.com http://www.zopomobileshop.com
From: On August 30, 2013, the Return Banquet of Partners (global market) of Shenzhen ZOPO Communications-equipment Co., Ltd. was held at The Pavilion Hotel, Shenzhen, China. More than 50 people attended the banquet, including Mr. Kevin Xu, President of ZOPO Communications-equipment Co., Ltd., Mr. Allen Cao, senior manager, Mr. Shawn Sun, executive director of zopomobileshop.com, and representatives of various resellers, such as dx.com, efox-shop.com, lightinthebox.com and other retail businesses.
The afternoon banquet started with Mr. Allen Cao, senior manager of the international market, delivering his thanksgiving remarks to the guests on behalf of ZOPO Communications-equipment Co., Ltd., thanking the partners in the various fields for their constant trust and support. He presented to the partners the achievements of the accelerated development of the ZOPO mobile phone business in the global market in 2012 and 2013. ZOPO already has 4 official distributors in Europe: France, Germany, Italy and Spain. ZOPO has also built up strategic partnerships with more than 10 e-businesses, such as zopomobileshop.com, pandawill.com, eBay, PayPal, AliExpress and others. Mr. Cao gave special thanks to the zopomobileshop.com team, appreciating that Ms. Jessica Tang and the Zopomobileshop team provide global customers a channel to understand ZOPO and a reliable service. Afterwards, Mr. Kevin Xu, President of the company, introduced its direction for future development: becoming “a reliable and professional smart phone supplier by providing users phones with the latest tech”. He confirmed that ZOPO will be the first factory to release a smartphone with 8 cores. Furthermore, the ZP980 and C2 will get an update to a 2nd generation version, and a version with a better price will come out soon. Then Mr. Jay Wang, CEO of Pandawill.com, gave a speech as the partners’ representative.
The return banquet of partners of ZOPO Communications-equipment Co., Ltd. ended with a dinner. Mr. Kevin Xu, President of ZOPO Communications-equipment Co., Ltd., and Mr. Allen Cao, senior manager, drank with all the guests, toasting together to a bright and beautiful future. The party thus drew to a successful conclusion with happy wishes.

Zopo – Factory Testing of Zopo C2 Mobile Phone [Digital Playworld YouTube channel, July 31, 2013]

http://www.digitalplayworld.co.uk A short video showing some of the quality testing that the Zopo factory put their mobile phones through. To order yours in the UK visit http://www.digitalplayworld.co.uk

Zopo Factory Tour — How Popular Zopo 990, 980 Phones Be Made [Jody Elife YouTube channel, Nov 19, 2013]

Recently, the Antelife team was honored to receive an invitation from Zopo to visit the Zopo factory, reveal every detail you want to know, and show you how Zopo phones are made. More Zopo phones here: http://www.antelife.com/catalogsearch/result/?q=zopo

ZOPO ZP998 AnTuTu Benchmark [ZOPOMOBILE YouTube channel, Dec 17, 2013]

ZOPO zp998 Octa Core NFC Test – Zopomobileshop [ZOPOMOBILE YouTube channel, Dec 17, 2013]

From: http://www.zoposhop.com/officialZOPO-ZP998-XiaoHei-II-MTK6592-Octa-Core-1-7GHz-5-5-inch-FHD-Screen-14-0M-Camera-Smart-Phone-With-OTG-NFC-5G-WIFI-Air

Pre-order ZOPO ZP998 FIRST TRUE 1.7GHz Eight-core 2GRAM+32 ROM MTK6592T 14.0MP CAMERA (Delivery after 30days)

image

image

image

image

image

image

image

image

image

image

image

image

image

Intel’s HPC-like exascale approach to next-gen Big Data as well

or we’ll need 1000x more compute (Exascale) than we have today, and we can do that via a proper exascale architecture for general purpose computing (i.e. without the special purpose computing approaches proposed by Intel competitors) – this is the latest message from Intel.

Just two recent headlines from the media:

Then two other headlines reflecting another aspect of Intel’s move:

Referring to: Chip Shot: Intel Reveals More Details of Its Next Generation Intel® Xeon Phi™ Processor at SC’13 [Intel Newsroom, Nov 19, 2013]

Today at the Supercomputing Conference in Denver, Intel discussed form factors and memory configuration details of the next generation Intel® Xeon Phi™ processor (code named “Knights Landing”). The new revolutionary design will be based on the leading edge 14nm manufacturing technology and will be available as a host CPU with high-bandwidth memory on a processor package. This first-of-a-kind, highly integrated, many-core CPU will be more easily programmable for developers and improve performance by removing “off-loading” to PCIe devices, and increase cost effectiveness by reducing the number of components compared to current solutions. The company has also announced collaboration with the HPC community designed to deliver customized products to meet the diverse needs of customers, and introduced new Intel® HPC Distribution for Apache Hadoop* and Intel® Cloud Edition for Lustre* software tools to bring the benefits of Big Data analytics and HPC together. View the tech briefing.

image

High-bandwidth In-Package Memory:
Performance for memory-bound workloads
Flexible memory usage models

image

image

From: Intel Brings Supercomputing Horsepower to Big Data Analytics [press release, Nov 19, 2013]

  • Intel discloses form factors and memory configuration details of the CPU version of the next generation Intel® Xeon Phi™ processor (code named “Knights Landing”), to ease programmability for developers while improving performance.

During the Supercomputing Conference (SC’13), Intel unveiled how the next generation Intel Xeon Phi product (codenamed “Knights Landing“), available as a host processor, will fit into standard rack architectures and run applications entirely natively instead of requiring data to be offloaded to the coprocessor. This will significantly reduce programming complexity and eliminate “offloading” of the data, thus improving performance and decreasing latencies caused by memory, PCIe and networking.

Knights Landing will also offer developers three memory options to optimize performance. Unlike other Exascale concepts requiring programmers to develop code specific to one machine, new Intel Xeon Phi processors will provide the simplicity and elegance of standard memory programming models.

In addition, Intel and Fujitsu recently announced an initiative that could potentially replace a computer’s electrical wiring with fiber optic links to carry Ethernet or PCI Express traffic over an Intel® Silicon Photonics link. This enables Intel Xeon Phi coprocessors to be installed in an expansion box, separated from host Intel Xeon processors, but function as if they were still located on the motherboard. This allows for much higher density of installed coprocessors and scaling the computer capacity without affecting host server operations.

Several companies are already adopting Intel’s technology. For example, Fovia Medical*, a world leader in volume rendering technology, created high-definition, 3D models to help medical professionals better visualize a patient’s body without invasive surgery. A demonstration from the University of Oklahoma’s Center for Analysis and Prediction of Storms (CAPS) showed a 2D simulation of an F4 tornado, and addressed how a forecaster will be able to experience an immersive 3D simulation and “walk around a storm” to better pinpoint its path. Both applications use Intel® Xeon® technology.

Intel @ SC13 [HPCwire YouTube channel, Nov 22, 2013]

Intel presents technical computing solutions from SC13 in Denver, CO. [The CAPS demo is from [4:00] on]

From: Exascale Challenges and General Purpose Processors [Intel presentation, Oct 24, 2013]

CERN Talk 2013 presentation by Avinash Sodani, Chief Architect, Knights Landing Processor, Intel Corporation

The demand for high performance computing will continue to grow exponentially, driving to Exascale in 2018/19. Among the many challenges that Exascale computing poses, power and memory are two important ones. There is a commonly held belief that we need special purpose computing to meet these challenges. This talk will dispel this myth and show how general purpose computing can reach the Exascale efficiencies without sacrificing the benefits of general purpose programming. It will talk about future architectural trends in Xeon-Phi and what it means for the programmers.
About the speaker
Avinash Sodani is the chief architect of the future Xeon-Phi processor from Intel called Knights Landing. Previously, Avinash was one of the primary architects of the first Core i7/i5 processor (called Nehalem). He also worked as a server architect for Xeon line of products. Avinash has a PhD in Computer Architecture from University of Wisconsin-Madison and a MS in Computer Science from the same university. He has a B.Tech in Computer Science from Indian Institute of Technology, Kharagpur in India.

image

Summary

  • Many challenges to reach Exascale – Power is one of them 
  • General purpose processors will achieve Exascale power efficiencies
    – Energy/op trend show bridgeable gap of ~2x to Exascale (not 50x)
  • General purpose programming allows use of existing tools and programming methods. 
  • Effort needed to prepare SW to utilize Xeon-Phi’s full compute capability. But optimized code remains portable for general purpose processors.
  • More integration over time to reduce power and increase reliability

From: Intel Formally Introduces Next-Generation Xeon Phi “Knights Landing” [X-bit labs, Nov 19, 2013]

According to a slide from an Intel presentation that leaked to the web earlier this year, Intel Xeon Phi code-named Knights Landing will be released sometime in late 2014 or in 2015.

image

The most important aspect of the Xeon Phi “Knights Landing” product is its performance, which is expected to be around or over 3 TFLOPS double precision, or 14 – 16 GFLOPS/W; up significantly from ~1 TFLOPS per current Knights Corner chip (4 – 6 GFLOPS/W). Keeping in mind that Knights Landing is 1.5 – 2 years away, a threefold performance increase seems significant and enough to compete against its rivals. For example, Nvidia Corp.’s Kepler has 5.7 GFLOPS/W DP performance, whereas its next-gen Maxwell (competitor for KNL) will offer something between 8 GFLOPS/W and 16 GFLOPS/W.
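My own insert here: working backwards from the quoted throughput and efficiency figures gives the implied power envelopes below (a rough sanity check only; it uses nothing beyond the numbers in the paragraph above).

```python
def implied_power_w(dp_tflops: float, gflops_per_watt: float) -> float:
    """Implied power from double-precision throughput and efficiency."""
    return dp_tflops * 1000.0 / gflops_per_watt

# Knights Landing: ~3 TFLOPS DP at 14-16 GFLOPS/W
print(f"KNL: {implied_power_w(3, 16):.0f}-{implied_power_w(3, 14):.0f} W")  # ~188-214 W

# Knights Corner: ~1 TFLOPS DP at 4-6 GFLOPS/W
print(f"KNC: {implied_power_w(1, 6):.0f}-{implied_power_w(1, 4):.0f} W")    # ~167-250 W
```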

More from: Intel Brings Supercomputing Horsepower to Big Data Analytics [press release, Nov 19, 2013]

  • New Intel® HPC Distribution for Apache Hadoop* and Intel® Cloud Edition for Lustre* software tools bring the benefits of Big Data analytics and HPC together.
  • Collaboration with HPC community designed to deliver customized products to meet the diverse needs of customers.

High Performance Computing for Data-Driven Discovery
Data intensive applications including weather forecasting and seismic analysis have been part of the HPC industry from its earliest days, and the performance of today’s systems and parallel software tools have made it possible to create larger and more complex simulations. However, with unstructured data accounting for 80 percent of all data, and growing 15 times faster than other data [1], the industry is looking to tap into all of this information to uncover valuable insight.

Intel is addressing this need with the announcement of the Intel® HPC Distribution for Apache Hadoop* software (Intel® HPC Distribution) that combines the Intel® Distribution for Apache Hadoop software with Intel® Enterprise Edition of Lustre* software to deliver an enterprise-grade solution for storing and processing large data sets. This powerful combination allows users to run their MapReduce applications, without change, directly on shared, fast Lustre-powered storage, making it fast, scalable and easy to manage.

The Intel® Cloud Edition for Lustre* software is a scalable, parallel file system that is available through the Amazon Web Services Marketplace* and allows users to pay-as-you go to maximize storage performance and cost effectiveness. The software is ideally suited for dynamic applications, including rapid simulation and prototyping. In the case of urgent or unplanned work that exceeds a user’s on-premise compute or storage performance, the software can be used for cloud bursting HPC workloads to quickly provision the infrastructure needed before moving the work into the cloud.

With numerous vendors announcing pre-configured and validated hardware and software solutions featuring the Intel Enterprise Edition for Lustre, at SC’13, Intel and its ecosystem partners are bringing turnkey solutions to market to make big data processing and storage more broadly available, cost effective and easier to deploy. Partners announcing these appliances include Advanced HPC*, Aeon Computing*, ATIPA*, Boston Ltd.*, Colfax International*, E4 Computer Engineering*, NOVATTE* and System Fabric Works*.

* Other names and brands may be claimed as the property of others.

[1] From IDC Digital Universe 2020 (2013)

Mark Seager: Approaching Big Data as a Technical Computing Usage Model [ieeeComputerSociety YouTube channel, recorded on October 29, published on November 12, 2013]

Mark Seager, CTO for technical computing at Intel, discusses the amazing new capabilities that are spreading across industries and reshaping the world. Watch him describe the hardware and software underlying much of the parallel processing that drives the big data revolution in his talk at the IEEE Computer Society’s “Rock Stars of Big Data” event, which was held 29 October 2013 at the Computer History Museum in Santa Clara, CA. Mark leads the HPC strategy for Intel’s High Performance Computing division. He is working on an ecosystem approach to develop and build HPC systems for Exascale and new storage paradigms for Big Data systems. Mark managed the Platforms portion of the Advanced Simulation and Computing (ASC) program at Lawrence Livermore National Laboratory (LLNL) and successfully developed with industry partners and deployed five generations of TOP1 systems. In addition, Mark developed the LLNL Linux strategy and award winning industry partnerships in storage and Linux systems developments. He has won numerous awards including the prestigious Edward Teller Award for “Major Contributions to the State-of-the-Art in High Performance Computing.”

From: Discover Your Parallel Universe [The Data Stack blog from Intel, Nov 18, 2013]

That’s Intel’s theme at SC’13 this week at the 25th anniversary of the Supercomputing Conference. We’re using it to emphasize the importance of modernizing codes and algorithms to take advantage of modern processors (think lots of cores and threads and wide vector units found in Intel Xeon processors and Intel Xeon Phi coprocessors). Or simply put, “going parallel” as we like to call it. We have a fantastic publication called Parallel Universe Magazine for more on the software and hardware side of going parallel.

But we’re also using it as inspiration for the researchers, scientists, and engineers that are changing the world every day. We’re asking them to envision the universe we’ll live in if the supercomputing community goes parallel. A few examples:

  1. In a parallel universe there is a cure
  2. In a parallel universe natural disasters are predicted
  3. In a parallel universe ideas become reality

Pretty lofty huh? But also inevitable. We will find a 100% cure for all forms of cancer according to the National Cancer Institute. We will be able to predict the weather 28 days in advance according to the National Oceanic and Atmospheric Administration. And everyone will eventually use computing to turn their ideas into products.

The only problem is it’ll be the year 2190 before we have a cure for pancreatic cancer, we’ll need 1000x more compute (Exascale) than we have today to predict the weather 28 days in advance, and the cost and learning curve of technical computing will need to continue to drop before everyone has access.

That’s our work here at Intel. We solve these problems. We drive more performance at lower cost which gives people more compute. The more compute, the better cancer researchers will understand the disease. We’ll shift that 2190 timeline left. We’ll also solve the challenges to reaching Exascale levels of compute which will make weather forecast more accurate. And we’ll continue to drive open standards. This will create a broad ecosystem of hardware and software partners which drives access on a broad scale.

From: Criteria for a Scalable Architecture 2013 OFA Developer Workshop, Monterey, CA [keynote on 2013 OpenFabrics International Developer Workshop, April 21-24, 2013]
By
Mark Seager, CTO for the HPC Ecosystem, Intel Technical Computing Group

In this video from the 2013 Open Fabrics Developer Workshop, Mark Seager from Intel presents: Criteria for a Scalable Achitecture. Learn more at: https://www.openfabrics.org/press-room/2013-intl-developer-workshop.html

image
………………………………………………………..
Exascale Systems Challenges are both Interconnect and SAN

• Design with system focus that enables end-user applications
• Scalable hardware
– Simple, Hierarchical
– New storage hierarchy with NVRAM
• Scalable Software
– Factor and solve
– Hierarchical with function shipping
• Scalable Apps
– Asynchronous comms and IO
– In-situ, in-transit and post processing/visualization

Summary

• Integration of memory and network into processor will help keep us on the path to Exascale
• Energy is the overwhelming challenge. We need a balanced attack that optimizes energy under real user conditions
• B:F and memory/core while they have their place, they can also result in impediments to progress
• Commodity interconnect can deliver scalability through improvements in Bandwidth, Latency and message rates
………………………………………………………..

SAN: Storage Area Network     Ci: Compute nodes     NVRAM: Non-Volatile RAM
OSNj: ?Operating System and Network?    SNk: ?Storage Node?

Lustre: the dominant parallel file system for HPC and ‘Big Data’

Moving Lustre Forward: Status & Roadmap [RichReport YouTube channel, Dec 2, 2013]

In this video from the DDN User Meeting at SC13, Brent Gorda from the Intel High Performance Data Division presents: “Moving Lustre Forward: Status & Roadmap.” Learn more: http://www.whamcloud.com/about/ and http://ddn.com

Intel Expands Software Portfolio for Big Data Solutions [press release, June 12, 2013]

New Intel® Enterprise Edition for Lustre* Software Designed to Simplify Big Data Management, Storage

NEWS HIGHLIGHTS

  • Intel® Enterprise Edition for Lustre* software helps simplify configuration, monitoring, management and storage of high volumes of data.
  • With Intel® Manager for Lustre* software, Intel is able to extend the reach of Lustre into new markets such as financial services, data analytics, pharmaceuticals, and oil and gas.
  • When combined with the Intel® Distribution for Apache Hadoop* software, Hadoop users can access Lustre data files directly, saving time and resources.
  • New software offering furthers Intel’s commitment to drive new levels of performance and features through continuing contributions the open source community.

SANTA CLARA, Calif., June. 12, 2013 – The amount of available data is growing at exponential rates and there is an ever-increasing need to move, process and store it to help solve the world’s most important and demanding problems. Accelerating the implementation of big data solutions, Intel Corporation announced the Intel® Enterprise Edition for Lustre* software to make performance-based storage solutions easier to deploy and manage.

Businesses and organizations of all sizes are increasingly turning to high-performance computing (HPC) technologies to store and process big data workloads due to its performance and scalability advantages. Lustre is an open source parallel distributed file system and key storage technology that ties together data and enables extremely fast access. Lustre has become the popular choice for storage in HPC environments for its ability to support tens of thousands of client systems and tens of petabytes of storage with access speeds well over 1 terabyte per second. That is equivalent to downloading all “Star Wars”* and all “Star Trek”* movies and television shows in Blu-Ray* format in one-quarter of a second.
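My own insert here: a quick check of that comparison (the only assumption is taking the throughput as exactly 1 TB/s; the press release only says “well over”).

```python
throughput_gb_per_s = 1000.0  # "well over 1 terabyte per second", taken as 1 TB/s
seconds = 0.25                # "one-quarter of a second"

data_moved_gb = throughput_gb_per_s * seconds
print(f"Data moved in {seconds} s: ~{data_moved_gb:.0f} GB")  # ~250 GB
# So the comparison implicitly assumes the combined Star Wars + Star Trek Blu-ray
# catalogue is on the order of 250 GB.
```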

“Enterprise users are looking for cost-effective and scalable tools to efficiently manage and quickly access large volumes of data to turn valuable information into actionable insight,” said Boyd Davis, vice president and general manager of Intel’s Datacenter Software Division. “The addition of the Intel Enterprise Edition for Lustre to our big data software portfolio will help make it easier and more affordable for businesses to move, store and process data quickly and efficiently.”

The Intel Enterprise Edition for Lustre software is a validated and supported distribution of Lustre featuring management tools as well as a new adaptor for the Intel® Distribution for Apache Hadoop*. This new offering provides enterprise-class reliability and performance to take full advantage of storage environments with worldwide service, support, training and development provided by experienced Lustre engineers at Intel.

The Intel® Manager for Lustre provides a consistent view of what is happening inside the storage system regardless of where the data is stored or what type of hardware is used. This tool enables IT administrators to easily manage tasks and reporting, provides real-time system monitoring as well as the ability to quickly troubleshoot. IT departments are also able to streamline management, shorten the learning curve and lower operational expenses resulting in time and resource savings, better risk mitigation and improved business decision-making.

When paired with the Intel® Distribution for Apache Hadoop, the Intel Enterprise Edition for Lustre software allows Hadoop to be run on top of Lustre, significantly improving the speed at which data can be accessed and analyzed. This allows users to access data files directly from the global file system at faster rates and speeds up analytics time, providing more productive use of storage assets as well as simpler storage management.
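My own insert here: a minimal sketch of what “access data files directly from the global file system” means in practice, assuming a Lustre client mount at the hypothetical path /mnt/lustre; this is plain POSIX access for illustration, not the Intel Hadoop adaptor’s actual API.

```python
# With Lustre mounted as a POSIX file system, analysis code (or a MapReduce task)
# can read shared data in place, with no copy into HDFS first.
# The mount point and file name below are hypothetical, for illustration only.

from collections import Counter

word_counts = Counter()
with open("/mnt/lustre/projects/logs/web-2013-12.log", "r") as f:
    for line in f:
        word_counts.update(line.split())

print(word_counts.most_common(5))
```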

As part of the company’s commitment to drive innovation and enable the open source community, Intel will contribute development and support as well as community releases to the development of Lustre. With veteran Lustre engineers and developers working at Intel contributing to the code, Lustre will continue its growth in both high-performance computing and commercial environments and is poised to enter new enterprise markets including financial services, data analytics, pharmaceuticals, and oil and gas.

The Intel Enterprise Edition for Lustre will be available early in the third quarter of this year.

Microsoft products for the Cloud OS

Part of: Microsoft Cloud OS vision, delivery and ecosystem rollout

1. The Microsoft way
2. Microsoft Cloud OS vision
3. Microsoft Cloud OS delivery and ecosystem rollout
4. Microsoft products for the Cloud OS

4.1 Windows Server 2012 R2 & System Center 2012 R2
4.2 Unlock Insights from any Data – SQL Server 2014
4.3 Unlock Insights from any Data / Big Data – Microsoft SQL Server Parallel Data Warehouse (PDW) and Windows Azure HDInsights
4.4 Empower people-centric IT – Microsoft Virtual Desktop Infrastructure (VDI)
4.5 Microsoft talking about Cloud OS and private clouds: starting with Ray Ozzie in November, 2009 (separate post)

4.5.1 Tiny excerpts from official executive and/or corporate communications
4.5.2 More official communications in details from executives and/or corporate

4.1 Windows Server 2012 R2 & System Center 2012 R2 [MPNUK YouTube channel, Nov 18, 2013]

Hosting technical training overview.

Windows Server 2012 R2: 0:00
Server Virtualization: 4:40
Storage: 11:07
Networking: 17:37
Server Management and Automation: 23:14
Web and Application Platform: 27:05

System Center 2012 R2: 31:14
Infrastructure Provisioning: 36:15
Infrastructure Monitoring: 42:48
Automation and Self-service: 45:30
Application Performance Monitoring: 48:50
IT Service Management: 51:05

More information is in the What’s New in 2012 R2 [Windows Server 2012 R2, System Center 2012 R2] series of “In the Cloud” articles by Brad Anderson:

Over the last three weeks, Microsoft has made an exciting series of announcements about its next wave of products, including Windows Server 2012 R2, System Center 2012 R2, SQL Server 2014, Visual Studio 2013, Windows Intune and several new Windows Azure services. The preview bits are now available, and the customer feedback has been incredible!

The most common reaction I have heard from our customers and partners is that they cannot believe how much innovation has been packed into these releases – especially in such a short period of time. There is a truly amazing amount of new value in these releases and, with this in mind, we want to help jump-start your understanding of the key scenarios that we are enabling.

As I’ve discussed this new wave of products with customers, partners, and press, I’ve heard the same question over and over: “How exactly did Microsoft build and deliver so much in such a short period of time?” My answer is that we have modified our own internal processes in a very specific way: We build for the cloud first.

A cloud-first design principle manifests itself in every aspect of development; it means that at every step we architect and design for the scale, security and simplicity of a high-scale cloud service. As a part of this cloud-first approach, we assembled a ‘Scenario Focus Team’ that identified the key user scenarios we needed to support – this meant that our engineers knew exactly what needed to be built at every stage of development, thus there was no time wasted debating what happened next. We knew our customers, we knew our scenarios, and that allowed all of the groups and stakeholders to work quickly and efficiently.

The cloud-first design approach also means that we build and deploy these products within our own cloud services first and then deliver them to our customers and partners. This enables us to first prove-out and battle-harden new capabilities at cloud scale, and then deliver them for enterprise use. The Windows Azure Pack is a great example of this: In Azure we built high-density web hosting where we could literally host 5,000 web servers on a single Windows Server instance. We exhaustively battle-hardened that feature, and now you can run it in your datacenters.

At Microsoft we operate more than 200 cloud services, many of which are servicing 100’s of millions of users every day. By architecting everything to deliver for that kind of scale, we are sure to meet the needs of enterprise anywhere and in any industry.

Our cloud-first approach was unique for another reason: It was the first time we had common/unified planning across Windows Client, Windows Server, System Center, Windows Azure, and Windows Intune. I know that may sound crazy, but it’s true – this is a first. We spent months planning and prioritizing the end-to-end scenarios together, with the goal of identifying and enabling all the dependencies and integration required for an effort this broad. Next we aligned on a common schedule with common engineering milestones.

The results have been fantastic. Last week, within 24 hours, we were able to release the previews bits of Windows Client 8.1, Windows Server 2012 R2, System Center 2012 R2, and SQL Server 2014.

By working together throughout the planning and build process, we established a common completion and Release to Manufacturing date, as well as a General Availability date. Because of these shared plans and development milestones, by the time we started the actual coding, the various teams were well aware of each dependency and the time to build the scenarios was much shorter.

The bottom-line impact of this Cloud-first approach is simple:  Better value, faster.

This wave of products demonstrates that the changes we’ve made internally allow us to deliver more end-to-end scenarios out of the box, and each of those scenarios is delivered at a higher quality. This cloud-first approach also helps us deliver the Cloud OS vision that drives the STB business strategy.

The story behind the technologies that support the Cloud OS vision is an important part of how we enable customers to embrace cloud computing concepts.  Over the next eight weeks, we'll examine in great detail the three core pillars (see the table below) that support and inspire these R2 products:  Empower People-centric IT, Transform the Datacenter, and Enable Modern Business Apps.  The program managers who defined these scenarios and worked within each pillar throughout the product development process have authored in-depth overviews of these pillars and their specific scenarios, and we'll release those on a weekly basis.

Pillar: Empower People-centric IT
Scenarios: People-centric IT (PCIT) empowers each person you support to work virtually anywhere on PCs and devices of their choice, while providing IT with an easy, consistent, and secure way to manage it all. Microsoft's approach helps IT offer a consistent self-service experience for people, their PCs, and their devices while ensuring security. You can manage all your client devices in a single tool while reducing costs and simplifying management.

Pillar: Transform the Datacenter
Scenarios: Transforming the datacenter means driving your business with the power of a hybrid cloud infrastructure. Our goal is to help you leverage your investments, skills and people by providing a consistent datacenter and public cloud services platform, as well as products and technologies that work across your datacenter and service provider clouds.

Pillar: Enable Modern Business Apps
Scenarios: Modern business apps live and move wherever you want, and Microsoft offers the tools and resources that deliver industry-leading performance, high availability, and security. This means boosting the impact of both new and existing applications, and easily extending applications with new capabilities – including deploying across multiple devices.

The story behind these pillars and these products is an important part of our vision for the future of corporate computing and the modern datacenter, and in the following post, David B. Cross, the Partner Director of Test and Operations for Windows Server, shares some of the insights the Windows Server & System Center team have applied during every stage of our planning, build, and deployment of this awesome new wave of products.

People want access to information and applications on the devices of their choice. IT needs to keep data protected without breaking the budget. Learn how the Microsoft People-centric IT vision helps businesses address their consumerization of IT challenges. Learn More: http://www.microsoft.com/en-us/server-cloud/cloud-os/pcit.aspx
Hear from Dell and Accenture how Microsoft Windows Server 2012 R2 and System Center 2012 R2 enable a more flexible workstyle and people-centric IT through virtual desktop infrastructure (VDI). Find solutions and services from partners that span the entire stack of Microsoft Cloud OS products and technologies: http://www.microsoft.com/en-us/server-cloud/audience/partner.aspx#fbid=631zRfiT0WJ

image

The modern workforce isn't just better connected and more mobile than ever before; it's also more discerning (and demanding) about the hardware and software used on the job. While company leaders around the world are celebrating the increased productivity and accessibility of their workforce, the exponential increase in devices and platforms that the workforce wants to use can stretch a company's infrastructure (and IT department!) to its limit.

If your IT team is grappling with the impact and sheer magnitude of this trend, let me reiterate a fact I’ve noted several times before on this blog: The “Bring Your Own Device” (BYOD) trend is here to stay.

Building products that address this need is a major facet of the first design pillar I noted last week: People-centric IT (PCIT).

In today’s post (and in each one that follows in this series), this overview of the architecture and critical components of the PCIT pillar will be followed by a “Next Steps” section at the bottom. The “Next Steps” will include a list of new posts (each one written specifically for that day’s topic) developed by our Windows Server & System Center engineers. Every week, these engineering blogs will provide deep technical detail on the various components discussed in this main post. Today, these blogs will systematically examine and discuss the technology used to power our PCIT solution.

The PCIT solution detailed below enables IT Professionals to set access policies to corporate applications and data based on three incredibly important criteria (a minimal sketch of how such a policy check might combine them follows the list below):

  1. The identity of the user
  2. The user’s specific device
  3. The network the user is working from
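
To make these criteria concrete, here is a minimal, purely hypothetical sketch of how a policy check might combine them. The class, the rules, and the access levels below are invented for illustration only and do not represent the Configuration Manager or Windows Intune APIs.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user: str               # identity of the user (e.g. an Active Directory account)
    device_compliant: bool  # is the device enrolled and compliant with policy?
    network: str            # "corporate", "vpn", or "public"

def evaluate(request: AccessRequest) -> str:
    """Return the (hypothetical) access level granted to a corporate application."""
    if request.network == "corporate" and request.device_compliant:
        return "full"         # direct access to internal apps and data
    if request.device_compliant:
        return "conditional"  # e.g. access brokered through a gateway or VPN
    return "web-only"         # unmanaged device: browser access to published apps only

print(evaluate(AccessRequest("alice@contoso.com", True, "public")))   # conditional
print(evaluate(AccessRequest("bob@contoso.com", False, "public")))    # web-only
```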

What's required here is a single management solution that enables specific features where control is necessary and appropriate, and that also provides what I call "governance," or light control when less administration is necessary. This means a single pane of glass for managing PCs and devices. Far too often I meet with companies that have two separate solutions running side-by-side – one to manage PCs, and a second to manage devices. Not only is this more expensive and more complex, it creates two disjointed experiences for end users and a big headache for the IT pros responsible for managing them.

In today's post, Paul Mayfield, the Partner Program Manager for the System Center Configuration Manager/Windows Intune team, discusses how everything that Microsoft has built with this solution is focused on letting IT teams use the same System Center Configuration Manager they already have in place for managing their PCs and extend that management power to devices. This means double the management capabilities from within the same familiar console. This philosophy can be extended even further by using Windows Intune to manage devices where they live – i.e. cloud-based management for cloud-based devices. Cloud-based management is especially important for user-owned devices that need regular updates.

This is an incredible solution, and the benefit and ease of use for you, the consumer, is monumental.

People want access to corporate applications from anywhere, on whatever device they choose—laptop, smartphone, tablet, or PC. IT departments are challenged to provide consistent, rich experiences across all these device types, with access to native, web, and remote applications or desktops. In this video we take a look at how IT can enable people to choose their devices, reduce costs and complexity, as well as maintain security and compliance by protecting data and having comprehensive settings management across platforms.

In today's post, we tackle a common question I get from customers: "Why move to the cloud right now?" Recently, however, this question has changed a bit to, "What should I move to the cloud first?"

An important thing to keep in mind with either of these questions is that every organization has their own unique journey to the cloud. There are a lot of different workloads that run on Windows Server, and the reality is that these various workloads are moving to the cloud at very different rates. Web servers, e-mail and collaboration are examples of workloads moving to the cloud very quickly. I believe that management, and the management of smart devices, will be one of the next workloads to make that move to the cloud – and, when the time comes, that move will happen fast.

Using a SaaS solution is a move to the cloud, and taking this approach is a game changer because of its ability to deliver an incredible amount of value and agility without an IT pro needing to manage any of the required infrastructure.

Cloud-based device management is a particularly interesting development because it allows IT pros to manage this rapidly growing population of smart, cloud-connected devices, and manage them “where they live.” Today’s smart phones and tablets were built to consume cloud services, and this is one of the reasons why I believe that a cloud-based management solution for them is so natural. As you contemplate your organization’s move to the cloud, I suggest that managing all of your smart devices from the cloud should be one of your top priorities.

I want to be clear, however, about the nature of this kind of management: We believe that there should be one consistent management experience across PCs and devices.

Achieving this single management experience was a major focus of these 2012 R2 releases, and I am incredibly proud to say we have successfully engineered products which do exactly that. The R2 releases deliver this consistent end-user experience through something we call the “Company Portal.” The Company Portal is already deployed here at Microsoft, and it is what we are currently using to upgrade our entire workforce to Windows 8.1. I’ve personally used it to upgrade my desktop, laptop, and Surface – and the process could not have been easier.

In this week’s post, Paul Mayfield, the Partner Program Manager for System Center Configuration Manager/Windows Intune, and his team return to discuss in deep technical detail some of the specific scenarios our PCIT [“People Centric IT”] team has enabled (cloud-based management, Company Portal, etc.).

Cloud computing is bringing new opportunities and new challenges to IT. Learn how Microsoft can help transform your datacenter to take advantage of the vast possibilities of the cloud while leveraging your existing resources. Learn more: http://www.microsoft.com/en-us/server-cloud/cloud-os/modern-data-center.aspx
  • Part 4, July 24, 2013: Enabling Open Source Software

image

There are a lot of great surprises in these new R2 releases – things that are going to make a big impact in a majority of IT departments around the world. Over the next four weeks, the 2012 R2 series will cover the 2nd pillar of this release: Transform the Datacenter. In these four posts (starting today) we'll cover many of the investments we have made that better enable IT pros to transform their datacenter via a move to a cloud-computing model.

This discussion will outline the ambitious scale of the functionality and capability within the 2012 R2 products. As with any conversation about the cloud, however, there are key elements to consider as you read. Particularly, I believe it’s important in all these discussions – whether online or in person – to remember that cloud computing is a computing model, not a location. All too often when someone hears the term “cloud computing” they automatically think of a public cloud environment. Another important point to consider is that cloud computing is much more than just virtualization – it is something that involves change: Change in the tools you use (automation and management), change in processes, and a change in how your entire organization uses and consumes its IT infrastructure.

Microsoft is unique in this perspective, and it is leading the industry with its investments to deliver consistency across private, hosted and public clouds. Over the course of these next four posts, we will cover our innovations in the infrastructure (storage, network, compute) in both on-premises and hybrid scenarios, support for open source, the cloud service provider & tenant experience, and much, much more.

As I noted above, it simply makes logical sense that running the Microsoft workloads in the Microsoft Clouds will deliver the best overall solution. But what about Linux? And how well does Microsoft virtualize and manage non-Windows platforms, in particular Linux?  Today we’ll address these exact questions.

Our vision regarding other operating platforms is simple: Microsoft is committed to being your cloud partner. This means end-to-end support that is versatile, flexible, and interoperable for any industry, in any environment, with any guest OS. This vision ensures we remain realistic – we know that users are going to build applications on open source operating systems, so we have built a powerful set of tools for hosting and managing them.

A great deal of the responsibility to deliver the capabilities that enable the Microsoft Clouds (private, hosted, Azure) to effectively host Linux and the associated open source applications falls heavily on the shoulders of the Windows Server and System Center team. In today’s post Erin Chapple, a Partner Group Program Manager in the Windows Server & System Center team, will detail how building the R2 wave with an open source environment in mind has led to a suite of products that are more adaptable and more powerful than ever.

As always in this series, check out the “Next Steps” at the bottom of this post for links to a variety of engineering content with hyper-technical overviews of the concepts examined in this post.

Back during the planning phase of 2012 R2, we carefully considered where to focus our investments for this release wave, and we chose to concentrate our efforts on enabling Service Providers to build out a highly-available, highly-scalable IaaS infrastructure on cost-effective hardware. With the innovations we have driven in storage, networking, and compute, we believe Service Providers can now build out an IaaS platform that enables them to deliver VMs at 50% of the cost of competitors. I repeat: 50%. The bulk of the savings comes from our storage innovations and the low costs of our licenses.

At the core of our investments in 2012 R2 is the belief that customers are going to be using multiple clouds, and they want those clouds to be consistent.

Consistency across clouds is key to enabling the flexibility and frictionless movement of applications across these clouds; if this consistency exists, applications can be developed once and then hosted in any of these clouds. This means consistency for the developer. If clouds are consistent, the same management and operations tools can easily be used to operate these applications – and that means consistency for the IT Pro.

It really all comes down to the friction-free movement of applications and VMs across clouds. Microsoft is unique in this regard; we are the only cloud vendor investing and innovating in public, private and hosted clouds – with a promise of consistency (and no lock-in!) across all of them.

We are taking what we learn from our innovations in Windows Azure and delivering them through Windows Server, System Center and the Windows Azure Pack for you to use in your data center. This enables us to do rapid innovation in the public cloud, battle harden the innovations, and then deliver them to you to deploy. This is one of the ways in which we have been able to quicken our cadence and deliver the kind of value you see in these R2 releases. You’ll be able to see a number of areas where we are driving consistency across clouds in today’s post.

And speaking of today’s post – this IaaS topic will be published in two parts, with the second half appearing tomorrow morning.

In this first half of our two-part overview of the 2012 R2’s IaaS capabilities, Erin Chapple, a Partner Group Program Manager in the Windows Server & System Center team, examines the amazing infrastructure innovations delivered by Windows Server 2012 R2, System Center 2012 R2, and the new features in the Windows Azure Pack.

As always in this series, check out the "Next Steps" at the bottom of this post for links to a wide range of engineering content with deep, technical overviews of the concepts examined in this post.  Also, if you haven't started your own evaluation of the 2012 R2 previews, visit the TechNet Evaluation Center and take a test drive today!

I recently had an opportunity to speak with a number of leaders from the former VMware User Group (VMUG), and it was an incredibly educational experience. I say "former" because many of the VMUG user group chapters are updating their focus/charter and are renaming themselves the Virtual Technology User Group (VTUG). This change is a direct result of how they see market share and industry momentum moving to solutions like the consistent clouds developed by Microsoft.

In a recent follow up conversation with these leaders, I asked them to describe some common topics they hear discussed in their meetings. One of the leaders commented that the community is saying something really specific: “If you want to have job security and a high paying job for the next 10 years, you better be on your way to becoming an expert in the Microsoft clouds. That is where this industry is going.” 

When I look at what is delivered in these R2 releases, the innovation is just staggering. This industry-leading innovation – the types of technical advances that VTUG groups are confidently betting on – is really exciting.

With this innovation in mind, in today’s post I want to discuss some of the work we are doing around the user experience for the teams creating the services that are offered, and I want to examine the experience that can be offered to the consumer of the cloud (i.e. the tenants). While we were developing R2, we spent a lot of time ensuring that we truly understood exactly who would be using our solutions. We exhaustively researched their needs, their motivations, and how various IT users and IT teams relate to each other. This process was incredibly important because these individuals and teams all have very different needs – and we were committed to supporting all of them.

The R2 wave of products has been built with this understanding.  The IT teams actually building and operating a cloud (or clouds) have very different needs than the individuals who are consuming the cloud (tenants).  The experience for the infrastructure teams will focus on just that – the infrastructure; the experience for the tenants will focus on the applications/services and their seamless operation and maintenance.

In yesterday’s post we focused heavily on the innovations in these R2 releases in the infrastructure – storage, network, and compute – and, in this post, Erin Chapple, a Partner Group Program Manager in the Windows Server & System Center team, will provide an in-depth look at Service Provider and Tenant experience and innovations with Windows Server 2012 R2, System Center 2012 R2, and the new features in Windows Azure Pack.

As always in this series, check out the “Next Steps” at the bottom of this post for links to a variety of engineering content with hyper-technical overviews of the concepts examined in this post.  Also, if you haven’t started your own evaluation of the 2012 R2 previews, visit the TechNet Evaluation Center and take a test drive today!

Today, people want to work anywhere, on any device and have access to all the resources they need to do their job. How do you enable your users to be productive on the device of their choice, yet retain control of information and meet compliance requirements? In this video we take a look at how the Microsoft access and information protection solutions allow you to enable your users to be productive, provide them with a single identity to access all resources, and protect your data. Learn more: http://www.microsoft.com/aip

In the 13+ years since the original Active Directory product launched with Windows 2000, it has grown to become the default identity management and access-control solution for over 95% of organizations around the world.  But, as organizations move to the cloud, their identity and access control also need to move to the cloud. As companies rely more and more on SaaS-based applications, as the range of cloud-connected devices being used to access corporate assets continues to grow, and as more hosted and public cloud capacity is used, companies must expand their identity solutions to the cloud.

Simply put, hybrid identity management is foundational for enterprise computing going forward.

With this in mind, we set out to build a solution in advance of these requirements to put our customers and partners at a competitive advantage.

To build this solution, we started with our cloud-first design principle. To meet the needs of enterprises working in the cloud, we built a solution that took the power and proven capabilities of Active Directory and combined it with the flexibility and scalability of Windows Azure. The outcome is the predictably named Windows Azure Active Directory.

By cloud optimizing Active Directory, enterprises can stretch their identity and access management to the cloud and better manage, govern, and ensure compliance throughout every corner of their organization, as well as across all their utilized resources.

This can take the form of seemingly simple processes (albeit very complex behind the scenes) like single sign-on, which is a massive time and energy saver for a workforce that uses multiple devices and multiple applications per person.  It can also enable the scenario where a user's customized and personalized experience follows them from device to device regardless of when and where they're working. Activities like these are simply impossible without a scalable, cloud-based identity management system.

If anyone doubts how serious and enterprise-ready Windows Azure AD already is, consider these facts (a quick back-of-the-envelope check follows the list):

  • Since we released Windows Azure AD, we’ve had over 265 billion authentications.
  • Every two minutes Windows Azure AD services over 1,000,000 authentication requests for users and devices around the world (that’s about 9,000 requests per second).
  • There are currently more than 420,000 unique domains uploaded and now represented inside of Azure Active Directory.
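
As a quick sanity check that the quoted rates hang together, using only the figures above:

```python
# Back-of-the-envelope check of the quoted Windows Azure AD request rate.
auth_per_two_minutes = 1_000_000           # "over 1,000,000 ... every two minutes"
per_second = auth_per_two_minutes / 120    # two minutes = 120 seconds
print(f"{per_second:,.0f} requests per second")   # ~8,333/s, i.e. roughly the quoted 9,000/s
```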

Windows Azure AD is battle tested, battle hardened, and many other verbs preceded by the word “battle.”

But, perhaps even more importantly, Windows Azure AD is something Microsoft has bet its own business on: Both Office 365 (the fastest growing product in Microsoft history) and Windows Intune authenticate every user and device with Windows Azure AD.

In this post, Vijay Tewari (Principal Program Manager for Windows Server & System Center), Alex Simons (Director of Program Management for Active Directory), Sam Devasahayam (Principal Program Management Lead for Windows Azure AD), and Mark Wahl (Principal Program Manager for Active Directory) take a look at one of R2's most innovative features, Hybrid Identity Management.

As always in this series, check out the "Next Steps" at the bottom of this post for links to a wide range of engineering content with deep, technical overviews of the concepts examined in this post.

One of the key elements in delivering hybrid cloud is networking. Learn how software-defined networking helps make hybrid real. Learn more: http://www.microsoft.com/en-us/server-cloud/solutions/software-defined-networking.aspx
[so called Application Centric Infrastructure (ACI)] Microsoft and Cisco will deliver unique customer value through new integrated networking solutions that will combine software-enabled flexibility with hardware-enabled scale/performance. These solutions will keep apps and workloads front and center and have the network adapt to their needs. Learn more by visiting: http://www.cisco.com/web/learning/le21/onlineevts/acim/index.html

One of the foundational requirements we called out in the 2012 R2 vision document was our promise to help you transform the datacenter. A core part of delivering on that promise is enabling Hybrid IT.

By focusing on Hybrid IT we were specifically calling out the fact that almost every customer we interacted with during our planning process believed that in the future they would be using capacity from multiple clouds. That may take the form of multiple private clouds an organization had stood up, or utilizing cloud capacity from a service provider [i.e. managed cloud] or a public cloud like Azure, or using SaaS solutions running from the public cloud.

We assumed Hybrid IT would really be the norm going forward, so we challenged ourselves to really understand and simplify the challenges associated with configuring and operating in a multi-cloud environment. Certainly one of the biggest challenges of operating in a hybrid cloud environment is the network – everything from setting up the secure connection between clouds, to ensuring you can use your own IP addresses (BYOIP) in the hosted and public clouds you choose to use.
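
To see why "bring your own IP addresses" across clouds is even possible, here is a deliberately simplified, conceptual sketch of the customer-address/provider-address mapping idea behind Hyper-V Network Virtualization. The dictionary and function below are invented for illustration; in the real system the mapping lives in the Hyper-V virtual switch, is distributed as policy by Virtual Machine Manager, and uses NVGRE encapsulation on the wire.

```python
# Conceptual sketch only: tenants keep their own (possibly overlapping) IP plans
# ("customer addresses", CA); the fabric routes on its own addresses ("provider
# addresses", PA). A virtual subnet ID (VSID) keeps overlapping CAs apart, and the
# tenant packet is encapsulated so only PAs ever appear on the physical network.

lookup = {
    # (virtual subnet id, customer address) -> provider address of the hosting node
    (5001, "10.0.0.4"): "192.168.10.21",   # Contoso VM keeps its on-premises 10.0.0.0/24 plan
    (5001, "10.0.0.5"): "192.168.10.22",
    (6001, "10.0.0.4"): "192.168.10.23",   # Fabrikam VM reuses 10.0.0.4 without any clash
}

def encapsulate(vsid: int, ca_dst: str, payload: bytes) -> dict:
    """Wrap a tenant packet in an outer header addressed to the hosting node's PA."""
    pa = lookup[(vsid, ca_dst)]
    return {"outer_dst": pa, "vsid": vsid, "inner_dst": ca_dst, "payload": payload}

print(encapsulate(5001, "10.0.0.5", b"tenant traffic"))
print(encapsulate(6001, "10.0.0.4", b"same CA, different tenant"))
```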

The setup, configuration and operation of a hybrid IT environment is, by its very nature, incredibly complex – and we have poured hundreds of thousands of hours into the development of R2 to solve this industry-wide problem.

With the R2 wave of products – specifically Windows Server 2012 R2 and System Center 2012 R2 – enterprises can now benefit from the highly-available and secure connection that enables the friction-free movement of VMs across those clouds. If you want or need to move a VM or application between clouds, the transition is seamless and the data is secure while it moves.

The functionality and scalability of our support for hybrid IT deployments has not been easy to build, and each feature has been methodically tested and refined in our own datacenters. For example, consider that within Azure there are over 50,000 network changes every day, and every single one of them is fully automated. If even 1/10 of 1% of those changes had to be done manually, it would require a small army of people working constantly to implement and then troubleshoot the human errors. With R2, the success of processes like these, and our learnings from Azure, come in the box.

Whether you’re a service provider or working in the IT department of an enterprise (which, in a sense, is like being a service provider to your company’s workforce), these hybrid networking features are going to remove a wide range of manual tasks, and allow you to focus on scaling, expanding and improving your infrastructure.

In this post, Vijay Tewari (Principal Program Manager for Windows Server & System Center) and Bala Rajagopalan (Principal Program Manager for Windows Server & System Center) provide a detailed overview of 2012 R2's hybrid networking features, as well as solutions for common scenarios like enabling customers to create extended networks spanning clouds, and enabling access to virtualized networks.

Don’t forget to take a look at the “Next Steps” section at the bottom of this post, and check back tomorrow for the second half of this week’s hybrid IT content which will examine the topic of Disaster Recovery.

As business becomes more dependent on technology, business continuity becomes increasingly vital for IT. Learn how Microsoft is making it easier to build out business continuity plans. Learn more: http://www.microsoft.com/en-us/server-cloud/solutions/business-continuity.aspx

With Windows Server 2012 R2, with Hyper-V Replica, and with System Center 2012 R2 we have delivered a DR solution for the masses.

This DR solution is a perfect example of how the cloud changes everything.

Because Windows Azure offers a global, highly available cloud platform with an application architecture that takes full advantage of the HA capabilities, you can build an app on Azure that will be available anytime and anywhere.  This kind of functionality is why we made the decision to build the control plane, or administrative console, for our DR solution on Azure. The control plane and all the metadata required to perform a test, planned, or unplanned recovery will always be available.  This means you don't have to make the huge investments that have been required in the past to build a highly-available platform to host your DR solution – Azure automatically provides this.

(Let me make a plug here that you should be looking to Azure for all the new applications you are going to build – and we'll start covering this specific topic in next week's R2 post.)

With this R2 wave of products, organizations of all sizes and maturity, anywhere in the world, can now benefit from a simple and cost-effective DR solution.

There's another thing that I am really proud of here: Like most organizations, we regularly benchmark ourselves against our competition.   We use a variety of metrics, like: 'Are we easier to deploy and operate?' and 'Are we delivering more value and doing it at a lower price?'  Measurements like these have provided a really clear answer: Our competitors are not even in the same ballpark when it comes to DR.

During the development of R2, I watched a side-by-side comparison of what was required to set up DR for 500 VMs with our solution compared to a competitive offering, and the contrast was staggering. The difference in simplicity and the total amount of time required to set everything up was dramatic.  In a DR scenario, one interesting unit of measurement is total mouse clicks. It's easy to get carried away with counting clicks (hey, we're engineers after all!), but, in the side-by-side comparison, the difference was tens of mouse clicks compared to hundreds. It is literally a difference of minutes vs. days.

You can read some additional perspectives I’ve shared on DR here.

In yesterday’s post we looked at the new hybrid networking functionality in R2 (if you haven’t seen it yet, it is a must-read), and in this post Vijay Tewari (Principal Program Manager for Windows Server & System Center) goes deep into the architecture of this DR solution, as well this solution’s deployment and operating principles.

As always in this 2012 R2 series, check out the “Next Steps” at the bottom of this post for links to a variety of engineering content with hyper-technical overviews of the concepts examined in this post.

A revolution is taking place, impacting the speed at which Business Apps need to be built, and the jaw dropping capabilities they need to deliver. Ignoring these trends isn’t an option and yet you have no time to hit the reset button. Learn how to deliver revolutionary benefits in an evolutionary way. Learn More: http://www.microsoft.com/en-us/server-cloud/cloud-os/modern-business-apps.aspx
Hear from Accenture and Hostway how Microsoft Windows Azure enables the development and deployment of modern business applications faster and more cost effectively through cloud computing. Find solutions and services from partners that span the entire stack of Microsoft Cloud OS products and technologies: http://www.microsoft.com/en-us/server-cloud/audience/partner.aspx#fbid=631zRfiT0WJ

image

The future of the IT Pro role will require you to know how applications are built for the cloud, as well as the cloud infrastructures where these apps operate – knowledge every IT Pro needs in order to be a voice in the meetings that will define an organization's cloud strategy. IT pros are also going to need to know how their team fits in this cloud-centric model, as well as how to proactively drive these discussions.

These R2 posts will get you what you need, and this “Enable Modern Business Apps” pillar will be particularly helpful.

Throughout the posts in this series we have spoken about the importance of consistency across private, hosted and public clouds, and we’ve examined how Microsoft is unique in its vision and execution of delivering consistent clouds. The Windows Azure Pack is a wonderful example of Microsoft innovating in the public cloud and then bringing the benefits of that innovation to your datacenter.

The Windows Azure Pack is – literally speaking – a set of capabilities that we have battle-hardened and proven in our public cloud. These capabilities are now made available for you to enhance your cloud and ensure that “consistency across clouds” that we believe is so important.

A major benefit of the Windows Azure Pack is the ability to build an application once and then deploy and operate it in any Microsoft Cloud – private, hosted or public.

This kind of flexibility means that you can build an application, initially deploy it in your private cloud, and then, if you want to move that app to a Service Provider or Azure in the future, you can do it without having to modify the application. Making tasks like this simple is a major part of our promise around cloud consistency, and it is something only Microsoft (not VMware, not AWS) can deliver.

This ability to migrate an app between these environments means that your apps and your data are never locked in to a single cloud. This allows you to easily adjust as your organization’s needs, regulatory requirements, or any operational conditions change.

A big part of this consistency and connection is the Windows Azure Service Bus which will be a major focus of today’s post.

The Windows Azure Service Bus has been a big part of Windows Azure since 2010. I don’t want to overstate this, but Service Bus has been battle-hardened in Azure for more than 3 years, and now we are delivering it to you to run in your datacenters. To give you a quick idea of how critical Service Bus is for Microsoft, consider this: Service Bus is used in all the billing for Windows Azure, and it is responsible for gathering and posting all the scoring and achievement data to the Halo 4 leaderboards (now that is really, really important – just ask my sons!). It goes without saying that the people in charge of Azure billing and the hardcore gamers are not going to tolerate any latency or downtime getting to their data.
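
For readers who have not used a brokered message bus before, the toy sketch below uses only the Python standard library and is not the Service Bus API; it merely illustrates the pattern Service Bus queues provide at datacenter scale (durably and across machines, which this example does not attempt): producers and consumers that never call each other directly, only the broker, so either side can scale or fail independently.

```python
import queue
import threading

orders = queue.Queue()   # stand-in for a durable Service Bus queue

def producer():
    # A web front end enqueues work and returns immediately.
    for i in range(5):
        orders.put({"order_id": i, "sku": "contoso-widget"})
    orders.put(None)     # sentinel: no more work

def consumer():
    # A back-end worker drains the queue at its own pace; producer and consumer
    # never talk to each other directly, only to the broker.
    while True:
        msg = orders.get()
        if msg is None:
            break
        print("processed", msg)

worker = threading.Thread(target=consumer)
worker.start()
producer()
worker.join()
```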

With today’s topic, take the time to really appreciate the app development and app platform functionality in this R2 wave. I think you’ll be really excited about how you can plug into this process and lead your organization.

This post, written by Bradley Bartz (Principal Program Manager from Windows Azure) and Ziv Rafalovich (Senior Program Manager in Windows Azure), will get deep into these new features and the amazing scenarios that the Windows Azure Pack and Windows Azure Service Bus enable. As always in this 2012 R2 series, check out the "Next Steps" at the bottom of this post for links to additional information about the topics covered in this post.

A major promise underlying all of the 2012 R2 products is really simple: Consistency.

Consistency in the user experiences, consistency for IT professionals, consistency for developers and consistency across clouds. A major part of delivering this consistency is the Windows Azure Pack (WAP). Last week we discussed how Service Bus enables connections across clouds, and in this post we'll examine more of the PaaS capabilities built and tested in Azure data centers and now offered for Windows Server. With the WAP, Windows Server 2012 R2, and System Center, IT pros can make their data center even more scalable, flexible, and secure.

Throughout the development of this R2 wave, we looked closely at what organizations needed and wanted from the cloud. A major piece of feedback was the desire to build an app once and then have that app live in any data center or cloud. For the first time this kind of functionality is now available. Whether your app is in a private, public, or hosted cloud, the developers and IT Professionals in your organization will have consistency across clouds.

One of the elements that I'm sure will be especially popular is the flexibility and portability of this PaaS. I've had countless customers comment that they love the idea of PaaS, but don't want to be locked-in or restricted to only running it in specific data centers. Now, our customers and partners can build a PaaS app and run it anywhere. This is huge! Over the last two years the market has really begun to grasp what PaaS has to offer, and now the benefits (auto-scale, agility, flexibility, etc.) are easily accessible and consistent across the private, hosted and public clouds Microsoft delivers.

This post will spend a lot of time talking about Web Sites for Windows Azure and how this high density web site hosting delivers a level of power, functionality, and consistency that is genuinely next-gen.

Microsoft is literally the only company offering these kinds of capabilities across clouds – and I am proud to say that we are the only ones with a sustained track record of enterprise-grade execution.

With the features added by the WAP [Windows Azure Pack], organizations can now take advantage of PaaS without being locked into a cloud. This is, at its core, the embodiment of Microsoft’s commitment to make consistency across clouds a workable, viable reality.

This is genuinely PaaS for the modern web.

Today’s post was written by Bradley Bartz, a Principal Program Manager from Windows Azure. For more information about the technology discussed here, or to see demos of these features in action, check out the “Next Steps” at the bottom of this post.

More information: in the Success with Hybrid Cloud series blog posts [Brad Anderson, Nov 12, Nov 14, Nov 20, Dec 2, Dec 5, and 21 upcoming blog posts] which "will examine the building/deployment/operation of Hybrid Clouds, how they are used in various industries, how they manage and deliver different workloads, and the technical details of their operation."


4.2 Unlock Insights from any Data – SQL Server 2014:

With growing demand for data, you need database scale with minimal cost increases. Learn how SQL Server 2014 provides speed and scalability with in-memory technologies to support your key data workloads, including OLTP, data warehousing, and BI. Learn more: http://www.microsoft.com/sqlserver2014
Hosting technical training overview.

Microsoft SQL Server 2014 CTP2 was announced by Quentin Clark during the Microsoft SQL PASS 2013 keynote.  This second public CTP is essentially feature complete and enables you to try and test all of the capabilities of the full SQL Server 2014 release. Below you will find an overview of SQL Server 2014 as well as key new capabilities added in CTP2:

SQL Server 2014 helps organizations by delivering:

  • Mission Critical Performance across all database workloads with In-Memory for online transaction processing (OLTP), data warehousing and business intelligence built-in as well as greater scale and availability
  • Platform for Hybrid Cloud enabling organizations to more easily build, deploy and manage database solutions that span on-premises and cloud
  • Faster Insights from Any Data with a complete BI solution using familiar tools like Excel

Thank you to those who have already downloaded SQL Server 2014 CTP1 and started seeing firsthand the performance gains that in-memory capabilities deliver, along with better high availability through AlwaysOn enhancements.  CTP2 introduces additional mission critical capabilities with further enhancements to the in-memory technologies along with new hybrid cloud capabilities.

What’s new in SQL Server 2014 CTP2?

New Mission Critical Capabilities and Enhancements

  • Enhanced In-Memory OLTP, including new tools which will help you identify and migrate the tables and stored procedures that will benefit most from In-Memory OLTP, as well as greater T-SQL compatibility and new indexes which enable more customers to take advantage of our solution.
  • High Availability for In-Memory OLTP Databases:  AlwaysOn Availability Groups are supported for In-Memory OLTP, giving you in-memory performance gains with high availability.
  • IO Resource Governance, enabling customers to more effectively manage IO across multiple databases and/or classes of databases to provide more predictable IO for your most critical workloads.  (Customers today can already manage CPU and memory.)
  • Improved resiliency with Windows Server 2012 R2 by taking advantage of Cluster Shared Volumes (CSVs), which provide improved fault detection and recovery in the case of downtime.
  • Delayed Durability, providing the option for increased transaction throughput and lower latency for OLTP applications where performance and latency needs outweigh the need for 100% durability.  (A short scripted sketch of the delayed durability and IO governance settings follows this list.)
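
As promised above, here is a minimal sketch of turning on two of these settings, assuming a SQL Server 2014 CTP2 instance reachable from Python via pyodbc. The server, database, pool, and group names are placeholders, and routing sessions into the workload group would still require a Resource Governor classifier function (not shown).

```python
import pyodbc

# Placeholder connection string for a SQL Server 2014 CTP2 instance.
conn = pyodbc.connect("DRIVER={SQL Server Native Client 11.0};"
                      "SERVER=myserver;DATABASE=master;Trusted_Connection=yes")
conn.autocommit = True   # run the DDL outside an explicit user transaction
cur = conn.cursor()

# Delayed durability: let transactions opt out of waiting for the log flush.
cur.execute("ALTER DATABASE SalesDb SET DELAYED_DURABILITY = ALLOWED;")

# IO Resource Governance: cap the IOPS a resource pool may issue per volume.
cur.execute("CREATE RESOURCE POOL ReportingPool "
            "WITH (MIN_IOPS_PER_VOLUME = 20, MAX_IOPS_PER_VOLUME = 200);")
cur.execute("CREATE WORKLOAD GROUP ReportingGroup USING ReportingPool;")
cur.execute("ALTER RESOURCE GOVERNOR RECONFIGURE;")
conn.close()
```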

New Hybrid Cloud Capabilities and Enhancements

By enabling the above in-memory performance capabilities for your SQL Server instances running in Windows Azure Virtual Machines, you will see significant transaction and query performance gains.  In addition, there are new capabilities listed below that will allow you to unlock new hybrid scenarios for SQL Server (a minimal scripted example of backup to Windows Azure follows the list).

  • Managed Backup to Windows Azure, enabling you to backup on-premises SQL Server databases to Windows Azure storage directly in SSMS.  Managed Backup also optimizes backup policy based on usage, an advantage over the manual Backup to Windows Azure.
  • Encrypted Backup, offering customers the ability to encrypt both on-premises backups and backups to Windows Azure for enhanced security.
  • Enhanced disaster recovery to Windows Azure with a simplified UI, enabling customers to more easily add Windows Azure Virtual Machines as AlwaysOn secondaries in SQL Server Management Studio for a more cost-effective data protection and disaster recovery solution.  Customers may also use the secondaries in Windows Azure to scale out and offload reporting and backups.
  • SQL Server Data Files in Windows Azure – New capability to store large databases (>16TB) in Windows Azure and the ability to stream the database as a backend for SQL Server applications running on-premises or in the cloud.
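
As referenced above, a minimal sketch of an encrypted, compressed backup written straight to Windows Azure storage, driven from Python via pyodbc. The storage account, container, access key, credential, and server certificate names are placeholders; the certificate (and a database master key) must already exist on the instance.

```python
import pyodbc

conn = pyodbc.connect("DRIVER={SQL Server Native Client 11.0};"
                      "SERVER=myserver;DATABASE=master;Trusted_Connection=yes")
conn.autocommit = True
cur = conn.cursor()

# Credential holding the storage account name and access key (placeholders).
cur.execute("""
IF NOT EXISTS (SELECT 1 FROM sys.credentials WHERE name = 'AzureBackupCredential')
    CREATE CREDENTIAL AzureBackupCredential
    WITH IDENTITY = 'mystorageaccount', SECRET = '<storage-account-access-key>';
""")

# Compressed, encrypted backup written straight to a blob in Windows Azure storage.
cur.execute("""
BACKUP DATABASE SalesDb
TO URL = 'https://mystorageaccount.blob.core.windows.net/backups/SalesDb.bak'
WITH CREDENTIAL = 'AzureBackupCredential',
     COMPRESSION,
     ENCRYPTION (ALGORITHM = AES_256, SERVER CERTIFICATE = BackupCert);
""")
conn.close()
```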

Learn more and download SQL Server 2014 CTP2

SQL Server 2014 helps address key business challenges of ever growing data volumes, the need to transact and process data faster, the scalability and efficiency of cloud computing and an ever growing hunger for business insights.   With SQL Server 2014 you can now unlock real-time insights with mission critical and cloud performance and take advantage of one of the most comprehensive BI solutions in the marketplace today.

Many customers are already realizing the significant benefits of the new in-memory technologies in SQL Server 2014 including: Edgenet, Bwin, SBI Liquidity, TPP and Ferranti.  Stay tuned for an upcoming blog highlighting the impact in-memory had to each of their businesses.

Learn more about SQL Server 2014 and download the datasheet and whitepapers here.  Also if you would like to learn more about SQL Server In-Memory best practices, check out this SQL Server 2014 in-memory blog series compilation. There is also a SQL Server 2014 hybrid cloud scenarios blog compilation for learning best practices.

Also, if you haven't already, download SQL Server 2014 CTP2 and see how much faster your SQL Server applications run!  The CTP2 image is also available on Windows Azure, so you can easily develop and test the new features of SQL Server 2014.

To ensure that its customers received timely, accurate product data, Edgenet decided to enhance its online selling guide with In-Memory OLTP in Microsoft SQL Server 2014.

At the SQL PASS conference last November, we announced the In-memory OLTP (project code-named Hekaton) database technology built into the next release of SQL Server. Microsoft’s technical fellow Dave Campbell’s blog provides a broad overview of the motivation and design principles behind this project codenamed In-memory OLTP.

In a nutshell – In-memory OLTP is a new database engine optimized for memory resident data and OLTP workloads. In-memory OLTP is fully integrated into SQL Server – not a separate system. To take advantage of In-memory OLTP, a user defines a heavily accessed table as memory optimized. In-memory OLTP tables are fully transactional, durable and accessed using T-SQL in the same way as regular SQL Server tables. A query can reference both In-memory OLTP tables and regular tables, and a transaction can update data in both types of tables. Expensive T-SQL stored procedures that reference only In-memory OLTP tables can be natively compiled into machine code for further performance improvements. The engine is designed for extremely high session concurrency for OLTP type of transactions driven from a highly scaled-out mid-tier. To achieve this it uses latch-free data structures and a new optimistic, multi-version concurrency control technique. The end result is a selective and incremental migration into In-memory OLTP to provide predictable sub-millisecond low latency and high throughput with linear scaling for DB transactions. The actual performance gain depends on many factors but we have typically seen 5X-20X in customer workloads.
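
To make that description concrete, here is a minimal sketch of creating a memory-optimized table and a natively compiled stored procedure on a SQL Server 2014 instance, driven from Python via pyodbc. The database name, file path, and object names are placeholders; the one-time filegroup setup is included because memory-optimized tables require it.

```python
import pyodbc

conn = pyodbc.connect("DRIVER={SQL Server Native Client 11.0};"
                      "SERVER=myserver;DATABASE=SalesDb;Trusted_Connection=yes")
conn.autocommit = True
cur = conn.cursor()

# One-time setup: a memory-optimized filegroup and container (path is a placeholder).
cur.execute("ALTER DATABASE SalesDb ADD FILEGROUP imoltp_fg CONTAINS MEMORY_OPTIMIZED_DATA;")
cur.execute("ALTER DATABASE SalesDb ADD FILE (NAME = 'imoltp_dir', FILENAME = 'C:\\Data\\imoltp_dir') "
            "TO FILEGROUP imoltp_fg;")

# A hot table declared memory optimized; fully durable and queryable with normal T-SQL.
cur.execute("""
CREATE TABLE dbo.ShoppingCart (
    CartId      INT       NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    UserId      INT       NOT NULL,
    CreatedUtc  DATETIME2 NOT NULL
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
""")

# A stored procedure touching only memory-optimized tables, compiled to machine code.
cur.execute("""
CREATE PROCEDURE dbo.usp_AddCart @CartId INT, @UserId INT, @CreatedUtc DATETIME2
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    INSERT INTO dbo.ShoppingCart (CartId, UserId, CreatedUtc)
    VALUES (@CartId, @UserId, @CreatedUtc);
END
""")
conn.close()
```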

In the SQL Server product group, many years ago we started the investment of reinventing the architecture of the RDBMS engine to leverage modern hardware trends. This resulted in PowerPivot and In-memory ColumnStore Index in SQL2012, and In-memory OLTP is the new addition for OLTP workloads we are introducing for SQL2014 together with the updatable clustered ColumnStore index and (SSD) bufferpool extension. It has been a long and complex process to build this next generation relational engine, especially with our explicit decision of seamlessly integrating it into the existing SQL Server instead of releasing a separate product – in the belief that it provides the best customer value and onboarding experience.
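
For the two companion features mentioned here, a similarly hedged sketch (placeholder table name, file path, and size) of creating an updatable clustered columnstore index and enabling the SSD buffer pool extension on a SQL Server 2014 instance:

```python
import pyodbc

conn = pyodbc.connect("DRIVER={SQL Server Native Client 11.0};"
                      "SERVER=myserver;DATABASE=SalesDb;Trusted_Connection=yes")
conn.autocommit = True
cur = conn.cursor()

# Updatable clustered columnstore index (new in SQL Server 2014): the whole table is
# stored column-wise and still accepts INSERT/UPDATE/DELETE.
cur.execute("CREATE CLUSTERED COLUMNSTORE INDEX cci_FactSales ON dbo.FactSales;")

# Buffer pool extension: spill clean buffer pool pages to a local SSD file.
cur.execute("ALTER SERVER CONFIGURATION SET BUFFER POOL EXTENSION ON "
            "(FILENAME = 'E:\\SSD\\SalesDb.BPE', SIZE = 50 GB);")
conn.close()
```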

Now that we are releasing SQL2014 CTP1 as a public preview, it's a great opportunity for you to get hands-on experience with this new technology, and we are eager to get your feedback and improve the product. In addition to BOL (Books Online) content, we will roll out a series of technical blogs on In-memory OLTP to help you understand and leverage this preview release effectively.

In the upcoming series of blogs, you will see the following in-depth topics on In-memory OLTP:

  • Getting started – to walk through a simple sample database application using In-memory OLTP so that you can start experimenting with the public CTP release.
  • Architecture – to understand at a high level how In-memory OLTP is designed and built into SQL Server, and how the different concepts like memory optimized tables, native compilation of SPs and query inter-op fit together under the hood.
  • Customer experiences so far – we have had many TAP customer engagements over the past two years, and their feedback has helped shape the product; we would like to share with you some of the learnings and customer experiences, such as typical application patterns and performance results.
  • Hardware guidance – it is apparent that memory size is a factor, but since most applications require full durability, In-memory OLTP still requires log and checkpointing IO, and with the much higher transactional throughput, it can actually put even higher demand on the IO subsystem as a result. We will also cover how Windows Azure VMs can be used with In-memory OLTP.
  • Application migration – how to get started with migrating to or building a new application with In-memory OLTP. You will see multiple blog posts covering the AMR tool, table and SP migrations, and pointers on how to work around some unsupported data types and T-SQL surface area, as well as the transactional model used. We will highlight the unique approach to SQL Server integration, which supports a partial database migration.
  • Managing In-memory OLTP – this will cover the DBA considerations, and you will see multiple posts ranging from the tooling support (SSMS) to more advanced topics such as how memory and storage are managed.
  • Limitations and what's coming – explaining what limitations exist in CTP1 and what new capabilities are expected in CTP2 and RTM, so that you can plan your roadmap with clarity.

In addition, we will also have blog coverage on what's new with In-memory ColumnStore and an introduction to the bufferpool extension.

SQL2014 CTP1 is available for download here, or you can read the complete blog series here.

bwin is the largest regulated online gaming company in the world, and their success depends on positive customer experiences. They had recently upgraded some of their systems to SQL Server 2012, gaining significant in-memory benefit using xVelocity Column Store. Here, bwin takes their systems one step further by using the technology preview of SQL Server 2014 In-Memory OLTP (formerly known as Project "Hekaton"). Prior to using In-Memory OLTP, their online gaming systems were handling about 15,000 requests per second. Using In-Memory OLTP, the fastest tests so far have scaled to 250,000 transactions per second.

Recently I posted a video about how the SQL Server Community was looking into emerging trends in BI and Database technologies – one of the key technologies mentioned in that video was in-memory.

Many Microsoft customers have been using in-memory technologies as part of SQL Server since 2010 including xVelocity Analytics, xVelocity Column Store and Power Pivot, something we recently covered in a blog post following the ‘vaporware’ outburst from Oracle SVP of Communications, Bob Evans. Looking forward, Ted Kummert recently announced project codenamed “Hekaton,” available in the next major release of SQL Server. “Hekaton” will provide a full in-memory transactional engine, and is currently in private technology preview with a small set of customers. This technology will provide breakthrough performance gains of up to 50 times.

For those who are keen to get a first view of customers using the technology, below is the video of online gaming company bwin using “Hekaton”.

Bwin is the largest regulated online gaming company in the world, and their success depends on positive customer experiences. They had recently upgraded some of their systems to SQL Server 2012 – a story you can read here. Bwin had already gained significant in-memory benefit using xVelocity Column Store, for example – a large report that used to take 17 minutes to render now takes only three seconds.

Given the benefits they had seen with in-memory technologies, they were keen to trial the technology preview of "Hekaton". Prior to using "Hekaton", their online gaming systems were handling about 15,000 requests per second, a huge number for most companies. However, bwin needed to be agile and stay ahead of the competition, and so they wanted access to the speed of the latest technology.

Using “Hekaton” bwin were hoping they could at least double the number of transactions. They were ‘pretty amazed’ to see that the fastest tests so far have scaled to 250,000 transactions per second.

So how fast is "Hekaton"? Just ask Rick Kutschera, the Database Engineering Manager at bwin – in his words it's 'Wicked Fast'! However, this is not the only point that Rick highlights; he goes on to mention that "Hekaton" integrates seamlessly into the SQL Server engine, so if you know SQL Server, you know "Hekaton".

— David Hobbs-Mallyon, Senior Product Marketing Manager

Quentin Clark
Corporate Vice President, Data Platform Group

This morning, during my keynote at the Professional Association of SQL Server (PASS) Summit 2013, I discussed how customers are pushing the boundaries of what’s possible for businesses today using the advanced technologies in our data platform. It was my pleasure to announce the second Community Technology Preview (CTP2) of SQL Server 2014 which features breakthrough performance with In-Memory OLTP and simplified backup and disaster recovery in Windows Azure.

Pushing the boundaries

We are pushing the boundaries of our data platform with breakthrough performance, cloud capabilities and the pace of delivery to our customers. Last year at PASS Summit, we announced our In-Memory OLTP project “Hekaton” and since then released SQL Server 2012 Parallel Data Warehouse and public previews of Windows Azure HDInsight and Power BI for Office 365. Today we have SQL Server 2014 CTP2, our public and production-ready release shipping a mere 18 months after SQL Server 2012. 

Our drive to push the boundaries comes from recognizing that the world around data is changing.

  • Our customers are demanding more from their data – higher levels of availability as their businesses scale and globalize, major advancements in performance to align to the more real-time nature of business, and more flexibility to keep up with the pace of their innovation. So we provide in-memory, cloud-scale, and hybrid solutions. 
  • Our customers are storing and collecting more data – machine signals, devices, services and data from outside even their organizations. So we invest in scaling the database and a Hadoop-based solution. 
  • Our customers are seeking the value of new insights for their business. So we offer them self-service BI in Office 365 delivering powerful analytics through a ubiquitous product and empowering users with new, more accessible ways of gaining insights.

In-memory in the box for breakthrough performance

A few weeks ago, one of our competitors announced plans to build an in-memory column store into their database product some day in the future. We shipped similar technology two years ago in SQL Server 2012, and have continued to advance that technology in SQL Server 2012 Parallel Data Warehouse and now with SQL Server 2014. In addition to our in-memory columnar support in SQL Server 2014, we are also pushing the boundaries of performance with in-memory online transaction processing (OLTP). A year ago we announced project “Hekaton,” and today we have customers realizing performance gains of up to 30x. This work, combined with our early investments in Analysis Services and Excel, means Microsoft is delivering the most complete in-memory capabilities for all data workloads – analytics, data warehousing and OLTP. 

We do this to allow our customers to make breakthroughs for their businesses. SQL Server is enabling them to rethink how they can accelerate and exceed the speed of their business.

image

  • TPP is a clinical software provider managing more than 30 million patient records – half the patients in England – including 200,000 active registered users from the UK’s National Health Service.  Their systems handle 640 million transactions per day, peaking at 34,700 transactions per second. They tested a next-generation version of their software with the SQL Server 2014 in-memory capabilities, which has enabled their application to run seven times faster than before – all of this done and running in half a day. 
  • Ferranti provides solutions for the energy market worldwide, collecting massive amounts of data using smart metering. With our in-memory technology they can now process a continuous data flow from up to 200 million measurement channels, making the system fully capable of meeting the demands of smart meter technology.
  • SBI Liquidity Market in Japan provides online services for foreign currency trading. By adopting SQL Server 2014, the company has increased throughput from 35,000 to 200,000 transactions per second. They now have a trading platform that is ready to take on the global marketplace.

A closer look into In-memory OLTP

Previously, I wrote about the journey of the in-memory OLTP project Hekaton, where a group of SQL Server database engineers collaborated with Microsoft Research. Changes in the ratios between CPU performance, IO latencies and bandwidth, cache and memory sizes as well as innovations in networking and storage were changing assumptions and design for the next generation of data processing products. This gave us the opening to push the boundaries of what we could engineer without the constraints that existed when relational databases were first built many years ago. 

Challenging those assumptions, we engineered for dramatically changing latencies and throughput for so-called "hot" transactional tables in the database. Lock-free, row-versioning data structures and the compilation of T-SQL and queries into native code, combined with programming semantics that stay consistent with SQL Server, mean our customers can apply the performance benefits of extreme transaction processing without application rewrites or the adoption of entirely new products. 

image

The continuous data platform

Windows Azure fulfills new scenarios for our customers – transcending what is on-premises or in the cloud. Microsoft is providing a continuous platform from our traditional products that are run on-premises to our cloud offerings. 

With SQL Server 2014, we are bringing the cloud into the box. We are delivering high availability and disaster recovery on Windows Azure built right into the database. This enables customers to benefit from our global datacenters: AlwaysOn Availability Groups that span on-premises and Windows Azure Virtual Machines, database backups directly into Windows Azure storage, and even the ability to store and run database files directly in Windows Azure storage. That last scenario really does something interesting – now you can have an infinitely-sized hard drive with incredible disaster recovery properties with all the great local latency and performance of the on-premises database server. 
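
The "database files directly in Windows Azure storage" scenario works by registering a shared access signature for the blob container as a SQL Server credential and then pointing the database files at blob URLs. A minimal sketch, with placeholder storage account, container, and SAS token:

```python
import pyodbc

conn = pyodbc.connect("DRIVER={SQL Server Native Client 11.0};"
                      "SERVER=myserver;DATABASE=master;Trusted_Connection=yes")
conn.autocommit = True
cur = conn.cursor()

# Credential whose name is the container URL and whose secret is a Shared Access Signature.
cur.execute("""
CREATE CREDENTIAL [https://mystorageaccount.blob.core.windows.net/data]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE', SECRET = '<sas-token>';
""")

# Database whose data and log files live as blobs in that container, while the
# instance itself runs on-premises or in a Windows Azure Virtual Machine.
cur.execute("""
CREATE DATABASE CloudDataDb
ON (NAME = CloudDataDb_data,
    FILENAME = 'https://mystorageaccount.blob.core.windows.net/data/CloudDataDb.mdf')
LOG ON (NAME = CloudDataDb_log,
    FILENAME = 'https://mystorageaccount.blob.core.windows.net/data/CloudDataDb.ldf');
""")
conn.close()
```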

We're not just providing easy backup in SQL Server 2014 – today we announced that backup to Windows Azure will be available for all our currently supported SQL Server releases. Together, the backup to Windows Azure capabilities in SQL Server 2014 and via the standalone tool offer customers a single, cost-effective backup strategy for secure off-site storage with encryption and compression across all supported versions of SQL Server.

By having a complete and continuous data platform we strive to empower billions of people to get value from their data. It’s why I am so excited to announce the availability of SQL Server 2014 CTP2, hot on the heels of the fastest-adopted release in SQL Server’s history, SQL Server 2012. Today, more businesses solve their data processing needs with SQL Server than any other database. It’s about empowering the world to push the boundaries.


4.3
Unlock Insights from any Data / Big Data – Microsoft SQL Server Parallel Data Warehouse (PDW) and Windows Azure HDInsight
:

Data is being generated faster than ever before, so what can it do for your business? Learn how to unlock insights on any data by empowering people with BI and big data tools to go from raw data to business insights faster and easier. Learn more: http://www.microsoft.com/datainsights
With the abundance of information available today, BI shouldn’t be confined to analysts or IT. Learn how to empower all with analytics through familiar Office tools, and how to manage all your data needs with a powerful and scalable data platform. Learn more: http://www.microsoft.com/BI
With data volumes exploding by 10x every five years, and much of this growth coming from new data types, data warehousing is at a tipping point. Learn how to evolve your data warehouse infrastructure to support variety, volume, and velocity of data. Learn more: http://www.microsoft.com/datawarehousing
Hear from HP, Dell and Hortonworks how Microsoft SQL Server Parallel Data Warehouse and Windows Azure HDInsight can unlock data insights and respond to business opportunities through big data analytics. Find solutions and services from partners that span the entire stack of Microsoft Cloud OS products and technologies: http://www.microsoft.com/en-us/server-cloud/audience/partner.aspx#fbid=631zRfiT0WJ
The idea that big data will transform businesses and the world is indisputable, but are there enough resources to fully embrace this opportunity? Join Quentin Clark, Microsoft Corporate Vice President, who will share Microsoft’s bold goal to consumerize big data — simplifying the data science process and providing easy access to data with everyday tools. This keynote is sponsored by Microsoft.
Quentin Clark discusses the ever-changing big data market and how Microsoft is meeting its demands.

Announcing Windows Azure HDInsight: Where big data meets the cloud [The Official Microsoft Blog, Oct 28, 2013]

This post is from Quentin Clark, Corporate Vice President of the Data Platform Group at Microsoft.

I am pleased to announce that Windows Azure HDInsight – our cloud-based distribution of Hadoop – is now generally available on Windows Azure. The GA of HDInsight is an important milestone for Microsoft, as it’s part of our broader strategy to bring big data to a billion people.

On Tuesday at Strata + Hadoop World 2013, I will discuss the opportunity of big data in my keynote, “Can Big Data Reach One Billion People?” Microsoft’s perspective is that embracing the new value of data will lead to a major transformation as significant as when line-of-business applications matured to the point where they touched everyone inside an organization. But how do we realize this transformation? It happens when big data finds its way to everyone in business – when anyone with a question that can be answered by data gets their answer. The impact of this goes beyond just making businesses smarter and more efficient. It’s about changing how business works through both people and data-driven insights. Data will drive the kinds of changes that, for example, allow personalization to become truly prevalent. People will drive change by gaining insights into what impacts their business, enabling them to change the kinds of partnerships and products they offer.

Our goal to empower everyone with insights is the reason why Microsoft is investing not just in technology like Hadoop, but in the whole circuit required to get value from big data. Our customers are demanding more from the data they have – not just higher availability, global scale and longer histories of their business data, but that their data works with the business in real time and can be leveraged in a flexible way to help them innovate. And they are collecting more signals – from machines and devices and sources outside their organizations.

Some of the biggest changes to businesses driven by big data are created by the ability to reason over data previously thought unmanageable, as well as data that comes from adjacent industries. Think about the use of equipment data to do better operational cost and maintenance management, or a loan company using shipping data as part of the loan evaluation. All of this data needs all forms of analytics and the ability to reach the people making decisions. Organizations that complete this circuit, thereby creating the capability to listen to what the data can tell them, will accelerate.

Bringing Hadoop to the enterprise

Hadoop is a cornerstone of how we will realize value from big data. That’s why we’ve engineered HDInsight as 100 percent Apache Hadoop offered as an Azure cloud service. The service has been in public production preview for a number of months now – the reception has been tremendous and we are excited to bring it to full GA status in Azure. 

Microsoft recognizes Hadoop as a standard and is investing to ensure that it’s an integral part of our enterprise offerings. We have invested through real contributions across the project – not just to make Hadoop work great on Windows, but even in projects like Tez, Stinger and Hive. We have put in thousands of engineering hours and tens of thousands of lines of code. We have been doing this in partnership with Hortonworks, who will make HDP (Hortonworks Data Platform) 2.0 for Windows Server generally available next month, giving the world access to a supported Apache-pure Hadoop v2 distribution for Windows Server. Working with Hortonworks, we will support Hadoop v2 in a future update to HDInsight.

Windows Azure HDInsight combines the best of Hadoop open source technology with the security, elasticity and manageability that enterprises require. We have built it to integrate with Excel and Power BI – our business intelligence offering that is part of Office 365 – allowing people to easily connect to data through HDInsight, then refine it and do business analytics in a turnkey fashion. For the developer, HDInsight also supports a choice of languages: .NET, Java and more.

We have key customers currently using HDInsight, including:

  • The City of Barcelona uses Windows Azure HDInsight to pull in data about traffic patterns, garbage collection, city festivals, social media buzz and more to make critical decisions about public transportation, security and overall spending.
  • A team of computer scientists at Virginia Tech developed an on-demand, cloud-computing model using the Windows Azure HDInsight Service, enabling easier, more cost-effective access to DNA sequencing tools and resources.
  • Christian Hansen, a developer of natural ingredients for several industries, collects electronic data from a variety of sources, including automated lab equipment, sensors and databases. With HDInsight in place, they are able to collect and process data from trials 100 times faster than before.

End-to-end solutions for big data

These kinds of uses of Hadoop are examples of how big data is changing what’s possible. Our Hadoop-based solution HDInsight is a building block – one important piece of the end-to-end solutions required to get value from data.

All this comes together in solutions where people can use Excel to pull data directly from a range of sources, including SQL Server (the most widely-deployed database product), HDInsight, external Hadoop clusters and publicly available datasets. They can then use our business intelligence tools in Power BI to refine that data, visualize it and just ask it questions. We believe that by putting widely accessible and easy-to-deploy tools in everyone’s hands, we are helping big data reach a billion people. 

I am looking forward to tomorrow. The Hadoop community is pushing what’s possible, and we could not be happier that we made the commitment to contribute to it in meaningful ways.

Quentin Clark, Microsoft, at Big Data NYC 2013 with John Furrier and Dave Vellante

“We’re here because we’re super committed to Hadoop,” Clark said, explaining that Microsoft is dedicated to helping its customers embrace the benefits Big Data can provide them with. “Hadoop is the cornerstone of Big Data but not the entire infrastructure,” he added. Microsoft is focusing on adding security and tool integration, with thousands of hours of development put into Hadoop, to make it ready for the enterprise. “There’s a foundational piece where customers are starting,” which they can build upon, and Microsoft focuses on helping them embrace Hadoop as part of the IT giant’s business goals.

Asked to compare the adoption of traditional Microsoft products with the company’s Hadoop products, Clark said, “a big part of our effort was to get to that enterprise expectations.” Security and tools integration, and getting Hadoop to work on Windows, are part of that effort. Microsoft aims to help people “have a conversation and dialogue with the data. We make sure we funnel all the data to help them get the BI and analytics” they need.

Commenting on Microsoft’s statement of bringing Big Data to its one billion Office users, Vellante asked if the company’s strategy was to put the power of Big Data into Excel. Clark explained it was about putting Big Data into the Office suite, noting that there are already more than a billion people who are passively using Big Data. Microsoft focuses on those actively using it.

Clark mentioned that Microsoft has focused on the sports arena, helping major sports leagues use Big Data to power fantasy teams. “We actually have some models, use some data sets. I have a fantasy team that I’m doing pretty well with, partly because of my ability to really have a conversation with the data. On the business side, it’s transformational. Our ability to gain insight in real time and interact is very different using these tools,” Clark stated.

Why not build its own Hadoop distro?

Asked why Microsoft decided not to have its own Hadoop distribution, Clark explained that “primarily our focus has been in improving the Apache core, make Hadoop work on Windows and work great. Our partnership with Hortonworks just made sense. They are able to continue to push and have that cross platform capability, we are able to offer our customers a solution.”

Explaining that there were great discrepancies in how different companies in the same industries made use of the benefits of Big Data, he advised viewers to “look at what the big companies are doing” in embracing the data, and to look at what they are achieving with it.

As far as the future of the Big Data industry is concerned, Clark stated: “There’s a consistent meme of how is this embraced by business for results. Sometimes with the evolution of technology, everyone is exploring what it’s capable of.” Now there’s a focus shift of the industry towards what greater purpose it leads to, what businesses can accomplish.



4.4
Empower people-centric IT – Microsoft Virtual Desktop Infrastructure (VDI)
:

Microsoft Virtual Desktop Infrastructure (VDI) enables IT to deliver desktops and applications that employees can access from anywhere, on both personal and corporate devices. Centralizing and controlling applications and data through a virtual desktop enables your people to get their work done on the devices they choose while helping maintain compliance. Learn more: http://www.microsoft.com/msvdi
With dramatic growth in the number of mobile users and personal devices at work, and mounting pressure to comply with governmental regulations, IT organizations are increasingly turning to Microsoft Virtual Desktop Infrastructure (VDI) solutions. This session will provide an overview of Microsoft’s VDI solutions and will drill into some of the new, exciting capabilities that Windows Server 2012 R2 offers for VDI solutions.

In October, we announced Windows Server 2012 R2, which delivers several exciting improvements for VDI solutions. Among the benefits, Windows Server 2012 R2 reduces the cost per seat for VDI and enhances your end users’ experience. The following are just some of the features and benefits of Windows Server 2012 R2 for VDI:

  • Online data deduplication on actively running VMs reduces storage capacity requirements by up to 90% on persistent desktops.
  • Tiered storage spaces manage your tiers of storage (fast SSDs vs. slower HDDs) intelligently so that the most frequently accessed data blocks are automatically moved onto faster-tier drives. Likewise, older or seldom-accessed files are moved onto the cheaper and slower SAS drives.
  • The Microsoft Remote Desktop App provides easy access from a variety of devices and platforms, including Windows, Windows RT, iOS, Mac OS X and Android. This is good news for your end users and your mobility/BYOD strategy!
  • Your user experience is also enhanced due to improvements on several fronts including RemoteFX, DirectX 11.1 support, RemoteApp, quick reconnect, session shadowing, dynamic monitor and resolution changes.

If your VDI solutions run on Dell servers or if you are looking at deploying new VDI infrastructure, we are excited to let you know about the work we have been doing in partnership with Dell around VDI. Dell recently updated their Desktop Virtualization Solution (DVS) for Windows Server to support Windows Server 2012 R2, and DVS now delivers all of the benefits mentioned above. Dell is also delivering additional enhancements into Dell DVS for Windows Server so it will also support:

  • Windows 8.1 with touch screen devices and new Intel Haswell processors
  • Unified Communication with Lync 2013, via an endpoint plug-in that enables P2P audio and video. (Dell Wyse has certified selected Windows thin clients to this effect, such as the D90 and Z90.)
  • Virtualized shared graphics on NVidia GRID K1/K2 and AMD FirePro cards using Microsoft RemoteFX technology
  • Affordable persistent desktops
  • Highly-secure and dual/quad core Dell Wyse thin clients, for a true end-to-end capability, even when using high-end server graphics cards or running UC on Lync 2013
  • Optional Dell vWorkspace software, also supporting Windows Server 2012 R2, that brings scalability to tens of thousands of seats, advanced VM provisioning, IOPS efficiency to reduce storage requirement and improve performance, diagnostics and monitoring, flexible resource assignments, support for multi-tenancy and more.
  • Availability in more than 30 countries

Depending on where you stand in the VDI deployment cycle in your organization, Dell DVS for Windows Server is already supported today on multiple Dell PowerEdge server platforms:

  • The T110 for a pilot/POC up to 10 seats
  • The VRTX for implementation in a remote or branch office of up to about 500 users
  • The R720 for a traditional enterprise-like, flexible and scalable implementation to several thousand seats. It supports flexible deployments such as application virtualization, RDSH, pooled and persistent VMs.

This week, Microsoft and Dell will present a technology showcase at Dell World in Austin (TX), USA. If you happen to be at the show, you will be able to see for yourself how well Windows Server 2012 R2 and Windows 8.1 integrate into Dell DVS. We will show:

  • The single management console of Windows Server 2012 installed on a Dell PowerEdge VRTX, demonstrating how easy it can be for an IT administrator to manage VDI workloads based on Hyper-V in a remote or branch office environment
  • How users can chat, talk, share, meet, transfer files and conduct video conferencing within virtualized desktops set up for unified communication
  • That you can watch HD multimedia and 3D graphics files on multiple virtual desktops sharing a graphic card installed remotely in a server
  • How affordable it is to run persistent desktops with DVS and Windows Server 2012 R2

We are excited about the work that we are doing with Dell around VDI and hope you have a chance to come visit our joint VDI showcase in Austin. We will be located in the middle of the Dell booth in the show expo hall. Also, we will show a VDI demo as part of the Microsoft Cloud OS breakout session at noon on Thursday (December 12th) in room 9AB. Finally, we will show a longer VDI demo in the show expo theater (next to the Microsoft booth) at 10 am on Friday (December 13th). We are looking forward to seeing you there.

With the Microsoft Remote Desktop app, you can connect to a remote PC and your work resources from almost anywhere. Experience the power of Windows with RemoteFX in a Remote Desktop client designed to help you get your work done wherever you are.

Post from Brad Anderson,
Corporate Vice President of Windows Server & System Center at Microsoft.

As of yesterday afternoon, the Microsoft Remote Desktop App is available in the Android, iOS, and Mac stores (see screen shots below). There was a time, in the very recent past, when many thought something like this would never happen.

If your company has users who work on iPads, Android, and Windows RT devices, you also likely have a strategy (or at least a point of view) for how you will deliver Windows applications to those devices. With the Remote Desktop App and the 2012 R2 platforms made available earlier today, you now have a great solution from Microsoft to deliver Windows applications to your users across all the devices they are using.

As I have written about before, one of the things I am actively encouraging organizations to do is to step back and look at their strategy for delivering applications and protecting data across all of their devices. Today, most enterprises are using different tools for enabling users on PCs, and then they deploy another tool for enabling users on their tablets and smart phones. This kind of overhead and the associated costs are unnecessary – but, even more important (or maybe I should say worse), is that your end-users therefore have different and fragmented experiences as they transition across their various devices. A big part of an IT team’s job must be to radically simplify the experience end users have in accomplishing their work – and users are doing that work across all their devices.

I keep bolding “all” here because I am really trying to make a point: Let’s stop thinking about PCs and devices in a fragmented way. What we are trying to accomplish is pretty straightforward: Enable users to access the apps and data they need to be productive in a way that can ensure the corporate assets are secure. Notice that nowhere in that sentence did I mention devices. We should stop talking about PC Lifecycle management, Mobile Device Management and Mobile Application Management – and instead focus our conversation on how we are enabling users. We need a user-enablement Magic Quadrant!

OK – stepping off my soapbox. Smile

Delivering Windows applications in a server-computing model, through solutions like Remote Desktop Services, is a key requirement in your strategy for application access management. But keep in mind that this is only one of many ways applications can be delivered – and we should consider and account for all of them.

For example, you also have to consider Win32 apps running in a distributed model, modern Windows apps, iOS native apps (side-loaded and deep-linked), Android native apps (side-loaded and deep-linked), SaaS applications, and web applications.

Things have really changed from just 5 years ago when we really only had to worry about Windows apps being delivered to Windows devices.

As you are rethinking your application access strategy, you need solutions that enable you to intelligently manage all these applications types across all the devices your workforce will use.

You should also consider that the Remote Desktop Apps released yesterday are proof of Microsoft’s commitment to enable you to have a single solution to manage all the devices your users will use.

Microsoft describes itself as a “devices and services company.” Let me provide a little more insight into this.

Devices: We will do everything we can to earn your business on Windows devices.

Services: We will light up those Windows devices with the cloud services that we build, and these cloud services will also light up all (there’s that bold again) your other devices.

The funny thing about cloud services is that they want every device possible to connect to them – we are working to make sure the cloud services that we are building for the enterprise will bring value to all (again!) the devices your users will want to use – whether those are Windows, iOS, or Android.

The RDP clients that we released into the stores yesterday are not v1 apps. Back in June, we acquired IP assets from an organization in Austria (HLW Software Development GMBH) that had been building and delivering RDP clients for a number of years. In fact, there were more than 1 million downloads of their RDP clients from the Apple and Android stores.  The team has done an incredible job using them as a base for development of our Remote Desktop App, creating a very simple and compelling experience on iOS, Mac OS X and Android. You should definitely give them a try!

Also: Did I mention they are free?

To start using the Microsoft Remote Desktop App for any of these platforms, simply follow these links:

Setup: – Windows 8.1 Pro running on a slow netbook, a BenQ Joybook Lite U101 with an Atom N270! – HTC One X running Android 4.2.2 – HTC Flyer running Android 3.2.1. How to: http://android-er.blogspot.com/2013/10/basic-setup-for-microsoft-remote.html

Satya Nadella’s (?the next Microsoft CEO?) next ten years’ vision of “digitizing everything”, Microsoft opportunities and challenges seen by him with that, and the case of Big Data

… as one of the crucial issues for that (in addition to the cloud, mobility and Internet-of-Things), via the current tipping point as per Microsoft, and the upcoming revolution in that as per Intel

Satya Nadella, Cloud & Enterprise Group, Microsoft and Om Malik, Founder & Senior Writer, GigaOM [LeWeb YouTube channel, Dec 10, 2013]

Satya is responsible for building and running Microsoft’s computing platforms, developer tools and cloud services. He and his team deliver the “Cloud OS.” Rumored to be on the short list for CEO, he shares his views on the future. [Interviewed during the “Plenary I” devoted to “The Next 10 years” at Day 1 on Dec 10, 2013.]

And why will I present Big Data after that? For a very simple reason: IMHO it is exactly in Big Data that Microsoft’s innovations have reached a point at which its technology has the best chance to become dominant and subsequently define the standard for the IT industry—resulting in “winner-take-all” economies of scale and scope. Whatever Intel is going to add in terms of “technologies for the next Big Data revolution” will only strengthen Microsoft’s current innovative position even more. For this reason I will also include here the upcoming Intel innovations for Big Data.

In this next-gen regard it is also highly recommended to read: Disaggregation in the next-generation datacenter and HP’s Moonshot approach for the upcoming HP CloudSystem “private cloud in-a-box” with the promised HP Cloud OS based on the 4-year-old OpenStack effort with others [‘Experiencing the Cloud’, Dec 12, 2013]!

Now the detailed discussion of Big Data:

Microsoft® makes Big Data work for you! [HP Discover YouTube channel, recorded on Dec 11; published on Dec 12, 2013]

[Doug Smith, Director, Emerging Technologies, Microsoft] Come and join our Innovation Theatre session to hear how customers are solving Big Data challenges in big ways jointly with HP!

The Garage Series: Unleashing Power BI for Office 365 [Office 365 technology blog, Nov 20, 2013]

In this week’s show, host Jeremy Chapman is joined by Michael Tejedor from the SQL Server team to discuss Power BI and show it in action. Power BI for Office 365 is a cloud-based solution that reduces the barriers to deploying a self-service Business Intelligence environment for sharing live Excel-based reports and data queries, as well as new features and services that enable ease of data discovery and information access from anywhere. Michael lays out the self-service approach to Power BI as well as how public data can be queried and combined in a unified view within Excel. Then they walk through an end-to-end demo of Excel and Power BI components–Power Query [formerly known as “Data Explorer”], Power Pivot, Power View, Power Map [formerly known as product codename “Geoflow”] and Q&A–as they optimize profitability of a bar and rein in bartenders with data.

Last week Mark Kashman and I went through the administrative controls of managing user access and mobile devices, but this week I’m joined by Michael Tejedor and we shift gears completely to talk data, databases and business intelligence. Back in July we announced Power BI for Office 365, and how this new service, using the familiar tools within Excel, enables you to discover, analyze, visualize and share data in powerful ways. The solution includes Power Query, Power Pivot, Power View, Power Map and Q&A, as well as a host of other Power BI features.

  • Power Query [formerly known as “Data Explorer”] is a data search engine allowing you to query data from within your company and from external data sources on the Internet, all within Excel.
  • Power Pivot lets you create flexible models within Excel that can process large data sets quickly using SQL Server’s in-memory database.
  • Power View allows you to manipulate data and compile it into charts, graphs and other visualizations. It’s great for presentations and reports.
  • Power Map [formerly known as product codename “Geoflow”] is a 3D data visualization tool for mapping, exploring and interacting with geographic and temporal data.
  • Q&A is a natural language query engine that lets users easily query data using common terms and phrases.

In many cases, the process to get custom reports and dashboards from the people running your databases, sales or operations systems is something like submitting a request to your database administrator and a few phone calls or meetings to get what you want. I came from a logistics and operations management background, where it could easily take 2 or 3 weeks to make even minor tweaks to an operational dashboard. Now you can use something familiar–Excel–in a self-service way to hook into your local databases, Excel flat files, modern data sources like Hadoop or public data sources via Power Query and the data catalogue. All of these data sources can be combined to create powerful insights and data visualizations, and all can be easily and securely shared with the people you work with through the Power BI for Office 365 service.

Of course all of this sounds great, but you can’t really get a feel for it until you see it. Michael and team built out a great demo themed after a bar and using data to track alcohol profitability, pour precision per bartender and Q&A to query all of this using normal query terms. You’ll want to watch the show to see how everything turns out and of course to see all of these power tools in action. Of course if you want to kick the tires and try Power BI for Office 365, you can register for the preview now.

Intel: technologies for the next Big Data revolution [HP Discover YouTube channel, recorded on Dec 11; published on Dec 12, 2013]

[Patrick Buddenbaum, Director, Enterprise Segment, Intel Corporation at HP Discover Barcelona 2013 on Dec 11, 11:40 AM – 12:00 PM] HP and Intel share the belief that every organization and individual should be able to unlock intelligence from the world’s ever increasing set of data sources—the Internet of Things.

 

Related “current tipping point” announcements from Microsoft:

From: Organizations Speed Business Results With New Appliances From HP and Microsoft [joint press release, Jan 18, 2011]

New solutions for business intelligence, data warehouse, messaging and database consolidation help increase employee productivity and reduce IT complexity.

… The HP Business Decision Appliance is available now to run business intelligence services ….

Delivering on the companies’ extended partnership announced a year ago, the new converged application appliances from HP and Microsoft are the industry’s first systems designed for IT, as well as end users. They deliver application services such as business intelligence, data warehousing, online transaction processing and messaging. The jointly engineered appliances, and related consulting and support services, enable IT to deliver critical business applications in as little as one hour, compared with potentially months needed for traditional systems.3 One of the solutions already offered by HP and Microsoft — the HP Enterprise Data Warehouse Appliance — delivers up to 200 times faster queries and 10 times the scalability of traditional Microsoft SQL Server deployments.4

With the HP Business Decision Appliance, HP and Microsoft have greatly reduced the time and effort it takes for IT to configure, deploy and manage a comprehensive business intelligence solution, compared with a traditional business intelligence solution where applications, infrastructure and productivity tools are not pre-integrated. This appliance is optimized for Microsoft SQL Server and Microsoft SharePoint and can be installed and configured by IT in less than one hour.

The solution enables end users to share data analyses built with Microsoft’s award-winning5 PowerPivot for Excel 2010 and collaborate with others in SharePoint 2010. It allows IT to centrally audit, monitor and manage solutions created by end users from a single dashboard.

Availability and Pricing6

  • The HP Business Decision Appliance with three years of HP 24×7 hardware and software support services is available today from HP and HP/Microsoft Frontline channel partners for less than $28,000 (ERP). Microsoft SQL Server 2008 R2 and Microsoft SharePoint 2010 are licensed separately.

  • The HP Enterprise Data Warehouse Appliance with services for site assessment, installation and startup, as well as three years of HP 24×7 hardware and software support services, is available today from HP and HP/Microsoft Frontline channel partners starting at less than $2 million. Microsoft SQL Server 2008 R2 Parallel Data Warehouse is licensed separately.

3 Based on HP’s experience with customers using HP Business Decision Appliance.
4 SQL Server Parallel Data Warehouse (PDW) has been evaluated by 16 early adopter customers in six different industries. Customers compared PDW with their existing environments and saw typically 40x and up to 200x improvement in query times.
5 Messaging and Online Collaboration Reviews, Nov. 30, 2010, eWEEK.com.
6 Estimated retail U.S. prices. Actual prices may vary.

From: HP Delivers Enterprise Agility with New Converged Infrastructure Solutions [press release, June 6, 2011]

HP today announced several industry-first Converged Infrastructure solutions that improve enterprise agility by simplifying deployment and speeding IT delivery.

Converged Systems accelerate time to application value

HP Converged Systems speed solution deployment by providing a common architecture, management and security model across virtualization, cloud and dedicated application environments. They include:

  • HP AppSystem maximizes performance while simplifying deployment and application management. These systems offer best practice operations with a standard architecture that lowers total cost of ownership. Among the new systems are HP Vertica Analytics System, as well as HP Database Consolidation Solution and HP Business Data Warehouse Appliance, which are both optimized for Microsoft SQL Server 2008 R2.

From: Microsoft Expands Data Platform With SQL Server 2012, New Investments for Managing Any Data, Any Size, Anywhere [press release, Oct 12, 2011]

New technologies will give businesses a universal platform for data management, access and collaboration.

… Kummert described how SQL Server 2012, formerly code-named “Denali,” addresses the growing challenges of data and device proliferation by enabling customers to rapidly unlock and extend business insights, both in traditional datacenters and through public and private clouds. Extending on this foundation, Kummert also announced new investments to help customers manage “big data,” including an Apache Hadoop-based distribution for Windows Server and Windows Azure and a strategic partnership with Hortonworks Inc. …

The company also made available final versions of the Hadoop Connectors for SQL Server and Parallel Data Warehouse. Customers can use these connectors to integrate Hadoop with their existing SQL Server environments to better manage data across all types and forms.

SQL Server 2012 delivers a powerful new set of capabilities for mission-critical workloads, business intelligence and hybrid IT across traditional datacenters and private and public clouds. Features such as Power View (formerly Project “Crescent”) and SQL Server Data Tools (formerly “Juneau”) expand the self-service BI capabilities delivered with PowerPivot, and provide an integrated development environment for SQL Server developers.

From: Microsoft Releases SQL Server 2012 to Help Customers Manage “Any Data, Any Size, Anywhere” [press release, March 6, 2012]

Microsoft’s next-generation data platform releases to manufacturing today.

REDMOND, Wash. — March 6, 2012 — Microsoft Corp. today announced that the latest version of the world’s most widely deployed data platform, Microsoft SQL Server 2012, has released to manufacturing. SQL Server 2012 helps address the challenges of increasing data volumes by rapidly turning data into actionable business insights. Expanding on Microsoft’s commitment to help customers manage any data, regardless of size, both on-premises and in the cloud, the company today also disclosed additional details regarding its plans to release an Apache Hadoop-based service for Windows Azure.

Tackling Big Data

IT research firm Gartner estimates that the volume of global data is growing at a rate of 59 percent per year, with 70 to 85 percent in unstructured form.* Furthering its commitment to connect SQL Server and rich business intelligence tools, such as Microsoft Excel, PowerPivot for Excel 2010 and Power View, with unstructured data, Microsoft announced plans to release an additional limited preview of an Apache Hadoop-based service for Windows Azure in the first half of 2012.

To help customers more cost-effectively manage their enterprise-scale workloads, Microsoft will release several new data warehousing solutions in conjunction with the general availability of SQL Server 2012, slated to begin April 1. This includes a major software update and new half-rack form factors for Microsoft Parallel Data Warehouse appliances, as well as availability of SQL Server Fast Track Data Warehouse reference architectures for SQL Server 2012.

Microsoft Simplifies Big Data for the Enterprise [press release, Oct 24, 2012]

New Apache Hadoop-compatible solutions for Windows Azure and Windows Server enable customers to easily extract insights from big data.

NEW YORK — Oct. 24, 2012 — Today at the O’Reilly Strata Conference + Hadoop World, Microsoft Corp. announced new previews of Windows Azure HDInsight Service and Microsoft HDInsight Server for Windows, the company’s Apache Hadoop-based solutions for Windows Azure and Windows Server. The new previews, available today at http://www.microsoft.com/bigdata, deliver Apache Hadoop compatibility for the enterprise and simplify deployment of Hadoop-based solutions. In addition, delivering these capabilities on the Windows Server and Azure platforms enables customers to use the familiar tools of Excel, PowerPivot for Excel and Power View to easily extract actionable insights from the data.

“Big data should provide answers for business, not complexity for IT,” said David Campbell, technical fellow, Microsoft. “Providing Hadoop compatibility on Windows Server and Azure dramatically lowers the barriers to setup and deployment and enables customers to pull insights from any data, any size, on-premises or in the cloud.”

The company also announced today an expanded partnership with Hortonworks, a commercial vendor of Hadoop, to give customers access to an enterprise-ready distribution of Hadoop with the newly released solutions.

“Hortonworks is the only provider of Apache Hadoop that ensures a 100 percent open source platform,” said Rob Bearden, CEO of Hortonworks. “Our expanded partnership with Microsoft empowers customers to build and deploy on platforms that are fully compatible with Apache Hadoop.”

More information about today’s news and working with big data can be found at http://www.microsoft.com/bigdata.

Choose the Right Strategy to Reap Big Value From Big Data [feature article for the press, Nov 13, 2012]

From devices to storage to analytics, technologies that work together will be key for business’ next information age.

REDMOND, Wash. — Nov. 13, 2012 — It seems the gigabyte is going the way of the megabyte — another humble unit of computational measurement that is becoming less and less relevant. Long live the terabyte, impossibly large, increasingly common.
Consider this: Of all the data that’s been collected in the world, more than 90 percent has been gathered in the last two years alone. According to a June 2011 report from the McKinsey Global Institute, 15 out of 17 industry sectors of the U.S. have more data stored — per company — than the U.S. Library of Congress.
The explosion in data has been catalyzed by several factors. Social media sites such as Facebook and Twitter are creating huge streams of unstructured data in the form of opinions, comments, trends and demographics arising from a vast and growing worldwide conversation.
And then there’s the emerging world of machine-generated information. The rise of intelligent systems and the Internet of Things means that more and more specialized devices are connected to information technology — think of a national retail chain that is connected to every one of its point-of-sale terminals across thousands of locations or an automotive plant that can centrally monitor hundreds of robots on the shop floor.
Combine it all and some industry observers are predicting that the amount of data stored by organizations across industries will increase ten-fold every five years, much of it coming from new streams that haven’t yet been tapped.
It truly is a new information age, and the opportunity is huge. The McKinsey Global Institute estimates that the U.S. health care system, for example, could save as much as $300 billion from more effective use of data. In Europe, public sector organizations alone stand to save 250 billion euros.
In the ever-competitive world of business, data strategy is becoming the next big competitive advantage. According to analyst firm Gartner Group,* “By tapping a continual stream of information from internal and external sources, businesses today have an endless array of new opportunities for: transforming decision-making; discovering new insights; optimizing the business; and innovating their industries.”
According to Microsoft’s Ted Kummert, corporate vice president of the Business Platforms Division, companies addressing this challenge today may wonder where to start. How do you know which data to store without knowing what you want to measure? But then again, how do you know what insights the data holds without having it in the first place?
“There is latent value in the data itself,” Kummert says. “The good news is storage costs are making it economical to store the data. But that still leaves the question of how to manage it and gain value from it to move your business forward.”
With new data services in the cloud such as Windows Azure HDInsight Service and Microsoft HDInsight Server for Windows and Microsoft’s Apache Hadoop-based solutions for Windows Azure and Windows Server, organizations can afford to capture valuable data streams now while they develop their strategy — without making a huge financial bet on a six-month, multimillion-dollar datacenter project.
Just having access to the data, says Kummert, can allow companies to start asking much more complicated questions, combining information sources such as geolocation or weather information with internal operational trends such as transaction volume.
“In the end, big data is not just about holding lots of information,” he says. “It’s about how you harness it. It’s about insight, allowing end users to get the answers they need and doing so with the tools they use every day, whether that’s desktop applications, devices at the network edge or something else.”
His point is often overlooked with all the abstract talk of big data. In the end, it’s still about people, so making it easier for information workers to shift to a new world in which data is paramount is just as important as the information itself. Information technology is great at providing answers, but it still doesn’t know how to ask the right questions, and that’s where having the right analytics tools and applications can help companies make the leap from simply storing mountains of data to actually working with it.
That’s why in the Windows 8 world, Kummert says, the platform is designed to extend from devices and phones to servers and services, allowing companies to build a cohesive data strategy from end to end with the ultimate goal of empowering workers.
“When we talk about the Microsoft big data platform, we have all of the components to achieve exactly that,” Kummert says. “From the Windows Embedded platform to the Microsoft SQL Server stack through to the Microsoft Office stack. We have all the components to collect the data, store it securely and make it easier for information workers to find it — and, more importantly, understand what it means.”
For more information on building intelligent systems to get the most out of business data, please visit the Windows Embedded home page.
* Gartner, “Gartner Says Big Data Creates Big Jobs: 4.4 Million IT Jobs Globally to Support Big Data By 2015,” October 2012

Which data management solution delivers against today’s top six requirements? [The HP Blog Hub, March 25, 2013]

By Manoj Suvarna – Director, Product Management, HP AppSystems

In my last post I talked about the six key requirements I believe a data management solution should deliver against today, namely:

1. High performance
2. Fast time to value
3. Built with Big Data as a priority
4. Low cost
5. Simplified management
6. Proven expertise

Today, 25th March 2013, HP has announced the HP AppSystem for Microsoft SQL Server 2012 Parallel Data Warehouse, a comprehensive data warehouse solution jointly engineered with Microsoft, with a wide array of complementary tools, to effectively manage, store, and unlock valuable business insights.

Let’s take a look at how the solution delivers against each of the key requirements in turn:

1  High performance

With its MPP (Massively Parallel Processing) engine and ‘shared nothing’ architecture, the HP AppSystem for Parallel Data Warehouse can deliver linear scale, starting from configurations that support small terabyte requirements all the way up to configurations supporting six petabytes of data. 

The solution features the latest HP ProLiant Gen8 servers, with InfiniBand FDR networking, and uses the xVelocity in-memory analytics engine and the xVelocity memory-optimized columnstore index feature in Microsoft SQL Server 2012 to greatly enhance query performance. 

The combination of Microsoft software with HP Converged Infrastructure means HP AppSystem for Parallel Data Warehouse offers leading performance for complex workloads, with up to 100x faster query performance and a 30% faster scan rate than previous generations.
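To give a feel for how the MPP engine and the xVelocity columnstore surface to the developer, here is a minimal sketch in PDW-style T-SQL (the table and columns are hypothetical, and exact options vary by appliance software version): the fact table is hash-distributed across the ‘shared nothing’ compute nodes and stored in columnstore format, and an ordinary T-SQL aggregate is then parallelized across the appliance.

```sql
-- Hypothetical fact table: rows are hash-distributed across compute nodes on OrderKey
-- and stored column-wise (xVelocity columnstore) for scan-heavy analytic queries.
CREATE TABLE dbo.FactSales
(
    OrderKey    BIGINT        NOT NULL,
    DateKey     INT           NOT NULL,
    ProductKey  INT           NOT NULL,
    SalesAmount DECIMAL(18,2) NOT NULL
)
WITH (DISTRIBUTION = HASH(OrderKey), CLUSTERED COLUMNSTORE INDEX);

-- Ordinary T-SQL; the MPP engine parallelizes the scan and aggregation across all nodes.
SELECT DateKey, SUM(SalesAmount) AS DailySales
FROM dbo.FactSales
GROUP BY DateKey;
```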

2  Fast time to value

HP AppSystem for Parallel Data Warehouse is a factory-built, turn-key system, delivered complete from HP’s factory as an integrated set of hardware and software including servers, storage, networking, tools, services, and support. Not only is the solution pre-integrated, but it’s backed by unique, collaborative HP and Microsoft support with onsite installation and deployment services to smooth implementation.  

3  Built with Big Data as a priority

Designed to integrate with Hadoop, HP AppSystem for Parallel Data Warehouse is ideally suited for “Big Data” environments. This integration allows customers to perform comprehensive analytics on unstructured, semi-structured and structured data, to effectively gain business insights and make better, faster decisions.

4  Simplified management

Providing the optimal management environment has been a critical element of the design, and is delivered through HP Support Pack Utility Suite.  This set of tools simplifies updates and several other maintenance tasks across the system to ensure that it is continually running at optimal performance.  Unique in the industry, HP Support Pack Utility Suite can deliver up to 2000 firmware updates with the click of a button.  In addition, the HP AppSystem for Parallel Data Warehouse is manageable via the Microsoft System Center console, leveraging deep integration with HP Insight Control.

5  Low cost

The HP AppSystem for Parallel Data Warehouse has been designed as part of an end to end stack for data management, integrating data warehousing seamlessly with BI solutions to minimize the cost of ownership.

It has also been re-designed with a new form factor to minimize space and maximize ease of expansion, which means the entry point for a quarter rack system is approximately 35% less expensive than the previous generation solution.    It is expandable in modular increments up to 64 nodes, which means no need for the type of fork-lift upgrade that might be needed with a proprietary solution, and is targeted to be approximately half the cost per TB of comparable offerings in the market from Oracle, IBM, and EMC*.

6 Proven expertise

Together, HP and Microsoft have over 30 years’ experience delivering integrated solutions from the desktop to the datacenter. HP AppSystem for Parallel Data Warehouse completes the portfolio of HP Data Management solutions, which give customers the ability to deliver insights on any data, of any size, combining best-in-class Microsoft software with HP Converged Infrastructure.

For customers, our ability to deliver on the requirements above ultimately provides agility for faster, lower risk deployment of data management in the enterprise, helping them make key business decisions more quickly and drive more value to the organization.

If you’d like to find out more, please go to www.hp.com/solutions/microsoft/pdw.

http://www.valueprism.com/resources/resources/Resources/PDW%20Compete%20Pricing%20FINAL.pdf

HP AppSystem for SQL 2012 Parallel Data Warehouse [HP product page, March 25, 2013]

Overview

Rapid time-to-value data warehouse solution

The HP AppSystem for Microsoft SQL Server 2012 Parallel Data Warehouse, jointly engineered, built and supported with Microsoft, is for customers who realize limitations and inefficiencies of their legacy data warehouse infrastructure. This converged system solution delivers significant advances over the previous generation solution including:

Enhanced performance and massive scalability

  • Up to 100x faster query performance and a 30% faster scan rate
  • Ability to start from small terabyte requirements that can  linearly scale out to 6 Petabytes for mission critical needs

Minimize costs and management complexity

  • Redesigned form factor minimizes space and allows ease of expansion, with significant up-front acquisition savings as well as reduced OPEX for heating, cooling and real estate
  • Appliance solution is pre-built and tested as a complete, end-to-end stack — easy to deploy and requiring minimal technical resources
  • Extensive integration of Microsoft and 3rd-party tools allows users to work with familiar tools like Excel as well as within heterogeneous BI environments
  • Unique HP Support Pack Utility Suite set of tools significantly simplifies updates and  other maintenance tasks to ensure system is running at optimal performance

Reduce risks and manage change

  • Services delivered jointly under a unique collaborative support agreement, integrated across hardware and software, to help avoid IT disruptions and deliver faster resolution to issues
  • Backed by more than 48,000 Microsoft professionals—with more than 12,000 Microsoft Certified—one of the largest, most specialized forces of consultants and support professionals for Microsoft environments in the world

Solution Components

HP Products
HP Services
HP Software
Partner’s Software
HP Support

[also available with Dell Parallel Data Warehouse Appliance]
        Appliance: Parallel Data Warehouse (PDW) [Microsoft PDW Software product page, Feb 27, 2013]

        PDW is a massively parallel processing data warehousing appliance built for any volume of relational data (with up to 100x performance gains) and provides the simplest integration to Hadoop.

        Unlike other vendors who opt to provide their high-end appliances for a high price or provide a relational data warehouse appliance that is disconnected from their “Big Data” and/or BI offerings, Microsoft SQL Server Parallel Data Warehouse provides both a high-end massively parallel processing appliance that can improve your query response times up to 100x over legacy solutions as well as seamless integration to both Hadoop and with familiar business intelligence solutions. What’s more, it was engineered to lower ongoing costs resulting in a solution that has the lowest price/terabyte in the market.

        What’s New in SQL Server 2012 Parallel Data Warehouse

        Key Capabilities

        • Built For Big Data with PolyBase

          SQL Server 2012 Parallel Data Warehouse introduces PolyBase, a fundamental breakthrough in data processing used to enable seamless integration between traditional data warehouses and “Big Data” deployments.

          • Use standard SQL queries (instead of MapReduce) to access and join Hadoop data with relational data.
          • Query Hadoop data without IT having to pre-load data first into the warehouse.
          • Native Microsoft BI Integration allowing analysis of relational and non-relational data with familiar tools like Excel.
        • Next-Generation Performance at Scale

          Scale and perform beyond your traditional SQL Server deployment with PDW’s massively parallel processing (MPP) appliance that can handle the extremes of your largest mission critical requirements of performance and scale.

          • Up to 100x faster than legacy warehouses with xVelocity updateable columnstore.
          • Massively Parallel Processing (MPP) architecture that parallelizes and distributes computing for high query concurrency and complexity.
          • Rest assured with built-in hardware redundancies for fault tolerance.
          • Rely on Microsoft as your single point of contact for hardware and software support.
        • Engineered For Optimal Value

          Unlike other vendors in the data warehousing space who deliver a high-end appliance at a high price, Microsoft engineered PDW for optimal value by lowering the cost of the appliance.

          • Resilient, scalable, and high performance storage features built into software lowering hardware costs.
          • Compress data up to 15x with the xVelocity updateable columnstore saving up to 70% of storage requirements.
          • Start small with a quarter rack allowing you to right-size the appliance rather than over-acquiring capacity.
          • Use the same tools and knowledge as SQL Server without learning new tools or retraining for scale-out DW or Big Data.
          • Co-engineered with hardware partners offering highest level of product integration and shipped to your door offering fastest time to value.
          • The lowest price per terabyte in the overall appliance market (and 2.5x lower than SQL Server 2008 R2 PDW).

          PolyBase [Microsoft page, Feb 26, 2013]

          PolyBase is a fundamental breakthrough in data processing used in SQL Server 2012 Parallel Data Warehouse to enable truly integrated query across Hadoop and relational data.

          Complementing Microsoft’s overall Big Data strategy, PolyBase is a breakthrough new technology in the data processing engine of SQL Server 2012 Parallel Data Warehouse, designed as the simplest way to combine non-relational data and traditional relational data in your analysis. While customers would normally burden IT to pre-populate the warehouse with Hadoop data or undergo extensive training on MapReduce in order to query non-relational data, PolyBase does all of this seamlessly, giving you the benefits of “Big Data” without the complexities.

          Key Capabilities

          • Unifies Relational and Non-relational Data

            PolyBase is one of the most exciting technologies to emerge in recent times because it unifies the relational and non-relational worlds at the query level. Instead of learning a new query language like MapReduce, customers can leverage what they already know (T-SQL).

            • Integrated Query: Accepts a standard T-SQL query that joins tables containing a relational source with tables in a Hadoop cluster without needing to learn MapReduce.
            • Advanced query options: Apart from simple SELECT queries, users can perform JOINs and GROUP BYs on data in the Hadoop cluster.
          • Enables In-place Queries with Familiar BI Tools

            Microsoft Business Intelligence (BI) integration enables users to connect to PDW with familiar tools such as Microsoft Excel, to create compelling visualizations and make key business decisions from structured or unstructured data quickly.

            • Integrated BI tools: End users can connect to both relational or Hadoop data with Excel abstracting the complexities of both.
            • Interactive visualizations: Explore data residing in HDFS using Power View for immersive interactivity and visualizations.
            • Query in-place: IT doesn’t have to pre-load or pre-move data from Hadoop into the data warehouse and pre-join the data before end users do the analysis.
          • Part of an Overall Microsoft Big Data Story

            PolyBase is part of an overall Microsoft “Big Data” solution that already includes HDInsight (a 100% Apache Hadoop compatible distribution for Windows Server and Windows Azure), Microsoft Business Intelligence, and SQL Server 2012 Parallel Data Warehouse.

            • Integrated with HDInsight: PolyBase can source the non-relational analysis from Microsoft’s 100% Apache-compatible Hadoop distribution, HDInsight.
            • Built into PDW: PolyBase is built into SQL Server 2012 Parallel Data Warehouse to bring “Big Data” benefits within the power of a traditional data warehouse.
            • Integrated BI tools: PolyBase has native integration with familiar BI tools like Excel (through Power View and PowerPivot).
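To illustrate the integrated-query capability described above, here is a minimal PolyBase sketch in T-SQL. The file location, table and column names are hypothetical, and the exact external-table DDL differs between PDW/PolyBase versions (later releases use separate EXTERNAL DATA SOURCE and FILE FORMAT objects); the point is that Hadoop-resident data is declared once and then joined with relational data using ordinary T-SQL instead of MapReduce:

```sql
-- Hypothetical external table over a delimited file in HDFS.
CREATE EXTERNAL TABLE dbo.ClickStream
(
    UserId    INT,
    Url       VARCHAR(200),
    EventDate DATE
)
WITH (
    LOCATION = 'hdfs://hadoop-head-node:8020/data/clickstream/',
    FORMAT_OPTIONS (FIELD_TERMINATOR = '|')
);

-- Standard T-SQL join of Hadoop data with a relational customer table; no MapReduce required.
SELECT c.CustomerName, COUNT(*) AS PageViews
FROM dbo.Customers AS c
JOIN dbo.ClickStream AS s
  ON s.UserId = c.CustomerId
GROUP BY c.CustomerName;
```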

          Announcing Power BI for Office 365 [Office News, July 8, 2013]

          Today, at the Worldwide Partner Conference, we announced a new offering–Power BI for Office 365. Power BI for Office 365 is a cloud-based business intelligence (BI) solution that enables our customers to easily gain insights from their data, working within Excel to analyze and visualize the data in…

          Exciting new BI features in Excel [Excel Blog, July 9, 2013]

          Yesterday during Microsoft’s Worldwide Partner Conference we announced some exciting new Business Intelligence (BI) features available for Excel. Specifically, we announced the expansion of the BI offerings available as part of Power BI, a cloud-based BI solution that enables our customers to easily gain insights from their data, working within Excel to analyze and visualize the data in a self-service way.

          Power BI for Office 365 now includes:

          • Power Query, enabling customers to easily search and access public data and their organization’s data, all within Excel (formerly known as “Data Explorer”).  Download details here
          • Power Map, a 3D data visualization tool for mapping, exploring and interacting with geographic and temporal data (formerly known as product codename “GeoFlow”).  Download details here.
          • Power Pivot for creating and customizing flexible data models within Excel. 
          • Power View for creating interactive charts, graphs and other visual representations of data.

          Head on over to the Office 365 Technology Blog, Office News Blog, and Power BI site to learn more.

          Clearing up some confusion around the Power BI “Release” [A.J. Mee’s Business Intelligence and Big Data Blog, Aug 13, 2013]

          Hey folks.  Thanks again for checking out my blog.
          Yesterday (8/12/2013), Power BI received some attention from the press.  Here’s one of the articles that I had seen talking about the “release” of Power BI:
          http://www.neowin.net/news/microsoft-releases-power-bi-office-365-for-windows-8rt

          Some of us inside Microsoft had to address all sorts of questions around this one.  For the most part, the questions revolved around the *scope* of what was actually released.  You have to remember that Power BI is a broad brand name that takes into account:

          * Power Pivot/View/Query/Map (which is available now, for the most part)

          * The Office 365 hosting of Power BI applications with cloud-to-on-premise data refresh, Natural Language query, data stewardship, etc.

          * The Mobile BI app for Windows and iOS devices

          Net-net: we announced the availability of the Mobile app (in preview form).  At present, it is only available on Windows 8 devices (x86 or ARM) – no iOS just yet.  The rest of the O365 / Power BI offering is yet to come.  Check out this article to find out how to sign up.
          http://blogs.msdn.com/b/ajmee/archive/2013/07/17/how-can-i-check-out-power-bi.aspx
          So, the headline story is really all around the Mobile app.  You can grab it today from the Store – just search on “Power BI” and it should be the first app that shows up.

          From: Power Map for Excel earns new name with significant updates to 3D visualizations and storytelling [Excel Blog, Sept 25, 2013]

          We are announcing a significant update to Power Map Preview for Excel (formerly Project codename “GeoFlow” Preview for Excel) on the Microsoft Download Center. Just over five months ago, we launched the preview of Project codename “GeoFlow” amidst a passionately announced “tour” of global song artists through the years by Amir Netz (see 1:17:00 in the keynote) at the first ever PASS Business Analytics conference in April. The 3D visualization add-in has now become a centerpiece visualization (along with Power View) within the business intelligence capabilities of Microsoft Power BI in Excel, earning the new name Power Map to align with other Excel features (Power Query, Power Pivot, and Power View).

          Information workers with their data in Excel have realized the potential of Power Map to identify insights in their geospatial and time-based data that traditional 2D charts cannot. Digital marketers can better target and time their campaigns while environmentally-conscious companies can fine-tune energy-saving programs across peak usage times. These are just a few examples of how location-based data is coming alive for customers using Power Map and distancing them from their competitors who are still staring blankly at a flat table, chart, or map. Feedback from customers like this led us to introduce Power Map with new features across the experience of mapping data, discovering insights, and sharing stories.

          From: Microsoft unleashes fall wave of enterprise cloud solutions [press release, Oct 7, 2013]

          New Windows Server, System Center, Visual Studio, Windows Azure, Windows Intune, SQL Server, and Dynamics solutions will accelerate cloud benefits for customers.

          REDMOND, Wash. — Oct. 7, 2013 — Microsoft Corp. on Monday announced a wave of new enterprise products and services to help companies seize the opportunities of cloud computing and overcome today’s top IT challenges. Complementing Office 365 and other services, these new offerings deliver on Microsoft’s enterprise cloud strategy.

          Data platform and insights

          As part of its vision to help more people unlock actionable insights from big data, Microsoft next week will release a second preview of SQL Server 2014. The new version offers industry-leading in-memory technologies at no additional cost, giving customers 10 times to 30 times performance improvements without application rewrites or new hardware. SQL Server 2014 also works with Windows Azure to give customers built-in cloud backup and disaster recovery.

          For big data analytics, later this month Microsoft will release Windows Azure HDInsight Service, an Apache Hadoop-based service that works with SQL Server and widely used business intelligence tools, such as Microsoft Excel and Power BI for Office 365. With Power BI, people can combine private and public data in the cloud for rich visualizations and fast insights.

          How to take full advantage of Power BI in Excel 2013 [News from Microsoft Business UK, Oct 14, 2013]

          The launch of Power BI features in Excel 2013 gives users an added range of options for data analysis and gaining business intelligence (BI). Power Query, Power Pivot, Power View, and Power Map work seamlessly together, making it much simpler to discover and visualise data. And for small businesses looking to take advantage of self-service intelligence solutions, this is a major stride forwards.

          Power Query

          With Power Query, users can search the entire cloud for data – both public and private. With access to multiple data sources, users can filter, shape, merge, and append the information, without the need to physically bring it into Excel.

          Once your query is shaped and filtered how you want it, you can download it into a worksheet in Excel, into the Data Model, or both. When you have the dataset you need, shaped and formed and properly merged, you can save the query that created it, and share it with other users.
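          Power Query itself is driven from the Excel ribbon and its own formula language, but the search/filter/shape/merge/append workflow described above maps closely onto steps an analyst might otherwise script by hand. Purely as an illustrative analogy (this is Python/pandas, not Power Query, and the file and column names are invented):

```python
# Rough pandas analogue of a Power Query shaping workflow (illustrative only):
# acquire two sources, append, filter, shape, merge, then "load" the result.
import pandas as pd

orders_2012 = pd.read_csv("orders_2012.csv", parse_dates=["order_date"])
orders_2013 = pd.read_csv("orders_2013.csv", parse_dates=["order_date"])
regions     = pd.read_excel("regions.xlsx")            # a second, private source

orders = pd.concat([orders_2012, orders_2013], ignore_index=True)   # "append"
orders = orders[orders["amount"] > 0]                               # "filter"
orders = orders.rename(columns={"cust": "customer_id"})             # "shape"
shaped = orders.merge(regions, on="customer_id", how="left")        # "merge"

# "Load" the shaped dataset for downstream analysis or visualization.
shaped.to_excel("shaped_orders.xlsx", index=False)
```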

          Power Pivot

          Power Pivot enables users to create their own data models from various sources, structured to meet individual needs. You can customise, extend with calculations and hierarchies, and manage the powerful Data Model that is part of Excel.

          The solution works seamlessly and automatically with Power Query, and with other features of Power BI, allowing you to manage and extend your own custom database in the familiar environment of Excel. The entire Data Model in Power Pivot – including tables, columns, calculations and hierarchies – exists as a set of report-ready elements in Power View.

          Power View

          Power View allows users to create engaging, interactive, and insightful visualisations with just a few clicks of their mouse. The tool brings the Data Model alive, turning queries into visual analysis and answers. Data can be presented in a variety of different forms, with the reports easily shareable and open for interactive analysis.

          Power Map

          A relatively new addition to Excel, Power Map is a geocentric and temporal mapping feature of Power BI. It brings location data into powerful, engaging 3D map visualisations. This allows users to create location-based reports, visualised over a time continuum, that tour the available data.

          Using the features together

          Power BI offers a collection of services which are designed to make self-service BI intuitive and collaborative. The solution combines the power and familiarity of Excel with collaboration and cloud-based functionality. This vastly increases users’ capacity to gather, manage and draw insights from data, ensuring they can make the most of business intelligence.

          The various features of Power BI can add value independently, but the real value is in integration. When used in conjunction with one another – rather than in silos – the services become more than the sum of their parts. They are designed to work seamlessly together in Excel 2013, supporting users as they look to find data, process it and create visualisations which add value to the decision-making process.

          Posted by Alex Boardman

          Related upcoming technology announcements from Intel:

          GraphBuilder: Revealing hidden structure within Big Data [Intel Labs blog, Dec 6, 2012]

          By Ted Willke, Principal Engineer with Intel and the General Manager of the Graph Analytics Operation in Intel Labs.

          Big Data.  Big.  Data.  We hear the term frequently used to describe data of unusual size or generated at spectacular velocity, like the amount of social data that Facebook has amassed on us (30 PB in one cluster) or the rate at which sensors at the Large Hadron Collider collect information on subatomic particles (15 PB/year).  And it’s often deemed “unstructured or semi-structured” to describe its lack of apparent, well, structure.  What’s meant is that this data isn’t organized in a way that can directly answer questions, like a database can if you ask it how many widgets you sold last week.

          But Big Data does have structure; it just needs to be discovered from within the raw text, images, video, sensor data, etc., that comprise it.  And, companies, led by pioneers like Google, have been doing this for the better part of a decade, using applications that churn through the information using data-parallel processing and convenient frameworks for it, like Hadoop MapReduce.  Their systems chop the incoming data into slices, farm it out to masses of machines, which subsequently filter it, order it, sum it, transform it, and do just about anything you’d want to do with it, within the practical limits of the readily available frameworks.

          But until recently, only the wizards of Big Data were able to rapidly extract knowledge from a different type of structure within the data, a type that is best modeled by tree or graph structures.  Imagine the pattern of hyperlinks connecting Wikipedia pages or the connections between Tweeters and Followers on Twitter.  In these models, a line is drawn between two bits of information if they are related to each other in some way.  The nature of the connection can be less obvious than in these examples and made specifically to serve a particular algorithm.  For example, a popular form of machine learning called Latent Dirichlet Allocation (a mouthful, I know) can create “word clouds” of topics in a set of documents without being told the topics in advance. All it needs is a graph that connects word occurrences to the filenames.  Another algorithm can accurately guess the type of noun (i.e., person, place, or thing) if given a graph that connects noun phrases to surrounding context phrases.
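          As a toy illustration of the kind of input such an algorithm consumes (and only that; this is not Intel’s code), the “word occurrences to filenames” graph can be sketched in a few lines of Python over an invented two-document corpus:

```python
# Toy sketch: the bipartite "word occurrence -> filename" graph that a topic
# model such as LDA consumes. The two documents are invented; at scale this
# construction step is exactly what gets distributed over Hadoop.
from collections import Counter

documents = {
    "space.txt":   "the rocket engine burns liquid fuel",
    "cooking.txt": "the stew simmers while the sauce reduces",
}

edges = []                                  # (word, filename, count) triples
for filename, text in documents.items():
    for word, count in Counter(text.lower().split()).items():
        edges.append((word, filename, count))

for word, filename, count in edges:
    print(f"{word} --[{count}]--> {filename}")
```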

          Many of these graphs are very large, with tens of billions of vertices (i.e., things being related) and hundreds of billions of edges (i.e., the relationships).  And, many that model natural phenomena possess power-law degree distributions, meaning that many vertices connect to a handful of others, but a few may have edges to a substantial portion of the vertices.  For instance, a graph of Twitter relationships would show that many people only have a few dozen followers while only a handful of celebrities have millions. This is all very problematic for parallel computation in general and MapReduce in particular.  As a result, Carlos Guestrin and his crack team at the University of Washington in Seattle have developed a new framework, called GraphLab, that is specifically designed for graph-based parallel machine learning.  In many cases, GraphLab can process such graphs 20-50X faster than Hadoop MapReduce.  Learn more about their exciting work here.

          Carlos is a member of the Intel Science and Technology Center for Cloud Computing, and we started working with him on graph-based machine learning and data mining challenges in 2011.  Quickly it became clear that no one had a good story about how to construct large-scale graphs that frameworks like GraphLab could digest.  His team was constantly writing scripts to construct different graphs from various unstructured data sources.  These scripts ran on a single machine and would take a very long time to execute.  Essentially, they were using a labor-intensive, low-performance method to feed information to their elegant high-performance GraphLab framework.  This simply would not do.

          Scanning the environment, we identified a more general hole in the open source ecosystem: A number of systems were out there to process, store, visualize, and mine graphs but, surprisingly, not to construct them from unstructured sources.  So, we set out to develop a demo of a scalable graph construction library for Hadoop.  Yes, for Hadoop.  Hadoop is not good for graph-based machine learning but graph construction is another story.  This work became GraphBuilder, which was demonstrated in July at the First GraphLab Workshop on Large-Scale Machine Learning and open sourced this week at 01.org (under Apache 2.0 licensing).

          GraphBuilder not only constructs large-scale graphs fast but also offloads many of the complexities of graph construction, including graph formation, cleaning, compression, partitioning, and serialization.  This makes it easy for just about anyone to build graphs for interesting research and commercial applications.  In fact, GraphBuilder makes it possible for a Java programmer to build an internet-scale graph for PageRank in about 100 lines of code and a Wikipedia-sized graph for LDA in about 130.
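          GraphBuilder’s own API is Java on Hadoop, so the following is only a schematic reminder of what the downstream computation does with such a link graph once it has been constructed: a toy power-iteration PageRank over an invented four-page edge list.

```python
# Toy power-iteration PageRank over an invented in-memory edge list; the real
# point of GraphBuilder is producing such edge lists at internet scale.
edges = [("A", "B"), ("A", "C"), ("B", "C"), ("C", "A"), ("D", "C")]

nodes = sorted({n for edge in edges for n in edge})
out_links = {n: [dst for src, dst in edges if src == n] for n in nodes}

damping = 0.85
rank = {n: 1.0 / len(nodes) for n in nodes}
for _ in range(50):                                   # iterate to (rough) convergence
    new_rank = {n: (1 - damping) / len(nodes) for n in nodes}
    for src, targets in out_links.items():
        share = rank[src] / (len(targets) or len(nodes))
        for dst in (targets or nodes):                # dangling nodes spread evenly
            new_rank[dst] += damping * share
    rank = new_rank

print(sorted(rank.items(), key=lambda kv: -kv[1]))    # most "important" pages first
```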

          This is only the beginning for GraphBuilder but it has already made a lot of connections.  We will continually update it with new capabilities, so please try it out and let us know if you’d value something in particular.  And, let us know if you’ve got an interesting graph problem for us to grind through.  We are always looking for new revelations.

          Intel, Facebook Collaborate on Future Data Center Rack Technologies  [press release, Jan 16, 2013]

          New Photonic Architecture Promises to Dramatically Change Next Decade of Disaggregated, Rack-Scale Server Designs

          NEWS HIGHLIGHTS

          • Intel and Facebook* are collaborating to define the next generation of rack technologies that enables the disaggregation of compute, network and storage resources.
          • Quanta Computer* unveiled a mechanical prototype of the rack architecture to show the total cost, design and reliability improvement potential of disaggregation.
          • The mechanical prototype includes Intel Silicon Photonics Technology, distributed input/output using Intel Ethernet switch silicon, and supports the Intel® Xeon® processor and the next-generation system-on-chip Intel® Atom™ processor code named “Avoton.”
          • Intel has moved its silicon photonics efforts beyond research and development, and the company has produced engineering samples that run at speeds of up to 100 gigabits per second (Gbps).

          OPEN COMPUTE SUMMIT, Santa Clara, Calif., Jan. 16, 2013 – Intel Corporation announced a collaboration with Facebook* to define the next generation of rack technologies used to power the world’s largest data centers. As part of the collaboration, the companies also unveiled a mechanical prototype built by Quanta Computer* that includes Intel’s new, innovative photonic rack architecture to show the total cost, design and reliability improvement potential of a disaggregated rack environment.

          “Intel and Facebook are collaborating on a new disaggregated, rack-scale server architecture that enables independent upgrading of compute, network and storage subsystems that will define the future of mega-datacenter designs for the next decade,” said Justin Rattner, Intel’s chief technology officer, during his keynote address at Open Compute Summit in Santa Clara, Calif. “The disaggregated rack architecture [since renamed RSA (Rack Scale Architecture)] includes Intel’s new photonic architecture, based on high-bandwidth, 100Gbps Intel® Silicon Photonics Technology, that enables fewer cables, increased bandwidth, farther reach and extreme power efficiency compared to today’s copper-based interconnects.”

          Rattner explained that the new architecture is based on more than a decade’s worth of research to invent a family of silicon-based photonic devices, including lasers, modulators and detectors using low-cost silicon to fully integrate photonic devices of unprecedented speed and energy efficiency. Silicon photonics is a new approach to using light (photons) to move huge amounts of data at very high speeds with extremely low power over a thin optical fiber rather than using electrical signals over a copper cable. Intel has spent the past two years proving its silicon photonics technology was production-worthy, and has now produced engineering samples.

          Silicon photonics made with inexpensive silicon rather than expensive and exotic optical materials provides a distinct cost advantage over older optical technologies in addition to providing greater speed, reliability and scalability benefits. Businesses with server farms or massive data centers could eliminate performance bottlenecks and ensure long-term upgradability while saving significant operational costs in space and energy.

          Silicon Photonics and Disaggregation Efficiencies

          Businesses with large data centers can significantly reduce capital expenditure by disaggregating or separating compute and storage resources in a server rack. Rack disaggregation refers to the separation of those resources that currently exist in a rack, including compute, storage, networking and power distribution, into discrete modules. Traditionally, each server within a rack would have its own group of resources. When disaggregated, resource types can be grouped together and distributed throughout the rack, improving upgradability, flexibility and reliability while lowering costs.

          “We’re excited about the flexibility that these technologies can bring to hardware and how silicon photonics will enable us to interconnect these resources with less concern about their physical placement,” said Frank Frankovsky, chairman of the Open Compute Foundation and vice president of hardware design and supply chain at Facebook. “We’re confident that developing these technologies in the open and contributing them back to the Open Compute Project will yield an unprecedented pace of innovation, ultimately enabling the entire industry to close the utilization gap that exists with today’s systems designs.”

          By separating critical components from one another, each computer resource can be upgraded on its own cadence without being coupled to the others. This provides increased lifespan for each resource and enables IT managers to replace just that resource instead of the entire system. This increased serviceability and flexibility drives improved total-cost for infrastructure investments as well as higher levels of resiliency. There are also thermal efficiency opportunities by allowing more optimal component placement within a rack.

          The mechanical prototype is a demonstration of Intel’s photonic rack architecture for interconnecting the various resources, showing one of the ways compute, network and storage resources can be disaggregated within a rack. Intel will contribute a design for enabling a photonic receptacle to the Open Compute Project (OCP) and will work with Facebook*, Corning*, and others over time to standardize the design. The mechanical prototype includes distributed input/output (I/O) using Intel Ethernet switch silicon, and will support the Intel® Xeon® processor and the next generation, 22 nanometer system-on-chip (SoC) Intel® Atom™ processor, code named “Avoton” available this year.

          The mechanical prototype shown today is the next evolution of rack disaggregation with separate distributed switching functions.

          Intel and Facebook: A History of Collaboration and Contributions

          Intel and Facebook have long been technology collaboration partners on hardware and software optimizations to drive more efficiency and scale for Facebook data centers. Intel is also a founding board member of the OCP, along with Facebook. Intel has several OCP engagements in flight including working with the industry to design OCP boards for Intel Xeon and Intel Atom based processors, support for cold storage with the Intel Atom processor, and common hardware management as well as future rack definitions including enabling today’s photonics receptacle.

          Disruptive technologies to unlock the power of Big Data [Intel Labs blog, Feb 26, 2013]

          By Ted Willke, Principal Engineer with Intel and the General Manager of the Graph Analytics Operation in Intel Labs.

          This week’s announcement by Intel that it’s expanding the availability of the Intel® Distribution for Apache Hadoop* software to the US market is seriously exciting for the employees of this semiconductor giant, especially researchers like me.  Why?  Why would I say this given the amount of overexposure that Hadoop has received?  I mean, isn’t this technology nearly 10 years old already??!!  Well, because the only thing I hear more than people touting Hadoop’s promise is people venting frustration implementing it.  Rest assured that Intel is listening.  We get that users don’t want to make a career out of configuring Hadoop… debugging it… managing it… and trying to figure out why the “insight” it’s supposed to be delivering often looks like meaningless noise.

          Which brings me back to why this is a seriously exciting event for me.  With our product teams doing the heavy lifting of making the Hadoop framework less rigid and easier to use while keeping it inexpensive, Intel Labs gets a landing zone for some cool disruptive technologies. In December, I blogged about the launch of our open source scalable graph construction library for Hadoop, called Intel® Graph Builder for Apache Hadoop software (f.k.a. GraphBuilder), and explained how it makes it easy to construct large scale graphs for machine learning and data mining. These structures can yield insights from relationships hidden within a wide range of big data sources, from social media and business analytics to medicine and e-science. Today I’ll delve a bit more into Graph Builder technology and introduce the Intel® Active Tuner for Apache Hadoop software, an auto-tuner that uses Artificial Intelligence (AI) to configure Hadoop for optimal performance.  Both technologies will be available in the Intel Distribution.

          So, Intel® Graph Builder leverages Hadoop MapReduce to turn large unstructured (or semi-structured) datasets into structured output in graph form.  This kind of graph may be mined using graph search of the sort that Facebook recently announced.  Many companies would like to construct such graphs out of unstructured datasets, and Graph Builder makes it possible.  Beyond search, analysis may be applied to an entire graph to answer questions of the type shown in the figure below.  The analysis may be performed using distributed algorithms implemented in frameworks like GraphLab, which I also discussed in my previous post.

          image

          Intel® Graph Builder performs extract, transform, and load operations, terms borrowed from databases and data warehousing.  And, it does so at Hadoop MapReduce scale.  Text is parsed and tokenized to extract interesting features.  These operations are described in a short map-reduce program written by the data scientist.  This program also defines when two vertices (i.e., features) in the graph are related by an edge.  The rule is applied repeatedly to form the graph’s topology (i.e., the pattern of edge relationships between vertices), which is stored via the library.  In addition, most applications require that additional tabulated information, or “network information,” be associated with each vertex/edge and the library provides a number of distributed algorithms for these tabulations.
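          Again schematically (Graph Builder itself is a Java library on Hadoop MapReduce), the map/reduce shape described above can be sketched in Python: the map step tokenizes raw records and emits candidate edges according to a user-supplied rule, and the reduce step collapses duplicates into weighted edges. The record format and the co-occurrence edge rule are invented for illustration.

```python
# Schematic sketch (not Graph Builder's API): the map step parses raw records
# and emits edges per a user-defined rule; the reduce step merges duplicates
# into weighted edges.
from collections import defaultdict
from itertools import combinations

def map_record(record):
    """Edge rule (hypothetical): two hashtags are related if they co-occur in a post."""
    tags = [tok for tok in record.split() if tok.startswith("#")]
    for a, b in combinations(sorted(set(tags)), 2):
        yield (a, b), 1

def reduce_edges(mapped):
    """Sum the counts for each (src, dst) key, yielding weighted edges."""
    weights = defaultdict(int)
    for key, count in mapped:
        weights[key] += count
    return weights

posts = ["loving the #cloud #bigdata talk", "#bigdata meets #cloud at scale"]
mapped = (pair for post in posts for pair in map_record(post))
for (a, b), weight in reduce_edges(mapped).items():
    print(a, "<->", b, "weight", weight)
```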

          At this point, we have a large-scale graph ready for HDFS, HBase, or another distributed store.  But we need to do a few more things to ensure that queries and computations on the graph will scale up nicely, like:

          • Cleaning the graph’s structure and checking that it is reasonable
          • Compressing the graph and network information to conserve cluster resources
          • Partitioning the graph in a way that will minimize cluster communications while load balancing computational effort

          The Intel Graph Builder library provides efficient distributed algorithms for all of the above, and more, so that data scientists can spend more of their time analyzing data and less of their time preparing it.  Enough said. The library will be included in the Intel Distribution shortly and we look forward to your feedback.  We are constantly on the hunt for new features as we look to the future of big data.

          Whereas Intel® Graph Builder was developed to simplify the programming of emerging applications, Intel® Active Tuner was developed to simplify the deployment of today’s applications by automating the selection of configuration settings that will result in optimal cluster performance. In fact, we initially codenamed this technology “Gunther,” after a well-known circus elephant trainer, because of its ability to train Hadoop to run faster :-) .  It’s cruelty-free to boot, I promise.  Anyway, many Hadoop configuration parameters need to be tuned for the characteristics of each particular application, such as web search, medical image analysis, audio feature analysis, fraud detection, semantic analysis, etc.  This tuning significantly reduces both job execution and query time but is time consuming and requires domain expertise. If you use Hadoop you know that the common practice is to tune it up using rule-of-thumb settings published by industry leaders.  But these recommendations are too general and fail to capture the specific requirements of a given application and cluster resource constraints.  Enter the Active Tuner.

          Intel® Active Tuner implements a search engine that uses a small number of representative jobs to identify the best configuration from among millions or billions of possible Hadoop configurations.  It uses a form of AI known as a genetic algorithm to search out the best settings for the number of maps, buffer sizes, compression settings, etc., constantly striving to derive better settings by combining those from pairs of trials that show the most promise (this is where the genetic part comes in) and deriving future trials from these new combinations.  And, the Active Tuner can do this faster and more effectively than a human can using the rules-of-thumb.  It can be controlled from a slick GUI in the new Intel Manager for Apache Hadoop, so take it for a test run when you pick up a copy of the Intel Distribution.  You may see your cluster performance improve by up to 30% without any hassle.
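          Intel has not published Active Tuner’s internals here, so the snippet below is only a generic genetic-algorithm skeleton over a made-up Hadoop parameter space, intended to make the “combine the most promising trials” idea concrete; the parameter names, value ranges, and the stubbed run_job fitness function are all hypothetical.

```python
# Generic genetic-algorithm sketch (NOT Intel's Active Tuner): evolve Hadoop-like
# configuration settings toward lower job runtime. The parameter space and the
# run_job() fitness stub are invented for illustration.
import random

SPACE = {                                    # hypothetical tunables and candidate values
    "mapreduce.job.maps":            [8, 16, 32, 64],
    "io.sort.mb":                    [100, 200, 400, 800],
    "mapreduce.map.output.compress": [True, False],
}

def run_job(config):
    """Stub fitness: pretend to run a representative job and return its runtime (s)."""
    return random.uniform(100, 300) / (1 + config["mapreduce.job.maps"] / 64)

def random_config():
    return {k: random.choice(v) for k, v in SPACE.items()}

def crossover(a, b):
    """Combine settings from two promising trials (the 'genetic' part)."""
    return {k: random.choice([a[k], b[k]]) for k in SPACE}

def mutate(cfg, rate=0.2):
    return {k: (random.choice(SPACE[k]) if random.random() < rate else v)
            for k, v in cfg.items()}

population = [random_config() for _ in range(8)]
for generation in range(10):
    scored = sorted(population, key=run_job)          # lower runtime = fitter
    parents = scored[:4]                              # keep the most promising trials
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(len(population) - len(parents))]
    population = parents + children

print("best configuration found:", min(population, key=run_job))
```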

          To wrap, these are one-of-a-kind technologies that I think you’ll have fun playing with.  And, despite offering quite a lot, Intel® Graph Builder and Intel® Active Tuner are just the beginning.  I am very excited by what’s coming next.  Intel is moving to unlock the power of Big Data and Intel Labs is preparing to blow it wide open.

          *Other names and brands may be claimed as the property of others

          Intel Unveils New Technologies for Efficient Cloud Datacenters [press release, Sept 4, 2013]

          From New SoCs to Optical Fiber, Intel Delivers Cloud-Optimized Innovations Across Network, Storage, Microservers, and Rack Designs

          NEWS HIGHLIGHTS

          • The Intel® Atom™ C2000 processor family is the first based on Silvermont micro-architecture, has 13 customized configurations and is aimed at microservers, entry-level networking and cold storage.
          • New 64-bit, system-on-chip family for the datacenter delivers up to six times the energy efficiency and up to seven times the performance compared to the previous generation.
          • The first live demonstration of a Rack Scale Architecture-based system with high-speed Intel® Silicon Photonics components including a new MXC connector and ClearCurve* optical fiber developed in collaboration with Corning*, enabling data transfer speeds of up to 1.6 terabits per second at distances up to 300 meters for greater rack density.

          SAN FRANCISCO, Calif., September 4, 2013 – Intel Corporation today introduced a portfolio of datacenter products and technologies for cloud service providers looking to drive greater efficiency and flexibility into their infrastructure to support a growing demand for new services and future innovation.

          Server, network and storage infrastructure is evolving to better suit an increasingly diverse set of lightweight workloads, creating the emergence of microserver, cold storage and entry networking segments. By optimizing technologies for specific workloads, Intel will help cloud providers significantly increase utilization, drive down costs and provide compelling and consistent experiences to consumers and businesses.

          The portfolio includes the second generation 64-bit Intel® Atom™ C2000 product family of system-on-chip (SoC) designs for microservers and cold storage platforms (code named “Avoton”) and for entry networking platforms (code named “Rangeley”). These new SoCs are the company’s first products based on the Silvermont micro-architecture, the new design built on its leading 22nm Tri-Gate SoC process, delivering significant increases in performance and energy efficiency and arriving only nine months after the previous generation.

          “As the world becomes more and more mobile, the pressure to support billions of devices and users is changing the very composition of datacenters,” said Diane Bryant, senior vice president and general manager of the Datacenter and Connected Systems Group at Intel. “From leadership in silicon and SoC design to rack architecture and software enabling, Intel is providing the key innovations that original equipment manufacturers, telecommunications equipment makers and cloud service providers require to build the datacenters of the future.”

          Intel also introduced the Intel® Ethernet Switch FM5224 silicon, which, when combined with the Wind River Open Network Software suite, brings Software Defined Networking (SDN) solutions to servers for improved density and lower power.

          Intel also demonstrated the first operational Intel Rack Scale Architecture (RSA)-based rack with Intel® Silicon Photonics Technology, in combination with the disclosure of a new MXC connector and ClearCurve* optical fiber developed by Corning* with requirements from Intel. This demonstration highlights the speed with which Intel and the industry are moving from concept to functionality.

          Customized, Optimized Intel® Atom™ SoCs for New and Existing Market Segments
          Manufactured using Intel’s leading 22nm process technology, the new Intel Atom C2000 product family features up to eight cores, a TDP range of 6 to 20 watts, integrated Ethernet and support for up to 64 gigabytes (GB) of memory, eight times the previous generation. OVH* and 1&1, leading global web-hosting services companies, have tested Intel Atom C2000 SoCs and plan to deploy them in their entry-level dedicated hosting services next quarter. The 22 nanometer process technology delivers superior performance and performance per watt.

          Intel is delivering 13 specific models with customized features and accelerators that are optimized for particular lightweight workloads such as entry dedicated hosting, distributed memory caching, static web serving and content delivery to ensure greater efficiency. The designs allow Intel to expand into new markets like cold storage and entry-level networking.

          For example, the new Intel Atom configurations for entry networking address the specialized needs for securing and routing Internet traffic more efficiently. The product features a set of hardware accelerators called Intel® QuickAssist Technology that improves cryptographic performance. They are ideally suited for routers and security appliances.

          By consolidating three communications workloads – application, control and packet processing – on a common platform, providers now have tremendous flexibility. They will be able to meet the changing network demands while adding performance, reducing costs and improving time-to-market.

          Ericsson, a world-leading provider of communications technology and services, announced that its blade-based switches used in the Ericsson Cloud System, a solution which enables service providers to add cloud capabilities to their existing networks, will soon include the Intel Atom C2000 SoC product family.

          Microserver-Optimized Switch for Software Defined Networking
          Network solutions that manage data traffic across microservers can significantly impact the performance and density of the system. The unique combination of the Intel Ethernet Switch FM5224 silicon and the Wind River Open Network Software suite will enable the industry’s first 2.5GbE, high-density, low-latency SDN Ethernet switch solutions specifically developed for microservers. The solution enhances system-level innovation and complements the integrated Intel Ethernet controller within the Intel Atom C2000 processor. Together, they can be used to create SDN solutions for the datacenter.

          Switches using the new Intel Ethernet Switch FM5224 silicon can connect up to 64 microservers, providing up to 30 percent higher node density. They are based on the Intel Open Network Platform reference design announced earlier this year.

          First Demonstration of Silicon Photonics-Powered Rack
          Maximum datacenter efficiency requires innovation at the silicon, system and rack level. Intel’s RSA design helps industry partners to re-architect datacenters for modularity of components (storage, CPU, memory, network) at the rack level. It provides the ability to provision or logically compose resources based on application-specific workload requirements. Intel RSA will also allow for the easier replacement and configuration of components when deploying cloud computing, storage and networking resources.

          Intel today demonstrated the first operational RSA-based rack equipped with the newly announced Intel Atom C2000 processors, Intel® Xeon® processors, a top-of-rack Intel SDN-enabled switch and Intel Silicon Photonics Technology. As part of the demonstration, Intel also disclosed the new MXC connector and ClearCurve* fiber technology developed by Corning* with requirements from Intel. The fiber connections are specifically designed to work with Intel Silicon Photonics components.

          The collaboration underscores the tremendous need for high-speed bandwidth within datacenters. By sending photons over a thin optical fiber instead of electrical signals over a copper cable, the new technologies are capable of transferring massive amounts of data at unprecedented speeds over greater distances. The transfers can be as fast as 1.6 terabits per second at lengths up to 300 meters throughout the datacenter.

          To highlight the growing range of Intel RSA implementations, Microsoft and Intel announced a collaboration to innovate on Microsoft’s next-generation RSA rack design. The goal is to bring even better utilization, economics and flexibility to Microsoft’s datacenters.

          The Intel Atom C2000 product family is shipping to customers now with more than 50 designs for microservers, cold storage and networking. The products are expected to be available in the coming months from vendors including Advantech*, Dell*, Ericsson*, HP*, NEC*, Newisys*, Penguin Computing*, Portwell*, Quanta*, Supermicro*, WiWynn*, ZNYX Networks*.

          Intel Brings Supercomputing Horsepower to Big Data Analytics [press release, Nov 19, 2013]

          NEWS HIGHLIGHTS

          • Intel discloses form factors and memory configuration details of the CPU version of the next generation Intel® Xeon Phi™ processor (code named “Knights Landing”), to ease programmability for developers while improving performance.
          • Intel® Xeon® processor-based systems power more than 82 percent of all supercomputers on the recently announced 42nd edition of the Top500 list.
          • New Intel® HPC Distribution for Apache Hadoop* and Intel® Cloud Edition for Lustre* software tools bring the benefits of Big Data analytics and HPC together.
          • Collaboration with HPC community designed to deliver customized products to meet the diverse needs of customers.

          SUPERCOMPUTING CONFERENCE, Denver, Nov. 19, 2013 – Intel Corporation unveiled innovations in HPC and announced new software tools that will help propel businesses and researchers to generate greater insights from their data and solve their most vital business and scientific challenges.

          “In the last decade, the high-performance computing community has created a vision of a parallel universe where the most vexing problems of society, industry, government and research are solved through modernized applications,” said Raj Hazra, Intel vice president and general manager of the Technical Computing Group. “Intel technology has helped HPC evolve from a technology reserved for an elite few to an essential and broadly available tool for discovery. The solutions we enable for ecosystem partners for the second half of this decade will drive the next level of insight from HPC. Innovations will include scale through standards, performance through application modernization, efficiency through integration and innovation through customized solutions.”

          Accelerating Adoption and Innovation
          From Intel® Parallel Computing Centers to Intel® Xeon Phi™ coprocessor developer kits, Intel provides a range of technologies and expertise to foster innovation and adoption in the HPC ecosystem. The company is collaborating with partners to take full advantage of technologies available today, as well as create the next generation of highly integrated solutions that are easier to program for and are more energy-efficient. As part of this collaboration, Intel also plans to deliver customized HPC products to meet the diverse needs of customers. This initiative aims to extend Intel’s continued value of standards-based scalable platforms with optimizations that will accelerate the next wave of scientific, industrial, and academic breakthroughs.

          During the Supercomputing Conference (SC’13), Intel unveiled how the next generation Intel Xeon Phi product (codenamed “Knights Landing”), available as a host processor, will fit into standard rack architectures and run applications entirely natively instead of requiring data to be offloaded to the coprocessor. This will significantly reduce programming complexity and eliminate “offloading” of the data, thus improving performance and decreasing latencies caused by memory, PCIe and networking.

          Knights Landing will also offer developers three memory options to optimize performance. Unlike other Exascale concepts requiring programmers to develop code specific to one machine, new Intel Xeon Phi processors will provide the simplicity and elegance of standard memory programming models.

          In addition, Intel and Fujitsu recently announced an initiative that could potentially replace a computer’s electrical wiring with fiber optic links to carry Ethernet or PCI Express traffic over an Intel® Silicon Photonics link. This enables Intel Xeon Phi coprocessors to be installed in an expansion box, separated from host Intel Xeon processors, but function as if they were still located on the motherboard. This allows for much higher density of installed coprocessors and scaling the computer capacity without affecting host server operations.

          Several companies are already adopting Intel’s technology. For example, Fovia Medical*, a world leader in volume rendering technology, created high-definition, 3D models to help medical professionals better visualize a patient’s body without invasive surgery. A demonstration from the University of Oklahoma’s Center for Analysis and Prediction of Storms (CAPS) showed a 2D simulation of an F4 tornado, and addressed how a forecaster will be able to experience an immersive 3D simulation and “walk around a storm” to better pinpoint its path. Both applications use Intel® Xeon® technology.

          High Performance Computing for Data-Driven Discovery
          Data-intensive applications, including weather forecasting and seismic analysis, have been part of the HPC industry from its earliest days, and the performance of today’s systems and parallel software tools have made it possible to create larger and more complex simulations. However, with unstructured data accounting for 80 percent of all data, and growing 15 times faster than other data [1], the industry is looking to tap into all of this information to uncover valuable insight.

          Intel is addressing this need with the announcement of the Intel® HPC Distribution for Apache Hadoop* software (Intel® HPC Distribution) that combines the Intel® Distribution for Apache Hadoop software with Intel® Enterprise Edition of Lustre* software to deliver an enterprise-grade solution for storing and processing large data sets. This powerful combination allows users to run their MapReduce applications, without change, directly on shared, fast Lustre-powered storage, making it fast, scalable and easy to manage.

          The Intel® Cloud Edition for Lustre* software is a scalable, parallel file system that is available through the Amazon Web Services Marketplace* and allows users to pay as they go to maximize storage performance and cost effectiveness. The software is ideally suited for dynamic applications, including rapid simulation and prototyping. In the case of urgent or unplanned work that exceeds a user’s on-premise compute or storage performance, the software can be used for cloud bursting HPC workloads to quickly provision the infrastructure needed before moving the work into the cloud.

          With numerous vendors announcing pre-configured and validated hardware and software solutions featuring the Intel Enterprise Edition for Lustre, at SC’13, Intel and its ecosystem partners are bringing turnkey solutions to market to make big data processing and storage more broadly available, cost effective and easier to deploy. Partners announcing these appliances include Advanced HPC*, Aeon Computing*, ATIPA*, Boston Ltd.*, Colfax International*, E4 Computer Engineering*, NOVATTE* and System Fabric Works*.

          Intel Tops Supercomputing Top 500 List
          Intel’s HPC technologies are once again featured throughout the 42nd edition of the Top500 list, demonstrating how the company’s parallel architecture continues to be the standard building block for the world’s most powerful supercomputers. Intel-based systems account for more than 82 percent of all supercomputers on the list and 92 percent of all new additions. Within a year after the introduction of Intel’s first Many Core Architecture product, Intel Xeon Phi coprocessor-based systems already make up 18 percent of the aggregated performance of all Top500 supercomputers. The complete Top500 list is available at www.top500.org.


          [1] From IDC Digital Universe 2020 (2013)

          Fujitsu Lights up PCI Express with Intel Silicon Photonics [The Data Stack blog of Intel, Nov 5, 2013]

          Victor Krutul is the Director of Marketing for the Silicon Photonics Operation at Intel.  He shares the vision and passion of Mario Paniccia that Silicon Photonics will one day revolutionize the way we build computers and the way computers talk to each other.  His other passions are tennis and motorcycles (but not at the same time)!

          I am happy to report that Fujitsu announced at its annual Fujitsu Forum on November 5th, 2013, that it had worked with Intel to build and demonstrate the world’s first Intel® Optical PCI Express (OPCIe) based server.  This OPCIe server was enabled by Intel® Silicon Photonics technology.  I think Fujitsu has done some good work in realizing that OPCIe-powered servers offer several advantages over non-OPCIe-based servers.  Rack-based servers, especially 1U and 2U servers, are space and power constrained.  Sometimes OEMs and end users want to add capabilities such as more storage and CPUs to these servers but are limited because there is simply not enough space for these components, or because packing too many components too close to each other increases the heat density and prevents the system from being able to cool them.

          Fujitsu found a way to fix these limitations!

          The solution to the power and space density problems is to locate the storage and compute components on a remote blade or tray in a way that they appear to the CPU to be on the main motherboard.  The other way to do this is to have a pool of hard drives managed by a second server – but this approach requires messages be sent between the two servers and this adds latency – which is bad.  It is possible to do this with copper cables; however, the distance the copper cables can span is limited due to electro-magnetic interference (EMI).  One could use amplifiers and signal conditioners but these obviously add power and cost.  Additionally, PCI Express cables can be heavy and bulky.  I have one of these PCI Express Gen 3 16-lane cables and it feels like it weighs 20 lbs.  Compare this to an MXC cable that carries 10x the bandwidth and weighs one to two pounds depending on length.

          Fujitsu took two standard Primergy RX200 servers and added an Intel® Silicon Photonics module into each, along with an Intel-designed FPGA.  The FPGA did the necessary signal conditioning to make PCI Express “optical friendly”.  Using Intel® Silicon Photonics they were able to send the PCI Express protocol optically through an MXC connector to an expansion box (see picture below).  In this expansion box were several solid state disks (SSDs) and Xeon Phi co-processors, and of course there was a Silicon Photonics module along with the FPGA to make PCI Express optical friendly.  The beauty of this approach was that the SSDs and Xeon Phis appeared to the RX200 server as if they were on the motherboard.  With light propagating through the fiber at roughly two-thirds of its 186,000 miles per second vacuum speed, the extra latency of travelling down a few meters of cable cannot reliably be measured (it can be calculated to be ~5 ns/meter, i.e. 5 billionths of a second per meter).

          So what are the benefits of this approach?  Basically there are four.  First, Fujitsu was able to increase the storage capacity of the server because they now were able to utilize the additional disk drives in the expansion box; the number of drives is determined by the physical size of the box.  The second benefit is that they were able to increase the effective CPU capacity of the Xeon E5s in the RX200 server, because the Xeon E5s could now utilize the CPU capacity of the Xeon Phi co-processors; in a standard 1U rack it would be hard if not impossible to incorporate Xeon Phis.  The third benefit is cooling: putting the SSDs in an expansion box allows one to burn more power because the cooling is divided between the fans in the 1U rack and those in the expansion box.  The fourth benefit is what is called cooling density, or how much heat needs to be cooled per cubic centimeter.  Let me make up an example.  For simplicity’s sake let’s say the volume of a 1U rack is 1 cubic meter, that there are 3 fans cooling that rack, and that each fan can cool 333 watts, for a total capacity of roughly 1000 watts of cooling.  If I evenly space components in the rack, each fan does its share and I can cool about 1000 watts.  Now assume I place all the components so that just one fan is cooling them, because there is no room in front of the other two fans.  If those components dissipate more than 333 watts they can’t be cooled.  That’s cooling density.  The Fujitsu approach solves the SSD expansion problem, the CPU expansion problem, and the total cooling and cooling density problems.
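          The ~5 ns/meter figure is simply the propagation delay of light in glass; a quick back-of-envelope check, assuming a typical group refractive index of about 1.5 for silica fiber:

```python
# Back-of-envelope check of the ~5 ns/meter fiber latency quoted above,
# assuming a typical group refractive index of ~1.5 for silica fiber.
C_VACUUM = 299_792_458            # speed of light in vacuum, m/s
n_fiber  = 1.5                    # approximate refractive index of the glass core

v = C_VACUUM / n_fiber            # ~2.0e8 m/s inside the fiber
delay_per_meter_ns = 1e9 / v      # nanoseconds per meter, ~5 ns/m

for meters in (1, 3, 300):
    print(f"{meters:>4} m -> {meters * delay_per_meter_ns:6.1f} ns one-way")
# A 3 m hop to the expansion box adds ~15 ns, far below anything an application
# could notice next to microsecond-scale I/O latencies.
```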

          image

          Go to: https://www-ssl.intel.com/content/dam/www/public/us/en/images/research/pci-express-and-mxc-2.jpg  if you want to see the PCI Express copper cable vs the MXC optical cable (you will also see we had a little fun with the whole optical vs copper thing.)

          Besides Intel® Silicon Photonics the Fujitsu demo also included Xeon E5 microprocessors and Xeon Phi co-processors.

          Why does Intel want to put lasers in and around computers?

          Photonic signaling (aka fiber optics) has two fundamental advantages over copper signaling.  First, when electrical signals go down a wire or PCB trace they emit electromagnetic interference (EMI), and when the EMI from one wire or trace couples into an adjacent wire it causes noise, which limits the bandwidth-distance product.  For example, 10G Ethernet copper cables have a practical limit of 10 meters.  Yes, you can put amplifiers or signal conditioners on the cables and make an “active copper cable” but these add power and cost.  Active copper cables are made for 10G Ethernet and they have a practical limit of 20 meters.

          Photons don’t emit EMI like electrons do, thus fiber-based cables can go much longer.  For example, with the lower-cost lasers used in data centers today you can build 500 meter cables at 10G.  You can go as far as 80 km if you use a more expensive laser, but these are only needed a fraction of the time in the data center (usually when you are connecting the data center to the outside world).

          The other benefit of optical communication is lighter cables.  Optical fibers are thin, typically 120 microns and light.  I have heard of situations where large data centers had to reinforce the raised floors because with all the copper cable, the floor loading limits would be exceeded.

          So how come optical communications is not used more in the data center today? The answer is cost!

          Optical devices made for data centers are expensive.  They are made out of expensive and exotic materials like Lithium-Niobate or Gallium-Arsenide.  Difficult to pronounce, even more difficult to manufacture.  The state of the art for these exotic materials is 3-inch wafers with very low yields.  Manufacturing these optical devices is expensive.  They are designed inside gold-lined cans and sometimes manual assembly is required as technicians “light up” the lasers and align them to the thin fibers.  A special index-matching epoxy is used that can sometimes cost as much per ounce as gold.  The bottom line is that while optical communications can go further and uses lighter cables, it costs a lot more.

          Enter Silicon Photonics!  Silicon Photonics is the science of making photonic devices out of silicon in a CMOS fab.  Also known as optics, but we use the word photonics because the word “optical” is also used when describing eyeglasses or telescopes.  Silicon is the second most abundant element in the Earth’s crust, so it’s not expensive.  Intel has 40+ years of CMOS manufacturing experience and has worked over those 40 years to drive costs down and manufacturing speed up.  In fact, Intel currently has over $65 Billion of capital investment in CMOS fabs around the world.  In short, the vision of Intel® Silicon Photonics is to combine the natural advantages of optical communications with the low-cost advantages of making devices out of silicon in a CMOS fab.

          Intel has been working on Intel® Silicon Photonics (SiPh) for over ten years and has begun the process of productizing SiPh.  Earlier this year, at the OCP summit Intel announced that we have begun the long process of building up our manufacturing abilities for Silicon Photonics.  We also announced we had sampled customers with early parts.

          People often ask me when we will ship our products and how much they will cost.  They also ask me for all sorts of technical details about our SiPh modules.  I tell them that Intel is focusing on a full line of solutions – not a single component technology. What our customers want are complete Silicon Photonics based solutions that will make computing easier, faster or less costly.  Let me cite our record of delivering end-to-end solutions:

          Summary of Intel Solution Announcements

          January 2013:  We did a joint announcement with Facebook at the Open Compute Project (OCP) meeting that we had worked together to design a disaggregated rack architecture (since renamed RSA [Rack Scale Architecture]).  This architecture used Intel® Silicon Photonics and allowed for the storage and networking to be disaggregated or moved away from the CPU motherboard.  The benefit is that users can now choose which components they want to upgrade and are not forced to upgrade everything at the same time.

          April 2013: At the Intel Developer Forum we gave the first ever public demonstration of Intel® Silicon Photonics running at 100G.

          September 2013: We demonstrated a live working Rack Scale Architecture solution using Intel® Silicon Photonics links carrying Ethernet protocol.

          September 2013: Joint announcement with Corning for new MXC and ClearCurve fiber solution capable of transmission of 300m with Intel® Silicon Photonics at 25G.  This reinforced our strategy of delivering a complete solution including cables and connectors that are optimized for Intel® Silicon Photonics.

          September 2013: Updated demonstration of a solution using Silicon Photonics to send data at 25G for more than 800 meters over multimode fiber – a new world record.

          Today: Intel has extended its Silicon Photonics solution leadership with a joint announcement with Fujitsu demonstrating the world’s first Intel® Silicon Photonics link carrying PCI Express protocol.

          I hope you will agree with me that Intel is focusing on more than just CPUs or optical modules and will deliver a complete Silicon Photonics solution!

          Disaggregation in the next-generation datacenter and HP’s Moonshot approach for the upcoming HP CloudSystem “private cloud in-a-box” with the promised HP Cloud OS based on the 4-year-old OpenStack effort with others

          My Software defined server without Microsoft: HP Moonshot [‘Experiencing the Cloud’, April 10, 2013 – updated Dec 6, 2013] post already introduced the HP Moonshot System. This post discusses Moonshot in a much wider context and provides the information that came after Dec 6, 2013, particularly from the HP Discover Barcelona 2013 event:
          1. The essence of IT industry’s state-of-the-art regarding the datacenter and the cloud
          2. Recent academic research: the disaggregated datacenter phenomenon
          3. Details about HP’s converged systems and next-gen cloud technology
          4. Latest details about HP’s Moonshot technology
          1. The essence of IT industry’s state-of-the-art regarding the datacenter and the cloud

          There is a new way of thinking in the IT industry which is best represented by No silo left behind: Convergence in the age of virtualization, cloud, and Big Data [HP Discover YouTube channel, recorded on Dec 10, 11:20 AM – 12:20 PM; published on Dec 11, 2013] presentation by HP on its HP Discover Barcelona 2013 event:

          Join Tom Joyce, Senior Vice President, Converged Systems, as he highlights the latest HP innovations and solutions that are leading the new end-to-end revolution.

          As far as the cloud is concerned today’s issue is Making hybrid real for IT and business success [HP Discover YouTube channel, recorded on Dec 10, 12:40 PM – 1:40 PM; published on Dec 11, 2013]

          Join Saar Gillai, senior vice president and general manager, HP Cloud, in this provocative, hype-busting session to learn what IT and business leaders can do today to set their organizations on a successful path to the cloud. [IMHO the part mentioning “traditional apps put into the cloud via virtualization” (typically via VMware) vs. modern “scale-out apps” is especially important. Starting at [10:45]!]

          Then one should at least briefly understand HP Cloud strategy and benefit of leveraging a portfolio of solutions [HP Discover YouTube channel, Dec 12, 2013]

          [Margaret Dawson, VP Marketing, HP Cloud] HP Cloud is designed to run and operate next gen web services at scale on a global basis. HP’s Converged Cloud strategy approach provides customers a common architecture allowing them to integrate private, managed and public cloud with traditional IT infrastructures.

          And HP is just about half a year from the point (in time) when it will have its final answer to the question How open source will reinvent cloud computing – again [HP Discover YouTube channel, Dec 12, 2013], the presentation which was originally announced under the title “The Rise of Open Source Clouds” and finally delivered with the following slides (to whet your appetite for watching the recording of the presentation that follows next):

          image

          image

          image

          image

          image

          image

          image
          “Different delivery models being private, managed and public. … At the top you can see the six workload areas. These areas are basically what we’ll build our product portfolio against. So we’ll be moving away from just a catalogue of SKUs and piece parts into building offers on a workload basis, things like dev/test, business continuity, technical computing or HPC, and of course things like analytics and infrastructure.”

          Bill Hilf, Vice President of Product Management for HP Cloud, will walk you through HP’s strategy and innovation with OpenStack and how it helps customers deploy, manage, and secure cloud environments.

          Now we can take a brief Tour of the Cloud Booth at HP Discover Barcelona [hpcloud YouTube channel, Dec 11, 2013] in order to understand the cloud-related announcements made by HP (some of these will be detailed later in this post as they relate to the title of the post)

          And Moonshot-specific announcements are briefly summarized in HP Moonshot latest innovations allow your business can embrace the new style of IT [HP Discover YouTube channel, Dec 12, 2013]

          HP has defined and led the industry standard server market for years. HP’s John Gromala and Janet Bartleson discuss how HP has taken HP Moonshot to the next level with the latest innovations and how they can benefit you. The idea is simple: Using energy efficient CPUs in architecture tailored for a specific application results in radical power, space, and cost savings when run at scale.

          Finally The future according to HP Labs [HP Discover YouTube channel, Dec 12, 2013]

          HP Discover is all about the future. HP Labs — HP’s central research arm — is all about the far future. Come and hear how three of HP’s most senior technologists see the IT landscape evolving and how it will transform all our lives.

          This is the essence of IT industry’s state-of-the-art regarding the datacenter and the cloud.


          2. On the other hand, recent academic research has just been awakening to what it calls the disaggregated datacenter phenomenon
          already happening as the “next big thing” in the industry, as evidenced by the following excerpts from Network Support for Resource Disaggregation in Next-Generation Datacenters [research paper* at HotNets-XII**, Nov 21-22, 2013]

          Datacenters have traditionally been architected as a collection of servers wherein each server aggregates a fixed amount of computing, memory, storage, and communication resources. In this paper, we advocate an alternative construction in which the resources within a server are disaggregated and the datacenter is instead architected as a collection of standalone resources.

          Disaggregation brings greater modularity to datacenter infrastructure, allowing operators to optimize their deployments for improved efficiency and performance. However, the key enabling or blocking factor for disaggregation will be the network since communication that was previously contained within a single server now traverses the datacenter fabric. This paper thus explores the question of whether we can build networks that enable disaggregation at datacenter scales.

          image

          image

          Figure 2: Architectural differences
          between server-centric and resource-centric datacenters***

          As illustrated in Figure 2, the high-level idea behind disaggregation is to develop standalone hardware “blades” for each resource type including CPUs, memory, storage, and network interfaces as well as specialized components (GPUs, various ASIC accelerators, etc.). Those resource blades are interconnected by a datacenter-wide network fabric. Understanding the specifications and nature of this network fabric is our focus in this paper.

          Abbreviations used above for Figure 2. (in addition to “C” for CPU and “M” for Memory):

          Martin Fink, CTO and Director of HP Labs, speaks at NTH Generation’s 13th Annual Symposium.

          * Sangjin Han (U.C.Berkeley), Norbert Egi (Huawei Corp.), Aurojit Panda, Sylvia Ratnasamy (U.C.Berkeley), Guangyu Shi (Huawei Corp.), Scott Shenker (U.C.Berkeley and ICSI)
          ** Twelfth ACM Workshop on Hot Topics in Networks
          *** I should emphasize here that a disaggregated datacenter with shared disaggregated memory (as in part (b) of Figure 2 above) is NOT a kind of academic exaggeration but a relatively “near-term reality” of the future. That became somewhat obvious from the recent The future according to HP Labs video included at the end of the first section above, especially when Moonshot was mentioned. To provide more evidence, watch the Tectonic shifts: Where the future of convergence is taking us [NTH Generation Computing, Inc. YouTube channel, recorded on Aug 1; published on Aug 20, 2013] keynote presentation above. In this keynote HP’s CTO Martin Fink said that a new type of device HP has been working on for years, called the memristor, could be made into a non-volatile and non-hierarchical, i.e. universal, memory system, replacing both DRAM and flash, as well as magnetic storage over time. He also hinted at specialised Moonshot cartridges, possibly using memristor memory instead of DRAM, linked by terabit-class photonic interconnects to memristor storage arrays. He was already showing a prototype memristor wafer as well. No wonder, therefore, that according to HP’s own Six IT technologies to watch [Enterprise 20/20 Blog, Sept 5, 2013] article:

          Such a device could store up to 1 petabit of information per square centimeter and could replace both memory and storage, speeding up access to data and allowing an order-of-magnitude increase in the amount of data stored. HP has been busy preparing production of these devices; first production units should be available towards the end of 2013 or early in 2014. It will transform our storage approaches completely.
          The Future of Big Data – an interview with John Sontag, VP and director of HP Labs’ Systems Research [HP Enterprise Business Community, Nov 14, 2013] is providing even bigger prospects as:
          If Moonshot is helping us make computers smaller and less energy-hungry, then our work on memristors will allow us to collapse the old processor/memory/storage hierarchy, and put processing right next to the data.
          Next, our work on photonics will help collapse the communication fabric and bring these very large scales into closer proximity. That lets us combine systems in new and interesting ways.
          On top of all that, we need to reduce costs – if we tried to process all the data that we’re predicting we’ll want to at today’s prices, we’d collapse the world economy – and we need to think about how we secure and manage that data, and how we deliver algorithms that let us transform it fast enough so that you can conduct experiments on this data literally as fast as we can think them up.
          The combination of non-volatile, memristor-powered memory and very large scales is causing the people who think about storage and algorithms to realize that the tradeoff has changed. For the last 50 years, we’ve had to think of every bit of data that we process as something that eventually has to get put on a disk drive if you intend to keep it. That means you have to think about the time to fetch it, to re-sort it into whatever way you want it to rest in memory, and to put it back when you’re done as one of your costs of doing business.
          If you don’t have those issues to worry about, you can leave things in memory – graphs, for example, which are powerful expressions of complex data – that at present you have to spend a lot of compute time and effort pulling apart for storage. The same goes for processing. Right now we have to worry about how we break data up, what questions we ask it and how many of us are asking it at the same time. It makes experimentation hard because you don’t know whether the answer’s going to come immediately or an hour later.
          Our vision is that you can sit at your desk and know you’ll get your answer instantly. Today we can do that for small scale problems, but we want to make that happen for all of the problems that you care about. What’s great is that we can begin to do this with some questions that we have right now. We don’t have to wait for this to change all at once. We can go at it in an incremental way and have pieces at multiple stages of evolution concurrently – which is exactly what we’re doing.
          There are people who have given up on thinking about certain problems because there’s no way to compactly express them with the systems we have today. They’re going to be able to look at those problems again – it’s already happening with Moonshot and HAVEn [HP’s Big Data platform], and at each stage of this evolution we’re going to allow another set of people to realize that the problem they thought was impossible is now within reach.
          One example of where this already happened is aircraft design. When we moved to 64-bit processors that fit on your desktop and that could hold more than four gigabytes of memory, the people who built software that modeled the mechanical stresses on aircraft realized that they could write completely different algorithms. Instead of having to have a supercomputer to run just a part of their query, they could do it on their desktop. They could hold an entire problem in memory, and then they could look at it differently. From that we got the Airbus A380, the Boeing 777 and 787, and, jumping industries, most new cars.

          Now back to the academic research, Network Support for Resource Disaggregation in Next-Generation Datacenters [presentation slides at HotNets-XII*, Nov 21-22, 2013], to illustrate their understanding of the trends

          The Trends: Disaggregation 

          HP MoonShot
          –  Shared cooling/casing/power/mgmt for server blades
          [Note that Moonshot is much more than that, as it was already presented in all detail in my Software defined server without Microsoft: HP Moonshot [‘Experiencing the Cloud’, April 10, 2013 – updated Dec 6, 2013] post.]

          image

          AMD SeaMicro
          –  Virtualized I/O
          image

          [from the research paper:]
          SeaMicro’s server architecture [6] uses a looser coupling of components within a single server … the network in SeaMicro’s architecture implements a 3D torus interconnect, which only disaggregate I/O and does not scale beyond the rack … [6] SeaMicro Technology Overview.

          Intel Rack Scale Architecture

          image

          [from the research paper: SeaMicro’s server architecture [6] uses a looser coupling of components within a single server,] while Intel’s Rack Scale
          Architecture (RSA) [15] extends this approach to rack scales. …
          [15] Intel Newsroom. Intel, Facebook Collaborate on Future Data Center Rack Technologies

          Open Compute Project

          image                

          image

          image

          Closing Remarks

          • Disaggregated datacenter will be “the next big thing”   
            – Already happening. We [i.e. the academic research] need to catch up!   
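To make the paper's central claim concrete, that the network is the key enabling or blocking factor for disaggregation, here is a rough back-of-envelope sketch. The latency and bandwidth figures are my own assumed ballpark values for DRAM and datacenter Ethernet, not numbers taken from the paper:

```python
# Rough back-of-envelope: why the network fabric decides whether resource
# disaggregation works. All figures below are assumed ballpark values,
# not numbers taken from the HotNets paper.

NS = 1e-9

local_dram_latency = 100 * NS      # typical local DDR3 access time
fabric_rtt_today   = 10000 * NS    # ~10 us round trip across a switched 10GbE fabric
fabric_rtt_needed  = 1000 * NS     # ~1 us: roughly what disaggregated memory would need

local_dram_bw_gbps = 12.8 * 8      # one DDR3-1600 channel, ~12.8 GB/s, in Gb/s
fabric_link_gbps   = 40            # a single 40GbE link

def penalty(remote_rtt, remote_bw_gbps):
    """Latency and bandwidth penalty of reaching memory across the fabric."""
    return remote_rtt / local_dram_latency, local_dram_bw_gbps / remote_bw_gbps

for name, rtt in [("today's 10GbE fabric", fabric_rtt_today),
                  ("hypothetical 1 us fabric", fabric_rtt_needed)]:
    lat_x, bw_x = penalty(rtt, fabric_link_gbps)
    print("%-25s ~%3.0fx the latency, ~%.1fx less bandwidth than local DRAM"
          % (name + ":", lat_x, bw_x))
```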


          3. Next, the details about HP’s converged systems and next-gen cloud technology:

          Why HP uses its own Converged Infrastructure solutions [Enterprise CIO Forum YouTube channel, Nov 11, 2013]

          HP’s CIO, Ramon Baez, tells us about the benefits HP has found in using its own Converged Infrastructure solutions, including Networking, Storage, and Moonshot servers. For more, see hp.com/ci

          From “Sharks” in the press at HP Discover, Barcelona – Day One coverage [HP Converged Infrastructure blog, Dec 10, 2013]

          … we were hosting a large press announcement that went out over the wire on Monday at 3 pm local time (CET).

          Here’s a brief summary of the announcement that was presented by Tom Joyce,  Senior Vice President and General Manager, HP Converged Systems. The HP ConvergedSystem is a new product line, completely reengineered from the ground up based on 21st-century assets and architectures for the New Style of IT. This is an important point as Tom emphasized: this is not a collection of piece parts, this is a completely new engineered solution, built on core building blocks that are workload-optimized systems which are easy to buy, manage, and support – order to operations in as few as 20 days, with ONE tool to manage and, most importantly, ONE point of accountability.

          Built using HP Converged Infrastructure’s best-in-class servers, storage, networking, software and services, the new HP ConvergedSystem family of products delivers a total systems experience “out of the box.”

          • HP ConvergedSystem for Virtualization helps clients easily scale computing resources to meet business needs with preconfigured, modular virtualization systems supporting 50 to 1,000 virtual machines at twice the performance, and at an entry price 25 percent lower than competitive offerings.
          • HP ConvergedSystem 300 for Vertica speeds big data analytics, helping organizations turn data into actionable insights at 50 to 1,000 times faster performance and 70 percent lower cost per terabyte than legacy data warehouses.
          • HP ConvergedSystem 100 for Hosted Desktops, based on the award-winning HP Moonshot server, delivers a superior desktop experience compared to traditional virtual desktop infrastructure. This first PC on a chip for the data center delivers six times faster graphics performance and 44 percent lower total cost of ownership

          The physical press event in my opinion was pretty cool, and one of the better ones I have attended. The new HP ConvergedSystem for Virtualization 300 and 700 debuted on stage with the theme from Jaws, with much snapping of camera flashes. Tom explained why the shark theme was so integral to this particular system, with core attributes of most “efficient”, ”best in class”, extremely “fast”, very “agile” and that it “never sleeps”!!

          The best one-liner from Tom Joyce during the session was “If I were VCE [the VMware/Cisco/EMC combination] I would be getting out of the water!!”, which was captured on the HP live streaming video found here. Check it out as it is worth watching. I have also included the full “HP Shark” press release, HP Introduces Innovations Built for the Data Center of the Future.

          Here is a detailed press report on that: HP Targets VCE With Converged System Lineup [Dec 10, 2013].

          HP ConvergedSystem: Innovation to reduce the complexity of technology integration [HP Discover YouTube channel, Dec 11, 2013]

          Tom Joyce, Senior Vice President of the HP ConvergedSystem business unit, talks about how, over the last two decades, IT has been forced to focus on too many products, too many tools and overly complex processes, spending too many resources on maintenance and not enough on innovation. To break free, IT must move from infrastructure craftsmen to business service experts with workload-optimized, engineered systems that are easy to procure, manage, and support and that enable their business to quickly capitalize on new applications like big data and new delivery models such as cloud.

          The HP “Sharks” are in the Water [HP Converged Infrastructure blog, Dec 9, 2013]

          Written by guest blogger Tom Joyce, Senior Vice President and General Manager, HP Converged Systems

          Seven months ago HP announced the formation of our new Converged Systems business unit.  I was excited to be asked to lead this new team because so many of our customers had told us they needed truly converged platforms for their datacenters.  Over the last five years HP had developed Converged Infrastructure technologies for storage, networking and servers that enabled better and more cost effective solutions, but it was time to take it to the next level.  We needed to bring all those technologies together in a way that collapsed the cost of IT infrastructure and made everything faster and easier.

          Starting last summer, we built our team.  We hired the best of the best from within HP and from elsewhere.  We put in place an operating model and set of processes that allow us to do agile product development and deliver products to market rapidly and with high quality.  And we got really creative in our thinking.  We were also fortunate to get a lot of time with Meg [Whitman, HP CEO] and other top people throughout HP.  This was critical because to deliver a game changing set of new products, we had to break down or change a lot of established processes in development, manufacturing, support and go-to-market.  We had to break some glass, and Meg helped us do that by making this a high priority.

          Based on the customer input, there were some critical things I knew we needed to do. 

          • Move fast.  The IT market is changing quickly, and I wanted to get our first set of products out by the end of the calendar year. 
          • Do more than just combine existing server, storage, networking and software components.  We needed to engineer these new products to deliver more with less infrastructure, and to handle the most important customer workloads exceptionally well. 
          • Everything had to be simple – the ordering process, the system design, management, support, easy upgrades – everything.
          • Think about the “whole offer” and experience for the customer, not just the product itself.  This meant providing a better process from end to end.
          • Deliver exceptional economics.  The new product had to be priced to market with a clear return on investment for the customer. 
          • Most importantly, we needed to make sure that our channel partners could make money selling this product, and could provide specialized services around it.

          After developing our plan, we started “Project Sharks”.  We called it this because if you think about it, a shark is perfectly engineered to accomplish its mission – it is the ideal hunting machine.  When I was a kid I was fascinated by sharks.  People tend to think of sharks as primitive creatures, but they are actually extremely sophisticated.  Everything is designed with a purpose, and there is no waste.  Sharks have a unique hydroskeleton, musculature, and skin.  All these parts are connected to maximize thrust so that the animal can move fast, like a torpedo.  Sharks are noted for being able to sense blood in the water, but beyond that they have an amazingly complete set of sensors – perhaps the most sophisticated set of “sensors in the sea.” 🙂

          Our goal with “Project Sharks” was to build a perfectly designed virtual infrastructure machine.  This week at HP Discover, Barcelona, we announced the new HP ConvergedSystem for Virtualization. Click here to find out more information.  The two models are designed to be core building blocks for constructing a converged data center.  They are very fast and efficient, delivering better raw IOPS for virtualization at a great cost point.  They can handle a lot more virtual machines than a traditional configuration.  They can also deliver about a 58% lower cost per VM over a 3-year period, as compared to our closest competitor.

          Perhaps more important, we redesigned our whole delivery process as part of “Project Sharks”.  The result is that HP or a channel partner can actually produce a configuration and quote for an HP ConvergedSystem in about 20 minutes, and the whole thing will be on one sheet of paper.  The HP ConvergedSystem 300 and 700 can be installed and in production in a customer data center in as few as 20 days.  We have also fully integrated the management, to make it simple, and the support.  If support is needed, only one call to HP is required; you don’t need to deal with a server vendor, a storage vendor, etc.  When it is time for firmware upgrades, the process for the whole system is integrated.  And when you need additional capacity, we can ship a module out from our factory in one day, and it will be up and running in about five days.

          These new “sharks” are not just for virtualization.  We also announced the HP ConvergedSystem 300 for Vertica as a new platform for big data analytics.  The HP ConvergedSystem 100 is based on HP Moonshot servers, and ships as a Citrix XenDesktop system.

          In the future the HP ConvergedSystem products will support additional workloads and ISV applications, and will be used as building blocks for HP CloudSystem private clouds, so stay tuned for more.

          Our new Converged Systems business unit team is very excited about the opportunity to unleash these new “sharks”, and put them in the water. We are looking forward to hearing from our customers and partners about what they want us to do next, because the spirit of innovation is alive and well at HP.

          On the Dec 10 HP Discover Barcelona 2013 keynote HP’s hybrid cloud strategy was presented with the following slides, with comments made by the presenter added only for the HP CloudSystem private clouds part:

          image

          image
          Bill Hilf, Vice President, Converged Cloud Products and Services, is driving HP’s entire cloud roadmap (he came to HP 6 months ago from Microsoft, where he was GM of Windows Azure Product Management): “HP Next Gen CloudSystem to be released in the 1st half of 2014” with the following major characteristics:
          Consistency – Choice – Confidence

          More information:
          HP Unveils Innovations in Cloud to help Customers Thrive in a Hybrid World [The HP Blogs Hub, Dec 11, 2013] in which it is stated “As the foundation of a hybrid cloud solution, HP CloudSystem bursts to multiple public cloud platforms, including three new ones: Microsoft® Windows® Azure, and platforms from Arsys, a European-based cloud computing provider, and SFR, a French telecommunications company. “
          – A press release of similar title with additional lead and closing “Pricing and availability” parts
          HP CloudSystems stand apart [HP Enterprise Business community blog, Dec 10, 2013]
          How HP CloudSystem stacks up against competitors [Porter Consulting, June 14, 2013] Comparison of offerings from HP, IBM [PureSystems], and VCE [formed as a joint venture by Cisco and EMC, with minor investments from VMware and Intel; resulting in Vblock products based on Cisco UCS servers, Cisco network components, EMC storage arrays, and the VMware virtualization suite]

          image
          “We created a killer interface. An easy to use, consumer-inspired interface that is consistent across multiple types of experiences (from classic PC and administration to mobile experiences). We also designed and optimized the interface for the different types of roles in the organization (from the architect who might be designing a service, to the end user or consumer of that service, as well as the IT operator and administrator).”

          More information: Empowering users and the new face of cloud [HP Enterprise Business community blog, Dec 11, 2013] written by Ken Spear, Senior Marketing Manager (HP CloudSystem and OneView)

          image
          “We spent considerable effort and energy on choice and the ability to really give customers the heterogeneous workload support they need. And now we are taking openness to an entirely new level. And so for the first time with CloudSystem we are shipping HP Cloud OS, which is our enterprise-class OpenStack**** platform, which gives customers the great innovation from OpenStack to build modern cloud workloads. But we are also supporting the power of Matrix, so that you can bridge today’s and tomorrow’s workloads on the same system.”

          **** OpenStack APIs are compatible with Amazon EC2 (see Nova/APIFeatureComparison) and Amazon S3 (see Swift/APIFeatureComparison) and thus client applications written for Amazon Web Services can be used with OpenStack with minimal porting effort. Note that HP nixes Amazon EC2 API support — at least in its public cloud [Gigaom, Dec 6, 2013] “based upon significant input from developers and customers” as “customers want to avoid getting locked in to what he called, ‘Amazon’s spider web’ ”. Tier 1 Research analyst Carl Brooks said via email: “HP doesn’t need to support AWS APIs — OpenStack will do that for them to the limited extent it already does”.
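As an illustration of the EC2 API compatibility mentioned in the footnote above, an AWS-style client built with the boto library of that era could be pointed at an OpenStack Nova EC2 endpoint roughly as sketched below; the host name is a placeholder, and port 8773 with the /services/Cloud path are the commonly documented Nova defaults, so treat them as assumptions for any given deployment:

```python
# Minimal sketch: reusing an AWS-style boto (2.x) client against OpenStack's
# EC2-compatible API. The host is a placeholder; port 8773 and the
# /services/Cloud path are the commonly documented Nova defaults.
import boto
from boto.ec2.regioninfo import RegionInfo

conn = boto.connect_ec2(
    aws_access_key_id="EC2_ACCESS_KEY",       # e.g. from `keystone ec2-credentials-create`
    aws_secret_access_key="EC2_SECRET_KEY",
    is_secure=False,
    region=RegionInfo(name="nova", endpoint="openstack.example.com"),
    port=8773,
    path="/services/Cloud",
)

# The same calls an unmodified AWS script would make:
for reservation in conn.get_all_instances():
    for instance in reservation.instances:
        print(instance.id, instance.state)
```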

          image
          “And finally we’re giving customers and partners more confidence than they’ve ever had before in this type of solution. … And that will be available in both a quick-ship, channel-ready fixed configuration as well as in a highly customizable solution. In addition CloudSystem will ship with Cloud Service Automation (CSA), the industry-leading orchestration and hybrid cloud management software [read NEW! HP’s solution for managing private and hybrid clouds] that gives an easy experience and easy management of the next hybrid cloud environment. That could be clouds delivered on any physical infrastructure: public, managed or private. And lastly, when customers use CloudSystem to build a private cloud there is boundless growth, because you can extend CloudSystem with public cloud resources: from the HP public cloud, or Amazon, or Savvis. And this week we are also announcing support for Windows Azure, as well as two very important European partners: SFR and arsys, a service provider right here in Spain.”

          More information:
          HP Cloud Service Automation – See new, do new at HP Discover! [HP Enterprise Business community blog, Dec 11, 2013]
          HP Unveils Innovations in Cloud to help Customers Thrive in a Hybrid World [The HP Blogs Hub, Dec 11, 2013] in which it is stated “As the foundation of a hybrid cloud solution, HP CloudSystem bursts to multiple public cloud platforms, including three new ones: Microsoft® Windows® Azure, and platforms from Arsys, a European-based cloud computing provider, and SFR, a French telecommunications company. “
          – A press release of similar title with additional lead and closing “Pricing and availability” parts 

          Underlying core technologies:

          HP Converged Cloud delivers choice, confidence, and consistency. Learn how HP Cloud OS as part of the HP Converged Cloud portfolio leverages OpenStack to enable workload portability, simplified installation, and enhanced service lifecycle management. http://hp.com/cloud
          Live demonstration of the HP Moonshot server running HP Cloud OS based on OpenStack at HP Discover Barcelona 2013

          Open source has long been linked to innovation. With a history tracing back to the origins of the public web, the concept of open source relies on the assumption that shared knowledge produces more and better innovation, which is better for everyone—as well as the business world.

          Some pundits believe that it is the combination of cloud and the power of the open source community that has enabled such rapid cloud development, adoption, and innovation.

          OpenStack: cloud source code at the ready

          OpenStack® provides the building blocks for developing private and public cloud infrastructures. OpenStack comprises a series of interrelated projects, characterized by their powerful capabilities and massive scalability.

          Like all open source projects, OpenStack is a group collaboration, consisting of a global community of developers and cloud computing technologists. HP is a top contributor and driving force behind OpenStack, helping it to become a leading software for open cloud platforms.

          In other words, there’s a bright future for OpenStack, which is why HP chose it as the foundation for its hybrid cloud solutions.
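As a hedged illustration of what those building blocks look like to a developer, OpenStack's compute project (Nova) in that timeframe exposed a Python client roughly along these lines; the credentials, tenant and Keystone endpoint are placeholders, and exact client versions vary across releases:

```python
# Minimal sketch of driving OpenStack Nova from Python with python-novaclient
# (circa the Havana timeframe). Credentials, tenant and auth URL are placeholders.
from novaclient import client

nova = client.Client(
    "2",                        # compute API version ("1.1" on older clients)
    "demo_user",
    "demo_password",
    "demo_tenant",
    auth_url="http://keystone.example.com:5000/v2.0",
)

for flavor in nova.flavors.list():
    print("flavor:", flavor.name, flavor.vcpus, "vCPU", flavor.ram, "MB RAM")

for server in nova.servers.list():
    print("server:", server.name, server.status)

# Booting an instance is one more call (image/flavor names are assumptions):
# image = nova.images.find(name="ubuntu-12.04")
# flavor = nova.flavors.find(name="m1.small")
# nova.servers.create(name="web-01", image=image, flavor=flavor)
```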

          HP Cloud OS

          HP Cloud OS is the world’s first OpenStack-based cloud technology platform for hybrid delivery. HP Cloud OS enables our existing cloud solutions portfolio and new innovative offerings by providing a common architecture that is flexible, scalable, and easy to build on.

          “We are in a new phase of cloud computing. Enterprises, government agencies, and industry are all placing demands on cloud computing technologies that exceed a singular, one-size-fits all delivery model,” says Bill Hilf, vice president of product management for HP Cloud. “HP Cloud OS, built on the power of OpenStack, is the foundation for the HP Cloud portfolio and a key part of the HP solutions that enable real customer choice and consistency.”

          Watch the HP Cloud OS story at HP Discover

          Attendees at HP Discover 2013 in Barcelona, don’t miss this opportunity to hear the inside story of HP’s development of HP Cloud OS. Join the Innovation Theater session:

          IT3261 – The rise of open source clouds

          In this session, Bill Hilf will walk you through his experiences working with large public cloud systems, the rise of open source clouds in the enterprise, and HP’s strategy and innovation with OpenStack, including a discussion of HP Cloud OS (Wednesday, 12/11/13, 4:30 pm).

          Highlights from the presentation include:

          • How open source has affected the development of the cloud
          • The requirements of enterprises related to cloud computing
          • How OpenStack enables HP’s cloud platform
          • Top ten lessons learned when building HP’s public cloud
          • HP’s overall cloud strategy
          William Franklin, VP OpenStack & Technology Enablement, talks about HP Cloud and HP’s open source strategy with OpenStack at the OpenStack Summit Hong Kong 2013.
          Monty Taylor, Distinguished Technologist and OpenStack Guru, talks about the OpenStack community, HP’s contributions to Havana and OpenStack projects, and the future of OpenStack.

          Gartner’s Alessandro Perilli’s latest observations about OpenStack (he focuses on private cloud computing in the Gartner for Technical Professionals (GTP) division):
          What I saw at the OpenStack Summit [Nov 12, 2013], in which he describes in particular how OpenStack vendors are divided into two camps that he calls “purists” and “pragmatists”. He notes that purists tend to ignore the fact that many large enterprises are interested in OpenStack as a way to reduce their dependency on VMware, and that these enterprises are frightened by the prospect of rewriting their traditional multi-tier LoB applications into the new cloud-aware applications advocated by the purists.
          Why vendors can’t sell OpenStack to enterprises [Nov 19, 2013], where he notes: “In fact, for the largest part, vendors don’t know how to articulate the OpenStack story to win enterprises. They simply don’t know how to sell it.” Then he gives at least four reasons why vendors can’t tell a resonating story about OpenStack to enterprise prospects:
          1. “Lack of clarity about what OpenStack does and does not.”
          2. “Lack of transparency about the business model around OpenStack.”
          3. “Lack of vision and long term differentiation.”
          4. “Lack of pragmatism”, i.e. “purist” approach described in his previous post.

          J.R. Horton, HP CloudOS Sr. Product Manager details the HP Cloud OS technology preview allowing developers access to a complete enterprise-grade OpenStack package for fast installation and deployment.
          [Mark Perreira, Chief Architect of HP Cloud OS:] This video demonstrates how HP Cloud OS can help simplify delivery, enhance lifecycle management and optimize workloads for your cloud environment. It includes information on Cloud OS architecture, kernel and base services, and administrative tools.
          Mark Perreira, Chief Architect of HP Cloud OS, whiteboards the hybrid provisioning capabilities in HP Cloud OS.

          J.R. Horton, HP Cloud OS Sr. Product Manager presents the HP Cloud architecture at HP Discover in Barcelona 2013. [Note that in addition to HP other OpenStack Foundation Platinum Members (providing a significant portion of the funding) are: AT&T, Canonical, IBM, Nebula, Rackspace, Red Hat, Inc., SUSE. Just today the news came as well that Oracle raised its membership to Platinum level.]


          4. Finally, the latest details about HP’s Moonshot technology


          • Moonshot: one of the “INFRA” building blocks (see above in the “HP Cloud OS Whiteboard Demo” video) for HP Cloud OS, and actually the most future-oriented one

          The Power of Moonshot [HP Discover YouTube channel, Dec 10, 2013]

          “Like many companies, HP was a victim of IT sprawl — with more than 85 data centers in 29 countries. We decided to make a change and took on a total network redesign, cutting our principal worldwide data centers down to six and housing all of them in the United States. With the addition of four EcoPODs and Moonshot servers, we are in the perfect position to build out our private cloud and provide our businesses with the speed and quality of innovation they need.”

          My Software defined server without Microsoft: HP Moonshot [‘Experiencing the Cloud’, April 10, 2013 – updated Dec 6, 2013] post introduced the HP Moonshot System as follows:

          On the right is the Moonshot System with the very first Moonshot servers (“microservers/server appliances” as the industry calls them), based on Intel® Atom S1200 processors and supporting web-hosting workloads (see also the right part of the image below). Currently there is also a storage cartridge (on the left of the image below) and a multinode cartridge for highly dense computing solutions (in the hands of the presenter in the image below). Many more are to come later on.

          image

          Also the Dec 6 update to the above post already provided significant roadmap information:

          With Martin Fink, CTO and Director of HP Labs, Hewlett-Packard Company [Oct 29, 2013] saying

          We’ve actually announced three ARM-based cartridges. These are available in our Discovery Labs now, and they’ll be shipping next year with new processor technology. [When talking about the slide shown above.]

          For the details about the ARM SoC technologies behind that go to the Software defined server without Microsoft: HP Moonshot [‘Experiencing the Cloud’, April 10, 2013 – updated Dec 6, 2013] post!

          But the initial Moonshot System launched in April ’13 had support just for light workloads, such as website front ends and simple content delivery. This meant, nevertheless, a lot in the hosting space, as evidenced by the serverCONDO Builds its Business on Moonshot [Janet Bartleson YouTube channel, Dec 9, 2013] video:

          serverCONDO President John Brown wanted to expand to offer dedicated hosting, and traditional 1U servers looked pretty good, until the team discovered HP Moonshot. Hear more about what he was looking for and the results serverCONDO achieved. http://www.servercondo.com http://www.hp.com/go/moonshot

          More information from the same source:
          Why serverCONDO is in the Dedicated Hosting Business
          Old School and New School Cloud Servers (serverCONDO)

          OR taking a true large-scale example watch this HP.com Takes 3M Hits on Moonshot [Janet Bartleson YouTube channel, Nov 26, 2013] video:

          Volker Otto talks about the results of using Moonshot for HP.com’s web site, caching, and ftp downloads. http://www.hp.com/go/moonshot

          According to Meg Whitman’s keynote at Discover 2013 on Dec 10, HP will be able to go from 6 datacenters to 4 thanks to Moonshot, even considering future needs and workloads. That is something as dramatic as when HP previously (3 years ago) moved from 86 datacenters to 6.

          So, to appreciate the full potential of Moonshot, one should also understand the following system architecture information provided in the HP Moonshot System, the world’s first software defined servers [April 10, 2013] technical whitepaper:

          HP Moonshot System

          HP Moonshot System is the world’s first software defined server, accelerating innovation while delivering breakthrough efficiency and scale with a unique federated environment and processor-neutral architecture. Traditional servers rely on dedicated components, including management, networking, storage, power cords and cooling fans, in a single enclosure. In contrast, the HP Moonshot System shares these enclosure components. The HP Moonshot 1500 Chassis has a maximum capacity of 1800 servers per 47U rack with quad server cartridges. This gives you more compute power in a smaller footprint, while significantly driving down complexity, energy use and costs.
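A quick sanity check of the quoted 1800-servers-per-rack figure; note that the 4.3U chassis height used below is my own assumption taken from HP's public spec sheets, not a number stated in the excerpt:

```python
# Sanity check of "1800 servers per 47U rack with quad server cartridges".
# The 4.3U chassis height is an assumption from HP spec sheets, not the excerpt.
cartridges_per_chassis = 45
servers_per_quad_cartridge = 4
chassis_height_u = 4.3
rack_height_u = 47

chassis_per_rack = int(rack_height_u // chassis_height_u)                  # -> 10
servers_per_rack = (chassis_per_rack * cartridges_per_chassis
                    * servers_per_quad_cartridge)                          # -> 1800

print(chassis_per_rack, "chassis per rack =>", servers_per_rack, "servers per rack")
```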

          The first server available on the HP Moonshot System is the HP ProLiant Moonshot Server based on the Intel® Atom™ processor S1260, which provides an ideal solution for web serving, offline analytics and hosting.

          HP Moonshot 1500 Chassis design

          The HP Moonshot 1500 Chassis incorporates independent component design and hosts 45 cartridges, two network switches, and the infrastructure components within the chassis. The Moonshot 1500 Chassis’ electrically passive design makes this completely hot pluggable design possible. The Moonshot 1500 Chassis uses no active electrical components, other than EEPROMs required for manufacturing and configuration control purposes.

          Figure 1 shows the elements of the Moonshot 1500 Chassis. HP controls the design of all elements of the chassis except for the server cartridge (initial cartridges contain a single server) and the network switch module, which may be designed by Moonshot server or network switch partners.

          Figure 1.

          image

          The HP Moonshot 1500 Chassis accommodates up to 45 individually serviceable hot-plug cartridges. Two high-density, low-power HP Moonshot 45G Switch Modules, each with a 10Gb x6 HP Moonshot 6SFP Uplink Module, handle network communication for all cartridges in the chassis. These switches provide Layer 2/Layer 3 routing and QoS management (CLI, SFLOW), and require no license keys. The dual network switches and I/O modules provide traffic isolation, or stacking capability for resiliency. Rack-level stacking simplifies the management domain.

          The Moonshot System uses the HP Moonshot 1500 Chassis Management (CM) module for complete chassis management, including power management with shared cooling. The server platform is powered by four 1200W Common Slot power supplies in an N+1 configuration and cooled by five hot-pluggable fans, also in an N+1 configuration. The CM uses component-based satellite controllers to communicate with and manage chassis elements. The modular faceplate design allows for future feature development.

          HP ProLiant Moonshot Server

          Each software defined server contains its own dedicated memory, storage, storage controller, and two NICs [Network Interface Controllers] (1Gb). For monitoring and management, each server contains management logic in the form of a satellite controller with a dedicated internal network connection (100 Mb). Figure 5 shows the HP ProLiant Moonshot Server with a single Intel® Atom™ processor S1260 and a single SFF drive.

          Figure 5. HP ProLiant Moonshot Server and functional block diagram

          image

          These servers provide the base hardware functionality of the system. Future software defined servers can take the following forms:

          • One or more discrete servers with separate compute, storage, memory and I/O
          • One or more complete cartridge designs with integrated compute, storage, memory, and I/O
          • One or more forms of storage accessible to adjacent cartridges

          Future servers will take these forms to provide a wide degree of flexibility for customizing and tuning based on the desired performance, cost, density, and power constraints.

          The available ProLiant Moonshot server design includes one processor and a single HDD or SSD. This server is ideal for application workloads such as website front ends and simple content delivery. Table 1 gives the current server component descriptions.

          image

          The Intel Atom is the world’s first 6-watt server-class processor. In addition to lower power requirements, it includes data-center-class features such as 64-bit support, error-correcting code (ECC) memory, increased performance, and a broad software ecosystem. These features, coupled with the revolutionary HP Moonshot System design, are ideal for workloads where many extreme low-energy servers densely packed into a small footprint can be much more efficient than fewer standalone servers.

          The Intel® Atom™ processor S1260 integrates two CPU cores, a single-channel memory controller, and a PCI Express 2.0 interface. Each CPU core has its own dedicated 32 KB instruction and 24 KB data L1 caches, and a 512 KB L2 cache. The processors incorporate Hyper-Threading, which allows them to run up to 4 threads simultaneously. Additionally, the chips have VT-x virtualization enabled.

          Each Moonshot server boots from a local hard drive, or from the network using PXE [Preboot eXecution Environment]. The Moonshot System uses HP BIOS and “headless” operation (no video or USB). No additional HP software is required to run the cartridge. NIC, storage, and other drivers are included in the compatible Linux distributions (described later in the OS management section).

          Fabrics and topology

          We designed the HP Moonshot System to provide application-specific processing for targeted workloads. Creating a fabric infrastructure capable of accommodating a wide range of application-specific workloads requires highly flexible fabric connectivity. This flexibility allows the Moonshot System fabric architecture to adapt to changing requirements of hyperscale workload interconnectivity.

          The Moonshot System design includes three physical production fabrics: the Radial Fabric, the Storage Fabric, and the 2D Torus Mesh Fabric. The fabrics are connected to 45 cartridge slots, two slots for the network switches, and two corresponding I/O modules.

          Figure 9 shows the eight 10Gb lanes routed from each of the cartridge slots to the pair of core network fabric slots in the center of the Moonshot 1500 chassis. Four lanes from each cartridge go to one core network fabric slot and four to the other (A and B). From each core fabric slot there are 16 10Gb lanes routed to the back of the chassis to attach to an I/O module.

          Figure 9.

          image
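Working only from the lane counts quoted above, a rough back-of-envelope of per-switch capacity (my own calculation, not a figure from the whitepaper; note that the first-generation cartridges only light up a 1Gb port per switch, far below these physical lane limits):

```python
# Back-of-envelope from the lane counts above: raw downlink vs. uplink capacity
# per Moonshot 45G switch. My own calculation, not a whitepaper figure.
cartridge_slots = 45
lanes_per_cartridge_per_switch = 4     # four of the eight 10Gb lanes go to each switch
uplink_lanes_per_switch = 16           # 16 x 10Gb lanes out to the I/O module
lane_gbps = 10

downlink_gbps = cartridge_slots * lanes_per_cartridge_per_switch * lane_gbps  # 1800 Gb/s
uplink_gbps = uplink_lanes_per_switch * lane_gbps                             # 160 Gb/s

print("physical downlink capacity per switch:", downlink_gbps, "Gb/s")
print("physical uplink capacity per switch:  ", uplink_gbps, "Gb/s")
print("worst-case oversubscription: %.2f : 1" % (downlink_gbps / float(uplink_gbps)))
# Note: the first-generation cartridges only use a 1Gb port per switch,
# so actual traffic stays far below these physical lane limits.
```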

          Radial Fabric

          The Radial Fabric provides a high-speed interface between each cartridge and the two core fabric slots.

          The Radial fabric includes these links:
          • 2x GbE channels
          • One port to each network switch

          Figure 10 illustrates a torus topology interlinking cartridge to cartridge in combination with the radial topology linking to the network switches.

          Figure 10.

          image

          The Radial fabric handles all Ethernet-based traffic between the cartridge and external targets. The exception is iLO* management network traffic using the dedicated iLO port.

          *[iLO: Integrated Lights-Out]

          Storage fabric

          A Moonshot System Storage Fabric will use existing Moonshot 1500 Chassis connections to span each 3×3 cartridge slot subsection within the chassis baseboard (Figure 11). The Storage Fabric will be part of future HP Moonshot System releases. This fabric implementation will use the Storage Fabric as a connection between servers and local storage devices.

          Figure 11.

          image

          In this implementation, SAS/SATA is sent over lanes between each adjacent cartridge for primary storage, along with additional lanes to other cartridges in the subsection for redundancy or other storage requirements. Although the figure shows a specific configuration of compute and storage nodes, there is flexibility to configure the subsections in different ways as long as it does not violate the rules of the interface or storage technology. While the example in Figure 11 shows the proximal fabric being used for SAS/SATA, any type of communication is possible due to the dynamic nature of the fabric.

          2D Torus Mesh Fabric

          Like the Storage Fabric, future releases of the HP Moonshot System will use existing Moonshot 1500 Chassis connections to implement the 2D Torus Mesh Fabric, providing a high speed general purpose interface among the cartridges for those applications that benefit from high bandwidth node-to-node communication. The 2D Torus Mesh fabric can be used as Ethernet, PCIe, or any other interface protocol. At chassis power on, the CM [Chassis Management] ensures the compatibility on all interfaces before allowing the cartridges to power on.

          The 2D Torus Mesh fabric is routed in a torus ring configuration capable of providing four 10Gb lanes in each direction to its north, south, east and west neighbors. This allows the HP Moonshot System to address many unique HPC [High-Performance Computing] applications where efficient localized traffic is needed.

          • 16 lanes from each cartridge
          • Four up, four down, four left, and four right
          • Can support speeds up to 10Gb
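To make the wrap-around north/south/east/west neighbor relationships of a 2D torus concrete, here is a small illustrative sketch; the 5 x 9 grid layout is an assumption for 45 slots, not HP's actual slot-to-torus mapping:

```python
# Illustrative only: wrap-around north/south/east/west neighbors in a 2D torus.
# The 5 x 9 grid is an assumed layout for 45 cartridge slots, not HP's actual
# slot-to-torus mapping.
ROWS, COLS = 5, 9            # 45 slots arranged as an assumed 5 x 9 grid

def torus_neighbors(slot):
    """Return the four torus neighbors (N, S, E, W) of cartridge slot 0..44."""
    row, col = divmod(slot, COLS)
    return {
        "north": ((row - 1) % ROWS) * COLS + col,
        "south": ((row + 1) % ROWS) * COLS + col,
        "west":  row * COLS + (col - 1) % COLS,
        "east":  row * COLS + (col + 1) % COLS,
    }

# Slot 0 wraps around to the last row and the last column:
print(torus_neighbors(0))    # {'north': 36, 'south': 9, 'west': 8, 'east': 1}
```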

          Topologies

          Topologies utilize the physical fabric infrastructure to achieve a desired configuration. In this case, the Radial and 2D Torus Mesh fabrics are the desired Moonshot topologies. The Radial Fabric pathways are optimized for a network topology utilizing two Ethernet switches. The 2D Torus Mesh fabric pathways are passive copper connections negotiated with neighbors and optimized for topology protocols that change over time to accommodate future Moonshot System releases.

          Moonshot System network configurations

          Moonshot System network switches and uplink modules provide resiliency and efficiency when configured as stand-alone or stackable networks. This feature allows you to connect up to nine Moonshot 1500 Chassis and then to your core network, eliminating the need for a top of rack (TOR) switch.

          • Dual switches provide traffic isolation or can be stacked
          • Rack level stacking simplifies management domain
          • Redundant switch configurations provide a more resilient infrastructure
          • Layer 2, Layer 3 Routing & QoS, Management (CLI, SNMP, SFLOW). No license keys

          Moonshot 1500 Chassis stacking

          Stacking allows you to select a tradeoff between overall performance and cost of TOR switches. Stacking can eliminate the cost of TOR switches for workloads able to tolerate extra latency. The switch firmware architecture elects a master management processor to control all stacked switches. Stacking does not scale in a linear way; stacking size is constrained by the capability of a single management processor. The P2020 [switch management] processor is sized to reliably stack nine network switches (405 ports).

          We can create two stacked switches in a single rack with no performance issues. Up to nine modules can be stacked to form a single logical switch. A simple loop consumes two ports per I/O module in this Figure 12 layout.

          Figure 12.

          image
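A small port-budget sketch based on the stacking figures quoted above (nine switches, 45 server-facing ports each, two ports per I/O module consumed by a simple loop); the downlink/uplink split is my reading of the text, not an HP specification:

```python
# Port budget for a nine-switch Moonshot stack, using the figures quoted above.
# The downlink/uplink split is my reading of the text, not an HP specification.
switches_in_stack = 9
downlinks_per_switch = 45            # one per cartridge slot
loop_ports_per_io_module = 2         # "a simple loop consumes two ports per I/O module"

server_facing_ports = switches_in_stack * downlinks_per_switch          # 405 ports
loop_ports_used = switches_in_stack * loop_ports_per_io_module          # 18 uplink ports

print("server-facing ports in the stack:", server_facing_ports)
print("uplink ports consumed by the stacking loop:", loop_ports_used)
```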

          Management

          The HP Moonshot System relies on a federated iLO system. Federation requires the physical or logical sharing of compute, storage or networking resources within the Moonshot 1500 Chassis. The chassis shares four individual iLO4 ASICs [Application-Specific Integrated Circuits] in the CM module with high-speed connections to the management network through a single management port uplink.

          The CM provides a single point of management for up to 45 cartridges, and all other components in the Moonshot 1500 Chassis, using Ethernet connections to the internal private network. Each hot pluggable component includes a resident satellite controller. The CM and satellite controllers use data structures embedded in non-volatile memory for discovery, monitoring, and control of each component.

          HP Moonshot 1500 Chassis Management module

          The CM includes four iLO processors sharing the management responsibility for the 45 cartridges, the power and cooling processor, the two network switches, and Moonshot 1500 Chassis management. We’ve federated the iLO system functionality by assigning certain iLO processors responsibility for managing certain hardware interfaces. We balanced the workload among the three cartridge zones in the chassis (physically separated by the network switches), and dedicated one iLO processor to manage chassis hardware and the switches. Communication between the CM and the satellite controllers takes place over an internal private Ethernet network. This eliminates the requirement for a large number of IP addresses on the production network.

          The iLO subsystem includes an intelligent microprocessor, separate memory, and a dedicated network interface. iLO uses the management logic on each cartridge and module, and up to 1,500 sensors within the Moonshot 1500 Chassis, to monitor component thermal conditions. This design makes iLO independent of the host servers and their operating systems.

          iLO monitors all key Moonshot components. The CM user interfaces and APIs include a Command-Line Interface (CLI) and Intelligent Platform Management Interface (IPMI) support. These provide the primary gateway for node management, aggregation and inventory. A text-based interface is available for power capping, firmware management and aggregation, asset management and deployment. Alerts are generated directly from iLO, regardless of the host operating system, even if no host operating system is installed. Using iLO, you can do the following (a small scripting sketch follows the list below):

          • Securely and remotely control the power state of the Moonshot cartridges (text-based Remote Console)
          • Obtain access to each and all serial ports using a secure Virtual Serial Port (VSP) session
          • Obtain asset and hardware specific information (MAC Addresses, SN)
          • Control cartridge boot configuration
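Since the CM exposes standard IPMI, routine monitoring can be scripted against it; below is a minimal, hedged sketch that shells out to the common ipmitool utility. The host name and credentials are placeholders, and the addressing of individual cartridges behind the Chassis Manager is not shown (consult the Moonshot CM documentation for the real addressing scheme):

```python
# Minimal sketch: scripting routine IPMI queries against the chassis manager
# by shelling out to the standard ipmitool utility. Host and credentials are
# placeholders; addressing individual cartridges behind the CM is not shown.
import subprocess

CM_HOST = "moonshot-cm.example.com"
USER = "Administrator"
PASSWORD = "password"

def ipmi(*args):
    """Run one ipmitool command against the CM over IPMI-over-LAN (lanplus)."""
    cmd = ["ipmitool", "-I", "lanplus",
           "-H", CM_HOST, "-U", USER, "-P", PASSWORD] + list(args)
    return subprocess.check_output(cmd).decode()

# Generic queries that any IPMI 2.0 management controller understands:
print(ipmi("chassis", "power", "status"))
print(ipmi("sdr", "list"))       # sensor data records: temperatures, fans, power
```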

          OS deployment and support

          The Moonshot System hosts multiple individual systems and network switches. Unlike other HP ProLiant BladeSystem-class servers, Moonshot cartridges provide OS installation only through network installation, with console access provided by an integrated Virtual Serial Port to each server. Network installation is performed in a manner similar to other HP ProLiant or standard x86 servers, with the only required modification being the specification of the serial console instead of a standard VGA display (described below).

          Linux Distributions
          The initial release of the HP Moonshot System is compatible with these versions of Linux:
          • Red Hat Enterprise Linux 6.4
          • SuSE SLES 11SP2
          • Ubuntu 12.04
          HP Insight Cluster Management Utility
          The HP Insight Cluster Management Utility (CMU) is well suited for performing network installations, image capture and deploy, and ongoing management of large numbers of servers such as the density provided by the Moonshot 1500 Chassis. If you are using CMU, the directions included in the following “Setting up an installation server” section are not required, and you should instead refer to the CMU documentation.
          The CMU is optional and basic network installation of the OS may be performed using a standard PXE-based installation server.
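To illustrate the serial-console network installation described above, here is a minimal sketch (my own illustration, not HP's or CMU's tooling) of generating a per-node PXELINUX entry that redirects the installer console to the first serial port; paths, kernel/initrd names and the baud rate are assumptions:

```python
# Illustrative only: writing a per-host PXELINUX entry that redirects the
# installer console to the first serial port, as headless Moonshot servers
# require. Paths, kernel/initrd names and the baud rate are assumptions.
PXE_ENTRY = """\
SERIAL 0 115200
DEFAULT moonshot-install

LABEL moonshot-install
    KERNEL ubuntu-installer/amd64/linux
    APPEND initrd=ubuntu-installer/amd64/initrd.gz console=ttyS0,115200
"""

def write_pxe_config(mac_address, tftp_root="/var/lib/tftpboot"):
    """PXELINUX looks for per-host configs named 01-<mac-with-dashes>."""
    name = "01-" + mac_address.lower().replace(":", "-")
    path = "{0}/pxelinux.cfg/{1}".format(tftp_root, name)
    with open(path, "w") as f:
        f.write(PXE_ENTRY)
    return path

# The MAC address would come from the iLO asset inventory, e.g.:
# write_pxe_config("9c:b6:54:aa:bb:cc")
```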

          Conclusion

          The HP Moonshot System addresses the needs of data centers deploying servers at a massive scale for the new era of IoT. Industry sources estimate that lightweight web serving and analytics workloads will equal 14% of the x86 server market by 2015. The HP Moonshot System changes the current computing paradigm with an innovative completely hot pluggable architecture that increases the value of your investment and reduces TCO. You get a significant reduction in power usage, hardware costs, and use of space. You’ll see simplification in the areas of network switches, cabling, and management. Moonshot System’s use of shared hot pluggable infrastructure includes power supplies and fans. The HP Moonshot 1500 Chassis Management module, with proven HP iLO management processors, gives you detailed reporting on all platform components while the power and cooling controller manages the N+1 fan and power supply configurations. Dual network switches and I/O modules increase Moonshot’s resiliency and flexibility, allowing you to stack HP Moonshot Switch Modules. The Moonshot System is the first software defined, application-optimized server platform in the industry. Look for a growing library of software defined servers from multiple HP partners targeting specific IoT workloads compatible with emerging web, cloud, and massive scale environments, as well as analytics and telecommunications.

          Now we have two additional cartridges: the m300 and the m700

          Moonshot ProLiant m300 Server Cartridge Overview [Janet Bartleson YouTube channel, Nov 27, 2013]

          @SC13, HP Product Manager Thai Nguyen gives us a quick overview of the ProLiant m300 Server Cartridge.

          A new big little HP Moonshot server cartridge is shipping!! [The HP Blog Hub, Dec 10, 2013]

          Guest blog written by Nigel Church, HP Servers

          We call it the HP ProLiant m300 Server Cartridge for the HP Moonshot System. This is the “big brother” of the current HP ProLiant Moonshot server cartridge, sporting the new Intel Atom “Avoton”, an eight-core processor running at 2.4GHz with 32GB memory [and 1 TB disk storage on the cartridge], delivering up to six times the energy efficiency and up to seven times more performance.

          Now, in just one Moonshot System with 45 ProLiant m300 Servers you have 360 cores, 1,440GB memory and up to 45TB of storage. For the right workloads, you can accomplish the same work using just 19% of the power of a traditional server!
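
          As a quick sanity check, the chassis-level totals follow directly from the per-cartridge figures quoted above (8 cores, 32 GB of RAM and up to 1 TB of disk per m300 cartridge, 45 cartridges per chassis). The short Python sketch below simply reproduces that arithmetic; the constants come from this post, not from any HP tool.

              # Per-cartridge figures for the ProLiant m300, as quoted above.
              CARTRIDGES_PER_CHASSIS = 45
              CORES_PER_CARTRIDGE = 8        # Atom C2750 "Avoton", 8 cores
              RAM_GB_PER_CARTRIDGE = 32
              DISK_TB_PER_CARTRIDGE = 1      # up to 1 TB of on-cartridge storage

              print("cores  :", CARTRIDGES_PER_CHASSIS * CORES_PER_CARTRIDGE)    # 360
              print("RAM GB :", CARTRIDGES_PER_CHASSIS * RAM_GB_PER_CARTRIDGE)   # 1440
              print("disk TB:", CARTRIDGES_PER_CHASSIS * DISK_TB_PER_CARTRIDGE)  # 45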

          What workloads can it support? If you have a growing web site serving dynamic content [note that for the first Atom-based server cartridge, static content was mentioned when describing the supported workloads] that currently runs on ageing traditional servers, you must take a look at Moonshot to save space and power and to prepare yourself for the future.

          If you’re attending HP Discover in Barcelona, come to the show floor and see HP Moonshot in action–or visit the HP Discover News & Social Buzz page and get the latest updates!  Otherwise, visit the HP ProLiant m300 Server Cartridge web page for more details on the newest Moonshot Cartridge.

          HP ProLiant m300 Server Cartridge [HP product page, Dec 11, 2013]

          Overview

          Are traditional servers more than you need for your scale-out big data, Web and content delivery network workloads? Are you paying for underutilized servers that use more and more space and energy? Companies running scale-out big data applications, serving web pages, images, videos, or downloads over the Internet often need to carry out simultaneous lightweight computing tasks over and over, at widely distributed locations. The HP ProLiant m300 Server Cartridge based on the Intel® Atom™ System on a Chip (SOC) delivers breakthrough performance and scale with up to 360 processor cores, 1,440 GB of memory and 45 TB of storage in a single Moonshot System.       

          Features

          A Platform for Big Data with NoSQL/NewSQL

          NoSQL/NewSQL on HP ProLiant m300 Server Cartridges delivers cost-effective, scalable performance for online transaction processing while maintaining the ACID (Atomicity, Consistency, Isolation, Durability) properties of traditional databases.
          NoSQL/NewSQL thrives on a distributed cluster of shared-nothing nodes like the HP ProLiant m300 Server Cartridges. SQL queries are split into query fragments and sent to the nodes that own the data, so these databases can scale nearly linearly as nodes are added, without central bottlenecks.
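
          To picture the shared-nothing idea in the simplest possible terms, here is a hypothetical sketch (not any particular NewSQL engine) of how a coordinator might route query fragments to the node that owns each key. The node names, the hash-based ownership rule and the run_fragment helper are all assumptions made for illustration; real systems layer replication, query planning and consensus on top of this.

              import hashlib

              # Hypothetical shared-nothing cluster: each "node" owns a slice of the key space.
              NODES = ["m300-node-01", "m300-node-02", "m300-node-03", "m300-node-04"]

              def owner(key: str) -> str:
                  """Pick the node that owns a key by hashing it (modulo for brevity)."""
                  digest = int(hashlib.sha1(key.encode()).hexdigest(), 16)
                  return NODES[digest % len(NODES)]

              def run_fragment(node: str, fragment: str) -> str:
                  """Placeholder for shipping a query fragment to a node and returning its result."""
                  return f"{node} executed: {fragment}"

              def route_query(keys: list[str], fragment_template: str) -> list[str]:
                  """Split a query into per-key fragments and send each to the owning node."""
                  return [run_fragment(owner(k), fragment_template.format(key=k)) for k in keys]

              if __name__ == "__main__":
                  for result in route_query(["user:42", "user:77"],
                                            "SELECT * FROM orders WHERE user_id = '{key}'"):
                      print(result)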

          Scale-out Platform for Your Web Needs

          Companies need the scalability of the HP ProLiant m300 Server Cartridge to serve web pages, including image and video downloads while carrying out simultaneous lightweight computing tasks over and over, at widely distributed locations.
          For Web workloads, a platform based on the HP ProLiant m300 Server Cartridge means you don’t waste energy, space, and money on a high-end server when a low-cost density-optimized server can handle the job.

          Content Delivery Anytime from Any Device

          The m300 Server Cartridge provides high-speed, efficient transcoding of media streams to match specific user devices. This simplifies content management: instead of storing every variant, you keep a smaller library and transcode on demand for each device's characteristics.
          Using less energy and space at a lower cost than traditional servers, the compact m300 Server Cartridge uses Intel Atom-based SoCs to quickly deliver web content to a variety of mobile devices.
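
          As a concrete, hedged example of what "transcode on demand for specific device characteristics" can look like, the sketch below maps a requested device class to encoder settings and shells out to ffmpeg. The device profiles and bitrates are made up for illustration, and ffmpeg is assumed to be installed; this is not HP's or Intel's tooling, just a minimal way to picture the workload these cartridges target.

              import subprocess

              # Illustrative device profiles: output resolution and video bitrate per device class.
              PROFILES = {
                  "phone":   {"scale": "640:-2",  "v_bitrate": "800k"},
                  "tablet":  {"scale": "1280:-2", "v_bitrate": "2000k"},
                  "desktop": {"scale": "1920:-2", "v_bitrate": "4500k"},
              }

              def transcode(src: str, dst: str, device: str) -> None:
                  """Transcode one source file for a device class (assumes ffmpeg is on PATH)."""
                  p = PROFILES[device]
                  cmd = [
                      "ffmpeg", "-y", "-i", src,
                      "-vf", f"scale={p['scale']}",
                      "-c:v", "libx264", "-b:v", p["v_bitrate"],
                      "-c:a", "aac",
                      dst,
                  ]
                  subprocess.run(cmd, check=True)

              if __name__ == "__main__":
                  # Hypothetical on-demand request: a phone asks for a clip not yet in the cache.
                  transcode("library/clip-1080p.mp4", "cache/clip-phone.mp4", "phone")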

          System Features

          Compute: Intel® Atom™ Processor C2750, 2.4 GHz

          Memory: DDR3 PC3-12800 SDRAM (1600 MHz); Four (4) SODIMM slots; 32GB (4x8GB)

          Storage: one (1) SFF drive per cartridge: 500 GB HDD, 1 TB HDD, or 240 GB SSD

          Networking: (Internal) dual port 1GbE per CPU; HP Moonshot 45G Switch Module Kit; HP Moonshot 6SFP Uplink Module Kit

          Enclosure: Moonshot 1500 Chassis

          Warranty: 1 year

          Intel® Atom™ Processor C2750 (4M Cache, 2.40 GHz) [Intel product page, Dec 3, 2013]

          SPECIFICATIONS

          Essentials
          • Status: Launched
          • Launch Date: Q3’13
          • Processor Number: C2750
          • # of Cores: 8
          • # of Threads: 8
          • Clock Speed: 2.4 GHz
          • Max Turbo Frequency: 2.6 GHz
          • Cache: 4 MB
          • Instruction Set: 64-bit
          • Embedded Options Available: No
          • Lithography: 22 nm
          • Max TDP: 20 W
          • Recommended Customer Price: TRAY: $171.00

          Memory Specifications
          • Max Memory Size (dependent on memory type): 64 GB
          • Memory Types: DDR3/DDR3L 1600
          • # of Memory Channels: 2
          • Max Memory Bandwidth: 25.6 GB/s
          • Physical Address Extensions: 36-bit
          • ECC Memory Supported: Yes

          Expansion Options
          • PCI Express Revision: 2.0
          • PCI Express Configurations: x1, x2, x4, x8, x16
          • Max # of PCI Express Lanes: 16

          I/O Specifications
          • USB Revision: 2.0
          • # of USB Ports: 4
          • Total # of SATA Ports: 6
          • Integrated LAN: 4x 2.5 GbE
          • UART: 2
          • Max # of SATA 6.0 Gb/s Ports: 2

          Package Specifications
          • TCASE: 97°C
          • Package Size: 34 mm x 28 mm
          • Sockets Supported: FCBGA1283
          • Low Halogen Options Available: See MDDS

          Advanced Technologies
          • Intel® Turbo Boost Technology: 2.0
          • Intel® Virtualization Technology (VT-x): Yes

          Intel® Data Protection Technology
          • AES New Instructions: Yes

          HP’s Moonshot and AMD are taking cloud computing to a whole new level
          [AMD YouTube channel, published on Dec 4, 2013]

          Learn more about AMD and HP cloud computing: http://bit.ly/HP_and_AMD  At APU13, HP's Scott Herbel, Worldwide Product Marketing Manager, shows off the Moonshot chassis, which holds up to 45 AMD server cartridges. [His presentation was at AMD Developer Summit – APU13, Nov 13, Wed, 4:00 – 4:45, CC-4150: Scott Herbel, HP, “HP Moonshot System + AMD's Opteron X2150 = develop anything, anywhere with hosted desktops”]

          ProLiant m700 Server Cartridge in HP Moonshot Overview [Janet Bartleson YouTube channel, Dec 9, 2013]

          Product manager Scott Herbel [http://www.linkedin.com/in/scottherbel: Worldwide Product Marketing Manager, Moonshot at Hewlett-Packard since May 2010] gives an overview of the HP ProLiant m700 Server Cartridge in the HP Moonshot System.

          HP ProLiant m700 Server Cartridge [HP product page, Dec 11, 2013]

          Overview

          Looking for a cost-effective solution for hosted desktop infrastructure, mobile gaming or cloud multi-media workloads? The HP ProLiant m700 Server Cartridge in a Moonshot 1500 Chassis offers lower cost (price per seat), simplified systems management and user support, vastly improved system/data security, and efficient systems resource use for your hosted desktop infrastructure (HDI) and cloud multi-media workloads. Each m700 Server Cartridge has four servers, each with an AMD Opteron™ X2150 APU with fully-integrated graphics processing and CPU. The m700 Server Cartridge delivers outstanding compute density and price/performance for cloud multi-media workloads.
          You can power mobile games, live and on-demand streaming media, and other web content, objects, or applications.

          Features

          Hosted Desktop Infrastructure (HDI) Solution with Power and Scalability

          The centralized nature of hosting desktops on the HP ProLiant m700 Server Cartridge provides lower cost (price per seat), simplified system management and user support, vastly improved system/data security, and efficient system resource use.
          Each cartridge has four AMD-processor-based servers. Each server contains the AMD Opteron™ X2150 APU with graphics processing and CPU.
          The overall density means that you can cost-effectively have 180 servers in less than 5U of rack space.
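
          For orientation, the 180-server figure is just the cartridge math: 45 cartridge slots per Moonshot 1500 chassis and four Opteron X2150 SoCs per m700 cartridge. The sketch below reproduces that arithmetic; the 4.3U chassis height used for the servers-per-rack-unit line is an assumption taken from HP's Moonshot 1500 materials (consistent with "less than 5U" above), not a number stated in this post.

              # Density math for the ProLiant m700 in a Moonshot 1500 chassis.
              CARTRIDGE_SLOTS = 45          # cartridges per Moonshot 1500 chassis
              SOCS_PER_M700 = 4             # Opteron X2150 SoCs (servers) per m700 cartridge
              RAM_GB_PER_CARTRIDGE = 32     # 8 GB per SoC
              CHASSIS_HEIGHT_U = 4.3        # assumption: published Moonshot 1500 height

              servers = CARTRIDGE_SLOTS * SOCS_PER_M700
              print("servers per chassis:", servers)                               # 180
              print("RAM per SoC (GB)   :", RAM_GB_PER_CARTRIDGE / SOCS_PER_M700)  # 8.0
              print("servers per rack U :", round(servers / CHASSIS_HEIGHT_U, 1))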

          Mobile Content and Gaming Any Time from Any Device

          The HP ProLiant m700 Server Cartridge excels at powering graphics-intensive content delivery such as hosted videos and mobile games.
          The cartridge provides high-speed, efficient transcoding of source media streams to match specific user devices. This allows efficient management of content by reducing library size and transcoding closer to the customer, on demand, for specific device characteristics.
          Using less energy and space at a lower cost than traditional servers, the m700 Server Cartridge has four AMD Opteron X2150-based servers, each with integrated graphics processing, to quickly deliver mobile games to your device, wherever you are.

          System features

          Compute: AMD Opteron™ X2150 APU, 1.5 GHz, with AMD Radeon™ HD 8000 graphics

          Memory: DDR3 PC3-12800 SDRAM (1600 MHz); Four (4) SODIMM slots; 32GB (8GB per SoC)

          Storage: 4 x 32 GB iSSD (1 per SoC)

          Networking: (Internal) BCM5720 dual-port 1GbE per CPU; HP Moonshot-180G Switch Module; HP Moonshot-4QSFP+ Uplink Module

          Enclosure: Moonshot 1500 Chassis

          Warranty: 1 year

          AMD Opteron™ X2150 APU [AMD product page, May 29, 2013]

          Introducing the World’s First Server-class x86 APU SoC

          Specification

          image

          Features

          • 4 energy-efficient x86 cores, codenamed “Jaguar” | Function: optimizes x86 performance/watt for microservers | Benefit: helps enable low datacenter TCO
          • Flexible TDP | Function: lets users control their own power profile by adjusting CPU and GPU frequencies in the BIOS to match application needs (GPU integrated in X2150 only) | Benefit: gives users more control over workload performance and power consumption
          • Integrated I/O | Function: integrates legacy northbridge and southbridge functionality directly on the processor | Benefit: smaller footprint enables dense microserver designs
          • Core, northbridge and memory P-states | Function: dynamically adjusts performance levels based on application requirements | Benefit: helps reduce power consumption
          Server Infrastructure support

          • DDR3 memory with ECC support | Function: high-speed, highly reliable server-class memory | Benefit: helps reduce server failures due to memory
          • Integrated I/O | Function: integrates PCIe Gen2, SATA 2/3, USB 2.0 and USB 3.0 functionality onto the processor | Benefit: enables enterprise-class functionality in a single-chip solution
          • Server processor reliability | Function: the processor undergoes a back-end test flow to ensure proper quality | Benefit: ensures product quality matches that of other server-class products for greater reliability
          Integrated Graphics

          • Graphics Core Next architecture with AMD Radeon™ HD 8000 Series graphics | Function: provides high-quality graphics capabilities in a server SoC | Benefit: outstanding performance in media-oriented workloads such as remote desktop, online gaming and imaging
          • Display Controller Engine | Function: allows for VGA and HDMI display output | Benefit: helps reduce cost by eliminating the need for add-on display cards
          • Unified Video Decoder 4.2 | Function: dedicated hardware video-decoding block | Benefit: helps enable a near-native experience in remote desktop applications
          • Video Compression Engine 2.0 | Function: hardware-assisted encoding of HD video streams | Benefit: helps enable a near-native experience in remote desktop applications

          Citrix hosted desktops–powered by HP Moonshot [The HP Blog Hub, Dec 10, 2013]

          Written by Citrix Guest Blogger Kevin Strohmeyer, Director Product Marketing, Citrix

          Veterans of server-based computing and VDI are all too familiar with the complexities of buying and deploying desktop virtualization. Great strides have been made to simplify the sizing and configuration of desktop virtualization infrastructure, but ultimately, when you build and deliver shared resources, you should carefully consider how those resources will be used and decide how much excess capacity you need to ensure peak usage can be supported.

          The distributed nature of PCs, coupled with management challenges of patching and updates plus the vulnerability of unsecured, sensitive data has left IT looking for a better answer. This brings us right back to centralized desktop virtualization.

          The HP ConvergedSystem 100 for Hosted Desktops with Citrix XenDesktop is a new and unique type of desktop virtualization. Instead of just using a hypervisor to abstract the OS from the hardware, XenDesktop streams an OS directly to bare-metal microsystems, each with dedicated CPU, memory and graphics, all neatly arranged in a rack-mount chassis. This eliminates the overhead and complexity of abstracting the hardware and managing VMs, and it also eliminates the system overhead required to share those resources, leaving more power for the desktop. All in all, the solution presents a very interesting alternative to VDI.

          image

          The HP ConvergedSystem 100 for Hosted Desktops is an all-in-one compute, storage and networking system based on HP Moonshot, delivering 180 desktops for Citrix XenDesktop environments. The system provides an independent, remote PC experience with the business graphics and multimedia performance essential for mainstream knowledge workers, all while delivering up to a 44% improvement in TCO and up to 63% lower power requirements. Other benefits include:

          • Predictable, fixed cost per user reduces OPEX
          • Independent compute and graphics deliver consistent end-user performance
          • Deploy with Citrix XenDesktop in approximately 2 hours

          At the same time, this solution is a great example of the power of Citrix FlexCast technology. That power is reflected in the way the FlexCast management infrastructure is designed to support innovative solutions like this one, which leverage common image management, profile management and app virtualization in a common delivery architecture. The unique Citrix Provisioning Services (PVS) technology, which enables bare-metal and just-in-time OS provisioning, provides all the benefits of VDI without hypervisor management.

          What makes this solution most interesting is the ease of purchasing and deployment. There is no sizing work required to figure out how much hardware or storage to purchase: you simply buy as many systems as you need and rack and stack as you grow, from the first 180 desktops on up. That alone could make the solution very attractive to organizations that want the security and manageability of centralized virtual desktops but want to avoid managing virtual infrastructure.

          If you are attending HP Discover in Barcelona this week, come by to see the ConvergedSystem 100 for Hosted Desktops in the Discover Zone. 

          Learn more about the new HP ConvergedSystem 100 for Hosted Desktops.

          Offering a no compromise PC experience [The HP Blog Hub, Dec 9, 2013]

          By HP guest blogger Dan Nordhues, HP Client Virtualization Worldwide Manager

          Poor performance is one of the major reasons users reject VDI or remote desktop implementations. While all your workers may sit at PCs, each user population has unique needs that dictate requirements. For example, task workers need only a couple of applications to do their jobs, but workstation-class users require accelerated graphics capabilities to handle workloads like CAD/CAM and Oil and Gas applications.

          Right in the middle of the PC-user continuum sits the mainstream knowledge worker, the largest segment of the PC user population, with unique requirements of their own. Meeting the needs of these users is the goal of the HP ConvergedSystem 100 for Hosted Desktops powered by HP Moonshot, a next-generation solution engineered specifically for today's knowledge workers while also meeting your requirements for simplicity, lower deployment cost, and energy efficiency.

          image

          HP ConvergedSystem 100 for Hosted Desktops provides an all-in-one compute, storage, and networking system that delivers desktops for Citrix XenDesktop non-persistent users. Provide your mainstream users a dedicated PC experience with the business graphics and multimedia performance they need, while reducing TCO by up to 44 percent and lowering power requirements up to 63 percent.

          If you plan to attend HP Discover Barcelona 2013, you can take advantage of great hands-on experience with HP Converged Systems.  And check out these sessions for more information on HP’s client virtualization portfolio:

          • BB2391 – Architecting client virtualization for task worker to workstation-class users  10 December 10-11am
          • DT3108 – Moonshot-hosted desktop infrastructure: an innovative way for hosting end-user desktops  11 December, 11:30-12
          • DT3177 – Moonshot-hosted desktop infrastructure: an innovative way for hosting end-user desktops, Part II   12 December, 11:30-12

          Learn more about the new HP ConvergedSystem 100 for Hosted Desktops.