
Category Archives: Cloud Computing strategy

Nokia: Continued moderate progress with Lumia, urgent Asha Touch refresh and new innovations to come against the onslaught of unbranded Android and forked Android players in China and India

Nokia Corporation’s CEO Discusses Q1 2013 Results – Earnings Call Transcript [Seeking Alpha, April 18, 2013]

… At the highest level, I am pleased that in Q1 2013 Nokia Group achieved underlying operating profitability for the third quarter in a row. In a moment, I will share my perspective on Nokia’s Q1 performance. However, I wanted to first note that we believe this quarter further underscored that Nokia and other industry participants continue to operate in one of the most exciting and fast-moving business environments in the world today.
Compared to a year ago, a lot has changed in our industry, and I wanted to share some of the trends we’re seeing. For example, the distance between the various Android participants seems starker than ever before as the dominance of one hardware vendor becomes more visible. Additionally, unbranded Android and forked Android players continue to emerge from China and India, creating new dynamics both within and increasingly outside of Asia. With this growth in low-priced, fragmented versions of Android, the Android experience is becoming inconsistent across the lower-end price range.
In February, Mobile World Congress highlighted the growth in startup alternative platforms with many new entrants placing bets on next generation technologies like HTML5. While we have not yet seen one of these alternative platforms gain broad scale we should not underestimate what could happen if a dominant Android provider shifts some of its focus to an alternative platform.
We also saw new attempts to disrupt existing business models. Whether it is the new Facebook Home forking the Android experience or Amazon providing a differentiated tablet that forks the Android stack, we see leading technology companies take deliberate steps to change Android and possibly disrupt our industry. There are some patterns of change that seem inevitable. For example, consumers are expecting their digital lives to be more and more mobile, as evidenced by recent statistics about the shift towards mobility at the expense of less mobile PC experiences. Consumers are also increasingly discerning about the capabilities and new experiences that attract their attention. They are less interested in counting cores and pixel density, and more interested in experiences that are truly innovative.
This constant pattern of change in our industry is an opportunity. We believe we can move through the industry fragmentation and churn with an unrelenting focus on executing our strategy. Thus, we remain focused on improving the competitiveness of our products, effectively managing our costs and moving with urgency. …

image

People are responding positively to the new innovation throughout the Lumia portfolio: imaging, design and navigation are capturing attention among reviewers, operators, retail associates and ultimately consumers. We believe we have increased the competitiveness of our Smart Devices, and as a result Lumia is clearly making progress. We’re pleased that today Lumia is even outshipping the iPhone in countries like Argentina, India, Poland, Ukraine and of course in our home country of Finland.
Importantly, the positive consumer reaction to the innovation and differentiation in Lumia is starting to come through in our numbers. We are encouraged by the financial performance of our newer Lumia devices based on Windows Phone 8, which generated not only solid growth, but also a gross margin in Q1 that was somewhat above the Smart Devices average.
At the same time, we recognize that we must continue to increase our sales and improve our retail execution for Lumia. For example, in the United States, securing what operators call hero status, or the top spot at the point of sale, is critically important, because it attracts premium subsidies and additional marketing investment.

Later this quarter, a new Lumia device is anticipated to have hero status with a leading U.S. operator, an event which will mark the beginning of a season of new product introductions. Additionally, Nokia, Microsoft and operators have committed to increase the global Windows Phone marketing dollars towards Lumia. Together with Microsoft we are working on major marketing campaigns, training more retail associates, improving how we leverage operator marketing that is available to us, and seeding more live devices to create a more engaging point of sale experience. Overall, we are very pleased with our progress around Lumia.

… indeed we plan a refresh of elements of our mobile phones portfolio, some of which has been announced and is just landing in the market. For example, at the very lowest price points there is the Nokia 105: when you look at the volumes for Q1, some of the significant movement in volume levels came at the end of life of the predecessor to the 105, and now we’re just entering the market with a new product in that space. And, of course, we’ve also signaled that in the very near term you should expect to see a freshening of the Asha product line. As you know, we’re roughly 9 months or so into the Asha full touch line relative to when we began shipping it.

So, reasonable to expect that it’s due for freshening and we’re looking forward to that in the near-term.

With respect to the Lumia portfolio going into Q2, some of the principal drivers of additional volume relate to the newer products that are entering the market. The 720 and 520 are important in this, particularly the 520, which sits at a lower price point and is moving into markets where it is far more competitive than some of the hero products could be, except among people willing to pay top dollar for a device.

Nokia Corporation Q1 2013 Interim Report [April 18, 2013]

Nokia Group net sales in Q1 2013 were EUR 5.9 billion
– Devices & Services Q1 net sales decreased 25% quarter-on-quarter to EUR 2.9 billion.
– Lumia Q1 volumes increased 27% quarter-on-quarter to 5.6 million units, reflecting increasing momentum.
– Mobile Phones Q1 volumes decreased 30% quarter-on-quarter to 55.8 million units, reflecting competitive industry dynamics and an estimated higher than normal seasonal decline in the market addressable by Mobile Phones.
– Nokia Siemens Networks net sales decreased 30% quarter-on-quarter to EUR 2.8 billion, reflecting industry seasonality.
Nokia Group net cash higher quarter-on-quarter
– Nokia Group ends first quarter 2013 with a strong balance sheet and solid cash position. Gross cash was EUR 10.1 billion and net cash was EUR 4.5 billion.
– Nokia Group strengthened its net cash position by approximately EUR 120 million sequentially. Nokia Siemens Networks contributed approximately EUR 210 million to the Nokia Group net cash position.
NOKIA OUTLOOK
– Nokia expects its Devices & Services non-IFRS operating margin in the second quarter 2013 to be approximately negative 2 percent, plus or minus four percentage points. This outlook is based on Nokia’s expectations regarding a number of factors, including:
>> competitive industry dynamics continuing to negatively affect the Mobile Phones and Smart Devices business units;
>> consumer demand for our products, particularly for our Mobile Phones products;
>> continued ramp up for our Lumia smartphones;
>> expected increases in Devices & Services’ operating expenses; and
>> the macroeconomic environment.
– In the second quarter 2013, supported by the wider availability of recently announced Lumia products, Nokia expects the sequential growth in Lumia unit volumes to be higher than the 27% sequential growth in the first quarter 2013.
– Nokia continues to target to reduce its Devices & Services non-IFRS operating expenses to an annualized run rate of approximately EUR 3.0 billion by the end of 2013.
– Nokia expects HERE’s non-IFRS operating margin in the second quarter 2013 to be negative primarily due to lower recognized revenue from internal sales.
– Nokia and Nokia Siemens Networks expect Nokia Siemens Networks non-IFRS operating margin in the second quarter 2013 to be approximately positive 5 percent, plus or minus four percentage points. This outlook is based on Nokia Siemens Networks’ expectations regarding a number of factors, including:
>> competitive industry dynamics;
>> product and regional mix; and
>> the macroeconomic environment.
– Nokia and Nokia Siemens Networks continue to target to reduce Nokia Siemens Networks’ non-IFRS annualized operating expenses and production overheads by more than EUR 1 billion by the end of 2013, compared to the end of 2011.
In the first quarter 2013, we received a quarterly platform support payment of USD 250 million (approximately EUR 188 million) from Microsoft. Our agreement with Microsoft includes platform support payments from Microsoft to us as well as software royalty payments from us to Microsoft. Under the terms of the agreement governing the platform support payments, the amount of each quarterly platform support payment is USD 250 million. We have a competitive software royalty structure, which includes annual minimum software royalty commitments that vary over the life of the agreement. Software royalty payments, with minimum commitments are paid quarterly. Over the life of the agreement, both the platform support payments and the minimum software royalty commitments are expected to measure in the billions of US dollars. Over the life of the agreement the total amount of the platform support payments is expected to slightly exceed the total amount of the minimum software royalty commitment payments. In accordance with the terms of the agreement, the platform support payments and annual minimum software royalty commitment payments continue for a corresponding period of time.
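The payment mechanics above are simple arithmetic: a fixed USD 250 million platform support payment each quarter accumulates at USD 1 billion per year over the life of the agreement. A minimal sketch (only the USD 250 million quarterly figure comes from the report; everything else is illustrative):

```python
# The quarterly platform support payment stated in the report, in USD millions.
QUARTERLY_PAYMENT_USD_M = 250

def cumulative_support_payments(quarters: int) -> int:
    """Total platform support payments (USD millions) received after `quarters` quarters."""
    return QUARTERLY_PAYMENT_USD_M * quarters

# Four quarters of payments amount to USD 1,000 million, i.e. USD 1 billion per year.
print(cumulative_support_payments(4))   # 1000
```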
The following table sets forth the mobile device volumes for our Devices & Services business for the periods indicated, as well as the year-on-year and sequential growth rates, by geographic area.

DEVICES & SERVICES MOBILE DEVICE VOLUMES BY GEOGRAPHIC AREA

million units           Q1/2013   Q1/2012   YoY Change   Q4/2012   QoQ Change
Europe                     11.8      15.8        -25%       19.4        -39%
Middle East & Africa       15.5      21.4        -28%       21.8        -29%
Greater China               3.4       9.2        -63%        4.6        -26%
Asia-Pacific               23.1      26.1        -11%       28.7        -20%
North America               0.4       0.6        -33%        0.7        -43%
Latin America               7.7       9.6        -20%       11.1        -31%
Total                      61.9      82.7        -25%       86.3        -28%
On a year-on-year basis, net sales decreased in all regions except North America where the increase was primarily due to our Smart Devices business unit. The largest relative year-on-year decline in net sales was in Greater China followed by Europe and Middle East and Africa. In Greater China and Europe the net sales declines were primarily due to our Smart Devices business unit whereas in the Middle East and Africa the net sales decline was primarily due to our Mobile Phones business unit.
On a sequential basis, net sales decreased in all regions except Greater China where the increase was primarily due to our Smart Devices business unit. The largest relative sequential declines in net sales were in North America followed by Middle East and Africa and Europe. The sequential net sales decline in North America was primarily due to our Smart Devices business unit, whereas in Middle East and Africa and Europe the net sales declines were primarily due to our Mobile Phones business unit.
At constant currency Devices & Services’ net sales would have decreased 33% year-on-year and 23% sequentially.
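A constant-currency comparison strips out exchange-rate movements by restating current-period local-currency sales at the prior period's FX rates before comparing. A minimal sketch of the method; all sales figures and rates below are hypothetical, only the technique mirrors the "at constant currency" restatement used in the report:

```python
def constant_currency_growth(local_sales_now, rates_prior, eur_sales_prior):
    """Growth rate with current-period sales translated at prior-period FX rates."""
    restated = sum(amount * rates_prior[ccy] for ccy, amount in local_sales_now.items())
    return (restated - eur_sales_prior) / eur_sales_prior

local_sales_now = {"USD": 1000.0, "INR": 50000.0}   # hypothetical current local-currency sales
rates_prior = {"USD": 0.78, "INR": 0.015}           # hypothetical EUR per unit of each currency
eur_sales_prior = 2000.0                            # hypothetical prior-period EUR net sales

# Restated sales: 1000*0.78 + 50000*0.015 = 1530 EUR, a 23.5% sequential decline.
print(f"{constant_currency_growth(local_sales_now, rates_prior, eur_sales_prior):+.1%}")
```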

Volume
During the first quarter 2013 we shipped 55.8 million Mobile Phones units, of which 5.0 million were Asha full touch smartphones.
On a year-on-year basis, our Mobile Phones volumes in the first quarter 2013 were negatively affected by competitive industry dynamics, including intense smartphone competition at increasingly lower price points and intense competition at the low end of our product portfolio as well as an estimated higher than normal seasonal decline in the market addressable by Mobile Phones. Compared to the first quarter 2012, our Mobile Phones volumes declined across our portfolio, most notably for our non-full touch devices that we sell to our customers for above EUR 30. These declines were partially offset by sales volumes of Asha full touch smartphones in the first quarter 2013 that were not part of our portfolio in the first quarter 2012.
On a sequential basis, our Mobile Phones volumes in the first quarter 2013 were negatively affected by competitive industry dynamics, including intense competition at the low end of our product portfolio and smartphone competition at increasingly lower price points affecting the rest of our Mobile Phones portfolio, as well as estimated higher than normal seasonal decline in the market addressable by Mobile Phones. Compared to the fourth quarter 2012 our Mobile Phones volumes declined across our portfolio, most notably for lower priced devices that we sell to our customers for below EUR 30.
Asha full touch smartphones Q1 volumes decreased 46% quarter-on-quarter to 5.0 million units, reflecting intense competitive industry dynamics as well as lower seasonal demand.
During the first quarter 2013, our Mobile Phones channel inventory declined in absolute unit volumes.
Average Selling Price
The year-on-year decline in our Mobile Phones ASP in the first quarter 2013 was primarily due to general price erosion and an increased proportion of sales of lower priced devices, partially offset by a net positive impact related to foreign currency fluctuations.
The sequential decline in our Mobile Phones ASP in the first quarter 2013 was primarily due to general price erosion, a net negative impact related to foreign currency fluctuations and a higher proportion of sales of lower priced devices.
Gross Margin

The year-on-year decline in our Mobile Phones gross margin in the first quarter 2013 was primarily due to a negative product mix shift towards lower gross margin devices, as well as the net negative impact related to foreign currency fluctuations, partially offset by lower freight costs.
On a sequential basis, the increase in our Mobile Phones gross margin in the first quarter 2013 was primarily due to lower warranty costs, partially offset by higher price erosion than cost erosion and higher fixed costs per unit because of lower sales volumes.
Q1 OPERATING HIGHLIGHTS
DEVICES & SERVICES OPERATING HIGHLIGHTS

SMART DEVICES
– Nokia started shipping the Nokia Lumia 620, a compact smartphone with a colorful design that brings Windows Phone 8 to a more youthful audience.
– Nokia announced the Lumia 520, its most affordable Windows Phone 8 smartphone, delivering experiences normally found only in high-end smartphones, such as the same digital camera lenses found on the flagship Nokia Lumia 920, Nokia Music for free music out of the box and even offline, and the HERE location suite.
– Nokia announced and started shipping the Nokia Lumia 720, a midrange Windows Phone 8 smartphone with high-end camera performance featuring a large f/1.9 aperture and exclusive Carl Zeiss optics designed to deliver clear pictures day and night. The sleek and stylish smartphone comes with the latest high-end Nokia Lumia experiences, including Nokia Music, the HERE location suite, and the option to add wireless charging with a snap-on wireless charging cover.
– Nokia’s Lumia range of smartphones continued to attract businesses, including Foxtons, London’s leading estate agent, which has chosen the Nokia Lumia 820 as its business smartphone; Mall of America, the United States’ largest retail and entertainment complex, which is switching from BlackBerry to the Nokia Lumia 920 because of the tight integration with Microsoft services and built-in Microsoft Office suite; and The Coca-Cola Company, whose sales associates in Vietnam and Cambodia are using Nokia Lumia smartphones for order processing, equipment validation and market execution improvement.
– The Windows Phone Store continued to strengthen in terms of the quantity and quality of applications. Windows Phone offers more than 135 000 applications and games. Key new applications that arrived in the Store during the quarter included Pandora, United Airlines and Temple Run.
MOBILE PHONES
– Nokia announced the Nokia 301, the most affordable Nokia device to offer video streaming; it also comes with new smart camera features inspired by the digital camera lenses on Nokia’s Lumia smartphones.
– Nokia announced the Nokia Asha 310, which provides Dual SIM and Wi-Fi in the same device, a first for Nokia smartphones.
– Nokia announced the Nokia 105, its most affordable phone to date, retailing at a recommended price of EUR 15. The Nokia 105 is the ideal device for the first-time phone buyer, featuring a bright color screen with clear menus and essentials like FM radio, multiple alarm clocks, speaking clock and flashlight. The dust- and splash-proof, pillowed keymat and battery life of up to 35 days also make it ideal for people in search of a reliable back-up phone.     
HERE OPERATING HIGHLIGHTS
In the first quarter 2013, HERE continued to strengthen its offering on Nokia’s Lumia range as well as broaden the experiences available across the Windows Phone 8 ecosystem:
– HERE further integrated its location-based experiences to enable people to seamlessly transition from driving to walking to public transit thanks to improved app-to-app linking and syncing of favorites from here.com to any HERE experience. HERE now also offers unique capabilities for users to customize their home screen as a personal location dashboard.
– With LiveSight technology, HERE introduced innovation that is aimed at changing the way people interact with maps, and their world. After first showcasing the technology in the HERE City Lens application, HERE also announced that it is extending LiveSight to HERE Maps. LiveSight recognizes what people see through their phone’s camera and layers that view with relevant, place-based information.
– HERE further strengthened the Windows Phone 8 ecosystem by making its suite of location-based experiences available for non-Nokia Windows Phone 8 devices. HERE offers HERE Drive, HERE Maps and HERE Transit to owners of non-Nokia Windows Phone 8 devices in Canada, France, Germany, Italy, Mexico, Spain, the United Kingdom and the United States. HERE also continued to broaden access to its maps content and the HERE Platform through several new partnerships, including:
– Mozilla, which, as a first collaborative step with HERE, now has HTML5-based HERE Maps for the new Firefox OS.
– Toyota Motor Europe, which selected the HERE platform’s Local Search for Automotive to power its next-generation Touch & Go navigation and infotainment systems. Local Search for Automotive is a solution designed specifically to fulfill the requirements of the automotive industry. This announcement marks a significant advancement in our longstanding partnership with Toyota and includes plans to collaborate with Nokia to study more services that leverage the HERE Location Platform.
– More than 10 companies decided to adopt the HERE Location platform, including Terra in Brazil and Tiscali and SEAT Pagine Gialle in Italy, demonstrating that the platform is gaining momentum across industries.
– Wetter.com, Europe’s largest German-language weather portal with 13 million unique visitors, is laying information from radar stations and satellite imagery on top of its HERE-powered map. For instance, this enables people to pinpoint where it is raining with great precision.
– Garmin became the first customer to launch Natural Guidance in the U.S. market, doing so at the Consumer Electronics Show. Natural Guidance provides directions in a more humanized way, using recognizable landmarks, buildings, traffic lights and stop signs, such as “turn right after the church” or “turn left at the traffic light.”
– HERE continued to strengthen its long-lasting relationships within the automotive industry, with a number of companies deciding that they would continue to benefit from our automotive-grade maps by selecting HERE as their partner for Map Updates. These included FujitsuTEN Australia Limited, KIA Europe, Mitsubishi Motor Corporation (MMC), Nissan Mexico, Subaru Canada and Volkswagen Europe.

Software defined server without Microsoft: HP Moonshot

Updates as of Dec 6, 2013 (8 months after the original post):

image

Martin Fink, CTO and Director of HP Labs, Hewlett-Packard Company [Oct 29, 2013]:

Cloud, social, Big Data and mobile: this is what we are referring to as the “New Style of IT” [when talking about the slide shown above]

Through the Telescope: 3 Minutes on HP Moonshot [HewlettPackardVideos YouTube channel, July 24, 2013]

Steven Hagler (Senior Director, HP Americas Moonshot) provides insight on Moonshot, why it’s right for the market, and what it means for your business. http://hp.com/go/moonshot

HIGHLY RECOMMENDED READING:
HP Offers Exclusive Peek Inside Impending Moonshot Servers [Enterprise Tech, Nov 26, 2013]: “The company is getting ready to launch a bunch of new server nodes for Moonshot in a few weeks”.
– So far, the simplest and most understandable information is provided in the Visual Configuration Moonshot diagram set: http://www.goldeneggs.fi/documents/GE-HP-MOONSHOT-A.pdf  The site also includes full visualisations of all x86 rack, desktop and blade servers.

From HP Launches Investment Solutions to Ease Organizations’ Transitions to “New Style of IT” [press release, Dec 6, 2013]

The HP accelerated migration program for cloud—helps …

The HP Pre-Provisioning Solution—lets …

New investment solutions for HP Moonshot servers and HP Converged Systems—provide customers and channel partners with quick access to the latest HP products through a simple, scalable and predictable monthly payment that aligns technology and financial requirements to business needs.   

Access the world’s first software defined server [HP offering, Nov 27, 2013]
With predictable and scalable monthly payments

HP Moonshot Financing
Cloud, Mobility, Security and Big Data require a different level of technology efficiency and scalability. Traditional systems may no longer be able to handle the increasing internet workloads with optimal performance. Having an investment strategy that gives you access to newer technology such as HP Moonshot allows you to meet the requirements for the New Style of IT.
A simple and flexible payment structure can help you access the latest technology on your terms.
Why leverage a predictable monthly payment?
• Provides financial flexibility to scale up your business
• May help mitigate the financial risk of your IT transformation
• Enables IT refresh cycles to keep up with the latest technology
• May help improve your cash flow
• Offers predictable monthly payments which can help you stay within budget
How does it work?
• Talk to your HP Sales Rep about acquiring HP Moonshot using a predictable monthly payment
• Expand your capacity easily with a simple add-on payment
• Add spare capacity needed for even greater agility
• Set your payment terms based on your business needs
• After an agreed term, you’ll be able to refresh your technology

From The HP Moonshot team provides answers to your questions about the datacenter of the future [The HP Blog Hub, as of Aug 29, 2013]

Q: WHAT IS THE FUNDAMENTAL IDEA BEHIND THE HP MOONSHOT SYSTEM?

A: The idea is simple: use energy-efficient CPUs attuned to a particular application to achieve radical power, space and cost savings. Stated another way: creating software defined servers for specific applications that run at scale.

Q: WHAT IS INNOVATIVE ABOUT THE HP MOONSHOT ARCHITECTURE?

A: The most innovative characteristic of HP Moonshot is the architecture. Everything that is a common resource in a traditional server has been converged into the chassis. The power, cooling, management, fabric, switches and uplinks are all shared across 45 hot-pluggable cartridges in a 4.3U chassis.

Q: EXPLAIN WHAT IS MEANT BY “SOFTWARE DEFINED” SERVER

A: Software defined servers achieve optimal useful work per watt by specializing for a given workload: matching a software application with available technology that can provide the most optimal performance. For example, the first Moonshot server is tuned for the web front-end LAMP (Linux/Apache/MySQL/PHP) stack. In the most extreme case of a future FPGA (Field Programmable Gate Array) cartridge, the hardware truly reflects the exact algorithm required.
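The matching idea in the answer above can be sketched as a simple trait-overlap lookup: describe what each workload needs and what each cartridge offers, then pick the cartridge with the best fit. Everything here is illustrative; the workload and cartridge names are invented (only the LAMP example comes from the text), not HP part numbers:

```python
# Hypothetical workload profiles: which hardware traits each workload benefits from.
WORKLOAD_PROFILES = {
    "web-frontend-lamp": {"low-power-x86", "high-density"},
    "dsp-media":         {"dsp-cores", "arm-cores"},
    "custom-algorithm":  {"fpga"},
}

# Hypothetical cartridge catalog: which traits each cartridge type provides.
CARTRIDGES = {
    "lamp-tuned":  {"low-power-x86", "high-density"},
    "arm-dsp":     {"dsp-cores", "arm-cores"},
    "fpga-future": {"fpga"},
}

def match_cartridge(workload: str) -> str:
    """Return the cartridge whose trait set best overlaps the workload's needs."""
    needs = WORKLOAD_PROFILES[workload]
    return max(CARTRIDGES, key=lambda c: len(CARTRIDGES[c] & needs))

print(match_cartridge("web-frontend-lamp"))  # lamp-tuned
```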

Q: DESCRIBE THE FABRIC THAT HAS BEEN INTEGRATED INTO THE CHASSIS

A: The HP Moonshot 1500 Chassis has been built for future SOC designs that will require a range of network capabilities including cartridge to cartridge interconnect. Additionally, different workloads will have a range of storage needs. 

There are four separate and independent fabrics that support a range of current and future capabilities: 8 lanes of Ethernet; a storage fabric (6Gb SATA) that enables shared storage among cartridges or storage expansion to a single cartridge; a dedicated iLO management network to manage all the servers as one; and a cluster fabric with point-to-point connectivity and low-latency interconnect between servers.
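The shared chassis resources described in this answer can be summarized as a small data model. This is purely an illustrative restatement of the text (not an HP API); the field values paraphrase the chassis description above:

```python
from dataclasses import dataclass, field

@dataclass
class Fabric:
    name: str
    purpose: str

@dataclass
class MoonshotChassis:
    # Per the text: a 4.3U chassis sharing power, cooling, management,
    # fabric, switches and uplinks across 45 hot-pluggable cartridges.
    form_factor: str = "4.3U"
    cartridge_slots: int = 45
    fabrics: list = field(default_factory=lambda: [
        Fabric("ethernet",   "8 lanes of Ethernet"),
        Fabric("storage",    "6Gb SATA; shared storage or expansion for a cartridge"),
        Fabric("management", "dedicated iLO network to manage all servers as one"),
        Fabric("cluster",    "point-to-point, low-latency server interconnect"),
    ])

chassis = MoonshotChassis()
print(len(chassis.fabrics))  # 4
```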

image

Martin Fink, CTO and Director of HP Labs, Hewlett-Packard Company [Oct 29, 2013]:

We’ve actually announced three ARM-based cartridges. These are available in our Discovery Labs now, and they’ll be shipping next year with new processor technology. [When talking about the slide shown above.]

Calxeda Midway in HP Moonshot [Janet Bartleson YouTube channel, Oct 28, 2013]

HP’s Paul Santeler encourages you to test Calxeda’s Midway-based Moonshot server cartridges in the HP Discovery Labs. http://www.hp.com/go/moonshot http://www.calxeda.com

For details about the latest and future Calxeda SoCs, see the closing part of this Dec 7 update

@SC13: HP Moonshot ProLiant m800 Server Cartridge with Texas Instruments [Janet Bartleson YouTube channel, Nov 26, 2013]

@SC13, Texas Instruments’ Arnon Friedmann shows the HP ProLiant m800 Server Cartridge with 4 66K2H12 Keystone II SoCs, each with 4 ARM Cortex-A15 cores and 8 C66x DSP cores, altogether providing 500 gigaflops of DSP performance and 8 gigabytes of data on the server cartridge. It’s lower power and lower cost than traditional servers.

For details about the latest Texas Instruments DSP+ARM SoCs, see the section after the Calxeda coverage in the closing part of this Dec 7 update

The New Style of IT & HP Moonshot: Keynote by HP’s Martin Fink at ARM TechCon ’13 [ARMflix YouTube channel, recorded on Oct 29, published on Nov 11, 2013]

Keynote Presentation: The New Style of IT
Speaker: Martin Fink, CTO and Director of HP Labs, Hewlett-Packard Company
It’s an exciting time to be in technology. The IT industry is at a major inflection point driven by four generation-defining trends: the cloud, social, Big Data, and mobile. These trends are forever changing how consumers and businesses communicate, collaborate, and access information. And to accommodate these changes, enterprises, governments and fast-growing companies desperately need a “New Style of IT.” Shaping the future of IT starts with a radically different approach to how we think about compute. For example, in servers, HP has a game-changing new category that requires 80% less space, uses 89% less energy, costs 77% less and is 97% less complex. There’s never been a better time to be part of the ecosystem and usher in the next generation of innovation.

From Big Data and the future of computing – A conversation with John Sontag [HP Enterprise 20/20 Blog, October 28, 2013]

20/20 Team: Where is HP today in terms of helping everyone become a data scientist?
John Sontag: For that to happen we need a set of tools that allow us to be data scientists in more than the ad hoc way I just described. These tools should let us operate productively and repeatably, using vocabulary that we can share – so that each of us doesn’t have to learn the same lessons over and over again. Currently at HP, we’re building a software tool set that is helping people find value in the data they’re already surrounded by. We have HAVEn for data management, which includes the Vertica data store, and Autonomy for analysis. For enterprise security we have ArcSight and ThreatCentral. We have our work around StoreOnce to compress things, and Express Query to allow us to consume data in huge volumes. Then we have hardware initiatives like Moonshot, which is bringing different kinds of accelerators to bear so we can actually change how fast – and how effectively – we can chew on data.
20/20 Team: And how is HP Labs helping shape where we are going?
John Sontag: One thing we’re doing on the software front is creating new ways to interrogate data in real time through an interface that doesn’t require you to be a computer scientist.  We’re also looking at how we present the answers you get in a way that brings attention to the things you most need to be aware of. And then we’re thinking about how to let people who don’t have massive compute resources at their disposal also become data scientists.
20/20 Team: What’s the answer to that?
John Sontag: For that, we need to rethink the nature of the computer itself. If Moonshot is helping us make computers smaller and less energy-hungry, then our work on memristors will allow us to collapse the old processor/memory/storage hierarchy, and put processing right next to the data. Next, our work on photonics will help collapse the communication fabric and bring these very large scales into closer proximity. That lets us combine systems in new and interesting ways. And then we’re thinking about how to package these re-imagined computers into boxes of different sizes that match the needs of everyone from the individual to the massive, multinational entity. On top of all that, we need to reduce costs – if we tried to process all the data that we’re predicting we’ll want to at today’s prices, we’d collapse the world economy – and we need to think about how we secure and manage that data, and how we deliver algorithms that let us transform it fast enough so that you, your colleagues, and partners across the world can conduct experiments on this data literally as fast as we can think them up.
About John Sontag:
John Sontag is vice president and director of systems research at HP Labs. The systems research organization is responsible for research in memristor, photonics, physical and system architectures, storing data at high volume, velocity and variety, and operating systems. Together with HP business units and partners, the team reaches from basic research to advanced development of key technologies.
With more than 30 years of experience at HP in systems and operating system design and research, Sontag has had a variety of leadership roles in the development of HP-UX on PA-RISC and IPF, including 64-bit systems, support for multiple input/output systems, multi-system availability and Symmetric Multi-Processing scaling for OLTP and web servers.
Sontag received a bachelor of science degree in electrical engineering from Carnegie Mellon University.

Meet the Innovators [HewlettPackardVideos YouTube channel, May 23, 2013]

Meet those behind the innovative technology that is HP Project Moonshot http://www.hp.com/go/moonshot

From Meet the innovators behind the design and development of Project Moonshot [The HP Blog Hub, June 6, 2013]

This video introduces you to key members of the HP team behind the innovative technology that fundamentally changes how hyperscale servers are built and operated, including:
• Chandrakant Patel – HP Senior Fellow and HP Labs Chief Engineer
• Paul Santeler  – Senior Vice President and General Manager of the HyperScale Business Unit
• Kelly Pracht – Moonshot Hardware Platform Manager, HyperScale Business Unit
• Dwight Barron – HP Fellow, Chief Technologist, HyperScale Business Unit

From Six IT technologies to watch [HP Enterprise 20/20 Blog, Sept 5, 2013]

1. Software-defined everything
Over the last couple of years we have heard a lot about software-defined networks (SDN) and, more recently, the software-defined data center (SDDC). There are fundamentally two ways to implement a cloud. Either you take the approach of the major public cloud providers, combining low-cost skinless servers with commodity storage, linked through cheap networking, and establish racks and racks of them. It’s probably the cheapest solution, but you have to implement all the management and optimization yourself. You can use software tools to do so, but you will have to develop the policies, the workflows and the automation.
Alternatively you can use what is becoming known as “converged infrastructure,” a term originally coined by HP, but now used by all our competitors. Servers, storage and networking are integrated in a single rack, or a series of interconnected ones, and the management and orchestration software included in the offering provides optimal use of the environment. You get increased flexibility and are able to respond faster to requests and opportunities.
We all know that different workloads require different characteristics. Infrastructures are typically implemented using general purpose configurations that have been optimized to address a very large variety of workloads. So, they do an average job for each. What if we could change the configuration automatically whenever the workload changes to ensure optimal usage of the infrastructure for each workload? This is precisely the concept of software defined environments. Configurations are no longer stored in the hardware, but adapted as and when required. Obviously this requires more advanced software that is capable of reconfiguring the resources.
A software-defined data center is described as a data center where the infrastructure is virtualized and also delivered as a service. Control of the data center is automated by software – meaning hardware configuration is maintained through intelligent software systems. Three core components comprise the SDDC: server virtualization, network virtualization and storage virtualization. It should be noted that some workloads still require physical systems (often referred to as bare metal), hence the importance of projects such as OpenStack’s Ironic, which could be described as a hypervisor for physical environments.
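The reconfigure-per-workload idea described above can be sketched in a few lines. This is a hypothetical illustration only, not any vendor's actual API: the profile names, resource fields and the `configure_for` helper are all invented for the example.

```python
# Minimal sketch of a software-defined environment: the configuration is
# held in software and applied whenever the workload changes, instead of
# being fixed in the hardware. All names here are illustrative.

WORKLOAD_PROFILES = {
    "oltp":      {"cpu_cores": 16, "memory_gb": 128, "storage": "ssd", "network_gbps": 10},
    "batch":     {"cpu_cores": 64, "memory_gb": 256, "storage": "hdd", "network_gbps": 1},
    "streaming": {"cpu_cores": 8,  "memory_gb": 64,  "storage": "ssd", "network_gbps": 40},
}

def configure_for(workload: str) -> dict:
    """Return the resource configuration to apply for a given workload."""
    if workload not in WORKLOAD_PROFILES:
        raise ValueError(f"no profile defined for workload {workload!r}")
    return WORKLOAD_PROFILES[workload]

print(configure_for("oltp")["memory_gb"])  # 128
```

A real SDDC adds the hard part, actually applying such a profile to physical servers, networks and storage automatically, but the principle of configuration living in software rather than hardware is the same.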

2. Specialized servers

As I mentioned, all workloads are not equal, but they run on the same, general purpose servers (typically x86). What if we created servers that are optimized for specific workloads? In particular, when developing cloud environments delivering multi-tenant SaaS services, one could well envisage the use of servers specialized for a specific task, for example video manipulation or dynamic web service management. Developing efficient, low energy specialized servers that can be configured through software is what HP’s Project Moonshot is all about. The project is still in its infancy today, and there is much more to come. Imagine about 45 server/storage cartridges linked through three fabrics (for networking, storage and high speed cartridge to cartridge interconnections), sharing common elements such as network controllers, management functions and power management. If you then build the cartridges using low energy servers, you reduce energy consumption by nearly 90%. If you build SaaS type environments, using multi-tenant application modules, do you still need virtualization? This simplifies the environment, reduces the cost of running it and optimizes the use of server technology for every workload.

Particularly for environments that constantly run certain types of workloads, such as analyzing social or sensor data, the use of specialized servers can make the difference. This is definitely an evolution to watch.

3. Photonics

Let’s now complement those specialized servers with photonics-based connections enabling flat, hyper-efficient networks that boost bandwidth, and we have an environment that is optimized to deliver the complex tasks of analyzing and acting upon signals provided by the environment in its largest sense.

But technology is going even further. I talked about the three fabrics; over time, why not use photonics to improve the speed of the fabrics themselves, increasing the overall compute speed? We are not there yet, but early experiments with photonic backplanes for blade systems have shown overall compute speed increased by up to a factor of seven. That should be the second step.

The third step takes things further. The specialized servers I talked about are typically system on a chip (SoC) servers, in other words, complete computers on a single chip. Why not use photonics to link those chips with their outside world? On-chip lasers have been developed in prototypes, so we are not that far out. We could even bring things one step further and use photonics within the chip itself, but that is still a little further out. I can’t tell you the increase in compute power that such evolutions will provide you, but I would expect it to be huge.

4. Storage
Storage is at a crossroads. On the one hand, hard disk drives (HDDs) have improved drastically over the last 20 years, both in read speed and in density. I still remember the 20MB hard disk drive of the early ’80s, weighing 125 kg. When I compare that with the 3TB drive I bought a couple of months ago for my home PC, I can easily appreciate this evolution. But then the SSD (solid state disk) appeared. Where an HDD read will take you 4 ms, an SSD read is down at 0.05 ms.
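Putting the figures from this paragraph side by side makes the gap concrete. A quick back-of-the-envelope check, treating the numbers quoted above as rough typical values:

```python
# Rough comparison of the storage figures quoted above.
hdd_read_ms = 4.0    # typical HDD read latency quoted in the text
ssd_read_ms = 0.05   # typical SSD read latency quoted in the text
speedup = hdd_read_ms / ssd_read_ms
print(speedup)       # 80.0 -> an SSD read is roughly 80x faster

early_80s_drive_mb = 20          # the 125 kg drive of the early '80s
recent_drive_mb = 3 * 1_000_000  # a 3 TB consumer drive (decimal units)
growth = recent_drive_mb / early_80s_drive_mb
print(growth)        # 150000.0 -> 150,000x more capacity in ~30 years
```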

Using nanotechnologies, HP Labs has developed prototypes of the memristor, a new approach to data storage that is faster than flash memory and consumes far less energy. Such a device could store up to 1 petabit of information per square centimeter and could replace both memory and storage, speeding up access to data and allowing an order-of-magnitude increase in the amount of data stored. Since then, HP has been busy preparing production of these devices. The first production units should be available towards the end of 2013 or early in 2014. It will transform our storage approaches completely.


Details about the latest and future Calxeda SoCs:

Calxeda EnergyCore ECX-2000 family – ARM TechCon ’13 [ARMflix YouTube channel, recorded on Oct 30, 2013]

Calxeda tells us about their new EnergyCore ECX-2000 product line based on ARM Cortex-A15. http://www.calxeda.com/ecx-2000-family/

From ECX-2000 Product Brief [October, 2013]

The Calxeda EnergyCore ECX-2000 Series is a family of SoC (Server-on-Chip) products that delivers the power efficiency of ARM® processors, and the OpenStack, Linux, and virtualization software needed for modern cloud infrastructures. Using the ARM Cortex A15 quad-core processor, the ECX-2000 delivers roughly twice the performance, three times the memory bandwidth, and four times the memory capacity of the ground-breaking ECX-1000. It is extremely scalable due to the integrated Fleet Fabric Switch, while the embedded Fleet Engine simultaneously provides out-of-band control and intelligence for autonomic operation.

In addition to enhanced performance, the ECX-2000 provides hardware virtualization support via KVM and Xen hypervisors. Coupled with certified support for Ubuntu 13.10 and the Havana OpenStack release, this marks the first time an ARM SoC is ready for cloud computing. The Fleet Fabric enables the highest network and interconnect bandwidth in the MicroServer space, making this an ideal platform for streaming media and network-intensive applications.

The net result of the EnergyCore SoC architecture is a dramatic reduction in power and space requirements, allowing rapidly growing data centers to quickly realize operating and capital cost savings.

image

Scalability you can grow into. An integrated EnergyCore Fabric Switch within every SoC provides up to five 10 Gigabit lanes for connecting thousands of ECX-2000 server nodes into clusters capable of handling distributed applications at extreme scale. Completely topology agnostic, each SoC can be deployed to work in a variety of mesh, grid, or tree network structures, providing opportunities to find the right balance of network throughput and fault resiliency for any given workload.

Fleet Fabric Switch
• Integrated 80Gb (8×8) crossbar switch with through-traffic support
• Five (5) 10Gb external channels, three (3) 10Gb internal channels
• Configurable topology capable of connecting up to 4096 nodes
• Dynamic Link Speed Control from 1Gb to 10Gb to minimize power and maximize performance
• Network Proxy Support maintains network presence even with node powered off
• In-order flow delivery
• MAC learning provides support for virtualization
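The bullet points above are internally consistent: five external plus three internal 10Gb channels give the eight ports of the 8×8 crossbar, and eight 10Gb channels account for the 80Gb aggregate figure. A quick sanity check:

```python
# Sanity check on the Fleet Fabric Switch figures listed above.
channel_gb = 10          # each channel runs at 10Gb
external_channels = 5
internal_channels = 3
total_channels = external_channels + internal_channels
print(total_channels)    # 8 -> the "8x8" crossbar
aggregate_gb = total_channels * channel_gb
print(aggregate_gb)      # 80 -> the "80Gb" aggregate bandwidth
```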

ARM Servers and Xen — Hypervisor Support at Hyperscale – Larry Wikelius, [Co-Founder of] Calxeda [TheLinuxFoundation YouTube channel, Oct 1, 2013]

[Xen User Summit 2013] The emergence of power optimized hyperscale servers is leading to a revolution in Data Center design. The intersection of this revolution with the growth of Cloud Computing, Big Data and Scale Out Storage solutions is resulting in innovation in the server industry at a rate and pace that has not been seen for years. One particular example of this innovation is the deployment of ARM based servers in the Data Center and the impact these servers have on Power, Density and Scale. In this presentation we will look at the role that Xen is playing in the revolution of ARM based server design and deployment and the impact on applications, systems management and provisioning.

Calxeda Launches Midway ARM Server Chips, Extends Roadmap [EnterpriseTech, Oct 28, 2013]

ARM server chip supplier Calxeda is just about to ship its second generation of EnergyCore processors for hyperscale systems and most of its competitors are still working on their first products. Calxeda is also tweaking its roadmap to add a new chip to its lineup, which will bridge between the current 32-bit ARM chips and its future 64-bit processors.
There is going to be a lot of talk about server-class ARM processors this week, particularly with ARM Holdings hosting its TechCon conference in Santa Clara.
A month ago, EnterpriseTech told you about the “Midway” chip that Calxeda had in the works, as well as its roadmap to get beefier 64-bit cores and extend its Fleet Services fabric to allow for more than 100,000 nodes to be linked together.
The details were a little thin on the Midway chip, but we now know that it will be commercialized as the ECX-2000, and that Calxeda is sending out samples to server makers right now. The plan is to have the ECX-2000 generally available by the end of the year, and that is why the company is ready to talk about some feeds and speeds. Karl Freund, vice president of marketing at Calxeda, walked EnterpriseTech through the details.

image

The Midway chip is fabricated in the same 40 nanometer process as the existing “High Bank” ECX-1000 chip that Calxeda first put into the field in November 2011 in the experimental “Redstone” hyperscale servers from Hewlett-Packard. That 32-bit chip, based on the ARM Cortex-A9 core, was subsequently adopted in systems from Penguin Computing, Boston, and a number of other hyperscale datacenter operators who did proofs of concept with the chips. The ECX-1000 has four cores and was somewhat limited in performance and definitely limited in main memory, which topped out at 4 GB across the four-core processor. But the ECX-2000 addresses these issues.
The ECX-2000 is based on ARM Holdings’ Cortex-A15 core and has the 40-bit physical memory extensions, which allow for up to 16 GB of memory to be physically attached to each socket. With the 40-bit physical addressing added with the Cortex-A15, the memory controller can, in theory, address up to 1 TB of main memory; this is called Large Physical Address Extension (LPAE) in the ARM lingo, and it maps the 32-bit virtual addressing on the core to a 40-bit physical address space. Each core on the ECX-2000 has 32 KB of L1 instruction cache and 32 KB of L1 data cache, and ARM licensees are allowed to scale the L2 cache as they see fit. The ECX-2000 has 4 MB of L2 cache shared across the four cores on the die. These are exactly the same L1 and L2 cache sizes as used in the prior ECX-1000 chips.
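The 1 TB theoretical ceiling follows directly from the 40-bit physical address width. A quick check, assuming binary units:

```python
# 40-bit physical addressing (LPAE): upper bound on addressable memory.
addressable_bytes = 2 ** 40
tib = 1024 ** 4                      # one binary terabyte
print(addressable_bytes == tib)      # True -> the 1 TB ceiling in the text
print(addressable_bytes // 2 ** 30)  # 1024 (GiB) in theory, versus the
                                     # 16 GB actually attachable per socket
```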
The Cortex-A15 design was created to scale to 2.5 GHz, but as you crank up the clocks on any chip, the amount of energy consumed and heat radiated grows progressively larger. At a certain point, it just doesn’t make sense to push clock speeds. Moreover, every drop in clock speed gives a proportionately larger increase in thermal efficiency, and this is why, says Freund, Calxeda is making its implementation of the Cortex-A15 top out at 1.8 GHz. The company will offer lower-speed parts running at 1.1 GHz and 1.4 GHz for customers that need an even better thermal profile or a cheaper part where low cost is more important than raw performance or thermals.
What Calxeda and its server and storage array customers are focused on is the fact that the Midway chip running at 1.8 GHz has twice the integer, floating point, and Java performance of a 1.1 GHz High Bank chip. That is possible, in part, because the new chip has four times the main memory and three times the memory bandwidth of the old chip, in addition to a 64 percent boost in clock speed. Calxeda is not yet done benchmarking systems using the chips to get a measure of their thermal efficiency, but it is saying that there is as much as a 33 percent boost in performance per watt comparing old to new ECX chips.
The new ECX-2000 chip has a dual-core Cortex-A7 block on the die that is used as a controller for the system BIOS as well as a baseboard management controller and a power management controller for the servers that use them. These Fleet Engines, as Calxeda calls them, eliminate yet another set of components, and therefore their cost, in the system. These engines also control the topology of the Fleet Services fabric, which can be set up in 2D torus, mesh, butterfly tree, and fat tree network configurations.
The Fleet Services fabric has 80 Gb/sec of aggregate bandwidth and offers multiple 10 Gb/sec Ethernet links coming off the die to interconnect server nodes on a single card, multiple cards in an enclosure, multiple enclosures in a rack, and multiple racks in a data center. The Ethernet links are also used to allow users to get to applications running on the machines.
Freund says that the ECX-2000 chip is aimed at distributed, stateless server workloads, such as web server front ends, caching servers, and content distribution. It is also suitable for analytics workloads like Hadoop and distributed NoSQL data stores like Cassandra, all of which tend to run on Linux. Both Red Hat and Canonical are cooking up commercial-grade Linuxes for the Calxeda chips, and SUSE Linux is probably not going to be far behind. The new chips are also expected to see action in scale-out storage systems such as OpenStack Swift object storage or the more elaborate Gluster and Ceph clustered file systems. The OpenStack cloud controller embedded in the just-announced Ubuntu Server 13.10 is also certified to run on the Midway chip.
Hewlett-Packard has confirmed that it is creating a quad-node server cartridge for its “Moonshot” hyperscale servers, which should ship to customers sometime in the first or second quarter of 2014. (It all depends on how long HP takes to certify the system board.) Penguin Computing, Foxconn, Aaeon, and Boston are expected to get beta systems out the door this year using the Midway chip and will have them in production in the first half of next year. Yes, that’s pretty vague, but that is the server business, and vagueness is to be expected in such a young market as the ARM server market is.
Looking ahead, Calxeda is adding a new processor to its roadmap, code-named “Sarita.” Here’s what the latest system-on-chip roadmap looks like now:

image

The future “Lago” chip is the first 64-bit chip that will come out of Calxeda, and it is based on the Cortex-A57 design from ARM Holdings – one of several ARMv8 designs, in fact. (The existing Calxeda chips are based on the ARMv7 architecture.)
Both Sarita and Lago will be implemented in TSMC’s 28 nanometer processes, and that shrink from the current 40 nanometer to 28 nanometer processes is going to allow for a lot more cores and other features to be added to the die and also likely a decent jump in clock speed, too. Freund is not saying at the moment which way it will go.
But what Freund will confirm is that Sarita will be pin-compatible with the existing Midway chip, meaning that server makers who adopt Midway will have a processor bump they can offer in a relatively easy fashion. It will also be based on the Cortex-A57 cores from ARM Holdings, and will sport four cores on a die that deliver about a 50 percent performance increase compared to the Midway chips.
The Lago chips, we now know, will scale to eight cores on a die and deliver about twice the performance of the Midway chips. Both Lago and Sarita are on the same schedule, in fact, and they are expected to tape out this quarter. Calxeda expects to start sampling them to customers in the second quarter of 2014, with production quantities being available at the end of 2014.
Not Just Compute, But Networking, Too
As important as the processing is to a system, the Fleet Services fabric interconnect is perhaps the key differentiator in its design. The current iteration of that interconnect, which is a distributed Layer 2 switch fabric that is spread across each chip in a cluster, can scale across 4,096 nodes without requiring top-of-rack and aggregation switches.

image

Both the Lago and Sarita chips will be using the Fleet Services 2.0 interconnect that is now being launched with Midway. This iteration of the interconnect has all kinds of tweaks and nips and tucks but no scalability enhancements beyond the 4,096 nodes in the original fabric.
Freund says that the Fleet Services 3.0 fabric, which allows the distributed switch architecture to scale above 100,000 nodes in a flat network, will probably now come with the “Ratamosa” chips in 2015. It was originally – and loosely – scheduled for Lago next year. The circuits that make up the fabric interconnect are not substantially different, says Freund, but the scalability is enabled through software. It could be that customers are not going to need such scalability as rapidly as Calxeda originally thought.
The “Navarro” kicker to the Ratamosa chip is presumably based on the ARMv8 architecture, and Calxeda is not saying anything about when we might see that and what properties it might have. All that it has said thus far is that it is aimed at the “enterprise server era.”


Details about the latest Texas Instruments DSP+ARM SoCs:

A Better Way to Cloud [MultiVuOnlineVideo YouTube channel, Nov 13, 2012]

To most technologists, cloud computing is about applications, servers, storage and connectivity. To Texas Instruments Incorporated (TI) (NASDAQ: TXN) it means much more. Today, TI is unveiling a BETTER way to cloud with six new multicore System-on-Chips (SoCs). Based on its award winning KeyStone architecture, TI’s SoCs are designed to revitalize cloud computing, inject new verve and excitement into pivotal infrastructure systems and, despite their feature rich specifications and superior performance, actually reduce energy consumption. To view Multimedia News Release, go to http://www.multivu.com/mnr/54044-texas-instruments-keystone-multicore-socs-revitalize-cloud-applications

Infinite Scalability in Multicore Processors [Texas Instruments YouTube channel, Aug 27, 2012]

Over the years, our industry has preached how different types of end equipments and applications are best served by distinctive multicore architectures tailored to each. There are even those applications, such as high performance computing, which can be addressed by more than one type of multicore architecture. Yet most multicore devices today tend to be suited for a specific approach or a particular set of markets. This keynote address, from the 2012 Multicore Developer’s Conference, touches upon why the market needs an “infinitely scalable” multicore architecture which is both scalable and flexible enough to support disparate markets and the varied ways in which certain applications are addressed. The speaker presents examples of how a single multicore architecture can be scalable enough to address the needs of various high performance markets, including cloud RAN, networking, imaging and high performance computing. Ramesh Kumar manages the worldwide business for TI’s multicore growth markets organization. The organization develops multicore processors and software that are targeted for the communication infrastructure space, including multimedia and networking infrastructure equipment, as well as end equipment that requires multicore processors like public safety, medical imaging, high performance computing and test and measurement. Ramesh is a graduate of Northeastern University, where he obtained an executive MBA, and Purdue University where he received a master of science in electrical engineering.

From Imagine the impact…TI’s KeyStone SoC + HP Moonshot [TI’s Multicore Mix Blog, April 19, 2013]

TI’s participation in HP’s Pathfinder Innovation Ecosystem is the first step towards arming HP’s customers with optimized server systems that are ideally suited for workloads such as oil and gas exploration, Cloud Radio Access Networks (C-RAN), voice over LTE and video transcoding. This collaboration between TI and HP is a bold step forward, enabling flexible, optimized servers to bring differentiated technologies, such as TI’s DSPs, to a broader set of application providers. TI’s KeyStone II-based SoCs, which integrate fixed- and floating-point DSP cores with multiple ARM® Cortex™-A15 MPCore processors, packet and security processing, and high speed interconnect, give customers the performance, scalability and programmability needed to build software-defined servers. HP’s Moonshot system integrates storage, networking and compute cards with a flexible interconnect, allowing customers to choose the optimized ratio, enabling the industry’s first software-defined server platform. Bringing TI’s KeyStone II-based SoCs into HP’s Moonshot system opens up several tantalizing possibilities for the future. Let’s look at a few examples:
Think about the number of voice conversations happening over mobile devices every day. These conversations are independent of each other, and each will need transcoding from one voice format to another as voice travels from one mobile device, through the network infrastructure and to the other mobile device. The sheer number of such conversations demand that the servers used for voice transcoding be optimized for this function. Voice is just one example. Now think about video and music, and you can imagine the vast amount of processing required. Using TI’s KeyStone II-based SoCs with DSP technology provides optimized server architecture for these applications because our SoCs are specifically tuned for signal processing workloads.
Another example can be with C-RAN. We have seen a huge push for mobile operators to move most of the mobile radio processing to the data center. There are several approaches to achieve this goal, and each has pros and cons associated with it. But one thing is certain – each approach has to do wireless symbol processing to achieve optimum 3G or 4G communications with smart mobile devices. TI’s KeyStone II-based SoCs are leading the wireless communication infrastructure market and combine key accelerators such as the BCP (Bit Rate Co-Processor), VCP (Viterbi Co-Processor) and others to enable 3G/4G standards-compliant wireless processing. These key accelerators offload standards-based wireless processing from the ARM and/or DSP cores, freeing the cores for value-added processing. The combination of ARM/DSP with these accelerators provides an optimum SoC for 3G/4G wireless processing. By combining TI’s KeyStone II-based SoCs with HP’s Moonshot system, operators and network equipment providers can now build customized servers for C-RAN to achieve higher performance systems at lower cost and ultimately provide better experiences to their customers.

A better way to cloud: TI’s new KeyStone multicore SoCs [embeddednewstv YouTube channel, published on Jan 12, 2013 (YouTube: Oct 21, 2013)]

Brian Glinsman, vice president of multicore processors at Texas Instruments, discusses TI’s new KeyStone multicore SoCs for cloud infrastructure applications. TI announced six new SoCs, based on their 28-nm KeyStone architecture, featuring the Industry’s first implementation of quad ARM Cortex-A15 MPCore processors and TMS320C66x DSPs for purpose built servers, networking, high performance computing, gaming and media processing applications.

Texas Instruments Offers System on a Chip for HPC Applications [RichReport YouTube channel, Nov 20, 2012]

In this video from SC12, Arnon Friedmann from Texas Instruments describes the company’s new multicore System-on-Chips (SoCs). Based on its award winning KeyStone architecture, TI’s SoCs are designed to revitalize cloud computing, inject new verve and excitement into pivotal infrastructure systems and, despite their feature rich specifications and superior performance, actually reduce energy consumption. “Using multicore DSPs in a cloud environment enables significant performance and operational advantages with accelerated compute intensive cloud applications,” said Rob Sherrard, VP of Service Delivery, Nimbix. “When selecting DSP technology for our accelerated cloud compute environment, TI’s KeyStone multicore SoCs were the obvious choice. TI’s multicore software enables easy integration for a variety of high performance cloud workloads like video, imaging, analytics and computing and we look forward to working with TI to help bring significant OPEX savings to high performance compute users.”

A better way to cloud: TI’s new KeyStone multicore SoCs revitalize cloud applications, enabling new capabilities and a quantum leap in performance at significantly reduced power consumption

    • Industry’s first implementation of quad ARM® Cortex™-A15 MPCore™ processors in infrastructure-class embedded SoC offers developers exceptional capacity & performance at significantly reduced power for networking, high performance computing and more
    • Unmatched combination of Cortex-A15 processors, C66x DSPs, packet processing, security processing and Ethernet switching, transforms the real-time cloud into an optimized high performance, power efficient processing platform
    • Scalable KeyStone architecture now features 20+ software compatible devices, enabling customers to more easily design integrated, power and cost-efficient products for high-performance markets from a range of devices

ELECTRONICA – MUNICH (Nov.13, 2012) /PRNewswire/ — To most technologists, cloud computing is about applications, servers, storage and connectivity. To Texas Instruments Incorporated (TI) (NASDAQ: TXN) it means much more. Today, TI is unveiling a BETTER way to cloud with six new multicore System-on-Chips (SoCs). Based on its award winning KeyStone architecture, TI’s SoCs are designed to revitalize cloud computing, inject new verve and excitement into pivotal infrastructure systems and, despite their feature rich specifications and superior performance, actually reduce energy consumption.

To TI, a BETTER way to cloud means:

    • Safer communities thanks to enhanced weather modeling;
    • Higher returns from time sensitive financial analysis;
    • Improved productivity and safety in energy exploration;
    • Faster commuting on safer highways in safer cars;
    • Exceptional video on any screen, anywhere, any time;
    • More productive and environmentally friendly factories; and
    • An overall reduction in energy consumption for a greener planet.
    TI’s new KeyStone multicore SoCs are enabling this – and much more. These 28-nm devices integrate TI’s fixed- and floating-point TMS320C66x digital signal processor (DSP) generation cores – yielding the best performance per watt ratio in the DSP industry – with multiple ARM® Cortex™-A15 MPCore™ processors – delivering unprecedented processing capability combined with low power consumption – facilitating the development of a wide range of infrastructure applications that can enable more efficient cloud experiences. The unique combination of Cortex-A15 processors and C66x DSP cores, with built-in packet processing and Ethernet switching, is designed to efficiently offload and enhance the cloud’s first generation general purpose servers; servers that struggle with big data applications like high performance computing and video processing.
    “Using multicore DSPs in a cloud environment enables significant performance and operational advantages with accelerated compute intensive cloud applications,” said Rob Sherrard, VP of Service Delivery, Nimbix. “When selecting DSP technology for our accelerated cloud compute environment, TI’s KeyStone multicore SoCs were the obvious choice. TI’s multicore software enables easy integration for a variety of high performance cloud workloads like video, imaging, analytics and computing and we look forward to working with TI to help bring significant OPEX savings to high performance compute users.”
    TI’s six new high-performance SoCs include the 66AK2E02, 66AK2E05, 66AK2H06, 66AK2H12, AM5K2E02 and AM5K2E04, all based on the KeyStone multicore architecture. With KeyStone’s low latency high bandwidth multicore shared memory controller (MSMC), these new SoCs yield 50 percent higher memory throughput when compared to other RISC-based SoCs. Together, these processing elements, with the integration of security processing, networking and switching, reduce system cost and power consumption, allowing developers to support the development of more cost-efficient, green applications and workloads, including high performance computing, video delivery and media and image processing. With the matchless combination TI has integrated into its newest multicore SoCs, developers of media and image processing applications will also be able to create highly dense media solutions.

    image

    “Visionary and innovative are two words that come to mind when working with TI’s KeyStone devices,” said Joe Ye, CEO, CyWee. “Our goal is to offer solutions that merge the digital and physical worlds, and with TI’s new SoCs we are one step closer to making this a reality by pushing state-of-the-art video to virtualized server environments. Our collaboration with TI should enable developers to deliver richer multimedia experiences in a variety of cloud-based markets, including cloud gaming, virtual office, video conferencing and remote education.”
    Simplified development with complete tools and support
    TI continues to ease development with its scalable KeyStone architecture, comprehensive software platform and low-cost tools. In the past two years, TI has developed over 20 software compatible multicore devices, including variations of DSP-based solutions, ARM-based solutions and hybrid solutions with both DSP and ARM-based processing, all based on two generations of the KeyStone architecture. With compatible platforms across TI’s multicore DSPs and SoCs, customers can more easily design integrated, power and cost-efficient products for high-performance markets from a range of devices, with pricing starting at just $30 and performance scaling from a single 850MHz core all the way to 15GHz of total processing power.
    TI is also making it easier for developers to quickly get started with its KeyStone multicore solutions by offering easy-to-use evaluation modules (EVMs) for less than $1K, reducing developers’ programming burdens and speeding development time with a robust ecosystem of multicore tools and software.
    In addition, TI’s Design Network features a worldwide community of respected and well established companies offering products and services that support TI multicore solutions. Companies offering supporting solutions to TI’s newest KeyStone-based multicore SoCs include 3L Ltd., 6WIND, Advantech, Aricent, Azcom Technology, Canonical, CriticalBlue, Enea, Ittiam Systems, Mentor Graphics, mimoOn, MontaVista Software, Nash Technologies, PolyCore Software and Wind River.
    Availability and pricing
    TI’s 66AK2Hx SoCs are currently available for sampling, with broader device availability in 1Q13 and EVM availability in 2Q13. AM5K2Ex and 66AK2Ex samples and EVMs will be available in the second half of 2013. Pricing for these devices will start at $49 for 1 KU.

    66AK2H14 (ACTIVE) Multicore DSP+ARM KeyStone II System-on-Chip (SoC) [TI.com, Nov 10, 2013]
    The same as below for the 66AK2H12 SoC, with the addition of:

    More Literature:

    From that, the excerpt below is essential for understanding the added value over the 66AK2H12 SoC:

    image

    Figure 1. TI’s KeyStone™ 66AK2H14 SoC

    The 66AK2H14 SoC shown in Figure 1, with the raw computing power of eight C66x processors and quad ARM Cortex-A15s at over 1GHz performance, enables applications such as very large fast Fourier transforms (FFTs) in radar and multiple camera image analytics where a 10Gbit/s networking connection is needed. There are, and have been, several sophisticated technologies that have offered the bandwidth and additional features to fill this role. Some, such as Serial RapidIO® and Infiniband, have been successful in application domains that Gigabit Ethernet could not address, and continue to make sense, but 10Gbit/s Ethernet will challenge their existence.
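The large-FFT workload mentioned above can be sketched in a few lines of NumPy; this is purely an illustration of the signal-processing pattern (on the 66AK2H14 itself such transforms would run on the C66x DSP cores, not in Python):

```python
import numpy as np

# A radar-style example: locate a single tone in a large (2^20-point) capture.
n = 1 << 20                                       # 1,048,576 complex samples
t = np.arange(n)
tone_bin = n // 8                                 # place the tone exactly on bin n/8
signal = np.exp(2j * np.pi * tone_bin / n * t)

spectrum = np.fft.fft(signal)                     # the "very large FFT" step
peak_bin = int(np.argmax(np.abs(spectrum)))
print(peak_bin)                                   # the tone shows up at bin n // 8
```

On the SoC the same transform would be partitioned across the eight DSP cores, with the 10GigE interface feeding the sample stream in.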

    66AK2H12 (ACTIVE) Multicore DSP+ARM KeyStone II System-on-Chip (SoC) [TI.com, created on Nov 8, 2012]

    Datasheet manual [351 pages]:

    More Literature:

    Description

    The 66AK2Hx platform is TI’s first to combine the quad ARM® Cortex™-A15 MPCore™ processors with up to eight TMS320C66x high-performance DSPs using the KeyStone II architecture. Unlike previous ARM Cortex-A15 devices that were designed for consumer products, the 66AK2Hx platform provides up to 5.6 GHz of ARM and 11.2 GHz of DSP processing coupled with security and packet processing and Ethernet switching, all at lower power than multi-chip solutions, making it optimal for embedded infrastructure applications like cloud computing, media processing, high-performance computing, transcoding, security, gaming, analytics and virtual desktop. Using TI’s heterogeneous programming runtime software and tools, customers can easily develop differentiated products with 66AK2Hx SoCs.
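As a quick sanity check (my arithmetic, not TI's), the quoted "5.6 GHz of ARM and 11.2 GHz of DSP processing" is simply cores × clock, assuming all twelve cores run at 1.4 GHz (a clock consistent with the quoted totals):

```python
clock_ghz = 1.4                    # assumed per-core clock, consistent with the quoted totals
arm_cores, dsp_cores = 4, 8

arm_ghz = arm_cores * clock_ghz    # cumulative ARM processing
dsp_ghz = dsp_cores * clock_ghz    # cumulative DSP processing
print(arm_ghz, dsp_ghz)            # 5.6 11.2
```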

    image

    Taking Multicore to the Next Level: KeyStone II Architecture [Texas Instruments YouTube channel, Feb 26, 2012]

    TI’s scalable KeyStone II multicore architecture includes support for both TMS320C66x DSP cores and multiple cache coherent quad ARM Cortex™-A15 clusters, for a mixture of up to 32 DSP and RISC cores. With significant updates to its award-winning KeyStone architecture, TI is now paving the way for a new era of high performance 28-nm devices that meld signal processing, networking, security and control functionality, with KeyStone II. Ideal for applications that demand superior performance and low power, devices based on the KeyStone architecture are optimized for high performance markets including communications infrastructure, mission critical, test and automation, medical imaging and high performance and cloud computing. For more information, please visit http://www.ti.com/multicore.

    Introducing the EVMK2H [Texas Instruments YouTube channel, Nov 15, 2013]

    Introducing the EVMK2H evaluation module, the cost-efficient development tool from Texas Instruments that enables developers to quickly get started working on designs for the 66AK2H06, 66AK2H12, and 66AK2H14 multicore DSP + ARM devices based on the KeyStone architecture.

    Kick start development of high performance compute systems with TI’s new KeyStone™ SoC and evaluation module [TI press release, Nov 14, 2013]

    Combination of DSP + ARM® cores and high-speed peripherals offer developers an optimal compute solution at low power consumption

    DALLAS, Nov. 14, 2013 /PRNewswire/ — Further easing the development of processing-intensive applications, Texas Instruments (TI) (NASDAQ: TXN) is unveiling a new system-on-chip (SoC), the 66AK2H14, and evaluation module (EVM) for its KeyStoneTM-based 66AK2Hx family of SoCs. With the new 66AK2H14 device, developers designing high-performance compute systems now have access to a 10Gbps Ethernet switch-on-chip. The inclusion of the 10GigE switch, along with the other high-speed, on-chip interfaces, saves overall board space, reduces chip count and ultimately lowers system cost and power. The EVM enables developers to evaluate and benchmark faster and easier. The 66AK2H14 SoC provides industry-leading computational DSP performance at 307 GMACS/153 GFLOPS and 19600 DMIPS of ARM performance, making it ideal for a wide variety of applications such as video surveillance, radar processing, medical imaging, machine vision and geological exploration.
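The headline 307 GMACS / 153 GFLOPS DSP figures can be reproduced from per-core C66x rates. The sketch below assumes 32 fixed-point MACs and 16 single-precision FLOPs per C66x core per cycle at a 1.2 GHz DSP clock; treat it as a back-of-the-envelope check rather than a specification:

```python
dsp_cores = 8
clock_ghz = 1.2                                   # assumed DSP clock for the 66AK2H14
macs_per_cycle = 32                               # 16x16-bit MACs per C66x core per cycle
flops_per_cycle = 16                              # single-precision FLOPs per C66x core per cycle

gmacs = dsp_cores * clock_ghz * macs_per_cycle    # cumulative fixed-point throughput
gflops = dsp_cores * clock_ghz * flops_per_cycle  # cumulative floating-point throughput
print(round(gmacs, 1), round(gflops, 1))          # 307.2 153.6
```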

    “Customers today require increased performance to process compute-intensive workloads using less energy in a smaller footprint,” said Paul Santeler, vice president and general manager, Hyperscale Business, HP. “As a partner in HP’s Moonshot ecosystem dedicated to the rapid development of new Moonshot servers, we believe TI’s KeyStone design will provide new capabilities across multiple disciplines to accelerate the pace of telecommunication innovations and geological exploration.”

    Meet TI’s new 10Gbps Ethernet DSP + ARM SoC
    TI’s newest silicon variant, the 66AK2H14, is the latest addition to its high-performance 66AK2Hx SoC family which integrates multiple ARM Cortex™-A15 MPCore™ processors and TI’s fixed- and floating-point TMS320C66x digital signal processor (DSP) generation cores. The 66AK2H14 offers developers exceptional capacity and performance (up to 9.6 GHz of cumulative DSP processing) at industry-leading size, weight, and power. In addition, the new SoC features a wide array of unique high-speed interfaces, including PCIe, RapidIO, Hyperlink, 1Gbps and 10Gbps Ethernet, achieving total I/O throughput of up to 154Gbps. These interfaces are all distinct and not multiplexed, allowing designers tremendous flexibility with uncompromising performance in their designs.
    Ease development and debugging with TI’s tools and software
    TI helps simplify the design process by offering developers highly optimized software for embedded HPC systems along with development and debugging tools for the EVMK2H – all for under $1,000. The EVMK2H features a single 66AK2H14 SoC, a status LCD, two 1Gbps Ethernet RJ-45 interfaces and on-board emulation. An optional EVM breakout card (available separately) also provides two 10Gbps Ethernet optical interfaces for 20Gbps backplane connectivity and optional wire rate switching in high density systems.
    The EVMK2H is bundled with TI’s Multicore Software Development Kit (MCSDK), enabling faster development with production ready foundational software. The MCSDK eases development and reduces time to market by providing highly-optimized bundles of foundational, platform-specific drivers, optimized libraries and demos.
    Complementary analog products to increase system performance
    TI offers a wide range of power management and analog signal chain components to increase the system performance of 66AK2H14 SoC-based designs. For example, the TPS53xx integrated FET DC/DC converters provide the highest level of power conversion efficiency even at light loads, while the LM10011 VID converter with dynamic voltage control helps reduce system power consumption. The CDCM6208 low-jitter clock generator also eliminates the need for external buffers, jitter cleaners and level translators.
    Availability and pricing
    TI’s EVMK2H is available now through TI distribution partners or TI.com for $995. In addition to TI’s Linux distribution provided in the MCSDK, Wind River® Linux is available now for the 66AK2Hxx family of SoCs. Green Hills® INTEGRITY® RTOS and Wind River VxWorks® RTOS support will each be available before the end of the year. Pricing for the 66AK2H14 SoC will start at $330 for 1 KU. The 10Gbps Ethernet breakout card will be available from Mistral.

    Ask the Expert: How can developers accelerate scientific computing with TI’s multicore DSPs? [Texas Instruments YouTube channel, Feb 7, 2012]

    Dr. Arnon Friedmann is the business manager for TI’s high performance computing products in the multicore and media infrastructure business. In this video, he explains how TI’s multicore DSPs are well suited for computing applications in oil and gas exploration, financial modeling and molecular dynamics, where ultra-high performance, low power and easy programmability are critical requirements.

    Ask the Expert: Arnon Friedmann [Texas Instruments YouTube channel, Sept 6, 2012]

    How are TI’s latest multicore devices a fit for video surveillance and smart analytic camera applications? Dr. Arnon Friedmann, PhD, is a business manager for multicore processors at Texas Instruments. In this role, he is responsible for growing TI’s business in high performance computing, mission critical, test and measurement and imaging markets. Prior to his current role, Dr. Friedmann served as the marketing director for TI’s wireless base station infrastructure group, where he was responsible for all marketing and design activities. Throughout his 14 years of experience in digital communications research and development, Dr. Friedmann has accumulated patents in the areas of disk drive systems, ADSL modems and 3G/4G wireless communications. He holds a PhD in electrical engineering and bachelor of science in engineering physics, both from the University of California, San Diego.

    End of Updates as of Dec 6, 2013


    The original post (8 months ago):

    HP Moonshot: Designed for the Data Center, Built for the Planet [HP press kit, April 8, 2013]

    On April 8, 2013, HP unveiled the world’s first commercially available HP Moonshot system, delivering compelling new infrastructure economics by using up to 89 percent less energy, 80 percent less space and costing 77 percent less, compared to traditional servers. Today’s mega data centers are nearing a breaking point where further growth is restricted due to the current economics of traditional infrastructure. HP Moonshot servers are a first step organizations can take to address these constraints.

    For more details on the disruptive potential of HP Moonshot, visit TheDisruption.com

    Introducing HP Moonshot [HewlettPackardVideos April 11, 2013]

    See how HP is defining disruption with the introduction of HP Moonshot.

    HP’s Cutting Edge Data Center Innovation [Ramón Baez, Senior Vice President and Chief Information Officer (CIO) of HP, HP Next [launched on April 2], April 10, 2013]

    This is an exciting time to be in the IT industry right now. For those of you who have been around for a while — as I have — there have been dramatic shifts that have changed how businesses operate.
    From the early days of the mainframes, to the explosion of the Internet and now social networks, every so often very important game-changing innovation comes along. We’re in the midst of another sea change in technology.
    Inside HP IT, we are testing the company’s Moonshot servers. With these servers running the same chips found in smart phones and tablets, they are using incredibly less power, require considerably less cooling and have a smaller footprint.

    We currently are running some of our intensive hp.com applications on Moonshot and are seeing very encouraging results. Over half a billion people will visit hp.com this year, and the new Moonshot technology will run at a fraction of the space, power and cost – basically we expect to run HP.com off of the same amount of energy needed for a dozen 60-watt light bulbs.
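To put that claim in concrete terms, "a dozen 60-watt light bulbs" works out to a few hundred watts (simple arithmetic on the figure quoted above):

```python
bulbs = 12
watts_per_bulb = 60
total_watts = bulbs * watts_per_bulb   # expected power budget to serve hp.com
print(total_watts)                     # 720
```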

    This technology will revolutionize data centers.
    Within HP IT, we are fortunate in that over the past several years we have built a solid data center foundation to run our company. Like many companies, we were a victim of IT sprawl — with more than 85 data centers in 29 countries. We decided to make a change and took on a total network redesign, cutting our principal worldwide data centers down to six and housing all of them in the United States.
    With the addition of four new EcoPODs to our infrastructure and these new Moonshot servers, we are in the perfect position to build out our private cloud and provide our businesses with the speed and quality of innovation they need.
    Moonshot is just the beginning. The product roadmap for Moonshot is extremely promising and I am excited to see what we can do with it within HP IT, and what benefits our customers will see.

    What Calxeda is saying about HP Moonshot [HewlettPackardVideos YouTube channel, April 8, 2013], which is best to start with for its simple and efficient message, as well as for what Intel targeting ARM based microservers: the Calxeda case [‘Experiencing the Cloud’ blog, Dec 14, 2012] already contained on this blog earlier:

    Calxeda discusses HP’s Project Moonshot and the cost, space, and efficiency innovations being enabled through the Pathfinder Innovation Ecosystem. http://hp.com/go/moonshot

    Then we can turn to the Moonshot product launch by HP 2 days ago:

    Note that the first three videos following here were released 3 days later, so don’t be surprised by the YouTube dates; in fact the same 3 videos (as well as the “Introducing HP Moonshot” embedded above) were delivered on the April 8 live webcast. See the first 18 minutes of that, and then follow HP’s flow of the presentation if you like; I would certainly recommend my own compilation presented here.

    HP president and CEO Meg Whitman on the emergence of a new style of IT [HewlettPackardVideos YouTube channel, April 11, 2013]

    HP president and CEO Meg Whitman outlines the four megatrends causing strain on current infrastructure and how HP Project Moonshot servers are built to withstand data center challenges.

    EVP and GM of HP’s Enterprise Group Dave Donatelli discusses HP Moonshot [HewlettPackardVideos YouTube channel, April 11, 2013]

    EVP and GM of HP’s Enterprise Group Dave Donatelli details how HP Moonshot redefines the server market.

    Tour the Houston Discovery Lab — where the next generation of innovation is created [HewlettPackardVideos YouTube channel, April 11, 2013]

    SVP and GM of HP’s Industry Standard Servers and Software Mark Potter and VP and GM of HP’s Hyperscale Business Unit Paul Santeler tour HP’s Discovery Lab in Houston, Texas. HP’s Discovery Lab allows customers to test, tune and port their applications on HP Moonshot servers in-person and remotely.

    A new era of accelerated innovation [HP Moonshot minisite, April 8, 2013]

    Cloud, Mobility, Security, and Big Data are transforming what the business expects from IT resulting in a “New Style of IT.” The result of alternative thinking from a proven industry leader, HP Moonshot is the world’s first software defined server that will accelerate innovation while delivering breakthrough efficiency and scale.

    Watch the unveiling [link to HP Moonshot – The Disruption, HP Event registration page at ‘thedisruption.com’]

    image

    On the right is the Moonshot System with the very first Moonshot servers (“microservers/server appliances”, as the industry calls them), based on Intel® Atom S1200 processors and supporting web-hosting workloads (see also the right part of the image below). Currently there is also a storage cartridge (on the left of the image below) and a multinode for highly dense computing solutions (see in the hands of the presenter in the image below). Many more are to come later on.

    image

    image

    With up to 180 servers inside the box (45 now), it was necessary to integrate network switching. There are two sockets (see left) for the network switch, so you can configure for redundancy. The downlink module, which talks to the cartridges, is on the left of the image below. It is paired with an uplink module in the back of the server (shown in the middle of the image below as taken out, and then together with the uplink module on the right). There will be more options available.

    image

    More information:
    Enterprise Information Library for Moonshot
    HP Moonshot System [Technical white paper from HP, April 5, 2013] from which I will include here the following excerpts for more information:

    HP Moonshot 1500 Chassis

    The HP Moonshot 1500 Chassis is a 4.3U form factor and slides out of the rack on a set of rails like a file cabinet drawer. It supports 45 HP ProLiant Moonshot Servers and an HP Moonshot-45G Switch Module that are serviceable from the top.
    It is a modern architecture engineered for the new style of IT that can support server cartridges, server and storage cartridges, storage only cartridges and a range of x86, ARM or accelerator based processor technologies.
    As an initial offering, the HP Moonshot 1500 Chassis is fully populated with 45 HP ProLiant Moonshot Servers and one HP Moonshot-45G Switch Module; a second HP Moonshot-45G Switch Module can be purchased as an option. Future offerings will include quad server cartridges, resulting in up to 180 servers per chassis. The 4.3U form factor allows for 10 chassis per rack, which with the quad server cartridge amounts to 1800 servers in a single rack.
    The Moonshot 1500 Chassis simplifies management with four iLO processors that share management responsibility for the 45 servers, power, cooling, and switches.
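The density figures in the white paper are consistent with each other, as a quick check shows (this assumes the future quad-server cartridges and the stated 10 chassis per rack):

```python
cartridges_per_chassis = 45
servers_per_cartridge = 4          # future quad server cartridge
chassis_per_rack = 10              # enabled by the 4.3U form factor

servers_per_chassis = cartridges_per_chassis * servers_per_cartridge
servers_per_rack = servers_per_chassis * chassis_per_rack
print(servers_per_chassis, servers_per_rack)   # 180 1800
```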

    Highly flexible fabric

    Built into the HP Moonshot 1500 Chassis architecture are four separate and independent fabrics that support a range of current and future capabilities:
    • Network fabric
    • Storage fabric
    • Management fabric
    • Integrated cluster fabric
    Network fabric
    The Network fabric provides the primary external communication path for the HP Moonshot 1500 Chassis.
    For communication within the chassis, the network switch has four communication channels to each of the 45 servers. Each channel supports a 1-GbE or 10-GbE interface. Each HP Moonshot-45G Switch Module supports 6 channels of 10GbE interface to the HP Moonshot-6SFP network uplink modules located in the rear of the chassis.
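From those numbers one can estimate the switch module's oversubscription ratio. The sketch below assumes every server channel negotiates 1GbE (each channel can also run at 10GbE, so this is just one illustrative configuration):

```python
servers = 45
channels_per_server = 4
channel_gbps = 1                   # assumed: all server channels at 1GbE
uplink_channels, uplink_gbps = 6, 10

downstream = servers * channels_per_server * channel_gbps   # Gb/s toward the servers
upstream = uplink_channels * uplink_gbps                    # Gb/s toward the uplink modules
print(downstream, upstream, downstream / upstream)          # 180 60 3.0
```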
    Storage fabric
    The Storage fabric provides dedicated SAS lanes between server and storage cartridges. We utilize HP Smart Storage firmware found in the ProLiant family of servers to enable multiple core to spindle ratios for specific solutions. A hard drive can be shared among multiple server cartridges to enable low cost boot, logging, or attached to a node to provide storage expansion.
    The current HP Moonshot System configuration targets light scale-out applications. To provide the best operating environment for these applications, it includes HP ProLiant Moonshot Servers with a hard disk drive (HDD) as part of the server architecture. Shared storage is not an advantage for these environments. Future releases of the servers that target different solutions will take advantage of the storage fabric.
    Management fabric
    We utilize the Integrated Lights-Out (iLO) application-specific integrated circuit (ASIC) standard in the HP ProLiant family of servers to provide the innovative management features in the HP Moonshot System. To handle the range of extreme low energy processors we provide a device neutral approach to management, which can be easily consumed by data center operators to deploy at scale.
    The Management fabric enables management of the HP Moonshot System components as one platform with a dedicated iLO network. Benefits of the management fabric include:
    • The iLO Chassis Manager aggregates data to a common set of management interfaces.
    • The HP Moonshot 1500 Chassis has a single Ethernet port gateway that is the single point of access for the Moonshot Chassis manager.
    • Intelligent Platform Management Interface (IPMI) and Serial Console for each server
    • True out-of-band firmware update services
    • SL-APM Rack Management spans rack or multiple racks
    Integrated Cluster fabric
    The Integrated Cluster fabric provides a high-speed interface among future server cartridge technologies that will benefit from high bandwidth node-to-node communication. North, south, east, and west lanes are provided between individual server cartridges.
    The current HP ProLiant Moonshot Server targets light scale-out applications. These applications do not benefit from node-to-node communications, so the Integrated Cluster fabric is not utilized. Future releases of cartridges that target different workloads requiring low-latency interconnects will take advantage of the Integrated Cluster fabric.

    HP ProLiant Moonshot Server

    HP will bring a growing library of cartridges, utilizing cutting-edge technology from industry leading partners. Each server will target specific solutions that support emerging Web, Cloud, and Massive-Scale Environments, as well as Analytics and Telecommunications. We are continuing server development for other applications, including Big Data, High-Performance Computing, Gaming, Financial Services, Genomics, Facial Recognition, Video Analysis, and more.
    Figure 4. Cartridges target specific solutions

    image

    The first server cartridge now available is the HP ProLiant Moonshot Server, which includes the Intel® Atom Processor S1260. This is a low power processor that is right-sized for light workloads. It has dedicated memory and storage, with discrete resources. This server design is ideal for light scale-out applications. Light scale-out applications require relatively little processing but moderately high I/O and include environments that perform the following functions:
    • Dedicated web hosting
    • Simple content delivery
    The HP ProLiant Moonshot Server can hot plug in the HP Moonshot 1500 Chassis. If service is necessary, it can be removed without affecting the other servers in the chassis. Table 1 defines the HP ProLiant Moonshot Server specifications.
    Table 1. HP ProLiant Moonshot Server specifications

    Processor: One Intel® Atom Processor S1260
    Memory: 8 GB DDR3 ECC 1333 MHz
    Networking: Integrated dual-port 1Gb Ethernet NIC
    Storage: 500 GB or 1 TB HDD or SSD, non-hot-plug, small form factor
    Operating systems: Canonical Ubuntu 12.04; Red Hat Enterprise Linux 6.4; SUSE Linux Enterprise Server 11 SP2

    image

    With that: HP CEO Seeks Turnaround Unveiling ‘Moonshot’ Super-Server: Tech [Bloomberg, April 2013] as well as HP Moonshot: Say Goodbye to the Vanilla Server [Forbes, April 8, 2013]. HP, however, has its eye much more on the ARM-based Moonshot servers expected to come later, because of the trends reflected on the left (source: HP). The software defined server concept is very general.

    image

    There are a number of quite different server cartridges expected to come, all specialised by the server software installed on them. Typical specialised servers, for example, are the ones CyWee from Taiwan is working on with Texas Instruments’ new KeyStone II architecture, which features both ARM Cortex-A15 CPU cores and TI’s own C66x DSP cores for a mixture of up to 32 DSP and RISC cores in TI’s new 66AK2Hx family of SoCs, the first of which is the TMS320TCI6636 implemented in 28nm foundry technology. Based on that, CyWee will deliver multimedia Moonshot server cartridges for cloud gaming, virtual office, video conferencing and remote education (see even the first KeyStone announcement). This CyWee involvement in the HP Moonshot effort is part of HP’s Pathfinder Partner Program, which Texas Instruments also joined recently to exploit a larger opportunity as:

    TI’s 66AK2Hx family and its integrated C66x multicore DSPs are applicable for workloads ranging from high performance computing, media processing, video conferencing, off-line image processing & analytics, video recorders (DVR/NVR), gaming, virtual desktop infrastructure and medical imaging.

    But Intel was able to win the central piece of the Moonshot System launch (originally initiated by HP as the “Moonshot Project” in November 2011 for disruption in terms of power and TCO for servers, actually with a Calxeda board used for research and development with other partners), at least as it was productized just two days ago:
    Raejeanne Skillern from Intel – HP Moonshot 2013 – theCUBE [siliconangle YouTube channel]

    Raejeanne Skillern, Intel Director of Marketing for Cloud Computing, at HP Moonshot 2013 with John Furrier and Dave Vellante

    However, ARM was not left out either, just relegated in the beginning to highly advanced and/or specialised server roles with its SoC partners, coming later in the year:

    • Applied Micro, with a networking and connectivity background, now has the X-Gene ARM 64-bit Server on a Chip platform, which features 8 ARM 64-bit high-performance cores developed from scratch under an architecture license (i.e. not ARM’s own Cortex-A50 series core), clocked at up to 2.4GHz, plus 4 smaller cores for network and storage offloads (see AppliedMicro on the X-Gene ARM Server Platform and HP Moonshot [SiliconANGLE blog, April 9, 2013]). Sample reference boards shipped to key customers in March (see Applied Micro’s cloud chip is an ARM-based, switch-killing machine [GigaOM, April 3, 2013]). In the latest X-Gene Arrives in Silicon [Open Compute Summit Winter 2013 presentation, Jan 16, 2013] video you can get the most recent strategic details (up to 2014 with a FinFET implementation of “software defined X-Gene based data center components”, presumably at 16nm). Here I will include a more product-oriented AppliedMicro Shows ARM 64-bit X-Gene Server on a Chip Hardware and Software [Charbax YouTube channel, Nov 3, 2012] overview video:
      Vinay Ravuri, Vice President and General Manager, Server Products at AppliedMicro gives an update on the 64bit ARM X-Gene Server Platform. At ARM Techcon 2012, AppliedMicro, ARM and several open-source software providers gave updates on their support of the ARM 64-bit X-Gene Server on a Chip Platform.

      More information: A 2013 Resolution for the Data Center [Applied Micro on Smart Connected Devices blog from ARM, Feb 4, 2013] about “plans from Oracle, Red Hat, Citrix and Cloudera to support this revolutionary architecture … Dell’s “Iron” server concept with X-Gene … an X-Gene based ARM server managed by the Dell DCS Software suite …” etc.

    • Texas Instruments with digital signal processing (DSP) background, as it was already presented above. 
    • Calxeda with integration of storage fabric and Internet switching background, with details coming later, etc.:

    This is what is emphasized by Lakshmi Mandyam from ARM – HP Moonshot 2013 – theCUBE [siliconangle YouTube channel, April 8, 2013]

    Lakshmi Mandyam, Director of Server Systems and Ecosystems, ARM, at HP Moonshot 2013, with John Furrier and Dave Vellante

    In the talk she also mentions the achievements that could put ARM and its SoC partners into the role Intel now has with its general-purpose Atom S1200-based server cartridge fitting into the Moonshot system. Perspective information on that is already available on my ‘Experiencing the Cloud’ blog here:
    The state of big.LITTLE processing [April 7, 2013]
    The future of mobile gaming at GDC 2013 and elsewhere [April 6, 2013]
    TSMC’s 16nm FinFET process to be further optimised with Imagination’s PowerVR Series6 GPUs and Cadence design infrastructure [April 8, 2013]
    With 28nm non-exclusive in 2013 TSMC tested first tape-out of an ARM Cortex™-A57 processor on 16nm FinFET process technology [April 3, 2013]

    The absence of Microsoft is even more interesting as AMD is also on this Moonshot bandwagon: Suresh Gopalakrishnan from AMD – HP Moonshot 2013 – theCUBE [siliconangle YouTube channel, April 8, 2013]

    Suresh Gopalakrishnan, Vice President and General Manager, Server Business, AMD, at HP Moonshot 2013, with John Furrier and Dave Vellante

    already showing a Moonshot-fitting server cartridge with four of AMD’s next-generation SoCs (while Intel’s already-productized cartridge is not yet at the SoC level). We know from CES 2013 that AMD Unveils Innovative New APUs and SoCs that Give Consumers a More Exciting and Immersive Experience [press release, Jan 7, 2013] with the:

    Temash” … elite low-power mobility processor for Windows 8 tablets and hybrids … to be the highest-performance SoC for tablets in the market, with 100 percent more graphics processing performance2 than its predecessor (codenamed “Hondo.”)
    Kabini” [SoC which] targets ultrathin notebooks with exceptional battery life and offers impressive levels of performance in both dual- and quad-core options. “Kabini” is expected to deliver an increase of more than 50 percent in performance3 over the previous generation of AMD essential computing APUs (codenamed “Brazos 2.0.”)
    Both APUs are scheduled to ship in the first half of 2013

    so AMD is really close to a server SoC to be delivered soon as well.

    The “more information” sections which follow here are:

    1. The Announcement
    2. Software Partners
    3. Hardware Partners


    1. The Announcement

    HP Moonshot [MultiVuOnlineVideo YouTube channel, April 8, 2013]

    HP today unveiled the world’s first commercially available HP Moonshot system, delivering compelling new infrastructure economics by using up to 89 percent less energy, 80 percent less space and costing 77 percent less, compared to traditional servers. Today’s mega data centers are nearing a breaking point where further growth is restricted due to the current economics of traditional infrastructure. HP Moonshot servers are a first step organizations can take to address these constraints.

    HP Launches New Class of Server for Social, Mobile, Cloud and Big Data [press release, April 8, 2013]

    Software defined servers designed for the data center and built for the planet
    … Built from HP’s industry-leading server intellectual property (IP) and 10 years of extensive research from HP Labs, the company’s central research arm, HP Moonshot delivers a significant improvement in energy, space, cost and simplicity. …
    The HP Moonshot system consists of the HP Moonshot 1500 enclosure and application-optimized HP ProLiant Moonshot servers. These servers will offer processors from multiple HP partners, each targeting a specific workload.
    With support for up to 1,800 servers per rack, HP Moonshot servers occupy one-eighth of the space required by traditional servers. This offers a compelling solution to the problem of physical data center space.(3) Each chassis shares traditional components including the fabric, HP Integrated Lights-Out (iLo) management, power supply and cooling fans. These shared components reduce complexity as well as add to the reduction in energy use and space.  
    The first HP ProLiant Moonshot server is available with the Intel® Atom S1200 processor and supports web-hosting workloads. HP Moonshot 1500, a 4.3U server enclosure, is fully equipped with 45 Intel-based servers, one network switch and supporting components.
    HP also announced a comprehensive roadmap of workload-optimized HP ProLiant Moonshot servers incorporating processors from a broad ecosystem of HP partners including AMD, AppliedMicro, Calxeda, Intel and Texas Instruments Incorporated.

    Scheduled to be released in the second half of 2013, the new HP ProLiant Moonshot servers will support emerging web, cloud and massive scale environments, as well as analytics and telecommunications. Future servers will be delivered for big data, high-performance computing, gaming, financial services, genomics, facial recognition, video analysis and other applications.

    The HP Moonshot system is immediately available in the United States and Canada and will be available in Europe, Asia and Latin America beginning next month.
    Pricing begins at $61,875 for the enclosure, 45 HP ProLiant Moonshot servers and an integrated switch.(4)
    (4) Estimated U.S. street prices. Actual prices may vary.
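Taken at face value, the launch pricing works out to well under $1,500 per server. A quick sanity check in Python, using only the bundle price and server count from the release (street prices vary, per HP's own footnote):

```python
# Back-of-the-envelope per-server cost for the entry Moonshot configuration,
# using the list price and server count from the press release above.
BUNDLE_PRICE_USD = 61_875    # enclosure + 45 ProLiant Moonshot servers + switch
SERVERS_PER_BUNDLE = 45

price_per_server = BUNDLE_PRICE_USD / SERVERS_PER_BUNDLE
print(f"${price_per_server:,.2f} per server")   # → $1,375.00 per server
```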

    More information:
    HP Moonshot System [Family data sheet, April 8, 2013]
    HP Moonshot – The Disruption [HP Event registration page at ‘thedisruption.com’ with embedded video gallery, press kit and more, originally created on April 12, 2010, obviously updated for the April 8, 2013 event]

    Moonshot 101 [HewlettPackardVideos YouTube channel, April 8, 2013]

    Paul Santeler, Vice President & GM of Hyperscale Business Unit at HP, discusses how HP Project Moonshot creates the new style of IT. http://hp.com/go/moonshot

    Alert for Microsoft:

    [4:42] We defined the industry standard server market [reference to HP’s Compaq heritage] and we’ve been the leader for years. With Moonshot we’re redefining the market and taking it to the next level. [4:53]

    People Behind HP Moonshot [HP YouTube channel, April 10, 2013]

    HP Moonshot is a groundbreaking new class of server that requires less energy, less space and less cost. Built from HP’s industry-leading server IP and 10 years of research from HP Labs, HP Moonshot is an example of the best of HP working together. In the video: Gerald Kleyn, Director of Platform Research and Development, Hyperscale Business Unit, Industry Standard Servers; Scott Herbel, Worldwide Product Marketing Manager, Hyperscale Business Unit, Industry Standard Servers; Ron Mann, Director of Engineering, Industry Standard Servers; Kelly Pracht, Hardware Platform Manager R&D, Hyperscale Business Unit, Industry Standard Servers; Mike Sabotta, Distinguished Technologist, Hyperscale Business Unit, Industry Standard Servers; Dwight Barron, HP Fellow, Chief Technologist, Hyperscale Business Unit, Industry Standard Servers. For more information, visit http://www.hpnext.com.

    HP Moonshot System Tour [HewlettPackardVideos YouTube channel, April 8, 2013]

    Kelly Pracht, Moonshot Hardware Platform Program Manager, HP, takes you on a private tour of the HP Moonshot System and introduces the foundational HW components of HP Project Moonshot. This video guides you around the entire system highlighting the cartridges and switches. http://hp.com/go/moonshot

    HP Moonshot System is Hot Pluggable [HewlettPackardVideos YouTube channel, April 8, 2013]

    “Show me around the HP Moonshot System!” Vicki Doehring, Moonshot Hardware Engineer, HP, shows us just how simple and intuitive it is to remove components in the HP Moonshot System. This video explains how HP’s hot pluggable technology works with the HP Moonshot System. http://hp.com/go/moonshot

    Alert for Microsoft: how and when will you have a system like this, with all the bells and whistles presented above, as well as the rich ecosystem of hardware and software partners given below?

    HP Pathfinder Innovation Ecosystem [HewlettPackardVideos YouTube channel, April 8, 2013]

    A key element of HP Moonshot, the HP Pathfinder Innovation Ecosystem brings together industry leading software and hardware partners to accelerate the development of workload optimized applications. http://hp.com/go/moonshot

    Software partners:

    What Linaro is saying about HP Moonshot [HewlettPackardVideos YouTube channel, April 8, 2013]

    Linaro discusses HP’s Project Moonshot and the cost, space, and efficiency innovations being enabled through the Pathfinder Innovation Ecosystem. http://hp.com/go/moonshot

    Alert for Microsoft:

    [0:11] The HP approach with Linaro is about forming an enterprise group. What they were hoping for, and what’s happened, is getting a bunch of companies together who are interested in taking the ARM architecture into the server space. [0:26]

    Canonical joins Linaro Enterprise Group (LEG) and commits Ubuntu Hyperscale Availability for ARM V8 in 2013 [press release, Nov 1, 2012]

      • Canonical continues its leadership of commercial deployment for ARM-based servers through membership of Linaro Enterprise Group (LEG)
      • Ubuntu, the only commercially supported OS for ARM v7 today, commits to support ARM v8 server next year
      • Ubuntu extends its position as the natural choice for hyperscale server computing with long term support

    … “Canonical has been supporting our work optimising and consolidating the Linux kernel since our founding in June 2010”, said George Grey, CEO of Linaro. “We’re very happy to welcome them as a member of the Linaro Enterprise Group, building on our relationship to help accelerate development of the ARM server software ecosystem.” …

    … “Calxeda has been thrilled with Canonical’s leadership in developing the ARM ecosystem”,  said Karl Freund, VP marketing at Calxeda. “These guys get it. They are driving hard and fast, already delivering enterprise-class code and support for Calxeda’s 32-bit product today to our mutual clients.  Working together in LEG will enable us to continue to build on the momentum we have already created.” …

    What Canonical is saying about HP Moonshot [HewlettPackardVideos YouTube channel, April 8, 2013]

    HP Moonshot and Ubuntu work together [Ubuntu partner site, April 9, 2013]

    … Ubuntu, as the lead operating system platform for x86 and ARM-based HP Moonshot Systems, featured extensively at the launch of the program in April 2013. …
    Ubuntu Server is the only OS fully operational today across HP Moonshot x86 and ARM servers, launched in April 2013.
    Ubuntu is recognised as the leader in scale out and Hyperscale. Together, Canonical and HP are delivering massive reductions in data-center energy, space and costs. …

    “Canonical has been working with HP for the past two years on HP Moonshot, and with Ubuntu, customers can achieve higher performance with greater manageability across both x86 and ARM chip sets” – Paul Santeler, VP & GM, Hyperscale Business Unit, HP

    Ubuntu & HP’s project Moonshot [Canonical blog, Nov 2, 2011]

    Today HP announced Project Moonshot – a programme to accelerate the use of low power processors in the data centre.
    The three elements of the announcement are the launch of Redstone – a development platform that harnesses low-power processors (both ARM & x86),  the opening of the HP Discovery lab in Houston and the Pathfinder partnership programme.
    Canonical is delighted to be involved in all three elements of HP’s Moonshot programme to reduce both power and complexity in data centres.
    The HP Redstone platform unveiled in Palo Alto showcases HP’s thinking around highly federated environments and Calxeda’s EnergyCore ARM processors. The Calxeda system on chip (SoC) design is powered by Calxeda’s own ARM based processor and combines mobile phone like power consumption with the attributes required to run a tangible proportion of hyperscale data centre workloads.
    The promise of server-grade SoCs running at less than 5W and achieving per-rack density of 2800+ nodes is impressive, but what about the software stacks that are used to run the web and analyse big data – when will they be ready for this new architecture?
    Ubuntu Server is increasingly the operating system of choice for web, big data and cloud infrastructure workloads. Films like Avatar are rendered on Ubuntu, Hadoop is run on it and companies like Rackspace and HP are using Ubuntu Server as the foundation of their public cloud offerings.
    The good news is that Canonical has been working with ARM and Calxeda for several years now and we released the first version of Ubuntu Server ported for ARM Cortex A9 class  processors last month.
    The Ubuntu 11.10 release (download) is a functioning port, and over the next six months we will be working hard to benchmark and optimize Ubuntu Server and the workloads that our users prioritize on ARM. This work, by us and by upstream open source projects, is going to be accelerated by today’s announcement and access to hardware in the HP Discovery lab.
    As HP stated today, this is the beginning of a journey to reinvent a power-efficient and less complex data center. We look forward to working with HP and Calxeda on that journey.

    The biggest enterprise alert for Microsoft, because of what was discussed in Will Microsoft Stand Out In the Big Data Fray? [Redmondmag.com, March 22, 2013]: What NuoDB is saying about HP Moonshot [HewlettPackardVideos YouTube channel, April 9, 2013], especially as it is a brand new offering; see NuoDB Announces General Availability of Industry’s First & Only Cloud Data Management System at Live-Streamed Event [press release, Jan 15, 2013], now available in archive at this link: http://go.nuodb.com/cdms-2013-register-e.html

    Barry Morris, founder and CEO of NuoDB discusses HP’s Project Moonshot and the database innovations delivered by the combined offering

    Extreme density on HP’s Project Moonshot [NuoDB Techblog, April 9, 2013]

    A few months ago HP came to us with something very cool. It’s called Project Moonshot, and it’s a new way of thinking about how you design infrastructure. Essentially, it’s a composable system that gives you serious flexibility and density.

    A single Moonshot System is 4.3u tall and holds 45 independent servers connected to each other via 1-Gig Ethernet. There’s a 10-Gig Ethernet interface to the system as a whole, and management interfaces for the system and each individual server. The long-term design is to have servers that provide specific capabilities (compute, storage, memory, etc.) and can scale to up to 180 nodes in a single 4.3u chassis.
    The initial system, announced this week, comes with a single server configuration: an Intel Atom S1260 processor, 8 Gigabytes of memory and either a 200GB SSD or a 500GB HDD. On its own, that’s not a powerful server, but when you put 45 of these into a 4.3 rack-unit space you get something in aggregate that has a lot of capacity while still drawing very little power (see below). The challenge, then, is how to really take advantage of this collection of servers.
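Multiplying out the per-server specification gives a sense of what "a lot of capacity in aggregate" means. A rough sketch using only the figures quoted above:

```python
# Rough aggregate capacity of one fully-populated chassis, multiplying out the
# per-server figures quoted above (8 GB RAM, 200 GB SSD or 500 GB HDD).
SERVERS = 45
RAM_GB_PER_SERVER = 8
SSD_GB_PER_SERVER = 200
HDD_GB_PER_SERVER = 500

total_ram_gb = SERVERS * RAM_GB_PER_SERVER         # 360 GB of RAM
total_ssd_tb = SERVERS * SSD_GB_PER_SERVER / 1000  # 9.0 TB if all-SSD
total_hdd_tb = SERVERS * HDD_GB_PER_SERVER / 1000  # 22.5 TB if all-HDD
print(total_ram_gb, total_ssd_tb, total_hdd_tb)
```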

    NuoDB on Project Moonshot: Density and Efficiency

    We’ve shown how NuoDB can scale a single database to large transaction rates. For this new system, however, we decided to try a different approach. Rather than make a single database scale to large volume we decided to see how many individual, smaller databases we could support at the same time. Essentially, could we take a fully-configured HP Project Moonshot System and turn it into a high-density, low-power, easy to manage hosting appliance?

    To put this in context, think about a web site that hosts blogs. Typically, each blog is going to have a single database supporting it (just like this blog you’re reading). The problem is that while a few blogs will be active all the time, most of them see relatively light traffic. This is known as a long-tail pattern. Still, because the blogs always need to be available, the backing databases also need to be running at all times.

    This leads to a design trade-off. Do you map the blogs to a single database (breaking isolation and making management harder) or somehow try to juggle multiple database instances (which is hard to automate, expensive in resource-usage and makes migration difficult)? And what happens when a blog suddenly takes off in popularity? In other words, how do you make it easy to manage the databases and make resource-utilization as efficient as possible so you don’t over-spend on hardware?

    As I’ve discussed on this blog NuoDB is a multi-tenant system that manages individual databases dynamically and efficiently. That should mean that we’re a perfect fit for this very cool (pun intended) new system from HP.

    The Design

    After some initial profiling on a single server, we came up with a goal: support 7,200 active databases. You can read all about how we did the math, but essentially this was a balance between available CPU, Memory, Disk and bandwidth. In this case a “database” is a single Transaction Engine and Storage Manager pair, running on one of the 45 available servers.
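The post links out for the detailed math, but the implied per-server density follows directly from the numbers above. A quick sketch (the per-server split and the ~51 MB figure are our inference, not NuoDB's stated numbers):

```python
# How the 7,200-active-database goal breaks down per server. The post doesn't
# spell out the per-server split, so this is just the implied arithmetic.
ACTIVE_DB_TARGET = 7_200
SERVERS = 45
RAM_MB_PER_SERVER = 8 * 1024    # 8 GB per server

active_per_server = ACTIVE_DB_TARGET // SERVERS     # 160 active databases
ram_per_active_db = RAM_MB_PER_SERVER / active_per_server
print(f"{active_per_server} databases/server, ~{ram_per_active_db:.0f} MB RAM each")
```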

    When we need to start a database, we pick the server that’s least-utilized. We choose this based on local monitoring at each server that is rolled up through the management tier to the Connection Brokers. It’s simple to do given all that NuoDB already provides, and because we know what each server supports it lets us calculate a single capacity percentage.
    It gets better. Because a NuoDB database is made of an agile collection of processes, it’s very inexpensive to start or stop a database. So, in addition to monitoring for server capacity we also watch what’s going on inside each database, and if we think it’s been idle long enough that something else could use the associated resources more effectively we shut it down. In other words, if a database isn’t doing anything active we stop it to make room for other databases.
    When an SQL client needs to access that database, we simply re-start it where there are available resources. We call this mechanism hibernating and waking a database. This on-demand resource management means that while there are some number of databases actively running, we can really support a much larger number in total (remember, we’re talking about applications that exhibit a long-tail access pattern). With this capability, our original goal of 7,200 active databases translates into 72,000 total supported databases. On a single 4.3u System.
    The final piece we added is what we call database bursting. If a single database gets really popular it will start to take up too many resources on a single server. If you provision another server, separate from the Moonshot System, then we’ll temporarily “burst” a high-activity database to that new host until activity dies down. It’s automatic, quick and gives you on-demand capacity support when something gets suddenly hot.
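The placement, hibernation and bursting behavior described above can be sketched as a simple capacity-driven scheduler. This is NOT NuoDB's implementation; all names, thresholds and the per-server capacity figure below are invented for illustration:

```python
# Minimal sketch of the placement, hibernation and bursting policy described
# above. This is NOT NuoDB's implementation -- names and thresholds are
# assumptions made for illustration only.
import time

IDLE_HIBERNATE_SECS = 300    # hibernate a database idle this long (assumption)
BURST_UTILIZATION = 0.90     # burst a hot database off-chassis above this
DBS_PER_SERVER = 160         # implied by 7,200 active databases / 45 servers

class Server:
    def __init__(self, name, burst_host=False):
        self.name = name
        self.burst_host = burst_host     # separately provisioned "burst" host
        self.databases = {}              # db name -> last-active timestamp

    @property
    def utilization(self):
        # Stand-in for the rolled-up capacity percentage each server reports.
        return len(self.databases) / DBS_PER_SERVER

def start_database(servers, db_name):
    """Wake or start a database on the least-utilized chassis server."""
    target = min((s for s in servers if not s.burst_host),
                 key=lambda s: s.utilization)
    target.databases[db_name] = time.time()
    return target

def hibernate_idle(servers, now):
    """Stop databases idle long enough that others could use the resources."""
    for server in servers:
        for db, last_active in list(server.databases.items()):
            if now - last_active > IDLE_HIBERNATE_SECS:
                del server.databases[db]   # hibernated; re-woken on next access

def maybe_burst(server, db_name, burst_host):
    """Temporarily move a hot database to a separately provisioned host."""
    if server.utilization > BURST_UTILIZATION and db_name in server.databases:
        burst_host.databases[db_name] = server.databases.pop(db_name)
```

Under this sketch a database costs nothing while hibernated, is re-woken on the least-loaded server when a client reconnects, and only leaves the chassis when a server crosses the burst threshold.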
    The Tests
    I’m not going to repeat too much here about how we drove our tests. That’s already covered in the discussion on how we’re trying to design a new kind of benchmark focused on density and efficiency. You should go check that out … it’s pretty neat. Suffice it to say, the really critical thing to us in all of this was that we were demonstrating something that solves a real-world problem under real-world load.
    You should also go read about how we setup and ran on a Moonshot System. The bottom-line is that the system worked just like you’d expect, and gave us the kinds of management and monitoring features to go beyond basic load testing.
    The Results
    We were really lucky to be given access to a full Moonshot System. It gave us a chance to test out our ideas, and we actually were able to do better than our target. You can see this in the view from our management interface running against a real system under our benchmark load. You can see there that when we hit 7200 active databases we were only at about 70% utilization, so there was a lot more room to grow. Huge thanks to HP for giving us time on a real Moonshot System to see all those ideas work!

    Something that’s easy to lose track of in all this discussion is the question of power. Part of the value proposition from Project Moonshot is in energy efficiency, and we saw that in spades. Under load a single server only draws 18 Watts, and the system infrastructure is closer to 250 Watts. Taken together, that’s a seriously dense system that is using very little energy for each database.

    Bottom Line
    We were psyched to have the chance to test on a Moonshot System. It gave us the chance to prove out ideas around automation and efficiency that we’ll be folding into NuoDB over the next few releases. It also gave us the perfect platform to put our architecture through its paces and validate a lot about the flexibility of our core architecture.
    We’re also seriously impressed by what we experienced from Project Moonshot itself. We were able to create something self-contained and easy to manage that solves a real-world problem. Couple that with the fact that a Moonshot System draws so little power, the Total Cost of Ownership is impressively low.  That’s probably the last point to make about all this: the combination of our two technologies gave us something where we could talk concretely about capacity and TCO, something that’s usually hard to do in such clear terms.
    In case it’s not obvious, we’re excited. We’ve already been posting this week about some ideas that came out of this work, and we’ll keep posting as the week goes on. Look for the moonshot tag and please follow-up with comments if you’re curious about anything specific and would like to hear more!

    Project Moonshot by the Numbers [NuoDB Techblog, April 9, 2013]

    To really understand the value from HP Project Moonshot you need to think beyond the list price of one system and focus instead on the Total Cost of Ownership. Figuring out the TCO for a server running arbitrary software is often a hard (and thankless?) task, so one of the things we’ve tried to do is not just demonstrate great technology but something that naturally lets you think about TCO in a simple way. We think the final metrics are pretty simple, but to get there requires a little math.

    Executive Summary

    If you’re a CIO, and just want to know the bottom line, then we’ll ruin the suspense and cut to the chase. It will cost you about $70,500 up-front, $1,800 in your first year’s electricity bills and take 8.3 rack-units to support the web-front end and database back-end for 72,000 blogs under real-world load.

    Cost of a Single Database
    Recall that we set the goal at 72,000 databases within a single system. At launch the list price for a fully-configured Moonshot System is around $60,000, so we start out at 83 cents per-database. In practice we’re seeing much higher capacity in our tests, but let’s start with this conservative number.
    Now consider the power used by the system. From what we’ve measured through the iLO interfaces a single server draws no more than 18 Watts at peak load (measured against CPU and IO activity). The System itself (fans, switches etc.) draws around 250 Watts in our tests. That means that under full load each database is drawing about .015 Watts.
    NuoDB is a commercial software offering, which means that you pay up-front to deploy the software (and get support as part of that fee). For anyone who wants to run a Moonshot System in production as a super-dense NuoDB appliance we’ll offer you a flat-rate license.
    Put together, we can say that the cost per database-watt is 1.22 cents. That’s on a 4.3 rack-unit system. Awesome.
    Quantify the Supported Load
    As we discussed in our post on benchmarking, we’re trying to test under real-world load. As a simple starting-point we chose a profile based on WordPress because it’s fairly ubiquitous and has somewhat serious transactional requirements. In our benchmarking discussion we explain that a typical application action (post, read, comment) does around 20 SQL operations.
    Given 72,000 databases most of these are fairly inactive, so on average we’ll say that each database gets about 250 hits a day (generous by most reports I’ve seen). That’s 18,000,000 hits a day or 208 hits per-second. 4,166 SQL statements a second isn’t much for a single database, but it’s pretty significant given that we’re spreading it across many databases some of which might have to be “woken” on-demand.
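The traffic arithmetic above can be verified in a few lines (all figures are from the post itself):

```python
# Checking the load arithmetic quoted above.
DATABASES = 72_000
HITS_PER_DB_PER_DAY = 250
SQL_OPS_PER_HIT = 20
SECONDS_PER_DAY = 86_400

hits_per_day = DATABASES * HITS_PER_DB_PER_DAY   # 18,000,000 hits/day
hits_per_sec = hits_per_day / SECONDS_PER_DAY    # ~208 hits/second
sql_per_sec = hits_per_sec * SQL_OPS_PER_HIT     # ~4,166 SQL statements/second
print(f"{hits_per_day:,} hits/day = {int(hits_per_sec)} hits/s = {int(sql_per_sec)} SQL/s")
```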
    HP was generous enough not only to give us time on a Moonshot System but also access to some co-located servers for driving our load tests. In this case, 16 lower-powered ARM-based Calxeda systems that all went through the same 1-Gig ethernet connection to our Moonshot System. These came from HP’s Discovery Lab; check out our post about working with the Moonshot System for more details.
    From these load-drivers we were able to run our benchmark application with up to 16 threads per server, simulating 128 simultaneous clients. In this case a typical “client” would be a web server trying to respond to a web client request. We averaged around 320 hits per-second, well above the target of 208. From what we could observe, we expect that given more capable network and client drivers we would be able to get 3 or 4 times that rate easily.
    Tangible Cost
    We have the cost of the Moonshot System itself. We also know that it can support expected load from a fairly small collection of low-end servers. In our own labs we use systems that cost around $10,000, fit in 3 rack-units and would be able to drive at least the same kind of load we’re citing here. Add a single switch at around $500 and you have a full system ready to serve blogs. That’s $70,500 total in 8.3 rack units, still under $1 per database.
    I don’t know what power costs you have in your data center, but I’ve seen numbers ranging from 2.5 to 25 cents per Kilowatt-Hour. In our tests, where we saw .015 Watts per-database, if you assume an average rate of 13.75 cents per KwH that comes out to .00020625 cents per-hour per-database in energy costs. In one year, with no down-time, that would cost you $1,276.77 in total electricity fees.
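The $1,276.77 figure follows from the measured power numbers rather than the rounded per-database wattage; a quick reproduction using the figures quoted above:

```python
# Reproducing the first-year electricity estimate: 45 servers at 18 W plus
# ~250 W of shared chassis infrastructure, at the assumed average rate of
# 13.75 cents per kWh, running all year with no down-time.
SERVERS, WATTS_PER_SERVER = 45, 18
CHASSIS_WATTS = 250
USD_PER_KWH = 0.1375
HOURS_PER_YEAR = 24 * 365

total_watts = SERVERS * WATTS_PER_SERVER + CHASSIS_WATTS   # 1,060 W under load
kwh_per_year = total_watts * HOURS_PER_YEAR / 1000         # 9,285.6 kWh
annual_cost = kwh_per_year * USD_PER_KWH
print(f"${annual_cost:,.2f} per year")   # → $1,276.77 per year
```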
    Just as an aside, according to the New York Times, Facebook uses around 60,000,000 Watts a year!
    One of the great things about a Moonshot System is that the 45 servers are already being switched inside the chassis. This means that you don’t need to buy switches & cabling, and you don’t need to allocate all the associated space in your racks. For our systems administrator that alone would make him very happy.
    Intangible Cost
    What I haven’t been talking about in all of this are the intangible costs. This is where figuring out TCO becomes harder.
    For instance, one of the value-propositions here is that the Moonshot System is a self-contained, automated component. That means that systems administrators are freed up from the tasks of figuring out how to allocate and monitor databases, and how to size the data-center for growth. Database developers can focus more easily on their target applications. CIOs can spend less time staring at spreadsheets … or, at least, can allocate more time to spreadsheets on different topics.
    Providing a single number in terms of capacity makes it easy to figure out what you need in your datacenter. When a single server within a Moonshot System fails you can simply replace it, and in the meantime you know that the system will still run smoothly just with slightly lower capacity. From a provisioning point of view, all you need to figure out is where your ceiling is and how much stand-by capacity you need to have at the ready.
    NuoDB by its nature is dynamic, even when you’re doing upgrades. This means that you can roll through a running Moonshot System applying patches or new versions with no down-time. I don’t know how you calculate the value in saved cost here, but you probably do!
    Comparisons and Planned Optimizations
    It’s hard to do an “apples-to-apples” comparison against other database software here. Mostly, this is because other databases aren’t designed to be dynamic enough to support hibernation, bursting and capacity-based automated balancing. So, you can’t really get the same levels of density, and a lot of the “intangible” cost benefits would go away.
    Still, to be fair, we tried running MySQL on the same system and under the same benchmarks. We could indeed run 7200 instances, although that was already hitting the upper-bounds of memory/swap. In order to get the same density you would need 10 Moonshot Systems, or you would need larger-powered expensive servers. Either way, the power, density, automation and efficiency savings go out the window, and obviously there’s no support for bursting to more capable systems on-demand.
    Unsurprisingly, the response time was faster on-average (about half the time) from MySQL instances. I say “unsurprisingly” for two reasons. First, we tried to use schema/queries directly from WordPress to be fair in our comparison, and these are doing things that are still known to be less-optimized in NuoDB. They’re also in the path of what we’re currently optimizing and expect to be much faster in the near-term.
    The second is that NuoDB clients were originally designed assuming longer-running connections (or pooled connections) to databases that always run with security & encryption enabled. We ran all of our tests in our default modes to be fair. That means we’re spending more time on each action setting up & tearing down a connection. We’ve already been working on optimizations here that would shrink the gap pretty substantially.
    In the end, however, our response time is still on the order of a few hundred milliseconds worst-case, and is less important than the overall density and efficiency metrics that we proved out. We think the value in terms of ease of use, density, flexibility on load spikes and low-cost speaks for itself. This setup is inexpensive by comparison to deploying multiple servers and supports what we believe is real-world load. Just wait until the next generation of HP Project Moonshot servers roll out and we can start scaling out individual databases at the same time!

    More information:
    Benchmarking Density & Efficiency [NuoDB Techblog, April 9, 2013]
    Database Hibernation and Bursting [NuoDB Techblog, April 8, 2013]
    An Enterprise Management UI for Project Moonshot [NuoDB Techblog, April 9, 2013]
    Regarding the cloud based version of NuoDB see:
    NuoDB Partners with Amazon [press release, March 26, 2013]
    NuoDB Extends Database Leadership in Scalability & Performance on a Private Cloud [press release, March 14, 2013] “… the industry’s first and only patented, elastically scalable Cloud Data Management System (CDMS), announced performance of 1.84 million transactions per second (TPS) running on 32 machines. … With NuoDB Starlings release 1.0.1, available as of March 1, 2013, the company has made advancements in performance and scalability and customers can now experience 26% improvement in TPS per machine. …”
    Google Compute Engine: interview with NuoDB [GoogleDevelopers YouTube channel, March 21, 2013]

    Meet engineers from NuoDB: an elastically scalable SQL database built for the cloud. We will learn about their approach to distributed SQL databases and get a live demo. We’ll cover the steps they took to get NuoDB running on Google Compute Engine, talk about how they evaluate infrastructure (both physical hardware and cloud), and reveal the results of their evaluation of Compute Engine performance.

    Actually Calxeda was best to explain the preeminence of software over the SoC itself:
    Karl Freund from Calxeda – HP Moonshot 2013 – theCUBE [siliconangle YouTube channel, April 8, 2013], see also HP Moonshot: It’s a lot closer than it looks! [Calxeda’s ‘ARM Servers, Now!’ blog, April 8, 2013]

    Karl Freund, VP of Marketing, Calxeda, at HP Moonshot 2013 with John Furrier and Dave Vellante.

    as well as ending with Calxeda’s very practical, gradual approach to the ARM based server market with things like:

    [16:03] Our 2nd generation platform called Midway, which will be out later this year [in the 2nd half of the year], that’s probably the target for Big Data. Our current product is great for web serving, it’s great for media serving, it’s great for storage. It doesn’t have enough memory for Big Data … at large. So we’ll be getting that 2nd generation product out, and that should be a really good Big Data platform. Why? Because it’s low power, it’s low cost, but it’s also got a lot of I/O. Big Data is all about moving a lot of data around. And if you do that more cost effectively you save a lot of money. [16:38]

    mentioning also that their strategy is using standard ARM cores like the Cortex-A57 for their H1 2014 product, and focus on things like the fabric and the management, which actually allows them to work with a streamlined staff of around 150 people.

    Detailed background about Calxeda in a concise form:
    Redefining Datacenter Efficiency: An Overview of Calxeda’s architecture and early performance measurements [Karl Freund, Nov 12, 2012] from where the core info is:

      • Founded in 2008   
      • $103M Funding       
      • 1st Product Announced with HP,  Nov  2011   
      • Initial Shipments in Q2 2012   
      • Volume production in Q4 2012


    * The power consumed under normal operating conditions under full application load (i.e., 100% CPU utilization)

    A small Calxeda Cluster: a Simple Example
    • Start with four ServerNodes
    • Consumes only 20W total power   
    • Connected via distributed fabric switches   
    • Connect up to 4 SATA drives per node   
    • Then scale this to thousands of ServerNodes
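The four-node example above is consistent with the "less than 5W" per-SoC figure quoted earlier in the post; the implied per-node figure:

```python
# Per-node power implied by the four-node, 20 W example cluster above,
# in line with the ~5 W server-grade SoC figure quoted earlier in the post.
CLUSTER_WATTS, NODES = 20, 4
watts_per_node = CLUSTER_WATTS / NODES
print(watts_per_node)   # → 5.0
```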

    EnergyCard: a Quad-Node Reference Design

      • Four-node reference platform from Calxeda
      • Available as product and/or design
      • Plugs into OEM system board with passive fabric, no additional switch HW
        EnergyCard delivers 80Gb Bandwidth to the system board. (8 x 10Gb links)



    It is also important to have a look at what were the Open Source Software Packages for Initial Calxeda Shipments [Calxeda’s ‘ARM Servers, Now!’ blog, May 24, 2012]

    We are often asked what open-source software packages are available for initial shipments of Calxeda-based servers.

    Here’s the current list (changing frequently).  Let us know what else you need!

    [image: table of the open-source software packages available for initial Calxeda shipments]

    Then Perspectives From Linaro Connect [Calxeda’s ‘ARM Servers, Now!’ blog, March 20, 2013] sheds more light on the recent software alliances which help Calxeda deliver:

    – From Larry Wikelius,   Co-Founder and VP Ecosystems,  Calxeda:

    The most recent Linaro Connect (Linaro Connect Asia 2013 – LCA), held in Hong Kong the first week of March, really put a spotlight on the incredible momentum around ARM based technology and products moving into the Data Center.  Yes – you read that correctly – the DATA CENTER!

    When Linaro was originally launched almost three years ago the focus was exclusively on the mobile and client market – where ARM has and continues to be dominant.  However, as Calxeda has demonstrated, the opportunity for the ARM architecture goes well beyond devices that you carry in your pocket.  Calxeda was a key driver in the formation of the Linaro Enterprise Group (LEG), which was publicly launched at the previous LinaroConnect event in Copenhagen in early November, 2012.

    LEG has been an exciting development for Linaro and now has 13 member companies that include server vendors such as Calxeda, Linux distribution companies Red Hat and Canonical, OEM representation from HP and even Hyperscale Data Center end user Facebook.  There were many sessions throughout the week that focused on Server specific topics such as UEFI, ACPI, Virtualization, Hyperscale Testing with LAVA and Distributed Storage.  Calxeda was very active throughout the week with the team participating directly in a number of roadmap definition sessions, presenting on Server RAS and providing guidance in key areas such as application optimization and compiler focus for Servers.

    Linaro Connect is proving to be a tremendous catalyst for the growing eco-system around the ARM software community as a whole and the server segment in particular. A great example of this was the keynote presentation given jointly by Mark Heath and Lars Kurth from Citrix on Tuesday morning. Mark is the VP of XenServer at Citrix and Lars is well known in the OpenSource community for his work with Xen. The most exciting announcement coming out of Mark’s presentation is that Citrix will be joining Linaro as a member of LEG. Citrix will certainly prove to be another valuable member of the Linaro team, and during the week attendees were able to appreciate how serious Citrix is about supporting ARM servers. The Xen team has not only added full support for ARM V7 systems in the Xen 4.3 release but has also accomplished some very impressive optimizations for the ARM platform. The Xen team has leveraged Device Tree for optimal device discovery. Combined with a number of other code optimizations, they showed a dramatically smaller code base for the ARM platform. We at Calxeda are thrilled to welcome Citrix into LEG!

    As an indication of the draw that the Linaro Connect conference is already having on the broader industry, the Open Compute Project (OCP) held its first international event coincident with LCA at the same venue.  The synergy between Linaro and OCP is significant, with both organizations emphasizing open-source development (one software and one hardware) along with the dramatically changing design points for today’s hyperscale data center.  In fact, the keynote at LCA on Wednesday morning really put a spotlight on how significant this is likely to be.  Jason Taylor, Director of Capacity Engineering and Analysis at Facebook, presented on Facebook’s approach to ARM-based servers.  Facebook’s consumption of data center equipment is quite stunning – Jason quoted from Facebook’s 10-Q filed in October 2012, which stated that “The first nine months of 2012 … $1.0 billion for capital expenditures” related to data center equipment and infrastructure.  Clearly, with this level of investment, Facebook is extremely motivated to optimize where possible.  Jason focused on the strategic opportunity for ARM-based servers in a disaggregated data center of the future to provide lower-cost computing capabilities with much greater flexibility.

    Calxeda has been very active in building the server ecosystem for ARM-based servers.  This week in Hong Kong really underscored how important that investment has become – not just for Calxeda but for the industry as a whole.  Our commitment to open-source software development in general, and Linaro in particular, has resulted in a thriving Linux infrastructure for ARM servers that Calxeda can leverage, letting us focus on key differentiation for our end users.  The Open Compute Project, in which we are an active member and to which we have contributed key projects such as the Knockout Storage design and the Open Slot Specification, demonstrates how open-source approaches to software and hardware can complement each other and drive data center innovation.  We are early in this journey but it is very exciting!

    Calxeda will continue to invest aggressively in forums and industry groups such as these to drive the ARM-based server market.  We look forward to continuing to work with the incredibly innovative partners that are members of these groups, and we are confident that more will join this exciting revolution.  If you are interested in more information on these events and activities, please reach out to us directly at info@calxeda.com.

    The next Linaro Connect is scheduled for early July in Dublin. We expect more exciting events and topics, and hope to see you there!

    On their blog they also refer to Mobile, cloud computing spur tripling of micro server shipments this year [IHS iSuppli press release, Feb 6, 2013], which shows the general market situation well into the future:

    Driven by booming demand for new data center services for mobile platforms and cloud computing, shipments of micro servers are expected to more than triple this year, according to an IHS iSuppli Compute Platforms Topical Report from information and analytics provider IHS (NYSE: IHS).
    Shipments this year of micro servers are forecast to reach 291,000 units, up 230 percent from 88,000 units in 2012. Shipments of micro servers commenced in 2011 with just 19,000 units. However, shipments by the end of 2016 will rise to some 1.2 million units, as shown in the attached figure.

    [Figure: worldwide micro server shipment forecast, 2011–2016]

    The penetration of micro servers compared to total server shipments amounted to a negligible 0.2 percent in 2011. But by 2016, the machines will claim a penetration rate of more than 10 percent—a stunning fiftyfold jump.
    Micro servers are general-purpose computers, housing single or multiple low-power microprocessors and usually consuming less than 45 watts in a single motherboard. The machines employ shared infrastructure such as power, cooling and cabling with other similar devices, allowing for an extremely dense configuration when micro servers are cascaded together.
    “Micro servers provide a solution to the challenge of increasing data-center usage driven by mobile platforms,” said Peter Lin, senior analyst for compute platforms at IHS. “With cloud computing and data centers in high demand in order to serve more smartphones, tablets and mobile PCs online, specific aspects of server design are becoming increasingly important, including maintenance, expandability, energy efficiency and low cost. Such factors are among the advantages delivered by micro servers compared to higher-end machines like mainframes, supercomputers and enterprise servers—all of which emphasize performance and reliability instead.”
    Server Salad Days
    Micro servers are not the only type of server that will experience rapid expansion in 2013 and the years to come. Other high-growth segments of the server market are cloud servers, blade servers and virtualization servers.
    The distinction of fastest-growing server segment, however, belongs solely to micro servers.
    The compound annual growth rate for micro servers from 2011 to 2016 stands at a remarkable 130 percent—higher than that of the entire server market by a factor of 26. Shipments will rise by double- and even triple-digit percentages for each year during the period.
    Key Players Stand to Benefit
    Given the dazzling outlook for micro servers, makers with strong product portfolios of the machines will be well-positioned during the next five years—as will their component suppliers and contract manufacturers.
    A slew of hardware providers are in line to reap benefits, including microprocessor vendors like Intel, ARM and AMD; server original equipment manufacturers such as Dell and Hewlett-Packard; and server original development manufacturers including Taiwanese firms Quanta Computer and Wistron.
    Among software providers, the list of potential beneficiaries from the micro server boom extends to Microsoft, Red Hat, Citrix and Oracle. For the group of application or service providers that offer micro servers to the public, entities like Amazon, eBay, Google and Yahoo are foremost.
    The most aggressive bids for the micro server space come from Intel and ARM.
    Intel first unveiled the micro server concept and reference design in 2009, ostensibly to block rival ARM from entering the field.
    ARM, the leader for many years in the mobile world with smartphone and tablet chips because of the low-power design of its central processing units, has been just as eager to enter the server arena—dominated by x86 chip architecture from the likes of Intel and a third chip player, AMD. ARM faces an uphill battle, as the majority of server software is written for x86 architecture. Shifting from x86 to ARM will also be difficult for legacy products.
    ARM, however, is gaining greater support from software and OS vendors, which could potentially put pressure on Intel in the coming years.
    Read More > Micro Servers: When Small is the Next Big Thing
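    The growth figures quoted in the press release can be sanity-checked with a few lines of arithmetic (the unit counts are taken from the text above; this is just a quick verification sketch, not IHS methodology):

    ```python
    # Micro server unit shipments (thousands), as quoted from IHS iSuppli.
    shipments = {2011: 19, 2012: 88, 2013: 291, 2016: 1200}

    # 2013 vs 2012: 291k up from 88k is roughly a 230 percent increase.
    yoy_growth = shipments[2013] / shipments[2012] - 1

    # 2011-2016 compound annual growth rate, quoted as ~130 percent.
    years = 2016 - 2011
    cagr = (shipments[2016] / shipments[2011]) ** (1 / years) - 1

    # Penetration rising from 0.2% to more than 10% is the quoted fiftyfold jump.
    penetration_jump = 10.0 / 0.2

    print(f"2013 year-over-year growth: {yoy_growth:.0%}")        # ~231%
    print(f"2011-2016 CAGR:            {cagr:.0%}")               # ~129%
    print(f"Penetration multiple:      {penetration_jump:.0f}x")  # 50x
    ```

    All three computed values line up with the press release’s 230 percent, 130 percent and fiftyfold figures.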

    Then there are a number of Intel competitive posts on Calxeda’s ‘ARM Servers, Now!’ blog:
    What is a “Server-Class” SOC? [Dec 12, 2012]
    Comparing Calxeda ECX1000 to Intel’s new S1200 Centerton chip [Dec 11, 2012]
    which you can also find in my Intel targeting ARM based microservers: the Calxeda case [‘Experiencing the Cloud’ blog, Dec 14, 2012], with significantly broader additional information, up to binary translation from x86 to ARM with Linux

    See also:
    ARM Powered Servers: 2013 is off to a great start & it is only March! [Smart Connected Devices blog of ARM, March 6, 2013]
    Moonshot – a shot in the ARM for the 21st century data center [Smart Connected Devices blog of ARM, April 9, 2013]
    Are you running out of data center space? It may be time for a new server architecture: HP Moonshot [Hyperscale Computing Blog of HP, April 8, 2013]
    HP Moonshot: the HP Labs team that did some of the groundbreaking research [Innovation @ HP Labs blog of HP, April 9, 2013]
    HP Moonshot: An Accelerator for Hyperscale Workloads [Moor Insights White Paper, April 8, 2013]
    Comparing Pattern Mining on a Billion Records with HP Vertica and Hadoop [HP Vertica blog, April 9, 2013], in which a team of HP Labs researchers shows how the Vertica Analytics Platform can be used to find patterns in a billion records in a couple of minutes, about 9x faster than Hadoop.
    PCs and cloud clients are not parts of Hewlett-Packard’s strategy anymore [‘Experiencing the Cloud’, Aug 11, 2011 – Jan 17, 2012] see the Autonomy IDOL related content there
    ENCO Systems Selects HP Autonomy for Audio and Video Processing [HP Autonomy press release, April 8, 2013]

    HP Autonomy today announced that ENCO Systems, a global provider of radio automation and live television audio solutions, has selected Autonomy’s Intelligent Data Operating Layer (IDOL) to upgrade ENCO’s latest-generation enCaption product.

    ENCO Systems provides live automated captioning solutions to the broadcast industry, leveraging technology to deliver closed captioning by taking live audio data and turning it into text. ENCO Systems is capitalizing on IDOL’s unique ability to understand meaning, concepts and patterns within massive volumes of spoken and visual content to deliver more accurate speech analytics as part of enCaption3.

    “Many television stations count on ENCO to provide real-time closed captioning so that all of their viewers get news and information as it happens, regardless of their auditory limitations,” said Ken Frommert, director, Marketing, ENCO Systems. “Autonomy IDOL helps us provide industry-leading automated closed captioning for a fraction of the cost of traditional services.”
    enCaption3 is the only fully automated speech recognition-based closed captioning system for live television that does not require speaker training. It gives broadcasters the ability to caption their programming, including breaking news and weather, any time, day or night, since it is always on and always available. enCaption3 provides captioning in near real time – with only a 3 to 6 second delay – in nearly 30 languages.
    “Television networks are under increasing pressure to provide real-time closed captioning services – they face fines if they don’t, and their growing and diverse viewers demand it,” said Rohit de Souza, general manager, Power, HP Autonomy. “This is another example of a technology company integrating Autonomy IDOL to create a stronger, faster and more accurate product offering, and demonstrates yet another powerful way in which IDOL can be applied to help organizations succeed in the human information era.”

    Using Big Data to change the game in the Energy industry [Enterprise Services Blog of HP, Oct 24, 2012]

    … Tools like HP’s Autonomy that analyzes the unstructured data found in call recordings, survey responses, chat logs, e-mails, social media posts and more. Autonomy’s Intelligent Data Operating Layer (IDOL) technology uses sophisticated pattern-matching techniques and probabilistic modeling to interpret information in much the same way that humans do. …

    Stouffer Egan turns the tables on computers in keynote address at HP Discover [Enterprise Services Blog of HP, June 8, 2012]

    For decades now, the human mind has adjusted itself to computers by providing and retrieving structured data in two-dimensional worksheets with constraints on format, data types, lists of values, etc. But this is not the way the human mind has been architected to work. Our minds have the uncanny ability to capture the essence of what is being conveyed in a facial expression in a photograph, the tone of voice or inflection in an audio recording, and the body language in a video. At the HP Discover conference, Autonomy VP for the United States, Stouffer Egan, showed the audience how software can begin to do what the human mind has been doing since the dawn of time. In a demonstration where Iron Man came alive out of a two-dimensional photograph, Egan turned the tables on computers. It is about time computers started thinking like us rather than forcing us to think like them.
    Egan states that the “I” in IT is where the change is happening. We have a newfound wealth of data through various channels including video, social, click stream, audio, etc. However, data unprocessed without any analysis is just that — raw data. For enterprises to realize business value from this unstructured data, we need tools that can process it across multiple media. Imagine software that recognizes the picture in a photograph and searches for a video matching the person in the picture. The cover page of a newspaper showing a basketball star doing a slam dunk suddenly turns live pulling up the video of this superstar’s winning shot in last night’s game. …


    2. Software Partners

    HP Moonshot is setting the roadmap for next generation data centers by changing the model for density, power, cost and innovation. Ubuntu has been designed to meet the needs of Hyperscale customers and, combined with its management tools, is ideally suited to be the operating system platform for HP Moonshot. Canonical has been working with HP since the beginning of the Moonshot Project, and Ubuntu is the only OS integrated and fully operational across the complete Moonshot System covering x86 and ARM chip technologies.
    What Canonical is saying about HP Moonshot
    As mobile workstyles become the norm, the scalability needs of today’s applications and devices are increasingly challenging what traditional infrastructures can support. With HP’s Moonshot System, customers will be able to rapidly deploy, scale, and manage any workload with dramatically lower space and energy constraints. The HP Pathfinder Innovation Ecosystem is a prime opportunity for Citrix to help accelerate the development of innovative solutions that will benefit our enterprise cloud, virtualization and mobility customers.
    We’re committed to helping enterprises achieve the most from their Big Data initiatives. Our partnership with HP enables joint customers to keep and query their data at scale so they can ask bigger questions and get bigger answers. By using HP’s Moonshot System, our customers can benefit from the improved resource utilization of next generation data center solutions that are workload optimized for specific applications.
     
    Today’s interactive applications are accessed 24×365 by millions of web and mobile users, and the volume and velocity of data they generate is growing at an unprecedented rate. Traditional technologies are hard pressed to keep up with the scalability and performance demands of these new applications. Couchbase NoSQL database technology combined with HP’s Moonshot System is a powerful offering for customers who want to easily develop interactive web and mobile applications and run them reliably at scale.
    Our partnership with HP facilitates CyWee’s goal of offering solutions that merge the digital and physical worlds. With TI’s new SoCs, we are one step closer to making this a reality by pushing state-of-the-art video to specialized server environments. Together, CyWee and HP will deliver richer multimedia experiences in a variety of cloud-based markets, including cloud gaming, virtual office, video conferencing and remote education.
    HP’s new Moonshot System will enable organizations to increase the energy efficiency of their data centers while reducing costs. Our Cassandra-based database platform provides the massive scalability and multi-datacenter capabilities that are a perfect complement to this initiative, and we are excited to be working with HP to bring this solution to a wide range of customers.
    Big data comes in a wide range of formats and types and is a result of the connected-everything world we live in. Through Project Moonshot, HP has enabled a new class of infrastructure to run workloads, like Apache Hadoop, more efficiently and meet the market demand for more performance for less.
    The unprecedented volume and variety of data introduces unique challenges to organizations today… By combining the HP Moonshot system with Autonomy IDOL’s unique ability to understand concepts in information, organizations can dramatically reduce the cost, space, and energy requirements for their big data initiatives, and at the same time gain insights that grow revenue, reduce risk, and increase their overall Return on Information.
    Big Data is not just for Big Companies – or Big Servers – anymore – it’s affecting all sectors of the market. At HP Vertica we’re very excited about the work we’ve been doing with the Moonshot team on innovative configurations and types of analytic appliances which will allow us to bring the benefits of real-time Big Data analytics to new segments of the market. The combination of the HP Vertica Analytics Platform and Moonshot is going to be a game-changer for many.
    HP worked closely with Linaro to establish the Linaro Enterprise Group (LEG). This will help accelerate the development of the software ecosystem around ARM Powered servers. HP’s Moonshot System is a great platform for innovation – encouraging a wide range of silicon vendors to offer competing ‘plug-and-play’ server solutions, which will give end users maximum choice for all their different workloads.
    What Linaro is saying about HP Moonshot [HewlettPackardVideos YouTube channel, April 8, 2013]
    Organizations are looking for ways to rapidly deploy, scale, and manage their infrastructure, with an architecture that is optimized for today’s application workloads. HP Moonshot System is an energy efficient, space saving, workload-optimized solution to meet these needs, and HP has partnered with MapR Technologies, a Hadoop technology leader, to accelerate innovation and deployment of Big Data solutions.
    NuoDB and HP are shattering the scalability and density barriers of a traditional database server. NuoDB on the HP Moonshot System delivers unparalleled database density, where customers can now run their applications across thousands of databases on a single box, significantly reducing the total cost across hardware, software, and power consumption. The flexible architecture of HP Moonshot coupled with NuoDB’s hyper-pluggable database design and its innovative “database hibernation” technology makes it possible to bring this unprecedented hardware and software combination to market.
    What NuoDB is saying about HP Moonshot [HewlettPackardVideos YouTube channel, April 9, 2013]
    As the leading solution provider for the hosting market, Parallels is excited to be collaborating in the HP Pathfinder Innovation Ecosystem. The HP Moonshot System in concert with Parallels Plesk Panel and Parallels Containers provides a flexible and efficient solution for cloud computing and hosting.
    Red Hat Enterprise Linux on HP’s converged infrastructure means predictability, consistency and stability. Companies around the globe rely on these attributes when deploying applications every day, and our value proposition is just as important in the Hyperscale segment. When customers require a standard operating environment based on Red Hat Enterprise Linux, I believe they will look to the HP Moonshot System as a strong platform for high-density Hyperscale implementations.
    What Red Hat is saying about HP Moonshot [HewlettPackardVideos YouTube channel, April 8, 2013]
    HP Project Moonshot’s promise of extreme low-energy servers is a game changer, and SUSE is pleased to partner with HP to bring this new innovation to market. For more than twenty years, SUSE has adapted its enterprise-grade Linux operating system to achieve ever-increasing performance needs that succeed both today and tomorrow in areas such as Big Data and cloud computing.
    What SUSE is saying about HP Moonshot [HewlettPackardVideos YouTube channel, April 8, 2013]


    3. Hardware Partners

    AMD is excited to continue our deep collaboration with HP to bring extreme low-energy, ultra dense, specialized server solutions to the market. Both companies share a passion to bring innovative workload optimized solutions to the market, enabling customers to scale-out to new levels within existing energy and space constraints. The new low-power x86 AMD Opteron™ APU is optimized in the HP Moonshot System to dramatically lower TCO in quickly emerging media oriented workloads.
    What AMD is saying about HP Moonshot

    It is exciting to see HP take the lead in innovating low-energy servers for the cloud. Applied Micro’s ARM 64-bit X-Gene Server on a Chip will enable performance levels seen in today’s deployments while offering higher densities, greatly improved I/O, and substantial reductions in the total cost of ownership. Together, we will unleash innovation unlike anything we’ve seen in the server market for decades.

    What Applied Micro is saying about HP Moonshot

    In the current economic and power realities, today’s server infrastructure cannot meet the needs of the next billion data users, or the evolving needs of currently supported users. Customers need innovative SoC solutions which deliver more integration and optimization than has historically been required by traditional enterprise workloads. HP’s Moonshot System is a departure from the one size fits all approach of traditional enterprise and embraces a range of ARM partner solutions that address different performance, workloads and cost points.
    What ARM is saying about HP Moonshot
    Calxeda and HP’s new Moonshot System are a powerful combination that sets a new standard for ultra-efficient web and application serving. Fulfilling a journey started together in November 2011, Project Moonshot creates the foundation for the new age of application-specific computing.
    What Calxeda is saying about HP Moonshot
    HP Moonshot System is a game changer for delivering optimized server solutions. It beautifully balances the need for mixing different processor solutions optimized for different workloads under a standard hardware and software framework. Cavium’s Project Thunder will provide a family of 64-bit ARMv8 processors with dense and scalable server-class performance at extremely attractive power and cost metrics. We are doing this by blending performance- and power-efficient compute, high performance memory and networking into a single, highly integrated SoC.
    What Cavium is saying about HP Moonshot
    Intel is proud to deliver the only server-class, 64-bit SoC technology that powers the first and only production-shipping HP ProLiant Moonshot Server today. The 64-bit Intel Atom processor S1200 family features extremely low power combined with the required datacenter-class capabilities for lightweight web-scale workloads, such as low-end dedicated hosting and static web serving. In collaboration with HP, we have a strong roadmap of additional server solutions shipping later this year, including Intel’s 2nd generation 64-bit SoC, “Avoton,” based on leading 22nm manufacturing technology, that will deliver best-in-class energy efficiency and density for HP Moonshot System.
    What Intel is saying about HP Moonshot
    What Marvell is saying about HP Moonshot
    HP Moonshot System’s high density packaging coupled with integrated network capability provides the perfect platform to enable HP Pathfinder Innovation Ecosystem partners to deliver cutting edge technology to the hyper-scale market. SRC Computers is excited to bring its history of delivering paradigm shifting high-performance, low-power, reconfigurable processors to HP Project Moonshot’s vision of optimizing hardware for maximum application performance at lowest TCO.
    What SRC Computers is saying about HP Moonshot
    The scalability and high performance at low power offered through HP’s Moonshot System give customers an unmatched ability to adapt their solutions to the ever-changing and demanding market needs in the high performance computing, cloud computing and communications infrastructure markets. The strong collaboration efforts between HP and TI through the HP Pathfinder Innovation Ecosystem ensure that customers understand and get the most benefit from the processors at a system level.
    What TI is saying about HP Moonshot

    Nokia’s expanded, new risks and uncertainties for its Windows Phone strategy for 2013

    According to the Nokia SEC filing for the fiscal year ended December 31, 2012 (FY12), compared with the Nokia SEC filing for the fiscal year ended December 31, 2011 (FY11):

    As per the “Risks and Uncertainties” sections in both, the following expanded texts appear in the FY12 section vs. the FY11 section (highlighted full-text comparisons can be seen in a PDF downloadable from here):

    [We may not be able to make Nokia products with Windows Phone a competitive choice for consumers unless the Windows Phone ecosystem becomes a competitive and profitable global ecosystem that achieves sufficient scale, value and attractiveness to relevant market participants.]

    We believe that successful smartphone platforms require a successful ecosystem around them. … Today, industry participants are creating competing ecosystems of mutually beneficial partnerships to combine hardware, software, services and an application environment to create high-quality differentiated smartphones. Certain smartphone platforms and their related ecosystems have gained significant momentum and market share, specifically Google’s Android platform and Apple’s iOS platform, and are continuing apace, with Android-based smartphones continuing to gain significant market share during 2012 and also reaching lower price points.

    … Although Microsoft will continue to license Windows Phone to other mobile manufacturers, we believe we can differentiate Nokia smartphones from those of our competitors that also use the Windows Phone platform as well as other platforms. The first Nokia smartphones powered by Windows Phone were launched in October 2011 under the Lumia name. We launched additional Windows Phone 7 devices and the first Windows Phone 8 Lumia devices during 2012. See Item 4B. “Business Overview—Devices & Services—Smart Devices” for more information.

    Microsoft has recently launched the Windows 8 operating system used to power personal computers and tablets, and the related Windows Phone 8 operating system is used in the latest Nokia smartphones. The success of Nokia’s Windows Phone 8 smartphones will be negatively affected if the Windows 8 platform does not achieve or retain broad or timely market acceptance or is not preferred by ecosystem participants, mobile operators and consumers.

    Other competitive major smartphone ecosystems, primarily Google’s Android and Apple’s iOS, have advantages that may be difficult for the Windows Phone ecosystem to overcome, such as first-mover advantage, momentum, a larger share of the smartphone market, engagement by developers, mobile operators and consumers and brand preference, and their advantages may become greater over time.

    [acknowledging that] We may not be able to develop sufficient quantities of high-quality differentiated Nokia products with Windows Phone in order to achieve the scale needed for a competitive global ecosystem in a timely manner, or at all. [vs. just “execute with speed” a year ago]

    Our competitors may use various technical and commercial means to make the Windows Phone ecosystem unattractive compared to other ecosystems, including for instance hindering application development, not providing tools to allow applications to be developed to industry standard or not allowing certain applications to work or work efficiently on the Windows Phone platform.
    [vs. just “Other competitive major smartphone ecosystems have advantages that may be difficult for us to overcome, such as first-mover advantage, momentum, engagement by developers, mobile operators and consumers and brand preference, and their advantages may become even greater before we complete our transition to the Windows Phone platform.” a year ago]

    The Windows Phone ecosystem is relatively small, and thus it may not be compelling for hardware and software suppliers and developers, which may for instance lead to our reliance on a limited number of suppliers, later availability of the latest innovations and increased cost of components and software.

    Mobile devices are increasingly used with other technical appliances, for instance speakers and car audio systems or have accessories and gadgets that can be used in conjunction with the mobile device. As the Windows Phone ecosystem is relatively small, it may not be compelling for third parties to design such technical appliances, accessories or gadgets to a similar extent as with other ecosystems.

    [As the recognition of the already observable effect of the “Other competitive major smartphone ecosystems, primarily Google’s Android and Apple’s iOS, have advantages that may be difficult for the Windows Phone ecosystem to overcome, such as …” vs. just a possible risk associated with “may not be able to attract developers and other participants to the Windows Phone ecosystem” a year ago]

    The frequency of Windows Phone operating system updates may be too slow and the platform may be too closed to address changing market and customer requirements in a timely manner, which may erode customer support and consumer attractiveness of the platform.

    Emergence of new alternative ecosystems and platforms could make the Windows Phone ecosystem less attractive to customers and consumers.

    As well as per:

    [Our success in the smartphone market depends on our ability to introduce and bring to market quantities of attractive, competitively priced Nokia products with Windows Phone that are positively differentiated from our competitors’ products, both outside and within the Windows Phone ecosystem, and receive broad market acceptance.]

    [despite all the risks and uncertainties already given, there is no change in the sense that]
    Our strategy is to compete in the smartphone market with Nokia products with Windows Phone.

    [but there are new warnings that]
    The Microsoft Windows Phone platform … may limit our ability to … bring certain hardware capabilities at the higher price points.

    we may not be able to introduce functionalities such as advanced imaging and sensor technology

    [as well as more intensive warnings by saying that there is]
    lack of proper training of sales personnel, insufficient marketing support and experience
    [vs. using just the “inadequate” attribute a year ago]
    still relatively unfamiliar Windows Phone platform in an otherwise highly competitive market.
    [vs. “new” used a year ago]

    [Regarding “Microsoft may not be able to provide the software innovations and features we rely on for the Windows Phone operating system in a timely manner, if at all” it is now added that]
    Additionally, we are dependent on Microsoft for timely error corrections for customer and country variants as well as generic software releases.

    Other manufacturers also produce competing mobile products which are based on the Windows Phone operating system. We may face increased competition from other manufacturers, including Microsoft, who already produce or may produce competing Windows Phone based products. Increased competition within the Windows Phone ecosystem could result for instance in lower sales of our devices or lower potential for a profitable business model.

    We are aiming to expand our Windows Phone-based products to lower price points. The availability of Windows Phone-based products that we or our competitors offer at lower price points may have a negative effect on the sales of our higher priced Windows Phone-based products.

    Given all that, it remains the case that

    [Our partnership with Microsoft is subject to risks and uncertainties.]

    In addition to the factors outlined above in connection with the Windows Phone ecosystem and sales of Nokia products with Windows Phone …

    [i.e. as the result of the above added risks there is an enhanced warning that]
    A further change in smartphone strategy either by Microsoft or Nokia could be costly and further adversely affect our market share, competitiveness and profitability.
    [vs. without that “either by Microsoft or Nokia” stated a year ago, meaning that on either side there is an increased risk in that regard vs. that of a year ago]
    [as well as adding now that]
    Microsoft could provide better support to another device manufacturer which produces devices that run on the Windows Phone platform

    We license from Microsoft the Windows Phone operating system as our primary smartphone platform. Microsoft may act independently of us with respect to decisions and communications on that operating system which may have a negative effect on us. Moreover, if Microsoft reduces investment in that operating system or discontinues it, our smartphone strategy would be directly negatively affected by such acts.

    Microsoft may make strategic decisions or changes that may be detrimental to us. For example, in addition to the Surface tablet, Microsoft may broaden its strategy to sell other mobile devices under its own brand, including smartphones. This could lead Microsoft to focus more on their own devices and less on mobile devices of other manufacturers that operate on the Windows Phone platform, including Nokia.

    We may not be able to sufficiently influence Microsoft in bringing the features or functionalities for the Windows Phone platform that we deem most important, or Microsoft may otherwise focus on other areas of its business leading to reduced resources devoted to the Windows Phone platform or failures to implement features or functionalities. This may be heightened if our position in the partnership deteriorates, for instance through other companies using leverage to influence Microsoft, or if Microsoft chooses to develop its own mobile devices, including smartphones, or if Microsoft otherwise develops interests that are contrary to ours.

    Applying 2-16 cores of ARM Cortex-A15 in ‘2014 vintage’ LSI Axxia SoCs that will power next-generation LTE base stations from macrocells to small cells, opening up to 1000 times faster access to the cloud by 2020

    OR LSI Corporation’s ARM Cortex-A15 based 2-16 core SoCs, with a similar number of LSI’s specialized networking accelerators inside, to drive the next-generation LTE base stations (from femto- through pico- and micro- to macro- and metrocells), helping cloud clients get beyond the current infancy of the mobile Internet
    OR Cooperation of LSI Corporation with ARM on highly scalable and energy-efficient multicores and a cache coherent interconnect for them within an SoC, now enhanced with a joint LSI and Nokia Siemens Networks effort to improve real-time performance, I/O optimization, robustness and heterogeneous operating environments on multi-core SoCs, also carried out within the newly set up Linaro* Networking Group
    OR How ARM’s Cortex-A15 to A57 (32-bit to 64-bit) micro-architecture roadmap is going to be enhanced by an up to 16-core SoC architecture developed by LSI Corporation now, and with more than 16 cores in the future (with Cortex-A57), which will enable Nokia Siemens Networks to fulfill its vision of the “1 GB per day revolution by 2020”, for which a 1000x increase** in traffic throughput will be needed

    * From p. 38 of ARM Annual Report 2012 [March 1, 2013] “In 2010 ARM helped launch Linaro, an open source software not-for-profit organisation which [among others] enriches the software toolkit for Android phones. By summer 2012 the results looked pretty impressive, with a reported 100% performance improvement for the Android 4 operating system. See Linaro Android is up to twice as fast as stock Android [AndroidAuthority.com, June 5, 2012]”
    ** Note that Qualcomm is also working along this vision, as evidenced by its Products & Services: Wireless Networks Technology 1000x Data Challenge Overview [Aug 22, 2012], Spectrum [Sept 24, 2012], Small Cells [Oct 1, 2012] and Efficiency [Oct 1, 2012] pages declaring its corporate intents. This was also one of the focus demos and presentations from Qualcomm at MWC 2013 last week, as evidenced by its The 1000x Mobile Data Challenge at Mobile World Congress [QUALCOMMVlog YouTube channel, Feb 22, 2013] video, which also serves as a good background intro here
    With Generation M on the rise, mobile data usage continues to climb. If we don’t step up to the 1000x challenge we will see lower speed, slower downloads and more congestion. We aren’t in the business of forecasting when 1000x will happen but we are focused on finding a solution that makes 1000x possible.
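    As a quick back-of-envelope check (my own arithmetic, not a figure from Qualcomm or Nokia Siemens Networks), sustaining a 1000x increase between 2012 and 2020 would require roughly 2.4x capacity growth every single year:

```python
# Back-of-envelope: what "1000x by 2020" implies as a yearly growth rate.
# The 2012 baseline year is my assumption; the 1000x target is from the text.
growth_total = 1000
years = 2020 - 2012
annual = growth_total ** (1 / years)
print(f"required capacity growth: {annual:.2f}x per year")  # ~2.37x per year
```

    That compounding rate is what makes the challenge so demanding: it cannot be met by any single technique, which is why the pages above split it across spectrum, small cells and efficiency.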

    First watch the LSI Axxia video report from MWC 2013: Axxia Processor Family image

    LSI Axxia 5500 announced, 16-core ARM Cortex-A15 for network infrastructure [Charbax YouTube channel, Feb 28, 2013]

    Troy Bailey: [5:40] Will be sampling in early third quarter … Mass production typically is a six to nine months process after that (i.e. 2014) to validate and also to work with customers to get their products ready to go out. [5:54]

    LSI designs semiconductors and software that accelerate storage and networking in datacenters and mobile networks. At Mobile World Congress 2013, LSI is introducing the Axxia 5500 16-core ARM Cortex-A15 to provide scalability, performance and low power consumption to meet the growing demand for mobile broadband.

    Next watch 4G World 2012: The 1-Gigabyte Revolution [LightReadingTV YouTube, Nov 2, 2012] (on the same Oct 29 – Nov 1 conference)

    Bill Payne, Head of Advanced Technologies, CTO North America at Nokia Siemens Networks, speaks at 4G World in Chicago about “engagement economy” leading to the “1 GB per day revolution by 2020” for which there is the need to provide a 1000x increase in traffic throughput

    image

    For which it was announced at MWC 2013 that Nokia Siemens Networks and LSI Collaborate on Wireless Infrastructure Solutions [LSI press release, Feb 21, 2013]

    LSI® Axxia® platform and SoC capabilities contribute to
    higher-performance mobile broadband solutions
    Nokia Siemens Networks and LSI Corporation (NASDAQ: LSI) announced today a collaborative framework with ARM® processor based System-on-Chips (SoCs) that enable enhanced support for real-time performance, I/O optimization, robustness and heterogeneous operating environments on multi-core SoCs.
    Nokia Siemens Networks is increasing investment in technology development in its mobile broadband business and actively participating in the Linaro Networking Group and in the ARM ecosystem in general to enable better use of open source Linux® software and tools. This will both enhance the performance of forthcoming base station (BTS) products and drive towards lower power consumption.
    “LSI is very pleased to be collaborating with Nokia Siemens Networks on innovative mobile broadband solutions,” said Jim Anderson, general manager for LSI’s Networking Solutions Group. “The LSI Axxia line combines ARM processor cores with our unique Virtual Pipeline™ acceleration technology to create a platform for next-generation mobile broadband solutions and other applications. Our advanced software and emulation capabilities ensure accelerated time to market for our customers.”

    Complement this with the following two videos produced by Qualcomm for MWC 2013:
    Neighborhood Small Cells [QUALCOMMVlog YouTube channel, Feb 22, 2013]

    An innovative deployment model to enable extremely low-cost, plug-and-play, open, unplanned small cells networks. Qualcomm’s UltraSON suite of interference and mobility management techniques makes such models a reality by solving interference challenges and by offering seamless mobility. Neighborhood small cells is a key enabler to meet the 1000x data challenge

    LTE Advanced Opportunistic Small Cells [QUALCOMMVlog YouTube channel, Feb 22, 2013]

    Captures the 2013 MWC demonstration of small-cells that dynamically turn on/off based on proximate users, a feature important for the very dense small cell deployments as envisioned to meet the 1000x data challenge. The demo utilizes Qualcomm’s live over-the-air LTE Advanced small-cell network in San Diego. It also incorporates relay nodes and shows the coexistence of HetNets range expansion (eICIC-IC) and VoLTE service.

     


    LSI Axxia background:

    Axxia Communication Processor AXM5500 [LSI promotion site, Feb 19, 2013]

    Accelerating Next Generation Mobile Networks

    • Enabling one architecture for heterogeneous networks
    • Leveraging software and hardware investments
    • Accelerating time-to-revenue
    First 16-core ARM-based Multicore Processor for Mobile Networks
    The Axxia® Communication Processor AXM5500 product family is designed to accelerate performance and increase power efficiency for mobile networks. The Axxia 5500 series combines 16 ARM cores with LSI’s specialized networking accelerators to offer networking service providers more capable and intelligent wireless infrastructure equipment, including multi-radio base stations, mobile backhaul equipment and gateways.
    Leading Technology
    The AXM5500 is the industry’s first multicore communication processor to be available with ARM’s new CoreLink™ CCN-504 interconnect technology, which provides the end-to-end quality of service needed for networking applications.
    Power Efficiency
    LSI’s latest semiconductor manufacturing technology combined with ARM’s power efficient cores more than double the amount of data that can be processed by the Axxia 5500 at the same power level.
    Extensive Scalability
    The Axxia 5500 platform architecture can scale to meet the performance required for 4G LTE and other data intensive networking applications.
    Networking Expertise
    LSI’s unique Virtual Pipeline technology efficiently accelerates mobile data processing to allow carriers to deploy next generation applications to support massive data growth.
    Software and Tools
    LSI’s robust development tools and production quality data plane software accelerate time-to-market. The Axxia architecture’s scalability allows OEM software investment to be reused across the entire mobile network.

    The Data Deluge: Mobile Network Challenges & Solutions [LSICorporation YouTube channel, Sept 18, 2012]

    In this video, LSI President and Chief Executive Officer Abhi Talwalkar discusses the role of intelligent silicon in solving mobile network challenges.

    Bridging the Data Deluge Gap–The Role of Smart Silicon in Networks [by Michael Merluzzi, LSI Corporation in EETimes Design, Feb 28, 2013]

    The proliferation of smart mobile devices, video, user-generated content and social networking, and the rising adoption of cloud services for both enterprise and consumer services are all driving explosive growth of wireless networking infrastructure. Globally, mobile data traffic is expected to grow 18-fold between 2011 and 2016, reaching 10.8 exabytes per month by 2016. Today, video traffic alone accounts for 40 percent of the wireless network load. The number of mobile devices connected to wireless networks will reach 25 billion, averaging 3.5 devices for every person on the planet, by 2015. That number is expected to double, to 50 billion, by 2020. This growth in storage capacity and network traffic is far outstripping the infrastructure build-out required to support it, a phenomenon known as the data deluge gap.
    To bridge this gap, the industry needs to leverage smarter silicon technology to scale datacenter infrastructures more cost effectively. Besides helping close the data deluge gap, smarter data processing offers potential dramatic improvements in application performance. A recent survey of 412 European datacenter managers conducted by LSI revealed that while 93 percent acknowledged the critical importance of improving application performance, a full 75% do not feel that they are achieving the desired results. This indicates that there is rising pressure on datacenter managers to find smarter ways to push systems to do much more work within the same power and cost profiles.
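    The article's "18-fold growth between 2011 and 2016" can be turned into an annual growth rate with one line of arithmetic (my own calculation, derived from the figures quoted above):

```python
# Compound annual growth rate implied by 18x growth over five years.
factor = 18
years = 2016 - 2011
cagr = factor ** (1 / years) - 1
print(f"implied CAGR: {cagr:.0%} per year")  # roughly 78% per year
```

    A network whose traffic grows nearly 80% per year, year after year, is the concrete meaning of "data deluge" in the paragraphs that follow.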
    Accelerating Networks
    Smart software running on general-purpose processors, increasingly with multiple cores, is pervasive in the datacenter. Processors have long inhabited switches and routers, firewalls and load-balancers, WAN accelerators and VPN gateways. None of these systems are fast enough, however, to keep pace with the data deluge on its own, for a basic reason: general-purpose processors must treat every byte equally. While such equality is perfectly acceptable for system-level versatility, it is inadequate for low-level, high-volume packet processing.
    This reality is driving the need for more intelligence in silicon that is purpose-built for specific networking applications to provide the right balance of performance, power consumption and programmability. Today’s smart silicon has reached a level of price/performance that makes it more cost-effective than adding general-purpose processors.
    The latest generation of smart silicon typically features multiple cores of general-purpose processors and multiple acceleration engines for common networking functions, such as packet classification with deep packet inspection, security processing, especially for encryption and decryption, and traffic management.
    Some of these acceleration engines are so powerful they can completely offload specialized network processing from general-purpose processors, making it easier to perform switching, routing and other networking functions entirely in smart line cards installed in servers and networking appliances to further accelerate overall network performance.
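    A toy sketch of why such offload pays off (illustrative Python, not LSI's actual design): classify a flow once on the slow path, then serve every subsequent packet of that flow from an O(1) lookup instead of re-inspecting each byte, which is the per-packet work the article says general-purpose cores struggle with:

```python
# Toy model of classification offload: the "accelerator" path avoids treating
# every byte equally by caching a per-flow verdict after the first packet.

def slow_classify(packet: bytes) -> str:
    # General-purpose path: inspect the whole payload every time (expensive).
    return "video" if b"video" in packet else "default"

flow_table = {}  # (src, dst, proto) -> traffic class, filled by the slow path

def fast_classify(key, packet: bytes) -> str:
    # Fast path: O(1) flow-table lookup; the payload is never touched again.
    if key not in flow_table:
        flow_table[key] = slow_classify(packet)  # only the first packet pays
    return flow_table[key]

pkt = b"...video payload..."
key = ("10.0.0.1", "10.0.0.2", 6)
assert fast_classify(key, pkt) == "video"
assert fast_classify(key, b"anything") == "video"  # later packets skip inspection
```

    Real acceleration engines implement the fast path in dedicated hardware, but the structural point is the same: move the per-packet work off the general-purpose cores.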
    In many organizations today, microseconds matter, driving strong demand for faster response times. For trading firms, latency can be measured in millions of dollars per millisecond. For others, such as online retailers, every millisecond of delay can mean lost sales and fading customer loyalty. Tomorrow’s datacenter networks will need to be both faster and flatter, and therefore, smarter than ever. To eliminate the data deluge gap and maximize performance, systems need to be smarter, and those smarts will increasingly need to take the form of purpose-built silicon.
    About the Author
    Michael Merluzzi is product marketing manager in the Networking Solutions Group of LSI Corporation. Focusing on mobile backhaul applications, Merluzzi is responsible for marketing of integrated platform solutions and application-enabling software for the LSI Axxia family of multicore communication processors. Previously, he held a variety of roles in technical marketing, applications engineering and software development. Merluzzi holds a bachelor’s degree in Electrical Engineering from The Pennsylvania State University and master’s degrees in Business Administration and Computer Engineering from Lehigh University.

    SoCs with more powerful cores need a more powerful interconnect [New Electronics, Jan 8, 2013]

    … Troy Bailey is director of marketing with LSI. He said the company is seeing a ‘data deluge. “There is more and more data driven by video and mobile use. By most projections, the amount of data will outstrip the capacity of the infrastructure in the future, so what’s needed is faster devices to handle more data.”
    Bailey also says there is a need for smarter devices. “We have to develop better ways to handle data; for instance, not moving data that doesn’t need to be moved. One of the ways we can do that is to add intelligence and processing throughout the network, rather than at gateways.”

    image

    The way to do this, in LSI’s opinion, is to add more and faster general purpose cores to network processors, but also to add acceleration engines to do those tasks with which general purpose cores struggle. “For example,” Bailey said, “there’s a lot of activity on a per packet basis – classifying, deep packet inspection. If you do these tasks with a general purpose processor, it will be slow and expensive.”
    He has an analogy: “A mechanic with a basic set of tools can fix your car, but a specialist who works on one part of the car will have special tools and special knowledge.”
    But, as he noted, traffic management is an important element in designing the architecture of a network processor. “If you can avoid sending data over the network, you’re better off and particularly so if you can cache it or put processing capacity closer to the network edge.”
    LSI has a range of devices either available or in the planning stage. “We have single and dual core devices that perform the same tasks,” Bailey explained, “but we also have devices with dozens of cores. We see a strong opportunity to handle data in special purpose hardware, so devices will have more engines and more cores. This will need a balance between general purpose and special hardware.”
    And the question of which cores to use has been under discussion. Until recently, LSI has based its network processors on PowerPC cores, but an announcement early in 2012 revealed ARM cores are now on the road map. “Some of these discussions are driven by customer requirements,” Bailey said. “The ARM architecture is strong and there’s a good ecosystem, so the move makes a lot of sense. LSI’s approach is based on hardware acceleration, which also makes sense, and we are not looking to use proprietary cores. So while a Cortex-A15 doesn’t necessarily bring more performance, it is more power efficient.”
    Then comes the challenge of linking all these cores together. And LSI has turned again to ARM, taking a lead license for ARM’s CCN-504 interconnect. “We have helped ARM to define what’s required in such an interconnect. As you add more cores – particularly accelerators – you end up with a lot of compute elements and when that happens, there’s opportunities for bottlenecks. You could end up adding more cores, but getting lower performance,” Bailey contended.

    image

    Neil Parris is ARM’s interconnect product manager. He said CCN-504 had been developed specifically to address the issue of more cores. “It’s about providing coherency between the cpus and the I/O and about using data on chip.”
    In some respects, it’s a consequence of integration. “There used to be a range of chips which needed to be connected,” Parris observed. “Now, it’s a single chip with multiple cores which is power critical and which needs to interface to the latest technology.”
    CCN-504 – CCN stands for cache coherent network – is the first in a family of interconnects being developed to support future complex devices. “It supports four cpu clusters,” Parris said, “and each cluster can comprise up to four cores. It also supports ARM’s 64bit architecture, which is important for those people building servers.
    “Each cpu cluster has an L2 cache, which is configurable to 2Mbyte, or 4Mbyte in the case of the Cortex-A15. The interconnect’s purpose is to join all the processors in a coherent manner, making sure all cores have a consistent view of memory.”
    But CCN-504 isn’t ARM’s first cache coherent network. “That was the CCI-400,” Parris said. “That’s aimed at mobile applications with two clusters, including Big.LITTLE.”
    Caching is an important element and one which supports Bailey’s view that you shouldn’t have to move data if you don’t have to. “Caches are important contributors to power efficiency and performance,” Parris pointed out. “The more data you have on chip, the fewer the accesses needed to external memory. It helps with power consumption and performance.”
    CCN-504 has also been built with cores other than ARM’s in mind. The network can support up to 18 AMBA interfaces, which allows designers to take advantage of such functions as 40Gbit Ethernet, USB and serial ATA links. But it also features PCI-Express connectivity. “Companies will use this facility to add their own IP into an SoC,” Parris explained. “For example, they may wish to add their own accelerator, and it’s our aim to provide them with a scalable platform on which they can build.”
    All 18 AMBA interfaces are connected to the cache coherent network through an I/O virtualisation block which provides unified system memory. “AMBA defines interconnect,” Parris said, “and CCN-504 builds on the AMBA interconnect. It has an integrated L3 cache, which can be configured from 8 to 16Mbyte, and a snoop filter.” The snoop filter basically keeps an eye on all caches to ensure coherency and reduce bus traffic.
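    A minimal sketch of the snoop-filter idea Parris describes (illustrative only, not ARM's implementation): track which caches hold each line, so a write only disturbs the actual sharers instead of broadcasting a snoop to every cache on the chip:

```python
# Toy snoop filter: a directory of which caches hold each line, so coherency
# traffic goes only where it is needed -- this is how a snoop filter
# "keeps an eye on all caches" while reducing bus traffic.

class SnoopFilter:
    def __init__(self):
        self.sharers = {}  # line address -> set of cache ids holding the line

    def read(self, cache_id, addr):
        # A cache reading a line registers itself as a sharer.
        self.sharers.setdefault(addr, set()).add(cache_id)

    def write(self, cache_id, addr):
        # Invalidate only the caches the filter says hold this line.
        invalidated = self.sharers.get(addr, set()) - {cache_id}
        self.sharers[addr] = {cache_id}  # the writer is now the sole holder
        return invalidated

sf = SnoopFilter()
sf.read(0, 0x1000); sf.read(1, 0x1000); sf.read(2, 0x2000)
assert sf.write(0, 0x1000) == {1}  # cache 2 is never snooped: different line
```

    Without the filter, the write to 0x1000 would have to snoop all caches; with it, only cache 1 sees any traffic, which is where the power and bandwidth savings come from.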
    If the SoC does need to access external memory, ARM has developed the DMC-520 memory controller for 72bit wide DDR3/4. This supports a maximum bandwidth of 25.6Gbyte/s per channel and features buffering to optimise reads and writes. It’s the fifth generation DMC and includes error checking and correction features.
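    The quoted 25.6Gbyte/s per channel is consistent with 64 data bits (the remaining 8 of the 72 carry ECC) at DDR4-3200 transfer rates; the transfer rate is my assumption, since the article does not state it:

```python
# Sanity check of the DMC-520 bandwidth figure quoted above.
data_bytes_per_transfer = 64 / 8   # 72-bit channel = 64 data bits + 8 ECC bits
transfers_per_second = 3200e6      # DDR4-3200: 3200 MT/s (assumed, not stated)
bandwidth = data_bytes_per_transfer * transfers_per_second
print(f"{bandwidth / 1e9:.1f} GB/s")  # 25.6 GB/s, matching the article
```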
    Overall, CCN-504 supports a system bandwidth of around 1Tbit/s and operates up to the cpu clock rate. “This network scales the performance of the CCI-400 significantly,” Parris noted, “with more ports and a larger cache. At the moment, it’s 128bit wide, but future devices will move up, including bandwidth,” he added.
    Bailey said LSI needed a strong technology partner for interconnect. “It’s not our point of differentiation,” he said, “so the licensing approach made sense. When you think of an SoC with 16 cores, there may be a total of 30 compute elements. It’s a complex design and that’s why it needs a robust networking solution.”

    4G World 2012: The Future of LTE [LightReadingTV YouTube, Nov 2, 2012] (on an Oct 29 – Nov 1 conference “where enterprises and operators met to discuss the state of the art of the mobile enterprise marketplace” [as per the announcement])

    A panel on what the next three years will look like: – Simon Stanley, Senior Analyst, Heavy Reading as moderator – Asok Chatterjee, Vice President, Ericsson; Chairman, 3GPP Project Coordination Group – Stephen Turnbull, Division Marketing Manager, Freescale Wireless Access Division, Freescale – Noy Kucuk, Vice President of Marketing, Networking Solutions Group, LSI at 4G World 2012, Oct 29 – Nov 1, Chicago

    4G World 2012: The 4G Opportunity [LightReadingTV YouTube, Nov 2, 2012] (on the same Oct 29 – Nov 1 conference)

    Neville Ray, CTO of T-Mobile USA, delivers keynote at 4G World in Chicago

    4G World 2012: Innovation: Strategy, Technology & Collaboration [LightReadingTV YouTube, Nov 2, 2012] (on the same Oct 29 – Nov 1 conference)

    Praveen Atreya, Verizon’s director of network technology and head of its LTE Innovation Center, delivers his keynote at 4G World in Chicago


    Networking Leaders Collaborate to Maximize Choice, Performance and Power Efficiency [Linaro press release, Feb 20, 2013]

    Industry leaders including AppliedMicro, ARM, Enea, Freescale®, LSI, MontaVista, Nokia Siemens Networks and Texas Instruments (TI) have formed a new group focused on accelerating Linux development for ARM processors in cloud and mobile infrastructure.
    Linaro, the not-for-profit engineering organization developing open source software for the ARM® architecture, today announced the formation of the Linaro Networking Group (LNG) with twelve founding member companies including … <see above> … at the Embedded Linux Conference (ELC).
    With ARM-based SoCs at the heart of the transformation occurring in cloud and mobile infrastructure applications such as switching, routing, base-stations and security, Linaro’s members are collaborating on fundamental software platforms to enable rapid deployment of new services across a range of converged infrastructure platforms. Developing the base platform for diverse and complex networking applications requires a significant amount of software that addresses common challenges. LNG will deliver this as an enhanced core Linux platform for networking equipment. …
    Networking infrastructure is undergoing a transformation driven by the ramp in diverse data being moved through disparate networks to and from billions of diverse devices. The industry needs to simplify the management of the network as well as create new applications that will enable cloud service providers, carriers and others to reliably provide a great user experience across expanded mobility use cases and the increasing globally-connected intelligence of devices. Enterprises need to scale their networks and their network management capabilities to cope with these demands and also enable the rapid evolution of applications for new revenue-generating business models. LNG will accelerate this transformation through its initial focus on fundamental optimizations for use across all ARM-based networking infrastructure equipment.
    An interim steering committee for LNG has been meeting since the end of 2012 and has agreed on four initial areas of work:
    1. Virtualization support with considerations for real-time performance, I/O optimization, robustness and heterogeneous operating environments on multi-core SoCs.
    2. Real-time operations and the Linux kernel optimizations for the control and data plane.
    3. Packet processing optimizations that maximize performance and minimize latency in data flows through the network.
    4. Dealing with legacy software and mixed-endian issues prevalent in the networking space.
    Linaro expects initial software deliveries from the Linaro Networking Group during the first half of 2013 with on-going monthly releases thereafter.
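    Item 4 above, the mixed-endian problem, in miniature: network protocols put fields on the wire big-endian, while ARM cores typically run little-endian, so legacy networking code is full of byte-order conversions like this (a generic Python illustration, not LNG code):

```python
# Byte-order mismatch between the wire and the host, in six bytes.
import struct

# Pack an ethertype and an IPv4 address in network (big-endian) byte order.
wire = struct.pack("!HI", 0x0800, 0xC0A80001)
ethertype, addr = struct.unpack("!HI", wire)
assert (ethertype, addr) == (0x0800, 0xC0A80001)

# The same bytes read with little-endian ordering come out scrambled:
wrong_ethertype = struct.unpack("<H", wire[:2])[0]
assert wrong_ethertype == 0x0008  # 0x0800 byte-swapped
```

    Getting such conversions right, and fast, across a mixed-endian codebase is exactly the kind of common, unglamorous work the LNG is set up to do once for everyone.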

    LSI hopes to power mobile networks with ARM-based processors [CIO, Feb 19, 2013]

    Chipmaker LSI is taking ARM-based processors to new frontiers with its upcoming AXM5500 family, which will be used in mobile base stations of all sizes.
    From today’s smartphones, tablets and thin clients to tomorrow’s servers, ARM-based processors are powering a growing number of different devices, and if LSI is successful, mobile networks will be added to that list. The company’s AXM5500 family of processors will use up to 16 Cortex-A15 cores to power base stations for mobile networks.
    The Cortex-A15 is ARM’s most powerful processor to date, and is used in products like the Nexus 10 tablet from Google and Samsung Electronics.
    “The intention is to provide high-performance and good efficiency on a scalable platform,” said Troy Bailey, director of marketing at LSI.
    LSI’s processors for wireless infrastructure have historically been based on PowerPC processors, but because of increased demand for different size base stations in so-called heterogeneous networks, it decided to add ARM-based products.
    In addition to achieving a new level of efficiency, working with ARM allows LSI to build a processor family that can be used in anything from a macro cell down to a pico cell, which means lower development costs, because software can be reused, according to Bailey. Pico cells are used to provide coverage for areas such as offices and shops. Installation and management becomes easier, as well, Bailey said.
    The first two products are the AXM5516 and AXM5512, which have 16 and 12 cores, respectively. They are intended for use in large base stations. LSI will in the future add processors with fewer cores that are a better fit for small cells.
    The product family also uses ARM’s new CoreLink CCN-504 Cache Coherent Network interconnect, which was announced in October last year. It can prioritize time-sensitive traffic and offers up to one terabit of usable system bandwidth per second, according to ARM.
    “It is a very good and scalable interconnect. One of the challenges when building high core count processors is making sure you have no bottlenecks and waste the cores,” Bailey said.
    The company is also looking at ARM’s new big.LITTLE processing architecture, which in its first generation combines the powerful Cortex-A15 and the energy-efficient Cortex-A7 on one die.
    “There certainly are some tasks that need a very strong single thread performance, and there are some tasks that don’t, and it doesn’t make sense to light up a big A15 if it can be done on an A7, so we think it makes sense,” Bailey said.
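    The scheduling idea Bailey describes can be caricatured in a few lines (the throughput numbers are made up, purely for illustration): light up the big core only when the little core cannot meet the deadline:

```python
# Toy big.LITTLE dispatch: prefer the efficient core whenever it suffices.
A7_OPS_PER_MS = 100    # assumed relative throughput of the little core
A15_OPS_PER_MS = 300   # assumed relative throughput of the big core

def pick_core(ops_needed, deadline_ms):
    # "It doesn't make sense to light up a big A15 if it can be done on an A7."
    if ops_needed <= A7_OPS_PER_MS * deadline_ms:
        return "Cortex-A7"   # little core meets the deadline: save power
    return "Cortex-A15"      # only strong single-thread work goes to the big core

assert pick_core(500, 10) == "Cortex-A7"
assert pick_core(5000, 10) == "Cortex-A15"
```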
    The company will start sampling the first processors during the third quarter. Because the products aren’t being sampled yet, LSI will have to make do with visual demos showing the performance and power savings at next week’s Mobile World Congress.
    LSI will have to convince equipment vendors that using ARM in their base stations is a good move and at least one company is open to the idea. Ericsson isn’t currently using ARM-based processors in its base stations. But “as we continue to expand and develop our base station portfolio, we always evaluate what possibilities are available from the general ICT industry and we might use ARM based processors in the future,” a spokeswoman said via email.
    “We definitely have some major customers that are going in the ARM direction, and we have built this product for them,” Bailey said.

    More media reports for general briefings:
    ARM Chips Take on New Cellular Chores, Aided by LSI [Digits blog of WSJ, Feb 19, 2013]
    ARM is already the brains of your smartphone. Now it wants to run the network too  [GIGAOM, Feb 20, 2013]

    image

    Understanding LSI
    [LSICorporation YouTube channel, Dec 19, 2012]
    [10:43] In the case of wireless infrastructure we are engaged with all the system providers. But what’s also exciting, we’re engaged deeply with the top two players. This is Ericsson as well as Nokia-Siemens who have between the two of them 50 to 60% share in the wireless infrastructure market. [11:02]

    Learn how LSI is helping companies address and take advantage of today’s constantly growing data volumes.

    See also: Investor Relations Update [LSI Corporation, Jan 23, 2013] from which the following three slides provide the latest relevant information:

    Note: SAM (Served Available Market or Segmented Addressable Market) is a term that is typically used to reference the customers that can actually be reached out of the TAM (Total Addressable or Available Market). More on that: Estimate Addressable Market, Defining your TAM, Total Addressable Market and The importance of TAM, SAM and SOM in your plan
    LSI Management Discusses Q4 2012 Results – Earnings Call Transcript [Jan 23, 2013]: We began ramping our standard product Axxia multi-core communication processor at the leading base station OEM and expect continued growth as we move through this year. We have multi-generational engagements in the wireless space that we believe will enable LSI to have in excess of 50% share in data and control plane processing silicon in a few years. In addition to standard products like Axxia, we have custom silicon wins in the baseband function of base stations with multiple OEMs, further broadening LSI’s footprint in base station infrastructure. … We feel very good about our growth initiatives across networking. Axxia and our multi-core solution there continues to be adopted more and more across base station system vendors.
    LSI’s CEO Presents at Morgan Stanley Technology, Media & Telecom Conference (Transcript)
    [Feb 26, 2013]: We made a major announcement last week with Nokia Siemens. Nokia Siemens and LSI are collaborating on LSI’s next-generation Axxia processor. It’s the industry’s first 16-core ARM network processor. We believe we’ll have at least a 9- to 12-month advantage relative to time to market, as well as the attributes that we have in our product. This is a pretty significant achievement, with Nokia Siemens. It adds to the other major player that we’re also shipping Axxia into. And it’s a proof-point to what we said a year ago. We said a year ago that we would have over 50% share of the data plane and control plane, basically the CPU in the base station, that we would have 50% share over the course of the next several years. So we have 2 of the top 3 companies now adopting Axxia, and we’re well on our way to achieve that share position.
    We also can extend that commentary into the baseband where we are also going to ship baseband silicon into these 2 companies, which we also believe will amount to at least 40% share. These share levels are up from 10% today. So we’re very excited about what’s happening in our networking business.

    LSI transforming itself again [Brian Bailey on EETimes, Feb 20, 2013] (with the illustrations replaced by equivalent images from the AXM2502 Product Brief [March 8, 2012] and AXM5500 Product Brief [Feb 19, 2013] respectively which I recommend to read for further technical details)

    LSI is a company that has been through a lot of changes over the years. I can remember when they had their own fabs, did custom design for their customers, and had their own suite of design tools. In short, they were a complete vertically integrated ASIC house. They have evolved many times: they became fabless, transitioned to semi-custom design, and today are building standard parts for markets such as storage and wireless networking.

    The other day we learned about another big change in LSI’s future, and it all has to do with their Axxia line of communications processors. Take a look at the block diagram for the AXM2502 product. It was powered by a pair of PowerPC processors connected to custom accelerators using their Virtual Pipeline technology. This is a 28nm product.

    image

    This week, LSI introduced the Axxia 5500 product family of communication processors designed to accelerate performance and increase power efficiency for multi-radio base stations and 4G/LTE-capable wireless networks. The LSI Axxia 5500 product family features 12 or 16 ARM cores.

    image

    This switch in processing core brings about a 4X improvement in control plane performance, a 2.5X improvement in data plane performance, and reduced power – something that is becoming important for all applications, from battery-powered handhelds to datacenters.

    This chip marks an advance not only for LSI but for ARM as well. The two companies partnered to create this 16-core solution utilizing ARM’s new CoreLink CCN-504 interconnect, which can deliver up to one terabit per second of system bandwidth.
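To put the quoted interconnect figure in perspective, here is a back-of-the-envelope conversion (my own arithmetic, not an ARM benchmark):

```python
# Convert the quoted CCN-504 figure of ~1 Tb/s of system bandwidth
# into byte terms and an idealized small-frame rate.
TERABIT = 1e12  # bits per second

bandwidth_bits = 1 * TERABIT          # 1 Tb/s system bandwidth
bandwidth_bytes = bandwidth_bits / 8  # = 125 GB/s

# At 125 GB/s, moving 64-byte minimum-size Ethernet frames would allow
# roughly 1.95 billion frames per second through the interconnect,
# ignoring all protocol and arbitration overhead.
frames_per_s = bandwidth_bytes / 64
```

The point of the idealized numbers is simply scale: the interconnect, not the cores, becomes the shared resource that has to keep up with line-rate packet flows.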

    LSI also provides much of the software necessary to power this chip including high-performance layer two through four software packages that provide a complete wireless transport solution for networking OEMs.

    More third-party reports for further technical briefings:
    LSI and 6WIND team up for high performance networking [SemiAccurate, Oct 16, 2012]
    How does LSI envision the next generation of ARM networking SoCs? [SemiAccurate, Nov 28, 2012]
    LSI launches a 16-core ARM A15 cell phone chip [SemiAccurate, Feb 19, 2013]

    The HetNet problem as LSI sees it

    Until recently, you needed very different devices from top to bottom; the hardware in a pico basestation was nothing like that of the vastly larger long-distance stations. This mandated very different software stacks, management tools, and all sorts of other things that bring problems to the poor network trolls running the plumbing 24/7. Heterogeneity is not a good thing here, but there really wasn’t a choice; no hardware was suited for all of the tasks at hand. See what LSI is aiming for now?

    Since the Axxia 5500 line can scale from 4 to 32 cores, it can meet all of the demands of basestations large and small. If the pico basestation needs a digital front end and DSP setup that the big ones don’t, no problem, slap them on. If there are things that the little ones don’t need, pull them out and save die area. LSI hopes to be able to service all of a carrier’s needs from large to small with a single hardware family and the attendant software stack. Carriers like this; it saves them time, money, and headaches, speeds deployment, and makes life easier by simplifying everything. And that is exactly what LSI is aiming for with the Axxia 5500 family.

    Corresponding LSI press releases with more information:
    LSI Announces Availability of Family of Network Accelerator Cards for Enterprise, Data Centers and Service Providers [July 20, 2011] “Complete platform built on industry-leading silicon with software protocols to provide high performance and deterministic features for networking OEMs”
    LSI Begins Shipping 28nm Custom Silicon for Datacenter and Mobile Network Applications [Nov 16, 2011] “Custom silicon enables networking and storage OEMs to build highly differentiated silicon solutions; demonstrates LSI leadership”
    LSI Expands Strategic Relationship with ARM to Offer Energy-Efficient Multicore Processors for Networking Applications [Jan 23, 2012] “Enhances industry’s most powerful networking silicon portfolio … LSI will gain access to:

    • The broad family of ARM processors, including the ARM Cortex-A15 processor with virtualization support and future ARM processor
    • ARM on-chip interconnect technology, including CoreLink™ cache coherent interconnect system IP, for use in multicore applications”

    LSI Introduces Highly Integrated Axxia Communication Processor to Accelerate Mobile Broadband [Feb 21, 2012] “AXM2500 reduces power consumption and physical space requirements; helps service providers seamlessly deploy heterogeneous networks and contend with data growth”
    LSI Expands Axxia Platform to Deliver Power-Efficient Mobile Networks [Feb 23, 2012] “Addition of ARM’s latest multicore technology will provide scalability, performance and low power consumption to meet growing demand for mobile broadband”
    LSI Expands Networking Ecosystem to Accelerate Implementation of 4G Networks [Feb 24, 2012] “Partner solutions accelerate time to market and reduce software investment for wireless manufacturers”
    LSI Collaborates with Vineyard Networks to Accelerate Mobile and Datacenter Networks [May 7, 2012] “Vineyard joins LSI networking ecosystem; combined solution delivers real-time application recognition to improve user experience”
    LSI and Microsemi Collaborate to Reduce Costs and Increase Performance of Mobile Networks [Sept 19, 2012] “Integration of Microsemi timing protocol into LSI® Axxia® communication processors provides networking equipment manufacturers with increased interoperability, reduced customer investment and faster time-to-market”
    LSI and 6WIND Team Up to Accelerate Mobile Infrastructure and Datacenter Network Performance [Oct 16, 2012] “6WINDGate packet processing software for LSI® Axxia® platform allows network OEMs to benefit from performance-optimized software that reduces time-to-market and lowers development costs”
    LSI Summit Convenes Technology Leaders to Unlock Opportunities in the New Innovation Era of Devices, Datacenters and Mobile Networks [Nov 13, 2012] “5th annual ‘Accelerating Innovation Summit’ attracts storage and networking experts to collaborate on solving the challenges of the data deluge”
    LSI Introduces Axxia® 5500 Communication Processors with ARM Technology for High-Performance, Power-Efficient Networks [Feb 19, 2013] “LSI scalable architecture with ARM multicore processors and interconnect to improve multi-radio base station and 4G/LTE-capable wireless network performance”

    Future Coherent Interconnect Technology for Networking Applications [ARM’s Smart Connected Devices blog, Dec 11, 2012]

    Coherent interconnects will be at the core of next-generation network systems and system-on-chip (SoC) devices. To meet the rapidly growing processing requirements of wireless infrastructure systems and servers, network equipment manufacturers need highly integrated SoCs with a heterogeneous mix of CPU cores. These cores need to handle a mix of general-purpose processing, packet processing and digital signal processing (DSP) functions. The interconnect at the center of these solutions must maintain cache coherency between cores and provide a low-latency path between the cores, caches, external memory and networking I/O.
    We are seeing dramatic growth in the data bandwidth in both mobile and fixed line networks. Cloud computing and video services are key applications driving this growth. 4G/LTE networks are transforming the wireless network experience for high-volume data users. Larger data centers with many virtualized servers allow large content providers such as Facebook, Google and Amazon to support many millions of users. To meet these demands, carriers are making significant investments in enhanced 4G networks and infrastructure required to support the huge growth in data traffic.
    The latest silicon technology allows the integration of many cores onto a single SoC, dramatically reducing the number of components in a system. The processor cores can be closely coupled to hardware acceleration engines, external memory interfaces and high-speed networking I/O. The level of integration presents significant challenges to developers, who must ensure the use of shared resources does not reduce system performance. The key to this integration is the interconnect between the different cores and the other functional blocks.
    Standard RISC cores, licensed from vendors such as ARM, have allowed system OEMs to quickly develop new solutions using third-party tools for software development. The introduction of licensed IP for a low-latency coherent interconnect will allow OEMs to more easily develop new solutions integrating multiple general-purpose CPU and other cores. By working with well-established IP and SoC vendors such as ARM and LSI, system developers will have access to next-generation networking SoCs with a mix of CPU cores, hardware accelerators and, if required, their own hardware blocks.
    The “Future Coherent Interconnect Technology for Networking Applications” white paper, written by Heavy Reading for LSI and ARM, explores the benefits of using a low-latency, coherent interconnect at the core of a next-generation networking SoC and reviews the market demand for next-generation network SoCs with multiple CPU cores and hardware accelerators. It details the technical challenges and one solution that is available to system developers for a coherent interconnect with integrated cache and support for DDR3 and DDR4 memories. The paper also describes a next-generation networking SoC architecture that is built around a coherent interconnect and available to OEMs as a standard product or custom solution.
    Guest Partner Blogger:
    Michael Merluzzi is a Sr. Marketing Manager in the Networking Solutions Group at LSI Corporation. He has product marketing responsibilities for integrated platform solutions and application-enabling software for LSI’s Axxia® family of multicore communications processors. Previously, he has held a variety of roles in technical marketing, applications engineering and software development.
    Michael holds a bachelor’s degree in Electrical Engineering from the Pennsylvania State University and master’s degrees in Business Administration and Computer Engineering from Lehigh University.

    Next-Generation Multicore ARM Architectures for Intelligent Networks:

    image

    image

    image

    image

    image

    Next-generation multicore SoC architectures for tomorrow’s communications networks [by David Sonnier, LSI Corporation on Embedded Computing Design, Dec 11, 2012]

    IT managers are under increasing pressure to boost network capacity and performance to cope with the data deluge. Networking systems are under a similar form of stress with their performance degrading as new capabilities are added in software. The solution to both needs is next-generation System-on-Chip (SoC) communications processors that combine multiple cores with multiple hardware acceleration engines.
    The data deluge, with its massive growth in both mobile and enterprise network traffic, is driving substantial changes in the architectures of base stations, routers, gateways, and other networking systems. To maintain high performance as traffic volume and velocity continue to grow, next-generation communications processors combine multicore processors with specialized hardware acceleration engines in SoC ICs.
    The following discussion examines the role of the SoC in today’s network infrastructures, as well as how the SoC will evolve in coming years. Before doing so, it is instructive to consider some of the trends driving this need.
    Networks under increasing stress
    In mobile networks, per-user access bandwidth is increasing by more than an order of magnitude from 200-300 Mbps in 3G networks to 3-5 Gbps in 4G Long-Term Evolution (LTE) networks. Advanced LTE technology will double bandwidth again to 5-10 Gbps. Higher-speed access networks will need more and smaller cells to deliver these data rates reliably to a growing number of mobile devices.
    In response to these and other trends, mobile base station features are changing significantly. Multiple radios are being used in cloud-like distributed antenna systems. Network topologies are flattening. Operators are offering advanced Quality of Service (QoS) and location-based services and moving to application-aware billing. The increased volume of traffic will begin to place considerable stress on both the access and backhaul portions of the network.
    Traffic is similarly exploding within data center networks. Organizations are pursuing limitless-scale computing workloads on virtual machines, which is breaking many of the traditional networking protocols and procedures. The network itself is also becoming virtual and shifting to a Network-as-a-Service (NaaS) paradigm, which is driving organizations to a more flexible Software-Defined Networking (SDN) architecture.
    These trends will transform the data center into a private cloud with a service-oriented network. This private cloud will need to interact more seamlessly and securely with public cloud offerings in hybrid arrangements. The result will be the need for greater intelligence, scalability, and flexibility throughout the network.
    Moore’s Law not keeping pace
    Once upon a time, Moore’s Law – the doubling of processor performance every 18 months or so – was sufficient to keep pace with computing and networking requirements. Hardware and software advanced in lockstep in both computers and networking equipment. As software added more features with greater sophistication, advances in processors maintained satisfactory levels of performance. But then along came the data deluge.
    In mobile networks, for example, traffic volume is growing by some 78 percent per year, owing mostly to the increase in video traffic. This is already causing considerable congestion, and the problem will only get worse when an estimated 50 billion mobile devices are in use by 2016 and the total volume of traffic grows by a factor of 50 in the coming decade.
    In data centers, data volume and velocity are also growing exponentially. According to IDC, digital data creation is rising 60 percent per year. The research firm’s Digital Universe Study predicts that annual data creation will grow 44-fold between 2009 and 2020 to 35 zettabytes (35 trillion gigabytes). All of this data must be moved, stored, and analyzed, making Big Data a big problem for most organizations today.
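A quick sanity check of the quoted IDC figures (my own arithmetic, derived only from the numbers above):

```python
# 44-fold growth to 35 ZB by 2020 implies a 2009 baseline of about
# 0.8 ZB, i.e. roughly a 41% compound annual growth rate over 11 years.
ZB = 1e21  # bytes in a zettabyte

data_2020 = 35 * ZB
data_2009 = data_2020 / 44      # ~0.795 ZB starting point
cagr = 44 ** (1 / 11) - 1       # ~0.41 compound annual growth
```

Note that the implied ~41% annual rate is somewhat below the 60% per-year creation figure IDC cites; the two numbers come from different IDC framings of the same study.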
    With the data deluge demanding more from network infrastructures, vendors have applied a Band-Aid to the problem by adding new software-based features and functions in networking equipment. Software has now grown so complex that hardware has fallen behind. One way for hardware to catch up is to use processors with multiple cores. If one general-purpose processor is not enough, try two, four, 16, or more.
    Another way to improve hardware performance is to combine something new – multiple cores – with something old – Reduced Instruction Set Computing (RISC) technology. With RISC, less is more based on the uniform register file load/store architecture and simple addressing modes. ARM, for example, has made some enhancements to the basic RISC architecture to achieve a better balance of high performance, small code size, low power consumption, and small silicon area, with the last two factors being important to increasing the core count.
    Hardware acceleration necessary, but …
    General-purpose processors, regardless of the number of cores, are simply too slow for functions that must operate deep inside every packet, such as packet classification, cryptographic security, and traffic management, which is needed for intelligent QoS. Because these functions must often be performed in serial fashion, there is limited opportunity to process them simultaneously in multiple cores. For these reasons, such functions have long been performed in hardware, and it is increasingly common to have these hardware accelerators integrated with multicore processors in specialized SoC communications processors.
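The limit on throwing more cores at inherently serial per-packet stages can be illustrated with Amdahl's law. The 40% parallel fraction below is an invented example, not an LSI measurement:

```python
# Amdahl's-law sketch: if 60% of per-packet work is inherently serial
# (classification -> decryption -> traffic management must run in order),
# adding cores quickly stops helping -- the case for hardware offload.

def speedup(parallel_fraction, cores):
    """Overall speedup with `cores` cores when only `parallel_fraction`
    of the work can run concurrently (Amdahl's law)."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)
```

With a 0.4 parallel fraction, 16 cores give only a 1.6X speedup and infinite cores can never exceed 1.67X, which is why these functions migrate into dedicated acceleration engines instead.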
    The number of function-specific acceleration engines available also continues to grow, and more engines (along with more cores) can now be placed on a single SoC. Examples of acceleration engines include packet classification, deep packet inspection, encryption/decryption, digital signal processing, transcoding, and traffic management. It is even possible now to integrate a system vendor’s unique intellectual property into a custom acceleration engine within an SoC. Taken together, these advances make it possible to replace multiple SoCs with a single SoC in many networking systems (see Figure 1).
    Figure 1: SoC communications processors combine multiple general-purpose processor cores with multiple task-specific acceleration engines to deliver higher performance with a lower component count and lower power consumption.
    In addition to delivering higher throughput, SoCs reduce the cost of equipment, resulting in a significant price/performance improvement. Furthermore, the ability to tightly couple multiple acceleration engines makes it easier to satisfy end-to-end QoS and service-level agreement requirements. The SoC also offers a distinct advantage when it comes to power consumption, which is an increasingly important consideration in network infrastructures, by providing the ability to replace multiple discrete components in a single energy-efficient IC.
    The powerful capabilities of today’s SoCs make it possible to offload packet processing entirely to system line cards, such as those in a router or switch. In distributed architectures like the IP Multimedia Subsystem and SDN, the offload can similarly be distributed among multiple systems, including servers.
    Although hardware acceleration is necessary, the way it is implemented in some SoCs today may no longer be sufficient in applications requiring deterministic performance. The problem is caused by the workflow within the SoC itself when packets must pass through several hardware accelerators, which is increasingly the case for systems tasked with inspecting, transforming, securing, and otherwise manipulating traffic.
    If traffic must be handled by a general-purpose processor each time it passes through a different acceleration engine, latency can increase dramatically, and deterministic performance cannot be guaranteed under all circumstances. This problem will get worse as data rates increase in Ethernet networks from 1 Gbps to 10 Gbps, and in mobile networks from 300 Mbps in 3G networks to 5 Gbps in 4G networks.
    Next-generation multicore SoCs
    LSI addresses the data path problem in its Axxia SoCs with Virtual Pipeline technology. The Virtual Pipeline creates a message-passing control path that enables system designers to dynamically specify different packet-processing flows that require different combinations of multiple acceleration engines. Each traffic flow is then processed directly through any engine in any desired sequence without intervention from a general-purpose processor (see Figure 2). This design natively supports connecting different heterogeneous cores together, enabling more flexibility and better power optimization.
    Figure 2: To maximize performance, next-generation SoC communications processors process packets directly and sequentially in multiple acceleration engines without intermediate intervention from the CPU cores.
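The idea behind the per-flow engine sequencing described above can be modeled in a few lines. The engine names, flow names and packet fields below are invented for illustration and are not LSI's actual Virtual Pipeline programming interface:

```python
# Toy model of Virtual Pipeline-style routing: each traffic flow is bound
# to an ordered list of acceleration engines, and a packet traverses them
# back-to-back with no CPU hop in between.

ENGINES = {
    "classify": lambda pkt: {**pkt, "class": "video"},
    "decrypt":  lambda pkt: {**pkt, "encrypted": False},
    "inspect":  lambda pkt: {**pkt, "clean": True},
}

# Per-flow engine sequences, set up once by the control plane.
PIPELINES = {
    "secure_video": ["classify", "decrypt", "inspect"],
    "plain_data":   ["classify"],
}

def process(flow, pkt):
    """Run a packet through its flow's engine sequence, CPU untouched."""
    for engine in PIPELINES[flow]:
        pkt = ENGINES[engine](pkt)
    return pkt
```

A real SoC carries the routing decision in an on-chip message rather than a Python dict, but the key property is the same: the packet visits each engine in the configured order without an intermediate trip through a general-purpose core.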
    In addition to faster, more efficient packet processing, next-generation SoCs also include more general-purpose processor cores (to 32, 64, and beyond), highly scalable and lower-latency interconnects, nonblocking switching, and a wider choice of standard interfaces (Serial RapidIO, PCI Express, USB, I2C, and SATA) and higher-speed Ethernet interfaces (1G, 2.5G, 10G, and 40G+). To easily integrate these increasingly sophisticated capabilities into a system’s design, software development kits are enhanced with tools that simplify development, testing, debugging, and optimization tasks.
    Next-generation SoC ICs accelerate time to market for new products while lowering both manufacturing costs and power consumption. With deterministic performance for data rates in excess of 40 Gbps, embedded hardware is once again poised to accommodate any additional capabilities required by the data deluge for another three to four years.
    David Sonnier is a technical fellow in system architecture for the Networking Solutions Group of LSI Corporation.
    LSI Corporation david.sonnier@lsi.com www.lsi.com

    See how innovative the Axxia SoC networking platform is for mobile broadband speed [LSI China, Oct 15, 2012], translated from the Chinese original because there is no English equivalent

    What is Axxia? A communication processor? Not only that! It represents a uniquely innovative network SoC solutions platform with a flexible, energy-efficient and scalable architecture. It combines a complete set of network silicon and software:
    1. Communication processors using state-of-the-art multi-core technology, achieving fast-path acceleration with deterministic performance while remaining highly programmable;
    2. The highest-density, lowest-cost, fully programmable media and baseband processors;
    3. Multiservice processors for TDM (time-division multiplexed) transport: high density, low power and reliable transport across packet networks;
    4. Custom silicon based on Axxia processors, with flexible customer engagement and industry-leading delivery times.
    The Axxia network SoC solutions deliver deterministic performance for mobile and enterprise networks, with custom options enabling a high degree of differentiation.
    Flexible and highly scalable platform
    “With the mass deployment of EVS, SAE and LTE-Advanced, the bandwidth requirements that new system architectures place on the network will far exceed the processing capacity of today’s infrastructure. This calls for SoC architectures with excellent scalability, so that costs can be controlled effectively while meeting ever-rising bandwidth demand,” points out Tareq Bustami, marketing director of LSI’s networking components business.
    Axxia uses an energy-efficient central processing unit (CPU) platform with a multicore architecture that is independent of the instruction set architecture (ISA), combining general-purpose processors with the deterministic, high-speed paths of Virtual Pipeline. OEMs can extend and customize the Axxia platform according to their specific needs and choose their own silicon approach, such as ASIC, CSSP or ASSP. This unique business model has been widely welcomed by LSI customers (as shown in Figure 1).
    image
    Figure 1. The LSI Axxia network platform takes an original hybrid approach to general-purpose and task-specific processing, and can integrate custom IP.
    This flexible business model is built on LSI’s accumulated IP. Customers use standards-compliant, pre-verified IP cores provided by LSI to reduce costs and accelerate time-to-market.
    CoreWare IP comprises LSI’s proven, complex IP functions, designed for integration into LSI silicon with ease of use, reusability, supportability, quality, a full range of standard deliverables and supporting infrastructure. The CoreWare IP program covers leading storage-industry standard interfaces and components, ensures compliance, and lets customers focus their resources on product differentiation and competitive advantage.
    Since 1990, LSI’s IP solutions have included high-level, pre-packaged chip components delivered complete and integrated with LSI’s leading design tools and methodologies. LSI IP supports system-level design considerations for end applications, including simulation and signal-integrity requirements.
    LSI also offers a flexible IP subscription model, allowing customers to choose from a wide range of LSI solutions, from SerDes or I/O cells to complete I/O controllers, and from processor cores to complete processor subsystems. These include, but are not limited to: 1. support for current and next-generation serial standards (key storage and network interfaces) such as XFI (10G), Fibre Channel (8.5G), SAS (6G), PCI Express (5G) and many other interfaces; 2. the latest parallel memory interfaces, supporting DDR3 SDRAM, QDR, DDR II+ SRAM and RLDRAM, plus chip-to-chip interfaces such as SGMII and SPI4; 3. the industry’s most popular integrated processor products, such as PowerPC, ARM, MIPS and ZSP.
    Collaboration with ARM significantly improves mobile network performance per watt
    Particularly worth mentioning is that, in addition to continuing to develop high-performance multi-core PowerPC-based Axxia communication processor ASSPs, LSI has recently combined the high-performance multi-core ARM Cortex-A15 processor with LSI’s deterministic hardware accelerators in a new series of Axxia ASSPs, ideal for mobile access, backhaul and gateways. The series offers a variety of pin-compatible configurations suitable for a range of network applications: NodeB and eNodeB 3G/4G mobile access systems, mobile broadband radio network controller (RNC) applications, and enterprise gateways. The Axxia series provides a comprehensive software development environment, evaluation boards, and a series of hardware/software solutions launched with the industry’s leading suppliers.
    ARM CPUs combined with LSI hardware accelerators and the patented Virtual Pipeline technology achieve the best balance of performance and flexibility. This partnership provides customers with a proven, scalable, multi-function software platform that supports multiple generations of wireless infrastructure. LSI has a long history of cooperation with ARM and has shipped over one billion ARM cores, so integrating them into LSI’s flagship Axxia platform is a natural choice. The platform enables wireless manufacturers to develop a solution that contains all base station processing functions.
    Adding ARM’s energy-saving cores lets the Axxia platform provide energy-efficient, low-power multi-core processors for base stations and wireless infrastructure; deliver scalable performance to meet the massive data growth driven by smartphones, tablets and cloud services; and embed intelligence that can deterministically steer traffic, identify applications, and deliver the right traffic at the right time to enable real-time services such as mobile video. Through the ARM Community and the LSI networking ecosystem, customers also have access to a wealth of third-party tools and support.
    Innovative patented Virtual Pipeline technology for deterministic performance
    LSI’s patented Virtual Pipeline technology makes a classification decision for each packet and allows any routing combination: every data packet or media stream can pass through any sequence of acceleration engines and CPU cores before leaving the chip. This flexibility is very powerful and convenient, and helps designers shape how traffic flows through the device.
    The patented hardware scheduler, combined with any-to-any packet streaming, routes traffic on-chip as needed, enabling smooth communication between the acceleration engines and the multicore SoC subsystem components. Traffic from an input port is routed directly to a hardware acceleration engine and then on to the next engine; the transmission path depends entirely on the processing requirements of the specific traffic, regardless of whether a CPU core is used. Deterministic data throughput of 20 Gbps or more can be achieved, along with deterministic transport and L2 performance over longer transition periods, making the approach well suited to multi-protocol processing applications.
    image
    Figure 2. Virtual Pipeline message passing, a highly innovative patented technology.
    For example, with the Virtual Pipeline technology, encrypted traffic received on an Ethernet interface can first be sent to the decryption engine, then routed directly to the content inspection engine to filter out traffic containing viruses, spam or other malicious content. If the flow is deemed safe, it can be transmitted directly to the backplane ports without going through a CPU core. Traffic from an input port or an acceleration engine can also be routed to a CPU core for further processing as needed.
    A wireless ecosystem platform to reduce costs and accelerate time to market
    Mark Hung, research director at Gartner, said: “Building out mobile broadband networks to meet growing demand while controlling costs is a big challenge. Operators will benefit from more highly integrated IC solutions for mobile infrastructure systems as they manage the cost of transitioning to 4G networks.”
    As mentioned earlier, the Axxia platform’s highly differentiated multi-core silicon and software architecture delivers scalability and deterministic performance. Customer benefits include: capital expenditure reduced by up to 50%; power-consumption improvements of up to 50%, reducing operating costs; a simplified software architecture that helps customers reduce software development work; an active ecosystem with enhanced debugging capabilities that speeds time to market; a high level of system integration that reduces the bill of materials (such as memory and switches); and a deterministic platform that also shortens latency, improving the user experience.
    image
    Figure 3: The Axxia wireless platform ecosystem, including IP cores, OS platforms and developer tools from ISVs, ODMs, CMs and other partners, provides strong ecosystem support.
    As service providers race to deploy 4G wireless infrastructure, LSI expects growing customer demand for reliable, integrated solutions. The Axxia wireless ecosystem, founded a few years ago, combines IP cores, operating systems and tools from various partners, developers, independent software vendors (ISVs), ODMs and CMs with the price, performance, power and flexibility advantages of the Axxia product line, helping wireless equipment manufacturers accelerate time to market and reduce software investment.
    LSI and its ecosystem partners recently announced plans for the wireless platform to provide pre-integrated solutions optimized for the Axxia wireless platform, backed by stronger ecosystem support. LSI encourages its ecosystem partners to focus development on beneficial features and performance differentiation rather than on proprietary interfaces.
    This wireless ecosystem platform provides a proven, scalable, multi-function software platform that supports multiple generations of the Axxia product line; its consistent software architecture simplifies migration across Axxia products and lets customers choose what fits them best. The framework APIs of the LSI architecture allow open access for the different ecosystem partners, and those partners are committed to optimizing performance on the LSI architecture and supporting the strategic objectives of the wireless platform.
    For example, LSI is working with Radisys, a key partner in the Axxia wireless ecosystem, whose Trillium 3G and LTE wireless protocol software has been integrated and optimized specifically for the LSI Axxia platform. This lets the Axxia platform exploit unique, scalable capacity and performance while making the Radisys wireless software simple and convenient to deploy.
    This wireless ecosystem platform plan helps wireless OEMs and their supplier ecosystems respond to revenue challenges, accelerate time to market and lower total cost of ownership. It reduces software investment while delivering higher out-of-the-box performance, minimizes spending on non-differentiated work, and lowers technical barriers so customers can choose more cost-effective, lower-power, optimized products.
    (This article was provided by LSI Corporation.)

    Windows Azure Media Services OR Intel & Microsoft going together in the consumer space (again)?

    With Intel Media: 10-20 year leap in television this year [Feb 16, 2013] and Microsoft entertainment as an affordable premium offering to be built on the basis of the Xbox console and Xbox LIVE services [Feb 13, 2013] this is a highly probable assumption.

    There is other evidence as well. In fact, plenty of it. Especially from Microsoft's side:

    image

    The Entertainment and Devices Division (EDD) of Microsoft is currently the place where all of Microsoft's consumer-only activities are concentrated. EDD revenue, however, was down 11% for the latest quarter vs. that of a year ago. Moreover, it was just 17.6% of overall Microsoft revenue vs. 20.3% in the quarter a year ago.

    In addition:
    – in Microsoft Reports Record Revenue of $21.5 Billion in Second Quarter [Microsoft press release, Jan 24, 2013], great progress was reported in the non-consumer segments of Microsoft:

    “Our big, bold ambition to reimagine Windows as well as launch Surface and Windows Phone 8 has sparked growing enthusiasm with our customers and unprecedented opportunity and creativity with our partners and developers,” said Steve Ballmer, chief executive officer at Microsoft. “With new Windows devices, including Surface Pro, and the new Office on the horizon, we’ll continue to drive excitement for the Windows ecosystem and deliver our software through devices and services people love and businesses need.”
    The Windows Division posted revenue of $5.88 billion, a 24% increase from the prior year period. Adjusting for the net deferral of revenue for the Windows Upgrade Offer and the recognition of the previously deferred revenue from Windows 8 Pre-sales, Windows Division non-GAAP revenue increased 11% for the second quarter. Microsoft has sold over 60 million Windows 8 licenses to date.
    “We saw strong growth in our enterprise business driven by multi-year commitments to the Microsoft platform, which positions us well for long-term growth,” said Peter Klein, chief financial officer at Microsoft. “Multi-year licensing revenue grew double-digits across Windows, Server & Tools, and the Microsoft Business Division.”
    The Server & Tools business reported $5.19 billion of revenue, a 9% increase from the prior year period, driven by double-digit percentage revenue growth in SQL Server and System Center.
    “We see strong momentum in our enterprise business. With the launch of SQL Server 2012 and Windows Server 2012, we continue to see healthy growth in our data platform and infrastructure businesses and win share from our competitors,” said Kevin Turner, chief operating officer at Microsoft. “With the coming launch of the new Office, we will provide a cloud-enabled suite of products that will deliver unparalleled productivity and flexibility.”
    The Microsoft Business Division posted $5.69 billion of revenue, a 10% decrease from the prior year period. Adjusting for the impact of the Office Upgrade Offer and Pre-sales, Microsoft Business Division non-GAAP revenue increased 3% for the second quarter. Revenue from Microsoft’s productivity server offerings – collectively including Lync, SharePoint, and Exchange – continued double-digit percentage growth.

    – while Entertainment and Devices Division Performance and KPIs for Earnings Release FY13 Q2 [Microsoft Investor Relations, Jan 24, 2013] were reported as:

    Continued leadership position in console market

    • 5.9M consoles sold, down 28%
    • Halo 4 best-selling title of gaming franchise
    • Xbox LIVE members >40 million
    • Windows Phone sales were over 4 times greater than last year
    • 138 billion minutes of calls on Skype in quarter, up 59%

    EDD revenue decreased, primarily due to lower Xbox 360 platform revenue, offset in part by higher Windows Phone revenue. Xbox 360 platform revenue decreased $1.1 billion or 29%, due mainly to lower volumes of consoles sold and lower video game revenue, offset in part by higher Xbox LIVE revenue. We shipped 5.9 million Xbox 360 consoles during the second quarter of fiscal year 2013, compared with 8.2 million Xbox 360 consoles during the second quarter of fiscal year 2012. Video game revenue decreased, primarily due to $380 million of revenue deferred associated with the Video Game Deferral. Windows Phone revenue increased $546 million, including patent licensing revenue and increased sales of Windows Phone licenses.

    EDD operating income increased, due mainly to lower cost of revenue and sales and marketing expenses, offset in part by decreased revenue and increased research and development expenses. Cost of revenue decreased $544 million or 19%, mainly due to decreased sales of Xbox 360 consoles, offset in part by payments made to Nokia related to joint strategic initiatives and increased royalties on Xbox LIVE content and video games. Sales and marketing expenses decreased $92 million or 21%, primarily reflecting decreased Xbox 360 platform marketing. Research and development expenses increased $98 million or 25%, primarily reflecting higher headcount-related expenses.

    – and here we should consider the following Segment Information for the Entertainment & Devices Division excerpted on Feb 17, 2013:

    Entertainment and Devices Division (“EDD”) develops and markets products and services designed to entertain and connect people. EDD offerings include the Xbox 360 entertainment platform (which includes the Xbox 360 gaming and entertainment console, Kinect for Xbox 360, Xbox 360 video games, Xbox LIVE, and Xbox 360 accessories), Mediaroom (our Internet protocol television software), Skype, and Windows Phone, including related patent licensing revenue. We acquired Skype on October 13, 2011, and its results of operations from that date are reflected in our results.

    Note here the inclusion of Mediaroom (MS IPTV platform) into the portfolio which was not in the FY12 portfolio as per Microsoft 2012 Annual Report [Microsoft Investor Relations, Oct 9, 2012]. Mediaroom is described by the Microsoft Mediaroom Newsroom [excerpt as of Feb 17, 2013] as:

    Microsoft Mediaroom powers multi-screen entertainment services for consumers in partnership with operators. Visit: Mediaroom Website
    Microsoft Mediaroom is the world’s most deployed IPTV platform. Mediaroom-powered TV services are being offered by more than 40 of the world’s leading operators, delivering services to more than eleven million consumer households equaling more than 22 million set top boxes deployed throughout the Americas, EMEA and APAC. Operator partners including AT&T, Deutsche Telekom and TELUS are already giving their subscribers the freedom to watch TV how they want, while gaining the most innovative ways to reach them wherever they are.

    As another notable change, according to Announcing the Windows 8 Editions [Building Windows 8 blog, April 16, 2012]

    Windows Media Center will be available as an economical “media pack” add-on to Windows 8 Pro. If you are an enthusiast or you want to use your PC in a business environment, you will want Windows 8 Pro.

    With further details provided in Making Windows Media Center available in Windows 8 [Building Windows 8 blog, May 4, 2012]

    On the PC, … online sources [such as YouTube, Hulu, Netflix] are growing much faster than DVD & broadcast TV consumption, which are in sharp decline (no matter how you measure—unique users, minutes, percentage of sources, etc.). Globally, DVD sales have declined significantly year over year and Blu-ray on PCs is losing momentum as well. Watching broadcast TV on PCs, while incredibly important for some of you, has also declined steadily. These traditional media playback scenarios, optical media and broadcast TV, require a specialized set of decoders (and hardware) that cost a significant amount in royalties. With these decoders built into most Windows 7 editions, the industry has faced those costs broadly, regardless of whether or not a given device includes an optical drive or TV tuner.
    Our partners have shared clear concerns over the costs associated with codec licensing for traditional media playback, especially as Windows 8 enables an unprecedented variety of form factors. Windows has addressed these concerns in the past by limiting availability of these experiences to specialized “media” or “premium” editions. At the same time, we also heard clear feedback from customers and partners that led to our much simplified Windows 8 editions lineup.
    Given the changing landscape, the cost of decoder licensing, and the importance of a straight forward edition plan, we’ve decided to make Windows Media Center available to Windows 8 customers via the Add Features to Windows 8 control panel (formerly known as Windows Anytime Upgrade). This ensures that customers who are interested in Media Center have a convenient way to get it. Windows Media Player will continue to be available in all editions, but without DVD playback support. For optical discs playback on new Windows 8 devices, we are going to rely on the many quality solutions on the market, which provide great experiences for both DVD and Blu-ray.

    image

    Windows 8 Pro is designed to help tech enthusiasts obtain a broader set of Windows 8 technologies. Acquiring either the Windows 8 Media Center Pack or the Windows 8 Pro Pack gives you Media Center, including DVD playback (in Media Center, not in Media Player), broadcast TV recording and playback (DBV-T/S, ISDB-S/T, DMBH, and ATSC), and VOB file playback.

    According to Should I Upgrade to Windows 8 Media Center? [About.com Guide, Nov 23, 2012]

    The short answer? No. As of this writing, Media Center 8 is an exact duplicate of Media Center 7. No new features, no improvements, nothing.

    So with Windows 8 Microsoft was clearly placing its bet on online video!

    Then we should also consider that Microsoft was just Announcing Release of Windows Azure Media Services [Scott Guthrie’s blog, Jan 22, 2013], supporting Xbox and IPTV as well (i.e., when, I would assume [to be verified!], the content comes to the IPTV set-top boxes from Windows Azure Media Services instead of Mediaroom):

    image
    with the following conceptual functionality (“architecture”) inside: image

    What was announced is the V1 of the cloud-based variety of the overall Microsoft Media Platform (built on foundations of Windows Azure, Internet Information Services, Smooth Streaming and PlayReady) as defined in Microsoft Media Platform: Encoding and Serving Choices and Migration Considerations [Microsoft whitepaper, Jan 2, 2013] (corrections, emphases and additions are mine):

    Two Microsoft Media Platform Technologies are on-premises (that is, they run on servers placed directly in an enterprise), while the latest, Windows Azure Media Services, is cloud-based as part of Microsoft’s Windows Azure cloud computing platform ( http://www.windowsazure.com/).

    On-premises media technologies:

    Cloud-based media technologies:

    The initial components of Windows Azure Media Services, including Ingest [Upload media], Encoding [encode assets using a range of standard codecs, including popular adaptive bitrate formats], Content Protection [store and deliver your content securely using Microsoft PlayReady DRM or Apple AES Encryption], and On-Demand [Streaming] [deliver a fast, smooth, and adaptive experience to users while leveraging format conversion on the fly], are available or shipping soon with this release. Advertising (Ad Insertion) is currently available through Client SDKs. Additional components, including Live Streaming and Analytics, will be rolled out as they become available. When all of the components are in place, Windows Azure Media Services will offer a complete end-to-end media services solution, including video ingest, encoding and conversion, content protection, on-demand streaming, live streaming, and analytics.

    The current environment for video streaming is experiencing new challenges. The video portion of Internet traffic today is significant and growing rapidly, as is the number of internet connected TVs and mobile devices. In this environment, video providers and broadcasters are switching to IP as the medium of choice to reach this wide diversity of endpoints.

    To address these challenges, Windows Azure Media Services is designed to become a one-stop platform for securely encoding, packaging, and delivering video content from Windows Azure or CDNs, thus offering the scalability and reach of the cloud.

    Some of advantages of migrating to Windows Azure Media Services are:

    • Windows Azure Media Services has the scalability and reliability of a cloud platform and can handle large bursts in demand for video applications.
    • It is widely available for a global audience and can use third-party CDNs like Akamai, Level3, or Limelight.
    • Windows Azure Media Services has cloud-based versions of familiar Microsoft Media Platform and media partner technologies.
    • As a Platform-as-a-Service (PaaS), Windows Azure Media Services is faster, cheaper, and lowers risk:
      • PaaS is faster because there is less work for developers. End-to-end solutions benefit from a single platform that solves integration issues. As a result, applications can go from idea to availability more quickly.
      • PaaS is cheaper because it offers less administration and management overhead, and greater economies of scale: you pay only for what you use, and large capital outlays for media servers and network infrastructure can be replaced by the more efficient operating expenses of cloud computing.
      • PaaS lowers risk. Because the platform does more for you, there are fewer opportunities for error.
    • Security Standards and Certifications: Windows Azure Media Services Security is working towards SOC2 (Service Organization Control 2) compliance and plans to complete a CDSA (Content Delivery and Security Association) certification process and an MPAA audit in 2013.

    Windows Azure Media Services have the flexibility and power to enable you to create whatever media services solution that you envision. Some key usage scenarios are:

    • Creating an end-to-end workflow in the cloud. For example, a content management service can use Windows Azure Media Services to process on-demand Smooth Streaming video and distribute it to a variety of mobile and desktop clients.
    • Developing hybrid workflows that incorporate pre-existing on-premises resources. For example, a video production house might upload its finished videos to Windows Azure Media Services for encoding into multiple formats, and then use the Windows Azure Media Services Origin Service and a third-party CDN to deliver video on demand.
    • Choosing to utilize built-in Media Services components, or mixing and matching your own custom components or components from third parties. Individual Windows Azure Media Services components can be called via standard REST APIs for easy integration with external applications and services.
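    The first, fully-in-the-cloud scenario above can be sketched as a simple upload → encode → publish flow. The Python model below is a local simulation only: the class, method, URL and status names are illustrative assumptions, not the actual Windows Azure Media Services SDK or REST schema (those are covered in the whitepaper and SDK videos referenced in this post).

```python
# Simplified simulation of an end-to-end cloud media workflow
# (upload -> multi-bitrate encode -> publish an origin URL).
# All names here are illustrative, not the real AMS API.

class MediaService:
    def __init__(self):
        self.assets = {}

    def upload(self, name, source_file):
        """Ingest: register a mezzanine file as a new asset."""
        self.assets[name] = {"source": source_file, "renditions": [], "url": None}
        return name

    def encode(self, name, bitrates_kbps):
        """Encoding: produce one rendition per target bitrate (adaptive set)."""
        asset = self.assets[name]
        asset["renditions"] = [f"{name}_{b}kbps.mp4" for b in bitrates_kbps]

    def publish(self, name):
        """On-demand streaming: expose the asset via an origin manifest URL."""
        asset = self.assets[name]
        asset["url"] = f"https://example-origin.net/{name}/manifest"
        return asset["url"]

svc = MediaService()
svc.upload("movie", "movie_master.mov")
svc.encode("movie", [400, 1500, 3000, 6000])
url = svc.publish("movie")
assert len(svc.assets["movie"]["renditions"]) == 4
```

    In the real service each of these steps is an individually callable component (via REST), which is exactly what makes the mix-and-match third scenario possible.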

    [see more detailed information in the whitepaper itself and in the announcement blog referred earlier]

    I should only highlight one particular additional feature with the V1 release from Announcing Release of Windows Azure Media Services [Scott Guthrie’s blog, Jan 22, 2013]

    … our on-demand streaming support also now gives you a cool new feature we call dynamic packaging.

    Traditionally, once content has been encoded, it needs to be packaged and stored for multiple targeted clients (iOS, XBox, PC, etc.).  This traditional packaging process converts multi-bitrate MP4 files into multi-bitrate HLS file-sets or multi-bitrate Smooth Streaming files.  This triples the storage requirements and adds significant processing cost and delay.
    With dynamic packaging, we now allow users to store a single file format and stream to many adaptive protocol formats automatically.  The packaging and conversion happens in real-time on the origin server which results in significant storage cost and time savings:

    image

    Today the source formats can be multi-bitrate MP4 or Smooth based, and these can be converted dynamically to either HLS or Smooth.  The pluggable nature of this architecture will allow us, over the next few months, to also add DASH Live Profile streaming of fragmented MP-4 segments using time-based indexing as well.  The support of HLS and the addition of DASH enables an ecosystem-friendly model based on common and standards-based streaming protocols, and ensures that you can target any type of device.
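    The "store once, stream to many protocols" idea becomes concrete at the manifest URL: the asset is addressed once on the origin, and a format tag on the manifest request selects the packaging on the fly. The sketch below assumes the format-tag convention documented for Azure Media Services dynamic packaging (a bare .ism/manifest URL defaults to Smooth Streaming, a (format=m3u8-aapl) suffix selects Apple HLS); the host and asset names are made up.

```python
# Dynamic packaging: one stored asset, protocol chosen per request via the
# manifest URL. Format tags follow the AMS convention; host/asset are fake.

FORMAT_TAGS = {
    "smooth": "",                  # .ism/manifest is Smooth Streaming by default
    "hls": "(format=m3u8-aapl)",   # Apple HTTP Live Streaming
}

def manifest_url(origin, asset, protocol):
    """Build the per-protocol manifest URL for a single stored asset."""
    return f"https://{origin}/{asset}.ism/manifest{FORMAT_TAGS[protocol]}"

# the same asset, no repackaged copies in storage:
smooth = manifest_url("myorigin.example.net", "bigbuckbunny", "smooth")
hls = manifest_url("myorigin.example.net", "bigbuckbunny", "hls")
assert hls.endswith(".ism/manifest(format=m3u8-aapl)")
```

    A DASH tag could later be added to the same table, which is the "pluggable" property the announcement refers to.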

    ADDITIONAL MPEG DASH / MICROSOFT RELATED INFORMATION:
    Microsoft Announces Support for MPEG-DASH in Microsoft Media Platform [Microsoft Media Platform team blog, April 16, 2012]
    Alex Zambelli of Microsoft at Streaming Media West – held on Oct 30-31, 2012 [streamingmediavideo YouTube channel, published on Jan 2, 2013]

    Microsoft Sr. Tech Evangelist, Microsoft Media Platform discusses the ascendancy of MPEG DASH.

    as well as the quite universal multi-targeting support even in this V1:

    Consume

    Windows Azure Media Services provides a large set of client player SDKs for all major devices and platforms, and they let you not only reach any device with a format that’s best suited for that device – but also build a custom player experience that uniquely integrates into your product or service.

    Your users can consume media assets by building rich media applications rapidly on many platforms, such as Windows, iOS, XBox, etc.  At this time, we ship SDKs and player frameworks for:

    • Windows 8
    • iOS
    • Xbox
    • Flash Player (built using Adobe OSMF)
    • Silverlight
    • Windows Phone
    • Android
    • Embedded devices (Connected TV, IPTV)

    To complement all that here is a brief introduction into the whole Microsoft Media Platform (the on-premises varieties as well) followed in details with how HTML5 is fitting into that, from streamingmediavideo YouTube channel [May 9, 2012]:

    This session explores the role of HTML5 in the Microsoft Media Platform.

    In Streaming Servers 2012: New Features, New Opportunities [StreamingMedia.com, Oct 24, 2012] the latest features of the streaming server/platform solutions from Adobe, Anevia, CodeShop, Microsoft, and RealNetworks are overviewed, together with some upcoming features. This shows quite well how advanced the Microsoft Media Platform is, and hence why it could be the best platform for an effort such as that of Intel Media.

    There is a worthwhile comment as well from the same Microsoft specialist already shown in the videos above:

    Alex Zambelli · Seattle, Washington

    Hi Tim,
    Just a few corrections: The latest version of IIS Media Services, known as IIS Media Services 5.0 Premium, targeting OTT linear TV scenarios is available exclusively to Mediaroom customers as part of Mediaroom Component Technologies.

    See also: How to Use Continuous Network DVR Feature in PlayReady Premium and IIS Media Services Premium? [PlayReady blog, Dec 29, 2012] “PlayReady 2.x Premium and IIS Media Services 5.0 Premium have enabled the following four key features which are needed for scalable live TV service:”

    This is showing that Mediaroom is using the latest technologies available in the Microsoft Media Platform along with Windows Azure Media Services.

    Finally Intel Media is heavily betting on the new H.265/HEVC standard. This is how the same Alex Zambelli (since January working for a premium video workflow services and products partner of Microsoft) is viewing this issue in his H.265/HEVC Ratification and 4K Video Streaming [Alex Zambelli’s Streaming Media Blog, Jan 28, 2013] post:

    The media world today is abuzz with news of H.265/HEVC approval by the ITU. In case you’ve been hiding from NAB/IBC/SM events for the past two years – or if you’re a WebM hermit – I will have you know that H.265 is the successor standard to H.264, aka MPEG-4 AVC. As was the case with its predecessor it is the product of years of collaboration between the ISO/IEC Moving Picture Experts Group (MPEG) and the International Telecommunications Union (ITU) Video Coding Experts Group (VCEG). The new video coding standard is important because it promises bandwidth savings of about 40-45% for the same quality as H.264. In a world where video is increasingly being delivered over-the-top and bandwidth is not free – that kind of savings is a big deal.
    What most media reports seem to have focused on is the potential effect that H.265 will have on bringing us closer to 4K video resolution in OTT delivery. Most reports speculate that H.265 will allow 4K video to be delivered over the Internet at bit rates between 20 and 30 Mbps. In comparison, my friend Bob Cowherd recently theorized on his blog that 4K delivery using the current H.264 video standard would require about 45 Mbps to deliver 4K video OTT.
    While I think the relative difference between those two estimates is in the ballpark of the 40% bandwidth savings that H.265 promises, I actually think that both estimates are somewhat pessimistic. Given the current state of video streaming technology, I think we’ll actually be able to deliver 4K video at lower bit rates when the time comes for 4K streaming.
    A common mistake that most people dealing with lossy video compression seem to make is to assume that the ratio between bit rate (bps) and picture size (pixels/sec) remains proportional and fixed as the values of both axis change. I don’t think that’s the case. I believe that the relationship between bit rate and picture size is not linear, but closer to a power function that looks like this:

    image

    In other words, I believe that as the pixel count gets higher a DCT-based video codec requires fewer bits to maintain the same level of visual quality. Here’s why:
    1. The size of a 16×16 macroblock, which is the smallest unit of DCT-based compression used in contemporary codecs such as H.264 and VC-1, grows smaller relative to the total size of the video image as the image resolution grows higher. For example,  in a 320×180 video the 16×16 macroblock represents 0.444% of the total image size, whereas in a 1920×1080 video the 16×16 macroblock represents only 0.0123% of the total image. A badly compressed macroblock in a 320×180 frame would therefore be more objectionable than a badly compressed macroblock in a 1920×1080 frame.
    2. As many studies have shown, the law of diminishing returns applies to video/image resolution too. If you sit at a fixed distance from your video display device eventually you will no longer be able to distinguish the difference between 720p, 1080p and 4K resolutions due to your eye’s inability to resolve tiny pixels from a certain distance. Ipso facto, as the video resolution goes up your eyes become less likely to distinguish compression artifacts too – which means the video compression can afford to get sloppier.
    3. Historically the bit rates used for OTT video delivery and streaming have been much lower than those used in broadcasting, consumer electronics and physical media. For example, digital broadcast HDTV typically averages ~19 Mbps for video (in CBR mode), while most Blu-ray 1080p videos average ~15-20 Mbps (in 2-pass VBR mode). Those kinds of bit rates are possible because those delivery channels have the luxury of either dedicated bandwidth or high-capacity physical media. However, in the OTT and streaming world video bit rate has always been shortchanged in comparison. Most 720p30 video streaming today, whether live or on-demand, is encoded at average 2.5-3.5 Mbps (depending on complexity and frame rate). 1080p30 video, when available, is usually streamed at 5-6 Mbps. Whereas Blu-ray tries to give us movies at a quality level approaching visual transparency, streaming/OTT is completely driven by the economics of bandwidth and consequently only gives us video at the minimum bit rate required to make the video look generally acceptable (and worthy of its HD moniker). To put it bluntly, streaming video is not yet a videophile’s medium.
    So taking those factors into consideration, what kind of bandwidth should we expect for 4K video OTT delivery? If 1080p video is currently being widely streamed online using H.264 compression at 6 Mbps, then 4K (4096×2304) video could probably be delivered at bit rates around 18-20 Mbps using the same codec at similar quality levels. Again, remember, we’re not comparing Blu-ray quality levels here – we’re comparing 2013 OTT quality levels which are “good enough” but not ideal. If we switch from H.264 to H.265 compression we could probably expect OTT delivery of 4K video at bit rates closer to 12-15 Mbps (assuming H.265′s 40% efficiency improvements do indeed come true). I should note that those estimates are only applicable to 24-30 fps video. If the dream of 4K OTT video also carries an implication of high frame rates – e.g. 48 to 120 fps – then the bandwidth requirements would certainly go up accordingly too. But if the goal is simply to stream a 4K version of “Lawrence of Arabia” into your home at 24 fps, that dream might be closer to reality than you think.
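    Zambelli's power-law argument can be sanity-checked numerically using only the figures in his post. The Python sketch below recovers the macroblock fractions from point 1, derives the exponent implied by "6 Mbps at 1080p → ~19 Mbps at 4K", and applies the promised ~40% H.265 saving; it is a back-of-the-envelope check, not a compression model.

```python
# Sanity check of the argument above, using only the post's own numbers.
import math

mb = 16 * 16                       # pixels in a 16x16 macroblock
px_1080p = 1920 * 1080
px_4k = 4096 * 2304                # the 4K resolution used in the post

# Point 1: a macroblock shrinks relative to the frame as resolution grows.
frac_low = 100 * mb / (320 * 180)  # ~0.444% of a 320x180 frame
frac_hd = 100 * mb / px_1080p      # ~0.0123% of a 1080p frame

# Implied exponent k in bitrate ~ pixels**k, from 6 Mbps -> ~19 Mbps:
k = math.log(19 / 6) / math.log(px_4k / px_1080p)   # ~0.76, clearly sublinear

h264_4k = 6 * (px_4k / px_1080p) ** k    # ~19 Mbps 4K estimate with H.264
h265_4k = h264_4k * 0.6                  # ~40% H.265 saving -> ~11.4 Mbps
```

    A fitted exponent of roughly 0.76 (rather than 1.0) is exactly the sublinear relationship the post's power-function chart claims, and the resulting ~11-12 Mbps H.265 figure sits at the low end of the 12-15 Mbps range quoted above.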
    One last thing: In his report about H.265 Ryan Lawler writes that “nearly every video publisher has standardized [H.264] after the release of the iPad and several other connected devices. It seems crazy now, but once upon a time, Apple’s adoption of H.264 and insistence on HTML5-based video players was controversial – especially since most video before the iPad was encoded in VP6 to play through Adobe’s proprietary Flash player.” Not so fast, Ryan. While Apple does deserve credit for backing H.264 against alternatives, they were hardly the pioneers of H.264 web streaming. H.264 was already a mandatory part of the HD-DVD and Blu-ray specifications when those formats launched in 2006 as symbols of the new HD video movement. Adobe added H.264 support to Flash 9 (“Moviestar”) in December 2007. Microsoft added H.264 support to Silverlight 3 and Windows 7 in July 2009. The Apple iPad did not launch until April 2010, which was also the same month Steve Jobs posted his infamous “Thoughts on Flash” blog post. So while Apple certainly did contribute to H.264′s success, they were hardly the controversial H.264 advocate Ryan makes them out to be. H.264 was already widely accepted at that point and its success was simply a matter of time.

    More information:
    What Is HEVC (H.265)? [StreamingMedia.com, Feb 14, 2013]
    Episode 99 – Windows Azure Media Services General Availibility [Microsoft Channel 9 video, Jan 25, 2013]

    In this episode Nick Harris and Nate Totten are joined by Mingfei Yan Program Manager II on Windows Azure Media Services.  With Windows Azure Media Services reaching General Availability Mingfei joined us to demonstrate how you can use it to build great, extremely scalable, end-to-end media solutions for streaming on-demand video to consumers on any device and in this particular demo shows off the portal, encoding and both a Windows Store app  and iOS device consuming encoded content.

    For more information visit the Windows Azure Media Services page to learn more about the capabilities, and visit the Windows Azure Media Service Dev Center for tutorials, how-to articles, blogs, and more information and get started building applications with it today!

    How to build customized Media Workflows using the Media Services .NET SDK – Part I [Microsoft Channel 9 video, Feb 5, 2013]

    In this two part video, Mingfei Yan will teach you how to use the Windows Azure Media Services .NET SDK to create your own media workflow including how to upload, encode, package and deliver your video assets.  In this part you will learn how to create media asset and upload a video file from local drive.

    After completing this part you can watch part II here. You can get started with Windows Azure Media Services today for free!

    How to build customized Media Workflows using the Media Services .NET SDK – Part II [Microsoft Channel 9 video, Feb 5, 2013]

    – IMPORTANT: Client Ecosystem for Windows Azure Media Services [Mingfei Yan blog, Jan 14, 2013]

    This blog gives an overview of the client support Microsoft offers as part of Windows Azure Media Services. On one side, you can create, manage, package and deliver media assets through Windows Azure Media Services; many popular streaming formats are supported, such as Smooth Streaming, HTTP Live Streaming and MPEG-DASH. On the other hand, we provide various SDKs and frameworks for you to consume media assets by rapidly building rich media applications on many platforms, such as PC, Xbox, mobile, etc.

    What is Windows Azure Media Services [Mingfei Yan blog, Aug 21, 2012]

    Introducing Microsoft Media Platform [Media & Entertainment Insights blog, April 12, 2011]
    Microsoft Media Platform – David Sayed interview [Quantel blog April 20, 2011]
    H2 2012 Media Platform Product Update Roundup [Alex Zambelli’s Streaming Media Blog, Nov 16, 2012]: “It’s been a busy summer with most of the team focused on Windows Azure Media Services, but I’d like to take a moment to highlight a few other Media Platform releases of the past few months:”
    Mediaroom 2.0 Unites Software and Cloud Services to Power New TV Experiences Across Three Screens [Media & Entertainment Insights blog, April 6, 2010]

    Intel Media: 10-20 year leap in television this year

    Updates: Coming Soon: Intel’s Must-See TV [Barrons.com, June 22, 2013]

    The chip giant readies a TV subscription service powered by a set-top box unlike any other.
    Full disclosure, dear readers—I’m not a TV viewer. I chucked the set years ago and mainly watch things on computers.
    But then, television hasn’t changed much in decades, so I feel I’m still qualified to opine on the boob tube’s future. And two weeks ago, I was fortunate enough to glimpse a possible part of that future at the Santa Clara, Calif., headquarters of Intel (ticker: INTC), where I saw a TV service that is novel, elegant, and highly desirable, even to a television Luddite like me. The service faces a number of hurdles, including potential obstruction by the cable and telephone industries, but what I witnessed could take Intel in a thrilling new direction.

    Sometime this year, the chip giant will offer a set-top box at retail, with a subscription service that brings you live television over your broadband Internet connection.

    It is, in industry argot, an “over the top” video connection, requiring no actual TV package from the four major “multiple system operators,” or MSOs, as they’re called—Comcast (CMCSA), Cablevision (CVC), Time Warner Cable (TWC), and Charter Communications (CHTR)—or from Verizon Communications (VZ) and AT&T (T).

    WITHOUT GIVING TOO MUCH AWAY, the user interface seemed to hover beautifully above the currently playing show. An elegant simple menu made it easy to switch between channels or to pick and rent a recent film. It was light years from the cumbersome garbage that takes up most of the screen when using a standard cable-channel picker.

    There was a wide array of popular channels to choose from that would be familiar to any couch potato, though the final lineup is still being formulated. Equally important, when you hit the button on the remote, the TV seemed to jump to the next channel faster than is typical on cable. There also is a time-shifting aspect that goes beyond DVR, allowing you to go back through recent episodes.

    One wonders: Why hasn’t TV always been this way?

    Others who’ve viewed the project are enthusiastic, too. “The No. 1 thing I noticed was speed,” says Patrick Moorhead of Moor Insights & Strategy. Intel’s horsepower in the set-top is partially responsible for this, but multiple data centers that Intel is building to serve video also were a factor. “A lot of the value comes from what they’ve done on the back end,” says Moorhead. “They have the highest-performance Intel servers and video-encoding technology.” And he notes, “This is live television,” unlike other over-the-top offerings, like those from the TV network consortium Hulu, Apple’s (AAPL) Apple TV, Netflix (NFLX), or closely held Roku, which merely provide on-demand content from a back catalog. “It’s something I’ve never experienced before” in an Internet offering, Moorhead adds.

    No less thrilling is the fact that Intel, which makes $53 billion in yearly revenue from selling chips, and spends billions to make them, is becoming both a hardware and software vendor.

    The project is the effort of Erik Huggers and his staff of 350 people. Huggers, 40, won praise for developing the iPlayer for the BBC, a piece of video software that allows one to follow the channel’s TV and radio broadcasts. He came to Intel two years ago to advance efforts to sell chips to set-top makers. He made a bold move in telling his boss that the $4.5 billion TV-chip market wasn’t desirable. “The market was split up between 20 or more silicon providers, and it was a race to the bottom on prices,” says Huggers. “I said, ‘I don’t know how we ever turn that into a profitable market.’ ” Instead, he pleaded, “Release me to go after the $500 billion television market in a very different way.” He got his wish.

    Erik Huggers over het televisieproject van Intel (Erik Huggers on Intel’s television project) [MT Management Team, May 13, 2013], the essence of which is summarized below (the quotes were translated from Dutch):

    Hired in April 2011, and after spending three months on the plan with 12 others, Erik Huggers suggested that instead of manufacturing and selling chips for smart TVs and similar products, Intel should take a different course and develop a complete TV solution. He got approval for that in December 2011, and since then he has grown the project into a 300+ person organisation working in tight separation from the rest of Intel, in the stealth mode typical of Silicon Valley. To speed up the process he used the so-called acquihire approach (a novel thing, although used by Facebook very much), in which “you at once take a whole team of a company over, but not the company itself”. Intel Media bought in this way a number of “very targeted small businesses”. So only twelve months were needed to arrive at “a device, the software, the user interface, design, packaging, branding, all services, the back-end and various deals.”

    Such urgency was essential “because I think the time for over-the-top live television has arrived.” The product will initially be launched in the U.S. only, as it is the largest media market in the world, which also happens to be the most difficult one as it is so saturated, and “if you come up with something new, you have to have something very good.” They are going to offer “live TV, catch-up TV, Video On Demand as a transaction model, an iTunes-like service so that you pay per viewing time. And in addition, we will offer what they call in the United States electronic sell-through. You can buy a digital copy of a film in the cloud, which is playable on various devices.” That is, there will be various payment models behind Intel Media’s television service.

    It was initially difficult to convince content providers to come on board. He said that “there are a total of nine parties in America delivering content to consumers, plus another 5 or 6 providers like Comcast. It is a very concentrated market, where we now stand as a newcomer. The balance in this market is very, um, let me just say very interesting. The established parties are very close to each other and have a lot to do with each other. The idea of delivering television over the Internet is that revolutionary. We will use the infrastructure of the cable company, which is the same infrastructure they sell to consumers. It was so difficult to explain that; now it is purely about the execution.”

    With 300+ people, Intel Media will compete in such an environment. It is even more unusual as “we compete with companies like Comcast, which has 80,000 employees, and it’s just one of the many parties that are active in this market,” he said. They are going to launch before the end of the year and see the iPod as an example to follow. He said: “Look at the rise of the iPod. Around 2004, 2005 there were hundreds of MP3 players, but none worked really well. Then came the iPod, and at once it was game over for all the other players. The current state of affairs of smart televisions and streaming boxes is similar to the MP3 world before the introduction of the iPod. It is interesting that a company like Apple has, to date at least, been very disappointing in television. Apple TV works fine, but it is not revolutionary.”

    Regarding his first public announcements two months ago (detailed in the original post below) he said: “The reason I sat on stage at All Things Digital was because we had something to show. Just 4 or 5 weeks before, during CES, senior officials from the media industry had seen our product. Sometimes more than 10 people per company. So between 100 and 200 people have seen the device, and they all like to talk. … In addition, we were just starting to roll out the product to Intel employees. First it was tested by fifty families, but we wanted to test it at scale. So currently it is used by 1,000 families in Arizona, California and Oregon. Soon we will go to 5,000 families. They are all Intel households, so with people who are on our payroll. Now that it is slowly but surely being seen by more people, it is better to put out the story myself. You don’t have full control, but this way you can give it a twist of your own. You don’t want it coming out through the back door on some blog. But we have still omitted very many details, and we will keep it that way until we are ready.”

    End of updates

    Excerpts from Video – Dive Into Media: Intel’s Erik Huggers on What’s Next for Web TV [Feb 12, 2013] – the full transcript will follow in a separate section later on:

    [00:38] “We have been working for about a year now to set up a new group called Intel Media, … a new group focused on developing an Internet TV platform.”

    [~2:00] “For the first time … we will deliver a new consumer electronics product that consumers will buy directly from us or through retail under a new brand. This is obviously associated with Intel brand. It is an Intel powered device, it is a consumer electronics product with beautiful industrial design powered by, obviously, an Intel chip. That’s not where it ends, it’s not just the device. Where it really gets interesting is, we are working with the entire industry to figure out how do we get proper TV delivered via the Internet to consumers.”

    ADDITIONAL INSERT BBC iPlayer (Global) – Available for iPad, iPhone and iPod Touch! [BBCiplayerglobal YouTube channel, Sept 28, 2011]

    BBC iPlayer has launched on iPad, iPhone and iPod Touch! -http://bit.ly/SRjYWB – here’s the exclusive video guide to the BBC iPlayer app. Download the free app today with taster clips and episodes! Subscribe for unlimited access – even when you’re offline, and watch a selection of the best classic and contemporary British shows on demand. The BBC iPlayer app is currently available in the following countries: Austria, Australia, Belgium, Canada, Denmark, Finland, France, Germany, Ireland, Italy, Luxembourg, Netherlands, Portugal, Spain, Sweden and Switzerland. Please note, not all programming is available in all countries.

    [03:08] “At the BBC we launched a product in 2007 called BBC iPlayer. The iPlayer is the promise to consumers, to audiences, that we would make the unmissable truly unmissable. And so what does that mean: basically all the BBC output, everything on radio and television networks, is available from transmission plus seven days on over 650 different devices. And so this is not a cherry pick of a variety of products for output, this is literally everything. So if you missed something you don’t have to record it because it’s already there. It’s a cloud based service that offers you catch up.” [03:50]

    ADDITIONAL INSERT BBC WiE: Daniel Danker on evolving the BBC iPlayer [TheBBCAcademy YouTube channel, Aug 9, 2012]

    Daniel Danker, general manager, BBC iPlayer, explains how the nation’s favourite catch-up service has evolved. http://www.bbc.co.uk/academy/news/view/wie_iplayer

    Android: An update [Dave Price, Head of BBC iPlayer on BBC Internet blog, Dec 12, 2012]: “Android as a platform is becoming increasingly complex and fragmented with a huge difference between video playback capabilities across the 1500+ Android devices.”
    BBC has new competitors, warns iPlayer boss Daniel Danker [BBC Ariel, Feb 8, 2013]

    Daniel Danker, the general manager for on-demand and iPlayer, said that the BBC’s fiercest and most nimble competitors are no longer likely to be Sky, Channel 4 and ITV.
    ‘I think we are measuring ourselves against the wrong competitors, because actually the companies that are most likely to be disruptive in what we do are Google through YouTube, Amazon through LoveFilm and Netflix,’ he told Ariel in an exclusive interview on Thursday.

    … Today, iPlayer reaches about 25% of the UK every month, he said, but YouTube reaches about 40%. …

    … American giant Netflix is leading the way by releasing its drama House of Cards online to its subscribers. It took a gamble by releasing the entire series, a remake of a BBC political drama from 1990, immediately.

    Danker, an Israeli-born American who spent 11 years at Microsoft, said the series is ‘pretty good’ and has attracted big names, including Kevin Spacey. He called Netflix ‘innovative and nimble’. …

    … The iPlayer manager said that in YouTube’s Soho offices, artists and content creators are being asked to ‘professionalise their content’ at specially built studios. He predicted that it won’t be ‘skateboarding chimpanzees’ for much longer, but high-quality content. …

    [~12:00] “I don’t believe that the industry’s ready for pure à la carte [where you only pay for the channels you want],” he stated, and suggested that this would be a great opportunity to create offers that provide users with greater flexibility. “I believe there’s value in bundles, I believe that is a form of curation,” he concluded.

    [~ 13:30] “This thing looks like a leap in time of 10-20 years compared to what you have today. That is much more personal, that learns about you, that actually cares about who you are.”

    [~14:00] “We think there’s real value in the ability to actually identify the various users. Today, television doesn’t really know anything about you. It’s the same television service for everyone in the household. In order to actually recognise who is there and to offer you your personal experience, rather than having to log in or put your fingerprint or do a retina scan whatever, to make it completely seamless you need a camera. If you don’t like the idea of a camera, you think it’s creepy there is a nice little shutter and you just close the camera” Huggers said.

    [~19:00-20:00] “Intel is very interested in getting into consumer businesses, having a direct relationship with those consumers. Intel as a company is making a big shift towards, what Intel calls, becoming an experience driven company.”

    [~22:00] “We have gone out of our way to bring a completely new class, new type of skill sets into this crew. And we’ve set the group up in such a way that it is run in its own building, complete own building, with our own security, we have our own culture. We are still proud to be part of Intel, don’t make any mistakes, but this is a new effort.”

    [~25:00] “It’s not a value play, it’s a quality play where we’ll create a superior experience for the end user.”

    [~35:00] “The model that we are envisioning is a model where live TV and catch up TV all live in the same paradigm. These are not different applications, and so if you’re a programmer why would you want your catch up programs to live in an app somewhere else? Why doesn’t that live under your brand?”


    Video – Dive Into Media:
    Intel’s Erik Huggers on What’s Next for Web TV – WSJ.com
    [Feb 12, 2013]

    image

    Intel’s Erik Huggers took the stage with Walt Mossberg at D: Dive Into Media on Tuesday [Feb 12, 2013] to talk about the company’s forthcoming TV device that he describes as revolutionary.

    My transcript (lead reporter is Walt Mossberg, his second in command is Peter Kafka):

    [00:38] We have been working for about a year now to set up a new group called Intel Media. It’s a completely new division with new people, sort of a mix of existing Intel people with a lot of people from outside the company. To give you an example we hired people from Apple, from Jawbone, from Microsoft, from the BBC, and the list goes on, even Netflix and Google. So it’s a new group focused on developing an Internet TV platform.
    There’s no other Internet TV platform? 
    There’s quite a few out there but my opinion is that not many have yet actually cracked it, not many have truly delivered.
    Have any cracked it in your opinion?
    No, actually. That’s my opinion.
    Just to be clear. You are talking about becoming a pay TV service that delivers video over the web: instead of paying the cable company for video, I will pay you.
    That’s right. And so for the first time what we will do we will actually deliver a couple things to consumers. We will deliver a new consumer electronics product that consumers will buy directly from us or through retail under a new brand. This is obviously associated with Intel brand. [01:56]
    [02:12] It is an Intel powered device,  it is a consumer electronics product with beautiful industrial design powered by, obviously, an Intel chip. That’s not where it ends, it’s not just the device. Where it really gets interesting is, we are working with the entire industry to figure out how do we get proper TV delivered via the Internet to consumers.
    This is an over the top service where we will deliver both live television, broadcasts, cable nets and other output, but also have catch up TV and introduce that properly to this market. Because I personally think catch up TV still really doesn’t exist here, not as it exists in Europe today. And we will have on demand and a host of applications. [02:57]
    [03:08] At the BBC we launched a product in 2007 called BBC iPlayer. The iPlayer is the promise to consumers, to audiences, that we would make the unmissable truly unmissable. And so what does that mean: basically all the BBC output, everything on radio and television networks, is available from transmission plus seven days on over 650 different devices. And so this is not a cherry pick of a variety of products for output, this is literally everything. So if you missed something you don’t have to record it because it’s already there. It’s a cloud based service that offers you catch up. [03:50]
    In the UK with the BBC what happened was that iPlayer became sort of synonymous with on demand. Just like Xerox is copying and Kleenex is tissues. So I think that in this market we have yet to see a proper catch up TV service, like the one that I just described. [04:09]
    [04:45] When you say proper TV, I was struck by that term, I like that, proper TV, because you are talking about what I get through my cable box. And this is the key, because I have a Roku, I have an Apple TV on my big television at home. They give me a lot of interesting things but they don’t get me cable. … the cable box, where I pay a lot of money to the cable company and in effect get three things. You tell me you’re going to be one thing that would do almost three things? 
    Our ultimate vision is you need one …
    Ultimate vision? That means … 
    Ultimately we think there is an all in one solution. … Rome wasn’t built in a day. It takes time sometimes but where we will start, to be clear, is: we will have live TV, catch up TV, consistent. We will have on demand and we’ll have a set of applications. But the proper TV piece is something that I want to just pause for a second. Why do I say that?
    [05:57] I think that what we’ve seen so far in the industry are sort of … the interfaces that you have to date on all these varieties of television connected devices [the so called smart TVs] all look like Web pages from the 1990s blown up to ten foot. … They’re pretty clumsy to use, they are hard to use and … When I say proper TV the other thing I think about is when my five year old came to me over the weekend, literally, and he is — like probably every other five year old — a wizard with iPads and Android devices, he is completely self sufficient. Yet he comes to me on Saturday morning and gives me two remote controls and says ‘Daddy can you please put up …’ some PBS stuff, I forgot what it was, because he literally doesn’t know how to use it. It is impossible. [06:56]
    [07:53] So are you doing something different or are you going to do what I am already getting?
    We are going to do a lot of things different.
    First of all, in the pay TV space today what you get from an experience perspective is, let’s talk about the EPG (electronic program guide) for just a second. The EPG today is equivalent to a spreadsheet, it’s basically columns and columns and rows and rows, and you have to go through and through and through it, until you’re blue in the face. It is not very friendly, it is not very easy to use, and it sort of feels like, it reminds me of the days of my first computer, the Commodore 64. There’s a lot of room for improvement there. If you look at what people are used to today on iPads, on iPhones, on Apple TVs and other devices out there, there is a massive gap. So I think that’s one, the incredible user experience that is completely easy to use. [We have] the people that we brought over from the UK, the people that did the iPlayer UI. So we’ve got a team that really knows how to do this sort of stuff.
    [8:00] The second thing is that we’ll have a scenario where you don’t need a lot of different inputs anymore. You do not need to have all these different HDMI inputs. Today in my home I had to buy an HDMI switch because I have too many devices. [09:22]
    [10:15] The bundle thing that Peter [Kafka] mentioned is really really important. It’s a piece of the puzzle. There’s loads of people who deeply resent their cable and satellite company, for a number of reasons, but one of them is this bundle. I have to buy all these channels, I don’t watch all these channels, I don’t want all these channels. … You are going to revolutionize TV, you are going to bring us this box, but a minute ago I heard you say you’re still going to have these bundles. Is that right? 
    [11:17] I agree with you that what consumers want is choice, control and convenience. I do believe that there is value in bundles actually. The whole world talks about curation because there’s such a mass of information out there. In a way if bundles are done right, bundles are bundled right, for lack of a better way to explain it, then there’s real value in that. … I think there are opportunities out there to create a much more flexible environment where the end user has more control than what they have today. I don’t believe that the industry’s ready for pure à la carte. [11:56]
    [12:19] All I’m talking about … is the fact that I believe there’s value in bundles. I believe that is a form of curation. [12:27] … it’s somewhere in the middle [between current bundles and pure à la carte] [12:34]
    So it’s still bundles, but more intelligent bundles, or smaller bundles … 
    That’s a great way to explain things.
    … things that are more logical to me as a consumer?
    As a consumer.
    I may be so glad to not see the bundle Comcast makes me buy that I’ll say these guys at least giving me a better set of choices
    That’s the hope.
    It’s not pure à la carte but at least a better set of choices 
    It’s way towards more control, more choice for audience. [13:00]
    Are we going to save money?
    What this is not about for sure, it’s not about a value play. … What I believe is that if you get a vastly superior experience, where the delta, literally, when you get to see it, when we are ready to show you: this thing looks like a leap in time of 10-20 years compared to what you have today. That is much more personal, that learns about you, that actually cares about who you are versus being just … [13:40]
    The boxes would be a caring box? 
    We hope so. …
    … There is a less positive way to spin that caring box, this is the box that watches what you are watching, and targets you with advertising … 
    We think there is real value in the ability to actually identify the various users. Today TV doesn’t really know anything about you. It’s the same TV service for everyone in the household.
    … little creepiness here?
    I don’t think so, because one of the features we put on there is, in order to actually recognize who is there and to offer you your personal experience, rather than having to log in or put on your fingerprint or do a retina scan whatever, to make it completely seamless, you need a camera, but if you don’t like the idea of a camera, you think it’s creepy, there is a nice little shutter and you just close the camera and Bob’s your uncle. [15:00]
    … But cameras in iPads [etc.] are for those people to turn them on if they want to have a chatter, to take pictures something with it. It’s not looking at them for the purpose of serving up ads based on them. …  
    [15:58] But there is value. Let me talk about the value of a camera if we have to explain that. Imagine a scenario where you are watching your favorite TV show, whatever that may be. … The idea of television back when I grew up at least was that it was truly a social experience, a family experience. You’re together in the living room. What if you could actually watch that episode completely synchronized across the country and have a real social experience.
    … but we are talking about the camera watching you for the purpose of targeting, and that’s not the same as voluntarily Peter and I turning on the camera
    [17:02] I didn’t say that we will use the camera for targeting. What I talked about … it’s gonna watch you because: imagine the following scenario … I’ll give you another example. I have a Netflix account, and my five and nine year old will use that Netflix account all the time. When it’s my turn to use that Netflix account with my wife, the recommendations that I get are usually cartoons. They are not relevant to me because it’s a household account versus a personal account. That sort of environment is the living room. But if I can just … Why not get your kids a separate account? I’m cheap, I am Dutch. … If you have the ability to actually distinguish that it’s you or Peter, or me or the kids, or me and the kids, then you can create an environment where you can recommend things actually relevant for you.
    [18:14] Why is Intel getting into the consumer business? 
    Over the weekend I visited the Intel museum for the first time. … There was a quote on the wall that actually stuck with me. One of our founders, Robert Noyce, he said, apparently: don’t be encumbered by past history, go off and do something wonderful. So that’s what we’re trying to do. We’re trying to get closer to the end user. We understand that end user audiences have much bigger control these days over the direction of travel.
    [18:56] When you use the term end user then you’re not all the way there yet? 
    Audiences, audiences … We talk about one thing. Back of our card, of the Intel Media card, it says: the audience is at the heart of everything we do. So we really care about audiences.
    [19:32] Is it just because Robert Noyce said … or is it because Intel’s principal business making chips for PCs has been flat or down in recent years? 
    Intel is very interested in getting into consumer businesses, having a direct relationship with those consumers. Intel as a company is making a big shift towards, what Intel calls, becoming an experience driven company. So if you think about it what better way to learn what experience driven is all about than through digital media, through a service that directly relates to audiences, that directly delivers experiences to audiences. [20:16]
    …. [Will the coming CEO sign off for that as well?] …
    [20:40] Intel Media is governed by a small board, some of the most senior people at Intel. While our CEO is certainly a very important proponent of what we’re doing here with Intel Media there is a very broad support for …
    [20:58] Why are you going to be better at having that kind of resonance with the consumer than big companies that happened to have a lot of consumer wealth already? Apple, Microsoft … Google. These are not companies that have to come from someplace different. They are right there in the consumer space. Look at this room and look at the logos of the devices, and they don’t say Intel. Intel somewhere in the device but … The point is what makes you thinking to compete with these guys? 
    At the end of the day it all comes down to people … people, people, people. I spent the last twelve months putting together an incredible leadership team. We have a lady from Apple who’s been there for literally twelve years. She launched most of their iProducts. She’s our head of marketing. We have a gentleman from Jawbone who put Jawbone, the Bluetooth company, in twenty thousand retail outlets around the world. We have a gentleman from Microsoft who built the Mediaroom platform that AT&T U-verse runs on. The list goes on and on. We have gone out of our way to bring a completely new class, new type of skill sets into this crew. And we’ve set the group up in such a way that it is run in its own building, complete own building, with our own security, we have our own culture. We are still proud to be part of Intel, don’t make any mistakes, but this is a new effort. [22:43]
    In many ways I sort of compare it with … back in the day when I was at Microsoft, many years ago, there was this moment when Microsoft indeed got into the gaming console business, and a lot of people said: what are they doing, you make enterprise software, you don’t know anything about this space. Yet ten years later, or maybe more, it’s an enormous success … [23:10]
    [23:26] Why do you think no one else [from those already consumer companies] has stepped forward [in the TV space] so far? 
    I don’t know frankly. It’s hard to tell. I mean those companies [already in the consumer space and trying] should speak for themselves.
    [24:16] We’ve taken the leap of faith that time is here. I mean broadband capability is here, it works. Compression of video is completely changing landscapes again. I mean we are moving to a new codec HEVC [H.265] which again compresses fifty percent better than H.264. So the ability to deliver super high quality video via the Internet live and on demand is here today. We have the silicon and the software, and the knowledge and the know how to create an incredible product with an incredible UI, with a new user paradigm. Rather than wait for others to jump into that market and see it take off we’re jumping in, and we’re going to try and make … [24:57]
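    Huggers’ “fifty percent better” figure implies roughly the same quality at about half the bitrate, which is what makes live over-the-top delivery plausible on ordinary broadband. A back-of-the-envelope sketch (the 8 Mbps H.264 figure for 1080p is an illustrative assumption, not a number from the talk):

```python
# Rough arithmetic behind the HEVC claim: ~50% better compression
# means roughly the same quality at about half the bitrate.
# The 8 Mbps H.264 figure is an illustrative assumption.

h264_bitrate_mbps = 8.0
hevc_bitrate_mbps = h264_bitrate_mbps * 0.5  # "fifty percent better"

# Monthly data for 3 hours of live viewing per day:
hours_per_month = 3 * 30

def monthly_gb(bitrate_mbps):
    # megabits/s * seconds -> megabits, /8 -> MB, /1000 -> GB
    return bitrate_mbps * 3600 * hours_per_month / 8 / 1000

print(f"H.264: {monthly_gb(h264_bitrate_mbps):.0f} GB/month")  # -> 324 GB/month
print(f"HEVC:  {monthly_gb(hevc_bitrate_mbps):.0f} GB/month")  # -> 162 GB/month
```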
    [25:22] But it’s not a value play, this is not a kind of cutting our cable bill
    It is not a value play, it is a quality play. It is a play where we will create a superior experience for the end user.

    [30:57] You’ve got all these connected devices already. Why just not build apps for them?
    When we started the discussion we talked about the fact that I built this thing called iPlayer in the UK. We made that service available on over 650 different devices. Everything from phones to tablets, to game consoles, to smart TVs, to Blu-ray players … literally anything that plays back audio and video had iPlayer on it. … It’s fair to say that with the experience that we’ve had over there in the UK, this is definitely a direction that we’re going to follow with this as well.
    Why build the device? It’s an excellent question. I happen to believe that if we want to deliver the experience that we have in mind for the living room, there is no platform out there today where we can do that. In order to deliver on our vision of that new experience you need to control everything: you need to control the chip, you need to control the operating system, you need to control the app players, you need to control the sensors, etc. That’s sort of the reason why we went there. If there were platforms out there where we could deliver exactly what we have in mind there wouldn’t be a need to do it, but there isn’t. [32:15]
    [32:30] At the end of the day I believe in a world where you have a good, better, [and] best experience. The best experience is the one we have in mind, and we will deliver it on a device that we will ship and sell to audiences.
    [32:55] The programmers are taking billions and billions of dollars from the cable companies. What incentive do they have to unbundle and work in different ways? No matter how beautiful your devices are going to be, what incentive do they have to break their established business models and give you some of their content, no matter how much they are getting paid?
    [33:12] First of all I didn’t say that we will unbundle. What I said is that we will create new bundles. … But your question is a very valid question.
    Let’s take a look at history. We went from over-the-air television to cable TV, to satellite TV, to telco-driven television. Constantly there were new forms of distribution out there. For programmers, getting new distribution is a good thing at the end of the day. So I believe that this is just another step in the evolution of distribution. The internet has finally got to a point where you can deliver a true television experience, where channel zapping becomes the same experience that we lost twenty years ago when we went digital. In the analog days when you zapped to a channel it was instant. In today’s digital world if you zap to a channel you have 2-3 seconds of nothing while you’re waiting. We can bring an incredible TV experience via the Internet to consumers, and that is a real opportunity for programmers.
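The "2-3 seconds of nothing" Huggers describes has a simple structural cause: a digital decoder can only start playback at a keyframe (IDR frame), so on average a channel change waits about half a group-of-pictures (GOP) interval, plus whatever buffering the player requires. The sketch below models that; the GOP length and buffer depth are illustrative assumptions, not values from the interview.

```python
# A hedged sketch of digital channel-change ("zap") latency. Assumptions:
# the decoder must wait for the next keyframe, which arrives on average
# half a GOP interval after the zap, and the player then fills a small
# buffer before rendering. Values below are illustrative, not measured.

def expected_zap_delay(gop_seconds: float, buffer_seconds: float) -> float:
    """Average channel-change delay: half a GOP (keyframe wait) + player buffering."""
    return gop_seconds / 2.0 + buffer_seconds

if __name__ == "__main__":
    # e.g. an assumed 2 s GOP and 1.5 s of buffering -> ~2.5 s average zap,
    # consistent with the "2-3 seconds of nothing" figure in the transcript
    print(f"average zap delay: {expected_zap_delay(2.0, 1.5):.1f} s")
```

Shrinking either term (shorter GOPs, or prefetching adjacent channels so the buffer is warm before the zap) is how an IP delivery platform could approach the instant zapping of analog TV.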
    The final thing I would say is that the model that we are envisioning is a model where live TV and catch-up TV all live in the same paradigm. These are not different applications, and so if you’re a programmer why would you want your catch-up programs to live in an app somewhere else? Why doesn’t that live under your brand? If you were an NBC or something like that, why do I need to go to the service provider’s catch-up service to find the NBC programs when actually I wanted to find it through … [34:58]
    [35:14] The ability to wirelessly beam things from these devices to your TV, which Apple brands AirPlay and other companies have something like, but I think Apple’s is the only one that’s really taken off and that a lot of people use so far. Can that be a feature of what you do? Wireless beaming from my other devices?
    This is certainly something that we’re looking at, for sure. It is an important use case where consumers obviously have a multitude of media-capable devices, whether it’s photos, audio or video. The ability to display that on another screen is certainly something of intense interest.


    Historical evidence

    Insight: Intel’s plans for virtual TV come into focus [Reuters, June 8, 2012]

    Intel is counting on facial-recognition technology for targeted ads and a team of veteran entertainment dealmakers to win over reluctant media partners for its new virtual television service

    But so far it’s proving a challenge to get the service off the ground, thanks to an unwillingness on the part of major media content providers to let Intel unbundle and license specific networks and shows at a discount to what cable and satellite partners pay.
    Intel, the world’s largest chipmaker, has kept its strategy to launch a slimmed down cable TV service under wraps as the tech giant risks getting into a completely new line of business.
    According to five sources who have been negotiating with Intel for months, the company is emphasizing a set-top box employing Intel technology that can distinguish who is watching, potentially allowing Intel to target advertising.
    The set-top box pitched by Intel doesn’t identify specific people, but it could provide general data about viewers’ gender or whether they’re adults or children to help target advertising, two sources said.
    Intel’s plans put it in the middle of Silicon Valley’s battle for the living room. Heavyweights such as Apple, Amazon and Google believe the $100 billion U.S. cable television ecosystem – dominated by major distributors such as Comcast and DirecTV Group and program makers like Walt Disney Co and Time Warner Inc. – is ripe for disruption for reasons ranging from shifting viewer habits to ever-increasing programming costs.
    While none of these companies have so far been able to make major inroads, Intel thinks it can build a better set-top box and over-the-top subscription service to deliver TV content to consumers, even though the initiative catapults it into virgin market territory. A successful TV service showcasing Intel technology could be a big step toward making its chips prevalent in more living room devices.
    “If they can create a virtual network and it incorporates proprietary Intel technology, they could certainly bring something different to the subscription TV model,” said JMP analyst Alex Gauna.
    Intel’s offering aims to exploit one of the TV industry’s major issues: the reliability, or lack thereof, of Nielsen ratings data on audiences. Nielsen has long been the dominant provider of TV ratings, but the accuracy of its data has come under attack by some network programmers, who argue that its polling system of 50,000 homes is antiquated for the digital age.
    For its part, Intel claims that the new interactive features in its set-top box would add greater value to TV advertising and help offset reduced revenue from licensing fees for network owners.
    “They’ve told us the technology is going to be so much more interactive with ads that you can make more money. But it’s just a little unproven,” said one executive who has been involved in the talks.
    An Intel representative declined to comment for this story.
    Chip features making it easier for Hollywood studios to protect content streamed to computers, as well as tools for detecting faces and analyzing audiences, are examples of current proprietary technology that Intel would like to see widely adopted.
    BEYOND PCs
    While Intel’s processors power 80 percent of the world’s PCs, its chips have not achieved a significant presence in smartphones, tablets and other interconnected devices. Intel executives say they are eager to make sure its semiconductors play major roles in new markets with big growth potential.
    According to a company source, ensuring that its chips become prevalent in home entertainment devices would be the driving reason behind any Internet TV service it launches.
    Comcast, for instance, recently announced the gradual rollout of an Intel-based set-top box that customers can control with their smartphones. Called “X1,” the platform will rely on data centers packed with high-end servers — which typically also use Intel chips.
    Intel last year wound down a push to make chips specifically for “smart” TVs after Google TV, which it had backed, failed to make a major splash with consumers.
    At the same time, it formed the Intel Media business group with a mandate of promoting digital content on Intel-based platforms.
    According to sources, Intel is proposing to media companies a service that could include both a bundle of TV channels similar to a normal cable package and an on-demand component.
    ENLISTING HEAVYWEIGHTS
    Intel is intent on launching its video service before the end of the year, sources said. Original plans called for it to be launched by November, said one of the sources, but that deadline likely will not be met.
    The biggest problem Intel faces is its inability to reach deals with major content providers, which are reluctant to license their networks and TV shows at rates that could undercut their larger established cable and satellite partners.
    Intel wants to keep its costs down by licensing smaller packages of TV networks instead of replicating the basic cable TV bundle of more than 100 channels. But network owners won’t agree to smaller bundles without being paid a premium for the channels they choose to license.
    “Why would I want you to take subscribers away from another distributor at a lower price?” asked the same media executive, who spoke with Reuters on condition of anonymity.
    To change that mindset, Intel has assembled a team of television industry veterans well-schooled in negotiating distribution deals. Leading the group as head of Intel Media is Erik Huggers, who worked on media at Microsoft before going to the BBC. Huggers enlisted as an adviser Garth Ancier, who most recently served as president for BBC Worldwide America and before that worked at NBC, FOX, and Disney.
    In addition to Huggers and Ancier, sources said, two other names prominent in TV circles have emerged as consultants for Intel: entertainment lawyer Ken Ziffren and former MTV executive Nicole Browning.
    Browning, who previously negotiated on the other side of the table for MTV, has been handling some of the talks with partners, sources said.
    Ziffren built his reputation representing Hollywood talent – he was instrumental in negotiating the deal that returned the “Tonight Show” to Jay Leno. Lesser known is his firm’s work negotiating deals for DirecTV’s video-on-demand service and carriage agreements for pay-TV network Starz.
    But even that quartet of executives may not be enough to resolve an intractable problem, which is that content companies have little incentive to offer their channels to Intel at a discount and Intel is loath to pay a premium.
    “They’d love a better deal but they won’t get one,” said Needham & Co analyst Laura Martin of Intel. “The industry has always worked on volume discounts.”
    Underscoring the difficulty insurgent tech companies face in securing content, Microsoft in January indefinitely postponed plans for its own online TV subscription service after deciding that licensing costs were too high, according to people familiar with those discussions.
    And therein lies that dilemma that Intel and other insurgent over-the-top providers must tackle before their big plans can be realized.

    Intel eyes Internet-based TV service: WSJ [Reuters, March 12, 2012]

    Chipmaker Intel Corp is developing an Internet-based TV service for consumers and has been promoting it with media companies, the Wall Street Journal said, citing people familiar with the effort.

    The world’s top chipmaker plans to create a “virtual cable operator” that would offer media companies’ TV channels in a bundle over the Internet, the WSJ said.

    An Intel spokeswoman declined to comment on the story.

    The product could use an Intel set-top box and Intel’s name, and the chipmaker has told its potential partners it wants to start the service before the end of the year, the WSJ said.

    In October, Intel wound down its efforts to make chips for digital “smart” TVs, although it continues to make chips for set-top boxes.

    At the same time, it formed the Intel Media business group, headed by former BBC executive Erik Huggers, aimed at promoting digital content on Intel-based platforms.

    Intel winds down smart TV business [Digital TV Europe, Oct 13, 2011]

    Intel has ditched its move into internet connected TVs after closing its Digital Home Group.

    The company will continue to supply chips to gateway devices and set-top boxes but will wind down its Digital Home business. Digital Home Group staff will be relocated to focus on netbooks, smartphones and tablet devices.

    Erik Huggers, a high profile appointment from BBC Future Media in January, will remain at Intel where he will lead a new group called Intel Media.

    Intel’s decision is reportedly due to a lack of demand for its chipsets for internet-enabled flatscreen TVs. Intel’s Atom CE4100 chips are currently used to power a variety of devices including Sony’s Google TVs and the Logitech Revue Google TV-enabled set-top, but also the D-Link Boxee Box as well as French ISP Free’s Freebox Révolution and Liberty Global’s Samsung-built Horizon set-tops.

    Intel Looks Beyond Smartphones, Tablets & TVs [Information Express blog, Oct 19, 2011]

    – Appoints Huggers to Found & Run New Intel Media Group
    – Digital Home Group Merged into Netbook & Tablet Group  

    Intel, under the hands-on direction and guidance of CEO Paul Otellini, wants to look beyond smartphones, tablets, TVs and consumer PCs, way beyond, so it can plot a course for its future, not just for the near term. To that end, Intel has taken two giant steps.  

    It has created a new group called Intel Media that Erik Huggers will head. Huggers’ Digital Home Group, except for smart TVs, will be merged into the existing Netbooks and Tablets Group that Doug Davis will continue to operate. The smart TV operation is being closed down except for existing customers.

    Intel sees the TV market as currently being a “footage per dollar” one: consumers set a budget for what they can spend and then try to buy as big a screen as possible for less than that budget. Evidence of that includes 60-inch TVs going for $1,200, a 42-inch LED smart TV from LG (this year’s model) being sold by Amazon for $650 including delivery to the home, and the chip-making giant Broadcom exiting the market a few weeks ago.
    The economic downturn and increased competition have put brand-name makers of TV sets under tremendous pricing pressure. Sony, once the king of high-end TV sets, has lost billions of dollars in the TV market and says it expects to lose millions more. Sony, Toshiba and Hitachi are working together with a government-backed fund to spin off and merge their LCD businesses. (Why not? The US did almost the same for US car makers with loans and advances for “green” cars.)

    No one has cracked the smart TV platform yet and that’s why so many have popped up. In some respects Intel is doing what the smart TV industry will have to do at some point: stop and ask where we are going. It’s like the early days of MP3 players, Huggers said, when there were lots of MP3 players but no one was buying. Suddenly Apple entered the market with the iPod and the iTunes store and player, perfectly synched, and consumers started buying millions of its players and songs from iTunes.  

    Perhaps the straw that will break the camel’s back in TV pricing is that two major new factories are being built in China to make displays, according to Intel. The golden age for buying TV sets will continue but goodness help you if you’re trying to make them for a profit. 
    As long as the pricing pressure on TVs continues, TV set makers don’t want to add any feature they don’t have to — although it’s widely acknowledged that the tipping point for smart TVs has been passed. All TV sets will be smart, just like they all now have color. 
    There’s another reason for Intel to meld IPTVs and tablets. As Apple has clearly shown, successful CE makers will have one silicon and one ecosystem. Apple, for example, is not going to use a different silicon family or ecosystem for apps and online store than the ones it uses for iPhones and iPads if it were to launch a line of TV sets. 

    Let Us Praise the Dead Digital Home Group

    The Digital Home Group had some notable successes handling the CE versions of the Atom processor:
    – The Boxee Box
    – The failed (through no fault of Intel) Google TV that Sony and Logitech made
    – The IPTV STB Comcast ordered from Pace
    – The snazzy Samsung STB that Liberty Global’s UPC ordered 

    The follow up on those and others like them will be handled in Intel’s Netbook and Tablet Group. 
    Intel sees a major opportunity in IPTV boxes — media processors and the gateway/home network businesses. It sees the synergy that’s emerging between tablets and smart TVs plus other smart consumer devices. 
    The move to all-IP infrastructures by the cablecos and the links between TV sets and tablets were loudly obvious at The Cable Show in Chicago.

    The world’s telcos started with IP for their TV technology and the cablecos are rushing to catch up. The race to integrate tablets and TVs takes two forms:  

    – The use of the tablet as a second viewing device — a mobile TV within the home. 
    – The tablet and smartphone become a companion screen to what’s on the TV, one where viewers can chat with friends and the show’s stars about what they’re watching. It goes beyond allowing viewers to “click” on advertising links to learn more about a product. Ask any parent of teenagers about it.

    Intel spokeswoman Claudine Mangano said, “We believe the future of TV is in IP delivery and multi-screen usages and are aligning our focus to these areas, and with other top corporate imperatives that include ultrabooks, smartphones and tablets.” She made it clear that Intel is not abandoning its existing smart TV customers.

    Intel Media Looks Way Ahead

    Intel Media is being founded to look beyond the current generation of smartphones, tablets, TVs, PCs and IPTV. It is mandated to answer, “What technology will be needed as the digital media industry progresses?”  

    Intel is not clear publicly on what Intel Media’s mandate is but in Erik Huggers it has put one of the industry’s leading digital media executives in charge. Huggers is not talking about it very much except to say Intel is very, very serious and ambitious in digital media and that he is super-excited by Otellini’s challenge. 
    Huggers was previously at the BBC as director of its Future Media & Technology division until Intel hired him earlier this year. Before that he worked for Microsoft on various digital media projects.
    With impossible hurdles in front of him, Huggers led the technology dinosaur BBC into the digital media era. He oversaw the launch of the BBC’s iPlayer for catch-up TV. Launched in 2007, it was years ahead of its time and still ahead of anything in the States.
    He nearly led the BBC to the forefront in smart TV platforms with Project Canvas, now called YouView. It is an attempt to develop a standard smart TV platform that lets developers easily add apps and CE makers easily add it to their gear. Unfortunately the BBC Trust, which runs the BBC, decided to play politics instead of getting out of the way.
    It forced the BBC to bring in seven other companies such as BT, each with a different opinion as to what should be done, to help develop and deploy YouView. Know the story about the committee and the camel? Well, that’s what happened. YouView is still not on the market and the rival HbbTV standard is becoming dominant in continental Europe.
    A common smart TV platform would have benefitted consumers and CE makers just as Windows did for PC makers and consumers. Instead the world is awash in smart TV platforms — all incompatible and inconsistent in their user interface — and with some companies changing platforms from year-to-year. 

    The closest Huggers comes to revealing anything about Intel Media is to say, “For Intel to be successful in digital media, it must have the best access to digital content.” He then says that Amazon is showing the way with its Kindle Fire.  

    Intel wants Intel Media to sail out into the future of digital media and see what’s there. It has selected the best man for that task. Perhaps Huggers will again be called “director of future media and technology” as he was at the BBC.

    Innovation in Media [Erik Huggers on Intel Capital, Global Summit 2011, Nov 15, 2011]


    Huggers joins CMI supervisory board [CMI press release, Oct 27, 2011]

    Consolidated Media Industries, the innovative European digital media group, is delighted to announce that Erik Huggers is joining its Supervisory Board. The Intel Executive and former BBC Future Media & Technology director, has an extensive international track record at the forefront of digital media innovation.
    “Erik Huggers really understands how innovative technology is changing the behaviour of media consumers worldwide,” said CMI President and CEO Bart-Jan van Genderen. “He will play a key role in the realization of CMI’s international ambitions. Erik’s experience and vision on global media, innovative consumer services and digital content creation are of tremendous value to us. We’re delighted such a talented person is joining our Board to share his insights and expertise.”
    “CMI is one of the most digitally savvy media enterprises in Europe” said Erik Huggers, Corporate Vice President of Intel Media. “I feel privileged to join this team and look forward to working closely with the other Board members during this important phase of CMI’s growth.”
    Erik Huggers Career
    Erik Huggers is currently Corporate Vice President and General Manager of Intel Media and a member of Intel’s Management Committee. Erik’s mission is to establish Intel as a global leader in consumer software and digital media services.
    Prior to his position at Intel, Huggers has worked with Endemol Entertainment as Director of Business Development for its interactive division. He then joined Microsoft, where he led the global business development for Windows Media Technologies.
    He joined the BBC in 2007 and became a member of the BBC’s Executive Board. He was appointed Director of BBC Future Media & Technology and during his tenure was responsible for the successful roll-out of BBC Online, BBC iPlayer, Mobile and Red Button services. All these technologies were designed to help audiences enjoy easy access to BBC content, on demand and on any device. Huggers also held responsibility for managing the Broadcast and Enterprise Technology group, BBC Archives, as well as leading the Research & Development department.

    Intel Names BBC Executive to Lead Digital Home Effort [Intel Newsroom post, Jan 18, 2011]

    Intel Corporation today announced that Erik Huggers will serve as corporate vice president and general manager of the company’s Digital Home Group and become a member of Intel’s Management Committee. Huggers is director of the BBC’s Future Media & Technology division and serves as a member of the BBC’s Executive Board. He replaces interim general manager Brad Daniels.
    “Erik Huggers’ proven track record of managing a variety of digital media businesses will be an extraordinary asset to Intel’s digital home initiative,” said Intel President and CEO Paul Otellini. “Erik’s background and vision for delivering new platforms, interactive content and services to consumers are an outstanding fit for Intel, and I am thrilled to welcome such a talented person to drive this key strategic business for Intel. We look forward to him joining our team.”
    Huggers joined the BBC in 2007 and is responsible for delivering BBC content over the Internet, interactive TV and mobile, helping audiences enjoy programming using a wide variety of devices from any location. He is also responsible for managing the BBC’s Broadcast and Enterprise Technology Group and BBC Archives, as well as leading the BBC’s Research and Development activities.
    Huggers has long been at the forefront of digital media innovation. Prior to joining the BBC, he was with Microsoft where he led the global business development for Windows Media Technologies. Before joining Microsoft, Huggers worked with Endemol Entertainment as director of business development for its interactive division.
    “I look forward to joining one of the leading technology companies in the world,” said Huggers. “This is a tremendous opportunity to build a new business for silicon, software and services to unlock the potential of high-quality connected media experiences in the living room.”

    Intellect Consumer Electronics Conference 2011 – Keynote presentation by Intel’s Eric Huggers [IntellectTechnology YouTube channel, Aug 2, 2011]

    Keynote presentation by Eric Huggers from Intel at the Intellect Consumer Electronics Conference 2011: Future of Digital Entertainment.


    One on One with Erik Huggers [Intel Free Press, Aug 18, 2011]

    Former BBC executive heading up Intel’s consumer electronics efforts on management, smart TV and life.

    When Intel went looking for a new leader to replace departing executive Eric Kim as head of the Digital Home Group, they went to someone who knew very little about silicon.
    But through his work at the British Broadcasting Corporation (BBC) as director of the Future Media & Technology organization, and Microsoft, where he drove a wide variety of digital media initiatives, Erik Huggers is no stranger to digital media innovation.
    In the following Q&A, Huggers, a native of The Netherlands, talks about why he joined Intel, how the company needs to get into the heads of digitally savvy teenagers, and why his new user experience design team in London is a key asset.
    Since joining Intel 4 months ago, have you ever asked yourself, “What have I gotten myself into here?”
    On day one. I’ll tell you the honest truth. Let me first say, I do not regret joining Intel for a second. I’ve been overwhelmed by the warm reception I’ve gotten.
    When I was at the BBC as an executive board member in a media and entertainment company, you get a certain set of privileges when it comes to office spaces.
    I had a proper executive suite on the top of the building, lots of windows, a living room set up in my office, projectors, televisions. That’s how it’s done for those companies for the last 90 years.
    When I arrived here on day one and they showed me to my cube on floor five [in the Robert Noyce Building of Intel’s Santa Clara, Calif. headquarters], I literally thought, what have I done?
    I’ll adapt, don’t get me wrong. But the delta on day one between the executive suite and my new cube (laughs). I had a bit of detox to go through, I think.
    So when I moved [in 2007] from Microsoft to the BBC I had people in front of BBC Television Centre dressed up in these chemical nuclear suits picketing against my appointment.
    At Intel, I’ve only been warmly welcomed by colleagues and folks around the business. And so far, it’s been an amazing 4 months.
    During your short tenure at Intel, have you seen areas where we can improve?
    As someone who’s been here for 4 months, I don’t claim to have tons of wisdom. I was surprised by the number of steering groups and meetings that happened. Some of these meetings are like professional debating societies, where there are armies of Intel people talking about incredible minutia. I would’ve thought we would be fleeter of foot.
    In these meetings, I am surprised by the number of people doing email. If you don’t want to be in a meeting, get out. Don’t do mail. Close your laptop.
    One of the things that I really learned being in the media industry directly and indirectly for 15, 20 years now, is that what those industries do really well is put the audience at the heart of everything they do. I don’t think that’s what we do today.
    What we talk about is valid stuff like the next process node, or putting more transistors on a die, or can we do more gigahertz or flips or flops or whatever we measure, and we get really excited — for good reasons. But what’s more important is: What does this stuff enable for the consumer?
    And I’m not talking about the people who buy our technologies and build end-products. I mean the person who buys the end-product. How is what we build valuable to a 15-year-old who’s completely connected?
    We need that hardcore technical super-engineering capability that we have in spades here. But we also need the audience insight.
    Finally, I’m a big supporter of our investments in software development, and I think that’s absolutely critical. We need to attract the best possible engineering talent in order to take a bit more control over our own destiny as a company.
    Can you talk more about user experience?
    Everyone talks about user experience at Intel these days. I’ve come to the conclusion that most people don’t know what they’re talking about.
    We have great talent inside Intel, don’t get me wrong. Genevieve Bell and the team [Interaction and Experience Research (IXR) group in Intel Labs] clearly get it.
    We need to bring top talent that can execute on that user experience and design piece into Intel so that starts to influence our culture, our way of thinking, how we think about products, the audience. So, we just hired a user experience design crew in London.
    Why London?
    Here in the Silicon Valley, when it comes to those sorts of skills, it’s impossible for us to — well not impossible — but it’s very difficult for us to compete, because you’re competing with Facebook, Apple, Google. We don’t have that same sort of competitive situation in the UK right now, and traditionally the UK has been a hub for design talent.
    Plus, the people that I’ve been able to attract I know very well, because they worked in my organization. These are the guys that have designed industry award-winning services across television, telephone, tablets, PCs.
    I think bringing that expertise into Intel will influence the direction of travel for whatever we do in next-generation silicon, next-generation software, next-generation services, so that we start with that audience in mind, and then we work our way back.
    So in 2 years, where do you see smart TVs and Intel’s play?
    My hope is that our play in smart TV is going to be more than just silicon. Silicon is absolutely a critical element to get right, and I would argue that the silicon engineering team has performed miracles.
    Just having that platform in your living room means nothing if there’s no content, no services, no applications, if there isn’t a vibrant ecosystem of third party ISVs and media companies who target that platform as a means of reaching the consumer and building a viable business.
    So is DHG only about smart TV?
    I think it’s important to realize that we have some pretty interesting early momentum. Getting Comcast to work with us is a huge milestone. Getting other service providers to take us seriously, like Free in France, a wonderful success story, and Sony on Google TV. As Intel, we’re going beyond the PC. We have early glimpses of what that world could look like in DHG. We have shipping products, we have customers.
    My entire career has been dedicated to digital media. And consumers do not care whether it’s consumed on a TV, a PC, a phone or a tablet. It doesn’t matter.
    Consumers today are hungry for taking control over their digital media consumption.
    And so to me, DHG is not just about television. DHG can potentially help the rest of Intel with our digital media ambitions.
    How would people at the BBC and Microsoft describe your management style?
    In some cases, if a project is going completely off the rails, maybe the management style is slightly more autocratic and directive and hands-on and micromanaging. In other cases, you have a great leadership team in place and they’re ticking along quite well, it’s much more coaching and supporting and helping resolve blocking issues. I don’t think there’s such a thing as a single style.
    Dutch people are very direct, and they call it as they see it, and I think that’s very important.
    Is there some area of management that you’ve had to improve upon?
    No one’s perfect. Everyone has opportunities to improve their day to day work, the way they interact with others. I think everyone always has to work on communication style and over-communicating, because just because you think something doesn’t mean that everyone automatically understands what you’re saying.
    What I’ve found is that when I get bored of the message, that’s when it really starts to ring through with other people.
    Who was your best manager?
    Two individuals that I have in mind were both entrepreneurial, self-starters, not afraid of managing up or managing down.
    They also were able to create teamwork, group spirit, and didn’t necessarily pit their best people against each other. A bit of creative tension is good, but animosity and negativity, that’s simply not good.
    What made you decide to come to Intel?
    [President and CEO] Paul Otellini convinced me that he was absolutely, completely, and utterly dead serious about moving Intel beyond the PC.
    The PC was going to remain critically important, as were servers, but he was dead-set on making sure that we as an organization were going to be successful in phones, in tablets, in television, and whatever other form factors come along. We’re going to move from a PC company to a compute company.
    How do you balance work with life?
    I’m passionate about what I do. This is not for me about a paycheck. I want to be part of an organization and contribute to an organization and lead an organization that has the ambition to change the world, change the industry.
    When you’re mission-driven like that, putting in the long hours doesn’t matter. You’re passionate about it, you love what you do, you enjoy it, that’s what gets you out of bed every day. And so, work/life balance is tough, but I’m fortunate that I’ve got a brilliant wife who’s very understanding and forgiving.
    How do you relieve stress?
    What I do is I talk all day with customers, with partners, with employees, with colleagues. To relieve some stress, I like to be quiet. Maybe simple stuff like watch a movie or go for a walk.
    What are your hobbies, besides traveling?
    I’m passionate about technology, keeping up-to-speed with the latest and greatest of what’s happening on the web, what’s happening with consumer electronics. I get the latest widgets and gizmos and try them out.
    My wife is a Formula 1 fan, and because of her, I get kind of forced into it.

    Media City Forum : Erik Huggers Presentation [SalfordUniversity YouTube channel, March 5, 2010]

    Erik Huggers delivers his presentation to Media City Forum and talks about opportunities for Higher Education (Part 5 of 5)

    Microsoft entertainment as an affordable premium offering to be built on the basis of the Xbox console and Xbox LIVE services

    OR create interactive content as a premium offering together with partners using Kinect technology as a starter OR moving Microsoft Xbox 360 to ‘entertainment console’ OR leaving the good quality commodities to others and going for a premium brand with Xbox as well OR Xbox 360 and Microsoft’s Gaming Future – D: Dive Into Media [WSJDigitalNetwork YouTube channel, Feb 12, 2013]

    [04:26] “Our focus is to really transition [Microsoft’s Xbox] to an entertainment device,” Nancy Tellem said Monday at All Things D’s D: Dive Into Media in Laguna Niguel, Calif. “We have thousands of movies, sports events, live events.” [04:34]
    [05:14] “We’re looking at a whole variety of types of content. What I call premium, which is aligned with HBO and Showtime quality, network and cable quality,” she said. “Then, of course, we’re looking at alternative, reality, live events. All of it premium. It’s a premium service. We are also looking at each piece of content in terms of what would suit it best, whether in format, whether in length. So we are not constrained in the same way traditional media distribution companies are, having to produce something at a certain length. We have 46 million global users, all connected. Some of the premium services we offer may be part of the membership.” [06:05]
    In addition to producing content internally, Tellem said she was also looking at partnering with “traditional media partners, studios and creatives.”
    “[The] Xbox brand stands for the best, most interactive, most amazing entertainment you can get,” Yusuf Mehdi said, comparing it to low-cost media streamers like the Roku. [08:35] “Our current and future investment is about doing the things that are big and premium. Let’s really do things that are amazing for customers.” [08:43]
    Would you become a pay-TV provider? [09:18] “For questions like this it always goes back to what do we feel we can really be best in the world at doing, what’s our value add,” he said. “Our value add is not being another distributor of content that comes from many good sources today; it’s about adding that level of interactivity, creativity and fidelity that makes it come alive.” [09:36]
    [12:46] Nancy Tellem: “I think one of the perfect examples is more live events (this is more hypothetical, as obviously we’re not there yet), but certainly if you do a comedy show, you can actually, from a transactional standpoint, know where the comedian is going to be performing and buy tickets; that’s an example. You can also, not just from a Twitter standpoint, but actually share it with your friends, actually experience it together. He or she may be on the other side of the United States or in another country, Ireland or wherever; that’s kind of the real-time experience that really is quite unique. We’ve already done a joint venture with Sesame Street where, from a children’s programming standpoint, the kids can actually interact with the video itself and have a real interactive experience with …” [13:50]
    She sees episodes ranging from as short as 10 minutes to an hour and a half, with multiple episodes being produced, all using Xbox’s interactive capabilities, be it the voice-and-gesture-enabled Kinect or the second-screen SmartGlass app.

    Xbox Execs Talk Momentum and the Future of TV [Microsoft feature story, Feb 11, 2013]

    Living room entertainment is in its largest evolutionary period since the transitions from black-and-white to color and from standard definition to high definition. The Xbox 360, alongside Microsoft’s entertainment industry partners, is at the forefront of that evolution as one of the only devices that brings all forms of entertainment together in one place, while making access to content easy and providing new ways to interact with existing programming. In 2012, the amount of TV and other entertainment offerings on Xbox almost tripled, now surpassing 100 custom, voice-controlled TV and entertainment apps on Xbox LIVE.
    “Yes, we started with video games, but we have been on a journey to make Xbox the center of every household’s entertainment,” says Yusuf Mehdi, corporate vice president of Microsoft’s Interactive Entertainment Business.
    Today Mehdi, along with Nancy Tellem, president of entertainment and digital media at Microsoft, participated in a D: Dive into Media session, facilitated by Peter Kafka, to discuss that journey and the opportunities that lie ahead. Mehdi revealed some new data that illustrates how entertainment usage on the Xbox has exploded during its living room transformation, and Tellem shared more about her newly created Los Angeles-based Xbox Entertainment Studios.
    Today, there are more than 76 million Xbox 360 consoles around the world. That’s three times the number of original Xbox consoles sold. And a Kinect sensor now sits next to roughly one third of those Xbox 360 consoles; the company has sold 24 million Kinect sensors since launch.
    Social has been an important part of Xbox from the beginning, and that’s true today more than ever. The Xbox LIVE community has grown to 46 million members, a 15 percent growth since last year.
    2012 also marked the Xbox’s biggest year for entertainment and games usage. Users enjoyed more than 18 billion hours of entertainment in 2012, with entertainment app usage growing 57 percent year over year globally. Last year in the United States, Xbox LIVE Gold members averaged 87 hours per month on Xbox, an increase of 10 percent year over year.
    Those numbers strongly indicate that consumers enjoy all kinds of entertainment via Xbox, and Mehdi believes the future of entertainment is even brighter, as Microsoft plans to keep the momentum rolling.
    “We believe that Xbox is being used by more people in the household, during more hours in the day and for more forms of entertainment,” he says. “People are using Xbox in the morning to work out with the Kinect Nike+ Fitness program, kids are watching cartoons, families are enjoying movies, and of course people are playing blockbuster games like ‘Halo 4.’”
    The Future of TV Is Interactive and More Engaging
    According to Mehdi, Xbox has something in the living room no one else has – a large installed base of devices already in the home, connected to TVs, and over half of those are already linked together, delivering amazing personalized and social entertainment experiences via the Xbox LIVE network.
    Microsoft believes that the future of TV and entertainment is one where the TV becomes interactive and more engaging, Mehdi and Tellem explain. Microsoft sees that viewers want to do more with their TV shows, movies, sports and other forms of entertainment.
    “We believe that we are at the start of the next wave of truly interactive entertainment,” Tellem says.
    Tellem is spearheading a new L.A.-based studio called Xbox Entertainment Studios, where the mission is to create true interactive content for Xbox and other devices that will change the way entertainment content is experienced and delivered. Tellem also now oversees live event programming for Xbox LIVE. Xbox has had success with live events such as the Elections 2012 Hub on Xbox LIVE, which aired the presidential debates with an added interactive polling capability. Viewers submitted 3 million answers to on-screen questions during the live telecast of one of the debates. More recently, Xbox aired an interactive red carpet experience for this year’s Grammy Awards and will be doing the same for the 85th Academy Awards.
    “When I worked in traditional TV, we would find ourselves saying things like ‘Wouldn’t it be cool if we could add an interactive aspect directly into the show and engage directly with the viewers?’” says Tellem. “With Xbox, that is possible today.”
    Xbox already offers content such as Kinect Sesame Street TV, which blurs the lines between a traditional linear TV show and an interactive experience, letting a kid jump into their beloved Sesame Street and throw coconuts at Grover.
    But it’s not just about new types of entertainment; it’s also about new business models and new engagement opportunities for advertisers. Mehdi called the launch of NUads, a new ad format that harnesses Kinect and the natural user interface, an important moment for TV advertising. NUads deliver what is most scarce to advertisers today: consumer engagement. They enable natural interactivity using the simplicity of a spoken word or the wave of a hand. The first wave of NUads, which launched last fall with interactive polling, saw a record level of consumer engagement, with 37 percent of people responding. With this model, passive TV advertising is transformed into engaging and actionable experiences.
    Pioneering the Future of TV
    In addition to Xbox Entertainment Studios creating content that will highlight what’s possible and demonstrate the capability of the Xbox platform, Microsoft will continue to partner with content creators, networks, aggregators and advertisers to “pioneer the future of TV,” says Tellem.
    During 2013, Microsoft is planning to launch more than 40 new voice-controlled, customized TV and entertainment apps on Xbox.
    “We want to partner with the industry to bring entertainment into a new era,” she says. “It’s an era when interactive entertainment becomes the greatest form of all entertainment – and we couldn’t be more excited to play a part in it.”

    Will Next Gen Xbox Win With Premium Media Combo? [Breaking Analysis] [siliconangle YouTube, Feb 12, 2013]

    Microsoft’s Xbox 360 has been morphing from a gaming console into an entertainment console, and Microsoft intends to capitalize on that in its future Xbox consoles. Although Microsoft has not confirmed that a next-gen Xbox is in the works, details about a new console have still been leaked. SiliconANGLE Contributing Editor John Casaretto shared his knowledge regarding the source of the leaks, a person by the name of SuperDaE. SuperDaE made headlines last year when he tried to sell a Durango development kit on eBay; Durango is allegedly the code name for the next-gen Xbox project. Casaretto concluded, “He’s got a lot of information, a lot of it seems really poignant, and he’s a very credible source.” Casaretto named some of the new features of the new console, noting there are some interesting twists to them, such as the ability to run multiple games simultaneously, required game installations, and compatibility with a new, improved (and mandatory) Kinect sensor. He explained that some of the features that were previously optional are now unified into one grand Xbox. Microsoft’s senior VP of interactive entertainment, Yusuf Mehdi, claimed that the Xbox 360 is transitioning from a gaming console to an entertainment console before our eyes. Casaretto believes this is a natural evolution and expects to see more gaming, more video content, and even more shopping coming through the Xbox, turning it into a genuine family entertainment center. He predicted, “I think we’ll see this symbiosis of entertainment and gaming, and this will apply to Nintendo and Sony as well. They’ll certainly follow suit.”
    There is also some speculation that Microsoft will expand its popular monthly subscription service, Xbox LIVE, to include original shows, just as Amazon and Netflix have done. Casaretto stated that their success in this area will depend on the type of content they put out. He said if they’re trying to make completely original shows, such as sitcoms, it could easily turn into a money pit very quickly in trying to keep up with other popular shows out there. However, if it’s aligned with game releases and other events, he said Microsoft could be right on the money with this expansion idea, though it will take some time to establish a presence in this space.
    Likewise, in other new-media TV developments, Intel Media has confirmed they are currently working to launch Internet television this year. The Verge reported that Intel Media plans to provide “live television, catch-up television, on-demand [and] a set of applications.” Their hardware package will include a set-top box and a camera, which could be used for synchronized viewing with other viewers across the country.
    Casaretto observed that Microsoft has been very quiet about its new branding strategy with Xbox. He reminded viewers that Sony is expected to say something about its new PlayStation soon, and he believes Microsoft is waiting to key off of Sony’s announcement. See the entire segment with Kristin Feledy and John Casaretto on the Morning NewsDesk Show.

    See also:
    Xbox LIVE Turns 10: A Decade of Entertainment [Microsoft feature story, Nov 15, 2012]
    The New Entertainment Experience From Xbox [xbox YouTube channel, Oct 25, 2012]

    The biggest blockbuster games, movies and music that go where you go, your favorite sports in one place, and your Nike personal trainer at home

    Xbox Music [xbox YouTube channel, Oct 14, 2012]

    All the music you love, every way you want it — Xbox Music. http://www.xbox.com/music

    To understand the outstanding leadership behind that Microsoft entertainment effort, see this earlier presentation, Microsoft’s Yusuf Mehdi at MIXX 2009 [IABtv YouTube channel, Oct 8, 2009]:

    Yusuf Mehdi provides a rarely-seen demonstration of a truly interactive home office powered by Microsoft Touch—complete with voice detection, touch screens and surface computing capabilities.

    Xbox Beyond the Box [The Official Microsoft Blog, May 29, 2012]

    Next week, the Xbox team heads to Los Angeles for E3. Having recently joined the team, I have the benefit of a fresh perspective, and one of the things that has struck me is the number of opportunities we have ahead.
    Before joining Xbox, I had the fortune of being a part of some incredible periods of innovation and incubation such as in the early days of Windows and Internet Explorer, and, more recently, with the creation of Bing. I have also had a great passion for gaming and entertainment, and watched the growth of MSN into one of the world’s largest media sites on the Internet.
    Similar to many of those experiences, the Xbox has arrived at an important inflection point in its growth and development. In particular, in the last year, the Xbox has transformed from a gaming console into a broad entertainment device inclusive of movies, TV, music and sports. I will give you an example of what that means.
    A few weeks back, my family and I decided to escape a rainy Sunday afternoon by watching a movie. The need for indoor entertainment in Seattle in May is (sadly) no surprise, but what really struck me was the way my son went about finding a movie. After my children finished a heated rock-paper-scissors battle to determine who got to choose what we would watch, my son sprinted to the Xbox 360. He didn’t first turn on the TV or go to our relatively hefty collection of DVDs. For him, the Xbox is now the gateway to what he wants to watch.
    Now, my household may be a little biased, but my son isn’t alone. Xbox will always mean games, but for tens of millions of people around the globe, it also means music, TV, movies and more. Xbox LIVE subscribers now spend an average of 84 hours per month on the console. For comparison, the average American watches a little more than 150 hours of TV in the same period. The more entertainment options we add, the more time people spend on Xbox. In the last six months, we’ve grown our entertainment library on Xbox to include more than 60 voice controlled applications and more than 200,000 premium movies and TV shows.
    As a result of people engaging with Xbox more frequently for more entertainment activities, we have been able to defy gravity compared to where consoles traditionally are at this point in their lifecycle. Sales for Xbox 360 in year five were greater than in year four, sales in year six were greater than in year five, and sales in year seven were greater than in year six.
    Since 2005—when we launched Xbox 360—we have sold 67 million consoles and have generated more than $56 billion at retail, and we’re still going strong in our seventh year. With 47 percent share of the current-generation console market in the U.S., we are hitting our stride largely as a result of the success of Kinect for Xbox 360 (19 million sold worldwide) and the flood of new entertainment options through Xbox LIVE (40 million members worldwide).
    To date, our success with Xbox has been led by a box in the living room. Moving forward, Xbox will go beyond the box to reach all new families of devices. Just as Xbox has grown to mean more than just games, it also is more than just a console. This year, Xbox becomes the premium entertainment service for Microsoft. Whether on your PC, tablet, TV or phone, Xbox will be a gateway to the best in music and video, your favorite games and instant access to your friends. With the launch of Windows 8, we’ll bring Xbox entertainment to everyone. With Xbox on Windows 8 devices, we rapidly accelerate the reach of Xbox entertainment from more than 60 million people to hundreds of millions of people worldwide.
    We understand that entertainment has become a multi-screen experience where you and your friends are watching TV, listening to music, and playing games while interacting with your tablets and phones in new ways. We’ve got ideas for making all the entertainment you love more personal, interactive and social across the devices you love—and on the phenomenal Windows 8 devices that are to come.
    You’ll see the first of that next week at E3 where we will showcase the very best of Xbox. We’ll unveil new games, show new ways to enjoy the entertainment you love and, as always, we’ll have a few surprises to share!
    We can’t wait. We kick things off on Monday, June 4 at 9:30 a.m. PT. You can watch us on live TV through our partnership with Spike TV, on the Web (Xbox.com), and for the first time ever – on Xbox LIVE.
    Posted by Yusuf Mehdi
    Chief Marketing Officer, Interactive Entertainment Division, Microsoft

    Introducing the New Entertainment Experience from Xbox [The Official Microsoft Blog, Oct 22, 2012]

    Below is a post from Yusuf Mehdi, Chief Marketing Officer for Microsoft’s Interactive Entertainment Division, announcing details about a major new update to the Xbox dashboard and the launch of Xbox SmartGlass, which brings games, music, TV and movie experiences off the console and onto your phone, tablet and PC.

    Steve Ballmer recently stated that 2012 will be “the most epic year in Microsoft’s history.”

    Not only do we have major releases coming to the PC, tablet and phone, but we have worked extremely hard to ensure those screens work together with the other major screen in people’s lives: the television. People often call out the role the Microsoft design style is playing in this new wave of experiences from the company. Whether you are using a phone, PC, tablet or console that is running our software, you have an experience that is distinctively Microsoft: elegant, intuitive and integrated.
    Now, there is another common thread that ties all of these experiences together – Xbox entertainment.
    Today marks the beginning of two major launches for Xbox entertainment.
    First, on top of what is the greatest year for games in Xbox history, beginning today a brand new update will roll out broadly for every Xbox 360 owner in the world that brings entirely new TV entertainment experiences. We are bringing the Web to the TV like never before with Internet Explorer, launching a brand new music service, and making it even easier to find the entertainment you love using Kinect and Bing voice search. This release is the next step in our journey to transform Xbox 360 from a games console to an all-in-one entertainment system.
    Second, we are bringing Xbox Entertainment off the console to your phone, tablet and PC to deliver great games, music, TV and movie experiences. And as a part of this effort with Xbox SmartGlass we are going to introduce amazing multi-screen entertainment experiences.
    Back in June, we talked about how this is the year Xbox becomes the premium entertainment service for Microsoft. While our success with Xbox to date has been led by a box in the living room, we’re now reaching individuals in new and exciting ways across PCs, tablets and phones. Xbox will be a gateway to the best in movies, TV shows, music, sports, your favorite games and instant access to your friends, wherever you are.
    This week, with the launch of Windows 8, followed shortly thereafter by Windows Phone 8 devices, we’ll flip the switch on that transformation with the launch of four new experiences that make entertainment better on every screen in your life.
    At that moment, we will rapidly accelerate the reach of Xbox entertainment from more than 67 million consoles to literally hundreds of millions of devices worldwide. Also this week, we will take our biggest step ever to increase our global reach, extending Xbox entertainment experiences to 222 countries from 35.


    We realize some may ask, “Why are all these apps called Xbox, isn’t Xbox just a game console?” For us the decision to have Xbox stand for our broad entertainment efforts was a simple one. It is a natural evolution of our consumer offering. Even as Xbox has become the number one game console in the world, and continued to deliver arguably the best line up of games in our history, we have seen the use of Xbox broaden to watching TV/movies and listening to music.
    During the last couple of years, we have grown our global entertainment portfolio to more than 62 TV and entertainment partners. Our Xbox LIVE subscribers now spend more time enjoying entertainment apps than multiplayer games, and this is occurring even as multiplayer gaming continues to grow on our console.
    Also, we live in a multi-device world. The millions of people enjoying entertainment on their Xbox are doing so within arm’s reach of another device. We believe your entertainment should travel seamlessly across devices, that devices should work together to make your entertainment more accessible and create entirely new experiences. We knew we needed a single name for all entertainment experiences from our company and nothing means entertainment at Microsoft more than Xbox.
    Xbox SmartGlass is a great example of our approach to multi-screen entertainment. With SmartGlass we are focusing on two key objectives:
    1. Make discovering and controlling your entertainment simple, no matter the device you’re using; and
    2. Ensure you get more from your entertainment experience.
    Xbox SmartGlass is a free downloadable app that takes your Windows 8 and RT tablets and PCs, Windows Phone 8, iOS and Android devices, and converts them into smart second screens for the entertainment you are enjoying through your Xbox.
    Today, we are unveiling our first wave of experiences and partners for Xbox SmartGlass. They are the first of many to become available over the coming months.
    Navigate your Entertainment – Your phone and tablet will become the best remote controls in your house. Use the touch screen on your smartphone or tablet to control your Xbox 360, and use your devices to pause, rewind or advance entertainment.
    TV & Movies – With Xbox Video, start a TV show or movie on your Windows 8 tablet and finish it on the big screen through Xbox 360; see the names of cast and crew for the film you are watching and discover related films. To give you one example of what you can expect, coming next season, HBO GO’s “Game of Thrones” will offer groundbreaking Xbox SmartGlass experiences.
    Sports – While watching the game, use a second screen to follow real-time stats, player bios, news and highlights you may have missed. All of this and more will be available for NBA Game Time, ESPN and UFC in the coming months.
    Music – Control your Xbox Music experience on the TV using your smartphone or tablet, discover related artists and songs, cue up additional music, read artist bios and more.
    Internet Explorer for Xbox – The Web comes to TV like never before with Xbox 360 and Xbox SmartGlass. With your smartphone or tablet, pan or pinch and zoom to explore the Web on the largest and best screen in the house, enjoy easy text entry with the keyboard on your tablet or phone, and then move your browser back to your device to take it on the go.
    Games – Get more from your game when you can use a second screen. Turn your phone or tablet into a virtual GPS in Forza: Horizon. Don’t stop the dance party in Dance Central 3 by going back to the menu to choose your next song. Instead, queue it up on your tablet or smartphone. In HomeRun Stars, use SmartGlass on your favorite device to throw a surprise pitch to your friend up at bat in front of Kinect. See detailed stats on how you are progressing in Halo 4, or check up on friends. All of these features and much more will be available when your favorite game extends to multiple screens.
    And we’ve only begun to scratch the surface of what’s possible. In the future, new games will be created, TV shows and movies will be re-imagined, and sports will be broadcast from the ground up with Xbox SmartGlass in mind.
    Moreover, we are making it easier than ever to buy an Xbox 360 console. Starting this fall, we are rolling out the “Entertainment for All” pricing plan that enables you to buy an Xbox 360 for $99 when you sign up for two years of Xbox LIVE. The Entertainment for All plan will be available at U.S. retailers Best Buy, GameStop, Walmart, Toys R Us and the Microsoft Store.
    With these new technologies, services and pricing, Xbox entertainment has never been:
    • Simpler and more engaging;
    • Available on so many new devices; and
    • More affordable.
    So, yes, this is certainly an epic year for Microsoft. But more importantly, it’s an epic time for all of you that love amazing entertainment.

    Then there is certainly Nancy Tellem:
    Microsoft Names Longtime Entertainment Executive Nancy Tellem Entertainment & Digital Media President [Microsoft press release, Sept 18, 2012]

    Microsoft Corp. today announced that Nancy Tellem, former president of CBS Network Television Entertainment Group, has joined Microsoft as Entertainment & Digital Media president.
    In her role reporting to Phil Spencer, corporate vice president, Microsoft Studios, Tellem will oversee the launch of a newly created production studio in Los Angeles that will develop interactive and linear content for Xbox and other devices. In addition to running the production studio, she will help spearhead the company’s efforts to turn Xbox into a destination where consumers can enjoy all their entertainment in one place.
    Tellem and her group will collaborate with the creative community to develop unique, compelling storytelling experiences for the Xbox brand. More than just a gaming console, Xbox has evolved into a consumer destination for the world’s most popular TV, movies, music and sports content. With a roster of more than 65 entertainment apps, including Netflix, Hulu Plus, HBO GO, MLB.TV, ESPN, YouTube and VEVO, global video consumption on Xbox LIVE has increased 140 percent during the past year. In addition, this September, Microsoft will introduce 2-way TV experiences from renowned entertainment partners Sesame Workshop and National Geographic that will further expand the Xbox platform beyond games, offering entertainment options for everyone in the family.
    “I am excited to be a part of the continued evolution of Xbox from a gaming console to the hub of every household’s entertainment experience,” Tellem said. “The Xbox is already a consumer favorite, and we now have a tremendous opportunity to transform it into the center of all things entertainment — from games, music and fitness to news, sports, live events, television series and movies — so consumers have one destination for all their entertainment needs. I look forward to building a studio team that embraces the challenges of creating true interactive content that the Xbox platform supports and to work with talent to create content that will change the way entertainment content is experienced and delivered.”
    “Whether you are voting for your favorite contestant on a TV show or playing a game, entertainment is becoming more personalized and social, driven by the Internet and new tools to interact with content,” Spencer said. “We are embarking on a new chapter with the creation of a studio dedicated to making original interactive and linear content, and I’m excited to have Nancy leading this effort.”
    “With her impressive background in entertainment innovation, I am thrilled to have Nancy join our team,” said Don Mattrick, president of the Interactive Entertainment Business Group at Microsoft. “Under her direction, we look forward to building and extending our creative offerings from the living room to all your devices.”
    Tellem has been a CBS executive since 1997, most recently as senior adviser to CEO Leslie Moonves, where she explored business and strategic opportunities — domestic and international — involving content partnerships, new production models, emerging media and technologies. As president of the CBS Network Television Entertainment Group, Tellem supervised programming, development, production, business affairs and operations. She oversaw the network’s prime-time, daytime, late-night and Saturday morning lineups (with shows like “CSI,” “Survivor,” “Everybody Loves Raymond” and “The King of Queens”). Before joining CBS, Tellem was executive vice president of Business and Financial Affairs for Warner Bros. Television and was part of the team that launched the landmark programs “Friends” and “ER.”
    In 2006, Tellem was inducted into the Broadcasting & Cable Hall of Fame for her contributions to the electronic arts. In 2008, Forbes ranked her 32nd on its list of “The World’s 100 Most Powerful Women” and Entertainment Weekly named her the third-smartest person in TV for quickly restoring CBS’s entire primetime lineup after the 100-day writers’ strike. That same year, the National Association of Television Program Executives awarded Tellem a Brandon Tartikoff Legacy Award in recognition of her “extraordinary passion, leadership, independence and vision in the process of creating TV programming.”
    About Xbox
    Xbox is Microsoft’s premier entertainment service for the TV, phone, PC and tablet. It’s home to the best and broadest games, as well as one of the world’s largest libraries of music, movies, TV and sports. With Kinect, Xbox 360 transformed gaming and entertainment in the living room with a whole new way to play games and find entertainment — no controller required. More information about Xbox can be found online at http://www.xbox.com.

    Nancy Tellem: Why Microsoft’s Looking for TV Hits on XBox Live (Q&A) [Hollywood Reporter, Jan 4, 2013]

    The entertainment and digital media president on why she left CBS, how she’d handle Angus T. Jones and why she’s not into violent entertainment: “I don’t like blood.”
    Nancy Tellem, who had been a consultant to CBS chief executive Leslie Moonves for nearly three years after stepping down as president of the CBS Network Television Entertainment Group in 2009, became entertainment and digital media president at Microsoft on Sept. 18. One of her primary mandates: to create a TV studio in Los Angeles during the next year, where she and her staff will make original content for the 40 million customers worldwide who subscribe to Xbox Live, which already features fare from Netflix, ESPN, HBO Go, Amazon Prime and other providers. Tellem, 58, a mother of three sons who is married to sports agent Arn Tellem — and who spends what little free time she has bicycling, reading and doing yoga — spoke with THR at her temporary office at Wasserman Media Group in Los Angeles.
    The Hollywood Reporter: Is it scary to move from traditional TV to a tech company?
    Tellem: For me, it was a natural next step. I was always interested in where the next distribution platform was happening and was very engaged with talking to tech companies.
    THR: Xbox Live already gives users access to tons of video. Why do you need original content?
    Tellem: Incredible content really raises the brand — look at Mad Men with AMC. So original programming gives us an opportunity to kind of brand the Xbox. And looking at the technology the Xbox console provides, we are really a bit ahead of the more traditional media companies in having the ability to develop and produce interactive content.
    THR: What types of shows do you plan to make?
    Tellem: We’re looking at all forms of content for every member of the family. So that certainly covers live events, reality, game shows, documentaries and scripted comedy or drama. We’ll cover it all. We’re figuring this out as we go. But we certainly intend to produce things with high production value, with the same breadth of storytelling that you see on traditional TV.
    THR: Interactive TV hasn’t taken off yet. What are you going to do differently to make it popular?
    Tellem: We’ve all tried to produce multiplatform programming, but the difficulty has been that you don’t have the technology to support it. This is where Microsoft and Xbox are in a unique position. The technology is there, not only for the audience that just wants to watch passively, but also for those who want to engage the content more, whether by mobile, tablet or on TV.
    THR: Will you be able to attract major TV talent to make shows for a tech company?
    Tellem: I certainly hope so. I know that before my arrival, they were able to attract a certain level of great talent. I invite everyone to come and listen to what we want to do. There’s a great playground we have here.
    THR: What agents and talent have you spoken with so far?
    Tellem: I’m somewhat overwhelmed by the interest, but I can’t say.  It’s a little too early.
    THR: Have you made any major hires yet?
    Tellem: Not yet. We’re obviously taking advantage of all the existing people we have in Seattle, but at the same time we want to build a top-notch staff in L.A. for what will certainly be a full-fledged production studio.
    THR: Why did you leave CBS?
    Tellem: It’s probably the toughest decision I’ve made in my career. I’m very, very close to Leslie and close to all the executives there. We’re like a wonderful family. I just felt that I had done the job, and hopefully I did it well, and there were new challenges I wanted to take on. I wanted to be a little less comfortable, and I was always so intrigued with where television was going.
    THR: What exactly did you do as an adviser to Moonves?
    Tellem: I’ve always been interested in looking at the next generation of television. In the early days, I ran CBS.com before CBS acquired CNet, and I initiated the mobile strategy. So I’d try to embrace what’s on the horizon. I was also interested in the global production model, so I did a lot of that, going to places like India and exploring.
    THR: Can you give me an example of a crisis at CBS when you were there and how you handled it?
    Tellem: I don’t want to be specific, but there are situations where talent has personal issues and it’s like an athlete: Do you throw them out in the field if it hurts them but helps the team? Ultimately you have to think about the well-being of the talent. I guess I’m referring to the Charlie Sheen, Angus T. Jones type of situations. But among the executives, it should be a collaboration, and we should listen to different points of views to come to a conclusion we can live with while also protecting the show.
    THR: How would you handle the controversy over Two and a Half Men star Jones’ plea for people to stop watching the show because it’s “filth”?
    Tellem: I was just thinking this morning that it’s a good thing I’m not at CBS. Nina [Tassler, CBS Entertainment president] and Leslie and the Warner Bros. people are handling it appropriately. Who knows what motivates these things, but I think his apology was correct. We’re all blessed to be in this business.
    THR: Is any part of traditional TV in jeopardy because of advancing technologies?
    Tellem: It’s certainly disruptive. We always compared ourselves with the music industry and said we had to be much more nimble and accepting of change. The TV industry has done just that.
    THR: How does the industry combat DVRs and ad-skipping?
    Tellem: With VOD and embracing the whole concept of giving consumers their content where and when they want it. And there have been studies — and I do this also — where I watch the commercials because I forget I’m watching a recorded show. Ratings aren’t reflecting what’s really happening. It’s interesting, though, that when you download something, you end up watching the commercials, because you’ve made the decision to watch it free with ads instead of paying for it without ads. Because of that proposition and the intimate relationship people have with their computers and tablets and such, people are accepting commercials and watching them more.
    THR: What was your favorite CBS show with which you were involved?
    Tellem: Oh my God. You know, it’s like, as Leslie would always say, “They’re all our children.” But obviously, the things that made a huge difference in turning around that network were shows like CSI and Survivor and, my gosh, Everybody Loves Raymond.
    THR: What do you like to watch now on TV?
    Tellem: I’m quite enthralled with Game of Thrones. I’ve been watching Newsroom a lot on HBO. I love Homeland, of course, because everyone does,  and Modern Family. I hate to admit it, but I agreed with most of the Emmy Awards this year. I love television. I spent the last 25 years in it, you know?
    THR: Now that you’re at Microsoft, is there pressure to become a gamer?
    Tellem: I love FIFA and I play Madden and golf games, and I think it’s due to the influence of my kids. I was aware of Halo, Call of Duty and the shooter games, but I didn’t play Halo until four days ago. It’s really amazing, but shooter games aren’t my cup of tea.
    THR: Do you have a political problem with them?
    Tellem: It’s funny. Like in television, we’re all fine with violence but not sex. The shooter games are much like procedural series, which I’ve never embraced. I don’t like blood.

    And that’s not all as Microsoft Names Longtime Technology Entrepreneur Blake Krikorian Corporate Vice President in Its Interactive Entertainment Business [Microsoft press release, Jan 10, 2013]

    Proven innovator and leader in media and technology brings strong entertainment background to Xbox team.

    Microsoft Corp. today announced the appointment of Blake Krikorian as corporate vice president for its Interactive Entertainment Business (IEB). Krikorian will report to Marc Whitten, chief product officer for IEB. This announcement follows the acquisition of Krikorian’s company, id8 Group R2 Studios (R2 Studios).
    “We are thrilled to have Blake join the Xbox team,” Whitten said. “He’s a proven innovator and well-respected leader in both the media and technology industries, having created simple, elegant products that have transformed the way people engage with and consume content. We look forward to his contribution to our team as Xbox continues to evolve and transform the games and entertainment landscape.”
    “I am excited to join Microsoft and be a part of the Xbox team. As a 10-year Xbox LIVE subscriber, I have seen firsthand how Xbox has delighted us by reinventing how consumers experience games and entertainment,” Krikorian said. “I look forward to helping the team define the future of entertainment and contribute to the next decade of continued innovation.”
    Krikorian most recently founded R2 Studios. Before that he served as the co-founder, chairman and CEO of Sling Media Inc., inventors of the Emmy® award-winning Slingbox®, which is now owned by EchoStar Corp. Krikorian served on the board of Amazon.com Inc. and also co-founded the Philips Mobile Computing Group where he co-led the team that created the award-winning Velo 1 handheld PC running Windows CE. He has received numerous lifetime achievement awards including the Lifetime Technology Leadership Award from Broadcasting & Cable, as well as the TechFellow Award for Disruptive Innovation from TechCrunch, Founders Fund and NEA.

    Microsoft has acquired id8 Group R2 Studios [Neowin.net, Jan 10, 2013]

    … Microsoft apparently was the winner in the race to acquire R2 Studios, with Google and Apple both reportedly also interested in buying the company. R2 Studios’ only product was a $99 Android app, R2 Control for Crestron, which allows users to remotely control devices such as security systems, lighting and more from an Android smartphone or tablet.

    This move could be a signal that Microsoft’s next version of the Xbox console will have some kind of expanded remote control features. The company has already launched Xbox SmartGlass apps for Windows 8, iOS and Android device owners for use as a “second screen” experience for some Xbox 360 games and media.

    R2 Android App Controls Crestron Systems [CrestronElectronics YouTube channel, Sept 23, 2010]

    Blake Krikorian of id8 Group shows off his new Android app, R2. Designed to work with the existing Crestron Mobile framework on the backend, R2 integrates and programs just like an iPod touch, iPhone or iPad. This new Android app can be seen at Crestron’s booth running on the new Samsung GALAXY tab so stop by if you’re at CEDIA!

    Slingbox® Inventor And Crestron Collaborate To Bring Android™ OS Support To The Crestron Platform [Crestron press release, May 17, 2011]

    R2 Control App For Android™ is Now Available

    Slingbox® inventor, Blake Krikorian and Crestron today announced the release of the R2™ Control™ for Crestron, a software app that turns virtually any Android™ smartphone or tablet into a fully-functional Crestron touch panel for residential and commercial applications. Utilizing the R2 app and Crestron processors, customers can now control AV, lighting, thermostats, security systems, and thousands of other products via their Android device from anywhere in the world.
    R2 was initially unveiled at the Crestron booth at CEDIA Expo last September. Since then, R2 conducted a seven-month private beta test program consisting of hundreds of residential, commercial and government Crestron-authorized integrators from around the world. The input from this beta team helped R2 achieve a sophisticated solution compatible with a multitude of Android devices.
    R2 was developed by id8 Group Productions, a product development and technology lab. Co-founder and inventor of Emmy award-winning Slingbox, Blake Krikorian founded id8 Group Holdings (parent company of id8 Group Productions) in 1999. R2 is the first product developed by id8 Group since Slingbox expanded in 2004 to form Sling Media®, Inc. Sling Media was subsequently acquired by Echostar Corporation in 2007.
    “We are thrilled to have the opportunity to collaborate with an innovator like Blake and officially support Android,” says Fred Bargetzi, Crestron VP of Technology. “The R2 control app for Android is the latest development in our ongoing open-standards platform which also includes integration with iOS, MAC OSX and Windows.”
    Krikorian initially conceived of R2 for his own use. “In addition to being able to control aspects of my home via Crestron remote controls and iOS devices, I really wanted to be able to use my new Android-based phone,” says Krikorian. “I desired a software platform that allowed me to further optimize the home control experience for general purpose smartphones and tablets, beyond the industry’s current state of the art. R2 and Android provides the flexibility to do just that.”

    R2 Key Features

    • Communicates with Crestron 2-Series and 3-Series™ control systems via WiFi and cellular network
    • Controls multiple systems/homes from one Android device
    • Uses the same Crestron development tools to create projects for R2. R2 touch panel projects are created using the existing and familiar development tools such as SIMPL Windows, VTPro-e®, D3 Pro® and System Builder.
    • Compatible with Crestron Mobile Pro®/Mobile Pro G™ apps: runs projects created for iOS devices
    • Quick access: ability to disable screen unlock requirement; Device’s built-in proximity sensor can automatically wake device
    • Automatic project UI scaling: resizes Mobile Pro and Mobile Pro G projects to the native resolution of any Android device
    • Visual, haptic and audible feedback: provides clear confirmation of key presses
    • Optimized performance for Android: takes advantage of Android’s multitasking and flexibility to deliver an experience optimized for home and building control
    • Support for multiple and custom resolutions: in addition to R2’s built-in UI display scalar, an upcoming VTPro-e add-on (coming soon) enables developers to optionally create pixel-perfect projects for any screen size.
    If you have any questions, please contact: r2control@crestron.com.
    To get all the latest news, information and product updates, subscribe to our RSS feed, “Like” us on Facebook and follow us on Twitter.
    About Crestron
    For 40 years Crestron has been the world’s leading manufacturer of advanced control and automation systems, innovating technology and reinventing the way people live and work. Providing integrated solutions to control audio, video, lighting, computer, IP and environmental systems, Crestron streamlines technology, improving the quality of life for people in corporate conference rooms, hotels, classrooms, auditoriums, and in their homes.

    Analysis: Michael Dell acquiring the remaining 84% stake in Dell for $2.15B in cash, before becoming the next IBM, and even getting the cash back after the transaction

    OR Michael Dell’s new cash-skimming strategy via privatization, targeting the high-growth, fast-moving SME/SMB (small to medium-sized) businesses with solutions worldwide, which will later help the adoption of Dell solutions by larger enterprises as well. OR how to exploit Dell’s competitive advantage of having NO legacy (“old things”/“old”) business in the enterprise market versus established enterprise solution players like IBM, HP, Oracle et al. OR the story of leaving its traditional PC business behind, and how the explosion of consumer IT devices and the consumerization of IT plays well with this specific small-to-large-enterprise focus by Dell. OR Michael’s way of thumbing his nose at all stock market actors (the diversity of “analysts” included), inspired by his thinking ‘You are utterly stupid, and will remain so’. OR the huge bonus for creating the tremendous value in the last 6 years he’d led the company again, as described in the details sections of this post, as well as earlier in the Pre-Commerce and the Consumerization of IT [Sept 10, 2011] and Thin/Zero Client and Virtual Desktop Futures [May 30, 2012] posts on this same blog. OR, in the very worst case, getting a fair valuation (sooner or later) of his 16% of shares.

    ANYWAY, Michael will become hyper-rich. As a minimum, think of his 16% share of Dell attaining a $36B value instead of its current $3.8B when the company indeed becomes the next IBM. This is absolutely possible, and in no more than the additional 6 years during which he will continue to lead Dell. See more about all that in the first section of this post titled:

    Michael Dell: We are not a PC company anymore

    Update: Highlights From Dell Tech Camp 2013 [DellVlog YouTube channel, Feb 12, 2013] provides the latest glimpse, only 3 minutes long, into the current state of such a “non-PC company anymore”

    The event, now in its fourth year, featured:
    • Dell’s latest technologies and solutions that address customer issues and challenges around Cloud Computing, Data Insights, Mobility and Converged Infrastructure
    • Speakers from Dell including Marius Haas, President of Enterprise Solutions; Aongus Hegarty, President, Dell EMEA; and Tony Parkinson, Vice President, EMEA Enterprise Solutions, alongside a number of Dell solutions experts, customers and partners
    • Hands-on, deep-dive sessions around Dell’s latest Cloud, Storage, Mobility and Convergence solutions
    • Customer and partner insight on the latest enterprise technology challenges and trends
    • Two live-streamed Think Tank events which bring together some of the industry’s principal thought leaders to discuss Converged Infrastructure and enterprise solutions for SMBs

    Here is a slide copy which speaks for itself in showing the difference:
    [slide image]

    Then read the second section of this post titled:

    The Indian case as a proofpoint of readiness

    Before those detailed background sections I should elaborate somewhat more on the founder’s cash-skimming approach. Michael Dell’s classic business recipe was to collect the bills ahead of paying his suppliers. What was possible in the ’90s is no longer. Nevertheless: Dell Push-Pull Supply Chain Strategy [Ian Johnson YouTube channel, June 11, 2012]

    http://www.driveyoursuccess.com this video explains how to run Dell’s Push-Pull supply chain strategy.

    Now he has decided to apply the original idea to the current state of Dell’s business. This was the sole reason for his one-and-a-half-year effort to take Dell private, in which he succeeded 3 days ago. The official press release, certainly, has no mention of that at all, just the usual bullshit:

    Dell Enters Into Agreement to Be Acquired By Michael Dell and Silver Lake [press release, Feb 5, 2013]

    Mr. Dell said: “I believe this transaction will open an exciting new chapter for Dell, our customers and team members. We can deliver immediate value to stockholders, while we continue the execution of our long-term strategy and focus on delivering best-in-class solutions to our customers as a private enterprise. Dell has made solid progress executing this strategy over the past four years, but we recognize that it will still take more time, investment and patience, and I believe our efforts will be better supported by partnering with Silver Lake in our shared vision. I am committed to this journey and I have put a substantial amount of my own capital at risk together with Silver Lake, a world-class investor with an outstanding reputation. We are committed to delivering an unmatched customer experience and excited to pursue the path ahead.”

    An opinion a little bit closer to the real aim:
    Dell Computers In Buyout Bid By Firm’s Founder [spworldnews YouTube channel, Feb 5, 2013]

    With an attached background article: Dell Heads For Radical Restructure

    Dell Computers was built from scratch in a college dorm room, and now its founder launches a $24.4bn bid to make the firm private. Once-dominant US computer company Dell has unveiled a £15.5bn plan to take the firm private in a buyout by founder Michael Dell. The firm said it had signed “a definitive merger agreement” that gives shareholders $13.65 (£8.70) per share in cash – a premium of 25% over Dell’s January 11 closing share price.
    “I believe this transaction will open an exciting new chapter for Dell, our customers and team members,” Mr Dell said.
    The deal was unveiled with investment firm Silver Lake, and backed by a $2bn (£1.27bn) loan from Microsoft. Dell shares dropped 2.6% to $13.27 on the Nasdaq after the plan was announced. The move, which would de-list the company from stock markets, could ease some of the pressure on Dell, which is cash-rich but has been seeing profits slump.
    The Texas-based computer maker, which Mr Dell started in his college dormitory room, once topped a market capitalisation of $100bn (£63bn) as the world’s biggest PC producer.
    The plan is subject to several conditions, including a vote of unaffiliated stockholders. It calls for a “go shop” period to allow shareholders to determine if there is a better offer.
    “We can deliver immediate value to stockholders, while we continue the execution of our long-term strategy and focus on delivering best-in-class solutions to our customers as a private enterprise,” Mr Dell said of the plan.
    Dell was a pioneer of phone-ordered, custom built PCs in Britain during the 1990s.
    The company worked from facilities in the Irish Republic; Britons were able to specify their hardware and software requirements before machines were delivered to their homes.
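    The stated 25% premium can be sanity-checked in a couple of lines. This is illustrative arithmetic only, using the article’s round figures rather than the official deal terms:

    ```python
    # Back out the implied pre-offer share price from the offer and the premium.
    offer_per_share = 13.65   # buyout offer, USD per share
    premium = 0.25            # stated ~25% premium over the Jan 11 close

    implied_close = offer_per_share / (1 + premium)
    print(f"implied Jan 11 closing price: ${implied_close:.2f}")  # ~ $10.92
    ```

    The result is close to Dell’s actual pre-rumor trading range, which is what makes the quoted premium plausible.
    
    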

    But a realistic assessment I’ve found only in this source:
    Here’s The Secret Private-Equity Plan For Dell by Henry Blodget [Daily Ticker on Yahoo! Finance, Feb 6, 2013] (click through to the link, as there is a very good video recording of the discussion between Daily Ticker hosts Aaron Task and Henry Blodget)

    Earlier, I wrote about what Dell was likely to do now that it is taking itself private.

    I suggested that Michael Dell and his private-equity backers would coin money, in part by paying themselves a huge one-time dividend with the cash sitting on Dell’s balance sheet.

    I also bemoaned the fact that Michael Dell had to take his company private to coin this money instead of executing his plan as a public company and sharing the loot with his current shareholders.

    More broadly, I complained that too few public-company management teams (like Dell’s) have the balls to tell short-term public-market investors to take a hike and implement long-term strategic plans.

    And that is indeed a bummer.

    But it’s also the reality.

    Most public-company management teams are so cowed by Wall Street’s short-term demands that they sacrifice the vision and cojones that enabled them to build big public companies in the first place. And then they just manage their companies from quarter to quarter while avoiding the tough, ballsy decisions that separate great companies from good ones.
    Anyway, Dell has decided to go private.
    So the questions are:
    • Why is Dell going private?
    • What is Dell going to do as a private company?
    Earlier, I speculated about what a generic private-equity firm might do with Dell after taking it private.
    I have since spoken with sources familiar with the specific Dell situation. So I have some better information.
    Here’s what the sources told me:

    • Dell is going private because the company is in the middle of a 5-year transformation from “PC manufacturer” to “single-source provider of corporate cloud and security solutions” (sort of a mini-HP or mini-IBM model) and the market is giving it no credit for that transformation. The company feels it has been making good progress on its transformation, but management is worried about meeting quarterly targets and other milestones that are slowing the transformation down. And the stock just keeps dropping. So Michael Dell and Silver Lake felt there was an opportunity to be bolder and more aggressive with Dell as a private company.


    • Silver Lake and Michael Dell are borrowing about $17 billion of the $24 billion Dell purchase price ($15 billion from banks and $2 billion from Microsoft), which means they are temporarily putting up about $7 billion of equity capital. Dell has $15 billion of cash sitting in the bank. So it seems highly likely–we’ll know in 45 days, when the SEC filing appears–that Silver Lake and Dell will pay themselves a big dividend to cover their cash investment. After that point, they’ll be playing with house money. (Correct–it doesn’t suck to be in the private-equity business!).

    • The secret plan for Dell is NOT to fire thousands of people and chop the company up and sell off the parts. Sure, some folks might get fired and some divisions might get sold. But the plan is to invest in the company’s product suite, R&D, pricing*, and marketing capabilities, thus accelerating Dell’s transformation into a solutions provider. This investment will temporarily reduce the company’s free cash flow and profits, which public-market investors might (stupidly) have freaked out about. This was one of the reasons Michael Dell wanted to take the company private.

    • Dell’s plan is to focus on selling its solutions to mid-market companies (~500 employees [more precisely to companies with 215-2,000 employees, see the details in the first “Michael Dell: We are not a PC company anymore” section of my analysis]), not the gigantic Fortune 500 companies that are already well-served by IBM, HP, and other huge “solutions” providers. By providing comprehensive solutions for cloud and security to companies that are not currently well-served, Dell also hopes to increase demand for PCs at these companies–PCs that Dell will obviously provide.
    The private-equity firm backing Dell, Silver Lake, has a long history of investing in troubled tech companies, and it has posted excellent returns over the years. Silver Lake’s target investment time horizon is about 5 years, which is about 100 times longer than the time horizon of the typical public-market investor. So Silver Lake is willing to depress Dell’s earnings and cash flow for a couple of years while investing heavily to transform the company, thus, hopefully, creating a more valuable Dell over the long term.
    That said, Dell’s competitor HP is not so optimistic and had these crushing statements about Dell’s turnaround:
    “Dell has a very tough road ahead. The company faces an extended period of uncertainty and transition that will not be good for its customers. And with a significant debt load, Dell’s ability to invest in new products and services will be extremely limited. Leveraged buyouts tend to leave existing customers and innovation at the curb. We believe Dell’s customers will now be eager to explore alternatives, and HP plans to take full advantage of that opportunity.”
    Public market investors and wimpy management teams take note: Your obsession with quarterly performance creates the opportunity for firms like Silver Lake to come along and buy your companies on the cheap, thus coining money for their private-market investors. In short, your quarterly earnings obsession is ruining companies and destroying value. So grow a pair, tell Wall Street to be patient, and focus on creating value for the long term!
    * What I mean by “investing in pricing” is cutting prices on hardware and, thus, reducing profit per unit. This will hurt profit margins but make the company’s solutions more attractive to customers. And given that the focus is now on “solutions,” they’ll be looking to sell the hardware at closer to cost and then make money on add-on software and services.
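    The financing arithmetic in the excerpt above can be checked with a quick back-of-the-envelope sketch. All inputs are the article’s round figures, not official deal terms:

    ```python
    # Rough check of the buyout financing structure described above.
    purchase_price = 24.4   # $B, total buyout price
    bank_debt = 15.0        # $B, borrowed from banks
    msft_loan = 2.0         # $B, Microsoft's loan
    cash_on_hand = 15.0     # $B, cash sitting on Dell's balance sheet

    equity_needed = purchase_price - (bank_debt + msft_loan)
    print(f"equity the buyers must put up: ~${equity_needed:.1f}B")   # ~ $7.4B
    print(f"cash left after a dividend covering the equity: "
          f"~${cash_on_hand - equity_needed:.1f}B")                   # ~ $7.6B
    ```

    In other words, the $15B of cash on the balance sheet is indeed large enough to repay the buyers’ entire equity check via a dividend, which is the “playing with house money” scenario the sources describe.
    
    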

    In addition I will draw your attention to the following facts in the first “Michael Dell: We are not a PC company anymore” section of my analysis:

    • John Swainson, President of Dell Software Group, was senior advisor to Silver Lake before he came to Dell a year ago to form this most essential unit for Dell’s long-term business strategy. His earlier role was to advise on value creation activities for Silver Lake’s portfolio companies. Prior to that he was CEO of the big software company Computer Associates (now CA Technologies) for five years, and before that worked for IBM Corp for more than 26 years, including seven years as general manager of the Application Integration Middleware Division, a business he founded in 1997. During that period, he and his team developed the WebSphere family of middleware products and the Eclipse open source tools project. He also led the IBM worldwide software sales organization.
    • Marius Haas, hired in August to lead the Enterprise Solutions Group (ESG), came from Kohlberg Kravis Roberts & Co. L.P. (KKR). KKR was the leader of the leveraged buyout boom of the 1980s. Its biggest LBO deal is still the biggest one in the history of mankind, and is well documented in both a book and a film, Barbarians at the Gate: The Fall of RJR Nabisco. Prior to KKR, Haas was senior vice president and worldwide general manager of the Hewlett-Packard (HP) Networking Division, and also served as senior vice president of Strategy and Corporate Development. Before that he worked in senior operations roles at Compaq and Intel Corporation.
    • Jai Menon became CTO of Dell’s Enterprise Solutions Group last August, but before that he was CTO and VP, Technical Strategy for IBM’s Systems and Technology Group (STG). … Jai joined IBM Research in 1982. He has made many contributions to the storage industry and to IBM in the areas of disk emulation, storage controllers, disk caching, storage networking, storage virtualization, file systems and RAID. He is one of the early RAID pioneers who helped create a technology that is now a $20B industry.

    With such high level of private equity, leveraged buyout and both business and technical strategy expertise in the Executive Leadership Team, as well as top enterprise technology leadership behind that, Michael Dell is best positioned to reap both immediate and ongoing financial benefits of unprecedented scale from taking Dell private. Some more information from the business media to support my statement:

    Inside Michael Dell’s World [The Wall Street Journal, Feb 5, 2013]

    … The buyout would give Mr. Dell the largest stake in the company, ensuring that the 47-year-old is the one who gets to oversee any changes. … As part of the deal to go private, Mr. Dell would contribute his nearly 16% stake valued at about $3.7 billion, plus $700 million from an investment firm he controls, the people said. Microsoft would invest about $2 billion in the form of a subordinated debenture, a less-risky investment than common stock. … Microsoft isn’t expected to get board seats or governance rights in a closely held Dell, one of the people said. Instead, the companies would tighten their relationship regarding use of Microsoft’s Windows software, the person said.

    Microsoft Loan Said to Help Dell While Avoiding Favorites [Bloomberg, Feb 5, 2013]

    Microsoft Corp. (MSFT) is using a $2 billion loan to help finance Dell Inc. (DELL)’s $24.4 billion buyout to bolster one of the largest makers of computers using Windows software and fend off competition from Google Inc. and Apple Inc.

    Steve Ballmer, Microsoft’s chief executive officer, discussed the loan with Dell founder and CEO Michael Dell, according to two people familiar with the negotiations. Microsoft opted for a loan rather than an equity investment to avoid rankling other personal-computer makers that use Windows, said one of the people, who asked not to be named because the matter isn’t public. …

    … Microsoft’s investment helps to support “the long term success of the entire PC ecosystem,” the company said in a statement. Peter Wootton, a spokesman for Microsoft, declined to comment beyond the statement.

    Microsoft won’t be involved in day-to-day operations, Dell Chief Financial Officer Brian Gladden said in an interview. …

    Michael Dell coughs up $750 million cash to buy out Dell [Reuters, Feb 6, 2013]

    Michael Dell and his investment firm are ponying up $750 million in cash toward the $24.4 billion purchase of Dell Inc to help bankroll the largest private equity-backed buyout since the financial crisis.

    The Dell founder and CEO this week struck a deal to take private the company he created out of a college dorm room in 1984, partnering with private equity house Silver Lake and Microsoft Corp.

    Michael Dell will contribute $500 million of his own cash, and MSDC Management – an affiliate of his investment vehicle, MSD Capital – will contribute another $250 million, according to a company filing on Wednesday.

    Dell Inc also said it is targeting the repatriation of $7.4 billion of cash now parked abroad to help finance the deal. That may dismay some shareholders, as a hefty tax is usually levied on cash brought back from overseas.

    The deal, which ends Dell’s rocky 24-year run on the Nasdaq just as the once-dominant PC maker struggles to revive growth, is contingent on approval by a majority of shareholders — excluding Michael Dell himself.

    Several shareholders, including prominent investor Frederick “Shad” Rowe of Greenbrier Partners, have spoken out against the deal, protesting a lack of specifics as well as a potential conflict of interest with Michael Dell being the company’s single largest shareholder with a roughly 16 percent stake.

    “Some shareholders are glad. But there are others who feel it’s a raw deal,” said Shaw Wu, an analyst with Sterne Agee, who has spoken with several Dell shareholders since the announcement but declined to provide further details.

    The company has not given many specifics on what it would do differently as a private entity, angering some shareholders who said they needed more information to determine whether the $13.65-a-share deal price – a 25 percent premium to Dell’s stock price before buyout talks leaked in January – was adequate.

    On Wednesday, an individual shareholder filed the first lawsuit, in Delaware, attempting to stop the buyout. The lawsuit – which is seeking class-action status – maintains that the $13.65 per share offered sharply underestimated the company’s long-term prospects.

    “By engaging in the going private transaction now, in the midst of the company’s transition from a PC vendor to a full-service software and enterprise solution provider, the board is allowing defendants M. Dell and Silver Lake to obtain Dell on the cheap,” read the lawsuit filed by Catherine Christner.

    Dell, the world’s No. 3 personal computer maker, broke down details of the equity and debt financing secured for the buyout in Wednesday’s filing.

    Silver Lake is putting up $1.4 billion, while banks including Bank of America, Barclays, Credit Suisse and RBC will provide roughly $16 billion in term loans and other forms of financing.

    Wednesday’s filing also disclosed that under certain circumstances if the merger cannot be completed, Michael Dell and Silver Lake could have to pay a termination fee of up to $750 million to the company.

    What Should Dell Shareholders Do? [Seeking Alpha, Feb 6, 2013]

    … let’s have a look at some balance sheet items. If the company was highly leveraged, things would be different and this price could make some kind of sense given the risk. But, if we look at the numbers, at the end of last quarter Dell had $11.2 billion in cash and equivalents, a long term debt of $5 billion and a total equity of $10.1 billion. In other words, a very healthy balance sheet.

    Putting things together, it’s very hard to recommend accepting the current offer. Unless you have another investment where you can put your money to work at a higher rate of return than you would by sticking with Dell (and with the safety of its balance sheet) I cannot recommend selling the shares at this price.

    Of course, Michael Dell and Silver Lake know the company is worth much more, and that’s why they are offering to take the company private.
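The back-of-the-envelope balance-sheet math in the Seeking Alpha piece can be sketched as follows. This is a rough illustration only: the implied share count is my own derivation from the reported $24.4 billion deal value and $13.65 per-share price (assuming the $24.4 billion figure is the equity value of the deal), not a figure from any filing quoted here.

```python
# Rough check of the balance-sheet argument quoted above.
# All figures are in billions of USD, taken from the quoted articles.

cash = 11.2            # cash and equivalents at the end of last quarter
long_term_debt = 5.0   # long-term debt
net_cash = cash - long_term_debt            # net cash position, about $6.2B

deal_value = 24.4      # reported buyout value (assumed to be equity value)
offer_per_share = 13.65
implied_shares = deal_value / offer_per_share   # about 1.79B shares

# Roughly how much of the $13.65 offer is covered by Dell's own net cash.
net_cash_per_share = net_cash / implied_shares

print(f"net cash: ${net_cash:.1f}B")
print(f"implied shares outstanding: {implied_shares:.2f}B")
print(f"net cash per share: ${net_cash_per_share:.2f}")
```

On these assumptions, roughly a quarter of the offer price would be backed by net cash already on Dell’s balance sheet, which is the gist of the article’s objection to the price.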

    Unplugged: Why is Michael Dell buying back his company? [USA TODAY, Feb 5, 2013]

    … Because the 47-year-old CEO is already a billionaire who has had scrapes with the Securities and Exchange Commission, critics contend that he has become adept at financial engineering and is simply sticking it to current shareholders to enrich himself even more. (The chairman and the company settled fraud allegations with the SEC in October 2010.)

    No doubt, Michael Dell is a capitalist. But I doubt his sole motivation is pure greed and a perverse joy in sticking it to shareholders, which include employees.

    Yet having met and interviewed Michael Dell on a number of occasions over the past decade, I think he is far more complex than a money-grubbing tech titan without heart or soul. In fact, I think he really cares about his legacy, the company and Austin. …

    MEANWHILE BELOW YOU CAN FIND A FEW “NO-CASH-SKIMMING” VIEWS OF THE PROPOSED DEAL:

    Channel: Happy, Worried [CRN, Feb 5, 2013]

    Solution providers see two sides to Dell’s privatization move.

    The first side is the opportunity for Dell to go through the painful transformation into an enterprise solution developer. Paul Clifford (pictured), president of Davenport Group, a St. Paul-based solution provider, said Dell should be able to accelerate its enterprise transformation without the eyes of Wall Street on them. “Dell is bringing us great products and support,” Clifford said. “If they go private, I think we’ll see more good stuff.”

    The second side is how Microsoft’s new relationship with Dell will impact the rest of the industry. Michael Goldstein, CEO of LAN Infotech, a Fort Lauderdale, Fla.-based solution provider, said such a close relationship between the two is a little scary. “Dell is Microsoft’s biggest reseller partner,” Goldstein said. “They’re hugely important. Seeing the two of them combined makes me a little nervous because we’re a smaller solution provider, and we don’t want to get lost in the mix if [the deal] does happen.”

    What Will We Learn From Dell Tomorrow? [Bloomberg YouTube channel, Feb 5, 2013]

    Feb. 4 (Bloomberg) — Today’s “BWest Byte” is 1, for how many more days until we find out what’s happening at Dell. Cory Johnson reports on Bloomberg Television’s “Bloomberg West.” (Source: Bloomberg)

    Dell Gets Hit Hard by Sluggish Worldwide PC Market [Bloomberg YouTube channel, Nov 16, 2012]

    Nov. 15 (Bloomberg) — Nicole Lapin reports on trouble at Dell. She speaks on Bloomberg Television’s “Bloomberg West.” (Source: Bloomberg)

    Dell and HP down for the count? [CNNMoney YouTube channel, Aug 22, 2012]

    Slow to find success in the realm of mobile, HP and Dell are caught in a downward slide with no apparent end in sight.


    Michael Dell:

    We are not a PC company anymore

    Michael Dell addresses Dell’s future [published on FortuneMagazineVideo YouTube channel, Jan 16, 2013; recorded on July 17, 2012]

    Michael Dell, Chairman and CEO, Dell, was interviewed by Fortune’s Andy Serwer at Brainstorm Tech in Aspen. They talked about the PC market, the enterprise, China, and Apple. He also announced a new $60M venture fund and said sales have slowed in China.

    Full transcript: Michael Dell addresses Dell’s future [Fortune, July 17, 2012]
    See also: Pre-Commerce and the Consumerization of IT [this same ‘Experiencing the Cloud’ blog, Sept 10, 2011]

    A sure sign of that “not a PC company anymore” statement came recently with
    Financial Reporting Change – Product and Service-based P&L by Robert L Williams [DellShares blog, Jan 10, 2013]

    In 2009, we charted our course to become a leading provider of end-to-end solutions. We’ve been executing our strategy with discipline and consistency ever since, investing for growth in the data center, software and services. Our Enterprise Solutions and Services business revenue was about $14 billion in FY08, and by Q3 FY13 we saw an annual run rate approaching $20 billion. We now have critical mass in these businesses, and we need a financial reporting structure that supports their growth and success. In an 8-K filing today, Dell announced that in the first quarter of fiscal 2014, which begins on February 2, 2013, it will replace its current global customer segment reporting structure with the following product and services groups:
    •  End User Computing (EUC), led by Jeff Clarke, vice chairman of operations and president Dell EUC, will include a wide variety of mobility, desktop, desktop virtualization, third-party software, and client-related services and peripheral products.
    •  Enterprise Solutions Group (ESG), led by Marius Haas, president Dell ESG, will include servers, networking, storage, and related peripherals products.
    •  Dell Services, led by Suresh Vaswani, president Dell Services, will include a broad range of IT and business services, including support and deployment services, infrastructure, cloud, and security services, and applications and business process services.
    •  Dell Software Group, led by John Swainson, president Dell Software will include systems management, security and business intelligence software offerings.
    Steve Felice, chief commercial officer, will continue to lead Dell’s global sales and marketing organizations.

    That was already well manifested at Dell World [2012] Influencer Panel Highlights – December 11, 2012 [DellVlog YouTube channel, Dec 11, 2012]

    Highlights from the Dell World Influencer Panel and Q&A with Michael Dell and Dell’s Executive Leadership Team held December 11, 2012 live from Austin, TX. Join the conversation on Twitter via #DellWorld.

    The Dell wants to be more than your box provider post from The Register [Dec 12, 2012] summarizes the above as:

    Solutions in hand – but supply your own drinks

    … Dell is dead serious about being a “solution provider” … – and it has to be, because as we all know the margins are in software and services.

    That’s why Steve Felice, Dell co-president and chief commercial officer, bragged that Dell had spent over $10bn in the past five years to acquire Perot Systems, Quest Software, Wyse Technologies, Scalent, Boomi, AppAssure, SonicWall, KACE, SecurityWorks, and a slew of others to build out its portfolio of services and software.

    The executive roundtable was a way to introduce some of the new faces of Dell to customers and partners, with just about everybody but Dell, the man, and [Steve] Felice [Dell co-president and chief commercial officer], who joined Dell in 1999 from third-party tech support firm DecisionOne, and Jeff Clarke, vice chairman and co-president in charge of global operations and end user computing, being the old Dell hands.
    Marius Haas, president of the cross-group Enterprise Solutions (gulp!) group, just came aboard this year after a short stint at private equity firm KKR and a long career at rival HP. John Swainson, who runs Dell’s Software Group, is a long-time IBMer who turned CA Technologies around. After the surprise resignation last week of long-time EDS executive Steve Schuckenbrock, who has been at Dell since 2007 and who has run its Services and then its Large Enterprise groups, Suresh Vaswani is the new president of the Services group and was formerly in charge of Dell’s Indian services group; before that, he was the co-CEO at Indian services giant Wipro. The consensus on the street seems to be that Schuckenbrock wants to be a CEO, and it ain’t gonna happen at Dell. (There could be some openings up at HP.)
    The opening of Dell World was also a way to toss out some more statistics. Dell says that it has presence at 95 per cent of the Fortune 500, and that more than 10 million small and medium businesses rely on its solutions (gulp!) and services (okay, new rule, when Dell says services, you have to pay the person to your right $5.) Dell also has something on the order of 115,000 partners, with about 650 of them showing up at Dell World to get the inside track.
    The execs were also put on the spot to answer questions, and Dell, the man, was asked about what he thought about the future of the PC business, something on the minds of both HP and Dell these days and not something that IBM is worried about much these days. (IBM is more worried about the future of systems and services, and it will have its own issues here, fear not.)
    “We spend a lot of time talking about this and working and working on it together,” Dell said, referring to his collaboration with Clarke. “We’re quite optimistic about Windows 8. You’re going to hear over the next few days about a broad set of products. Think about a product like Latitude 10, which is a thin, light tablet that also docks to become a full workstation – totally secure, works with all of the other Windows things that a customer have, runs Microsoft Office, and has a USB port, and so on.
    “That’s the kind of product that really excites our customers and helps address some of the challenges that exist. We think the touch experience is incredible. We have this stunning 27-inch, quad HD display with our XPS27 all-in-one. We think we are seeing a real revolution in the PC.”
    Clarke was more adamant: “We still believe that the PC is still the preferred device to do work, to drive productivity, to create. I look at the long-term prospects of the PC business and I am very optimistic; 85 per cent of the world’s population has a PC penetration rate of less than 20 per cent. I look at the middle class as it grows over the next 20 years from 1.8 billion people to 4.9 billion people, and I see the opportunity there. I look at the number of small businesses that we sell to today, and the creation of small businesses continues at an unprecedented rate and serving that with PCs is still a huge opportunity for the company.”

    One of the big events at Dell World on Wednesday, which Felice hinted at, would be a partnership with the Clinton Foundation, the organ of former president Bill Clinton, to help spur the growth of small businesses. (I doubt they talk about solutions much.)

    The real issue, explained Dell, was moving from selling individual point products to standing up combinations of servers, storage, networking, PCs, software, and services to solve a particular problem. This is precisely what every major systems player is trying to do, and the big independent OS suppliers (Microsoft and Red Hat) as well, who treat x86 iron the same way they treat electricity: as a given and not worth much consideration or profits.

    The company issued the following press releases to clarify everything:
    Dell Investment in Enterprise Solutions and Services Gives Customers Worldwide the Power to Do More [Dell press release, Dec 11, 2012], an important excerpt to add to the above

    Strategy, Execution and Progress
    Dell’s long-term strategy is grounded in helping IT organizations more rapidly respond to business demands, improve efficiency and capitalize on new, standards-based technologies. Dell is successfully executing on its long-term strategy, including key acquisitions of Wyse, SonicWall and Quest Software in 2012, while growth in its Enterprise Solutions and Services businesses continues to outpace its competitors.

    • Dell’s server and networking business grew 11 percent in the 3rd quarter, representing the 12th consecutive quarter of growth.
    • Dell’s server business grew revenue 4 percent in the 3rd quarter, and was the only provider among the top three to achieve positive unit growth, while other providers lost share.
    • Dell’s storage business (Dell-branded storage) grew at twice the rate of a major competitor and continues to outpace other providers, many of which reported declining revenue.

    Dell Enterprise Solutions and Services now represent one-third of the company’s revenue and half of its gross margin. These businesses, which were about $14 billion in FY08, are on an annual run-rate approaching $20 billion through the 3rd fiscal quarter, up 4 percent from the previous year. Dell is making solid progress in executing its strategy and continues to add to capabilities valued by customers.

    Dell Backs Growing Businesses With Scalable Technology Solutions, Resources and Capital to Fuel Job Creation, Economic Growth Worldwide [Dell press release, Dec 11, 2012], an important excerpt to add to the above

    Dell today announced a renewed commitment to accelerate growth of small and midsize companies with scalable technology solutions, resources for entrepreneurs, and a new partnership with Clinton Global Initiative designed for next generation business founders.

    “Fast-growing entrepreneurial companies are an important catalyst for global economic recovery and job creation,” said Michael Dell, Chairman and CEO of Dell. “At Dell, we’re delivering agile, efficient and powerful solutions to help entrepreneurs succeed today, scale quickly and have their ventures grow as big as their dreams and ambitions.”

    Dell started communicating this change heavily about a year and a half ago, as evidenced by the My Take on Dell’s Solutions Strategy post by Lionel Menchaca, Chief Blogger [Direct2Dell blog, June 13, 2011]. More communication since then was given in the following posts:
    My Thoughts on Dell’s Analyst Meeting by Lionel Menchaca, Chief Blogger [Direct2Dell blog, July 5, 2011]
    I see a mixed data center environment in your future by Praveen Asthana [Direct2Dell blog, Dec 15, 2011]
    Enterprise Solutions and Services Strength Highlight Dell’s FY2012 Results by Lionel Menchaca, Chief Blogger [Direct2Dell blog, Feb 21, 2012]
    Business Intelligence for the Mid-Market by Vickie Farrell [Direct2Dell blog, Feb 27, 2012]
    New Dell Appliance Makes Data Warehouses Simple and Affordable by Vickie Farrell [Direct2Dell blog, July 11, 2012]
    How Dell Helped Grow Financial Grow by Scott Schram [Direct2Dell blog, May 21, 2012]

    In my prior role with Dell I was part of the SMB business transformation team charged with integrating M&A acquisition solutions including KACE, Boomi, Compellent, SecureWorks and Force10 Networks into the core business. So when I moved into my new role with our Commercial Verticals organization focused on the Financial Services industry, I was anxious to observe firsthand how this newly acquired Dell IP was meeting customer needs. It didn’t take long.

    Dell announces the completion of its acquisition of Make Technologies by Suresh Vaswani, Chairman–Dell India [Direct2Dell blog, May 24, 2012]
    The NHS Information Strategy and Information-Driven Healthcare by Andrew Jackson [Direct2Dell blog, May 29, 2012]
    Dell AppAssure takes you beyond backup by Zorian Rotenberg [Direct2Dell blog, June 12, 2012]

    It’s been a little over four months since Dell acquired AppAssure, and we’ve settled right into the Dell family. Today at the Dell Storage Forum in Boston, Darren Thomas announced the first new Dell AppAssure release – Dell AppAssure 5 – designed to allow customers to achieve higher levels of scale, speed and efficiency for backups of big data sets.

    Mid-size organizations can gain first-mover advantages with desktop virtualization by Brent Doncaster [Direct2Dell blog, June 13, 2012]

    Watch how DVS Simplified offers a simple, easy-to-deploy and operate VDI appliance that delivers traditional desktop virtualization benefits in an all-in-one package. Learn more at: http://lt.dell.com/lt/lt.aspx?CID=823…

    Start virtualizing desktops with DVS Simplified DaaS – a cloud-based solution for desktop virtualization by Janet Diaz, Solutions Communications Manager, Desktop Virtualization Solutions – End User Computing at Dell [Inside Enterprise IT blog from Dell, June 22, 2012]

    DVS Simplified DaaS delivers full-featured virtual desktops from Dell’s state-of-the-art data centers, powered by Desktone’s industry-leading, secure, multi-tenant DaaS platform. DVS Simplified DaaS is ideal for organizations that want a cloud-based virtual desktop infrastructure (VDI) solution, simple onboarding and management (deployment takes only a few days and can include a proof of concept), a low set-up cost with monthly subscription-based pricing, and the flexibility to scale from a few seats to thousands of seats.
    DVS Simplified DaaS provides organizations of all sizes – SMBs, large enterprises and public sector entities – the ability to quickly deploy a VDI solution to address a variety of business imperatives. Picture workers in industries such as healthcare, insurance, construction, etc. using different devices to connect to their desktops while in the field. Or picture a company needing to quickly provision hundreds of desktops for an incoming class of interns (and also needing to redeploy these desktops at the end of the internship program). Or think of an organization that has a few employees on a different continent but does not want to invest in data centers and IT resources there. DVS Simplified DaaS can be the right solution in each of these cases.

    Knock Down the Barriers to Desktop Virtualization by Ann Newman, a technology writer, blogger and editor for Digital Online Marketing at Dell with specialties in BYOD, desktop virtualization, Windows 8 and other high-technology topics. Follow Ann on Twitter at @DellWebWoman [DellWorld 2012 blog, Oct 12, 2012]

    In today’s business environments, where BYOD (bring your own device) is becoming a fact of life, desktop virtualization is becoming a must-have. Don’t let the old barriers hold you back.

    Winning the data center by Paul Shaffer [Direct2Dell blog, June 18, 2012]
    Dell’s Enterprise Solutions Strategy Will Drive Company’s Long-Term Growth [Dell press release, July 13, 2012]

    “Through strategic acquisitions and organic growth, we are creating innovative solutions that provide more value and competitive edge for our customers,” Michael Dell, chairman and CEO, told stockholders. …
    Mr. Dell and Brian Gladden, Dell CFO, outlined the steps taken by the company to establish Dell as a full-service solutions company, and how the company’s business has shifted, with enterprise solutions and services accounting for 50 percent of its gross margin in the first quarter of fiscal year 2013. Among those actions was the formation earlier this year of a Software Group to add to Dell’s enterprise solutions capability, accelerate strategic growth and further differentiate the company from competitors with standards-based, scalable and flexible Dell-owned intellectual property.
    Dell is building its software portfolio in part through strategic acquisitions. The company recently announced a definitive agreement for Dell to acquire Quest Software, an award-winning IT management software provider offering a broad selection of IT solutions. The Quest acquisition is expected to be completed in Dell’s fiscal third quarter. Dell has made eight acquisitions in the last 12 months and 16 in the past two years.

    Dell Software Leadership Team Event #DellSoftware by Sarah Richardson Luden [Direct2Dell blog, July 19, 2012]

    Dell’s software organization leverages the strength of existing Dell software assets, as well as those obtained through organic and acquisitive growth, to better provide our customers with competitively differentiated hardware, software and services solutions. Dell recently announced its intent to acquire Quest, an IT management software provider, which extends Dell’s existing capabilities in security and systems and data management.
    Dell Software will initially focus on these four core areas:

    Dell CloudExpo Keynote Presentation from Kevin Hanes, Executive Director of Dell Services by Stephen Spector [#DellSolves blog, June 14, 2012], about Dell’s solution-oriented approach to cloud computing and the challenge facing any organization: how to evolve and adopt new architectures and processes that increase business agility, scalability and governance/compliance while decreasing risk.
    Dell Cloud Client Computing launches public beta of Project Stratus by Allison Darin [Direct2Dell blog, Aug 27, 2012]

    Project Stratus is a comprehensive cloud-based management console geared toward helping enterprises thrive in a world of “Consumerized IT” where corporate and consumer technologies intermingle. It empowers employees with the highest productivity and the best user experience, while giving IT organizations the control required to welcome employee-owned devices into the enterprise. Through its unified, cloud-based console, IT administrators will be able to securely manage user devices as well as deliver applications and services to their users across a variety of scenarios: in office, mobile and remote; corporate owned and managed; user owned and self-service.
    “As the BYOD trend expands the private or public cloud access paradigm beyond PCs to include mobile devices of all types, and organizations start to adopt other consumer technologies like apps, we see IT needing the ability to rapidly adapt and embrace new end user service delivery models,” says Hector Angulo, Product Manager for Project Stratus at Dell. “Project Stratus was designed to provide this agility in a simple, secure and cost-effective package – if IT needs to manage end user devices, they can; if all they care about is managing how corporate data and apps are delivered regardless of device, it supports that too.”

    Data Center Evolution by Scott Herold [Direct2Dell blog, Sept 6, 2012]
    Powering the Possible in Smart Grid by David Lear, Executive Director—Sustainability [Direct2Dell blog, Oct 3, 2012]
    Building a Practical Foundation for Big Data Transformation by John Igoe [Direct2Dell blog, Oct 3, 2012]
    My New Role as CTO of Dell’s Enterprise Solutions Group by Jai Menon, the former CTO of IBM Systems and Technology Group [Direct2Dell blog, Oct 10, 2012]
    Executing BYOD programs by Rafael Colorado, Marketing Director, Desktop Virtualization Solutions [Inside Enterprise IT blog from Dell, Oct 10, 2012]

    Let’s start with a common use case of an enterprise customer enabling remote and internal employees to access company resources through various devices and provide more than simple e-mail; they need access to a variety of corporate applications.
    The first variable to consider, Device Management, ensures that governance and policies are applied to all end points. Dell KACE offers a practical device management solution deployed as an appliance or SaaS offering. Additionally, Dell can provide BYOD consulting for organizations that need a more customized solution.
    The second variable, Secure Data, is mission critical because it safeguards the integrity of corporate information. Dell’s SonicWALL ensures secure access to intranet resources with secure SSL/VPN technology to manage encryption across all corporately-managed mobile devices. For a higher level of enhanced security Dell SecureWorks can be added to account for threat management.
    The third variable, Develop and Modernize Applications, helps organizations optimize applications for deployment into BYOD environments. Dell offers AppDev services that provide image optimization and application rationalization services. With PocketCloud, Dell also offers a comprehensive application delivery solution to remotely connect to your desktop with your iOS or Android device. Here’s a quick video on PocketCloud:
    The expanded Wyse PocketCloud family fuses streaming apps and data with search, file management and sharing across personal devices delivering content management from the cloud.
    Finally, Infrastructure Optimization is the variable over which my team, Dell Wyse, has the most influence. Infrastructure Optimization is about providing the backend infrastructure to host and manage your desktops and applications by centralizing data and applications in the cloud or the data center. Dell Desktop Virtualization Solutions (DVS) provides the datacenter infrastructure, including preconfigured networking equipment, storage, and Dell 12G servers, to accelerate the adoption of VDI and application virtualization. DVS also offers virtual desktops in Simplified or Enterprise “as-a-service” configurations where virtual desktops are hosted and managed in the Dell Cloud. Finally, DVS offers an assortment of services to help you assess, plan, and roll out desktop virtualization deployments.
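    The four-variable BYOD framework quoted above can be restated as a simple mapping. This is my own summary pairing each variable with the Dell offerings the post names for it, not a Dell-defined data structure:

```python
# Plain restatement of the four BYOD variables quoted above, each mapped
# to the Dell offerings the post associates with it.
byod_framework = {
    "Device Management": [
        "Dell KACE (appliance or SaaS)",
        "Dell BYOD consulting",
    ],
    "Secure Data": [
        "Dell SonicWALL SSL/VPN",
        "Dell SecureWorks threat management",
    ],
    "Develop and Modernize Applications": [
        "Dell AppDev services",
        "Wyse PocketCloud",
    ],
    "Infrastructure Optimization": [
        "Dell DVS datacenter infrastructure (networking, storage, 12G servers)",
        "DVS Simplified / Enterprise DaaS",
    ],
}

for variable, offerings in byod_framework.items():
    print(f"{variable}: {', '.join(offerings)}")
```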

    Dell’s Desktop Virtualization Strategy from Citrix Synergy 2012 [DellTechCenter YouTube channel, June 6, 2012]: [1:10] We are the only company that can offer an appliance, a VDI appliance [(DVS) Simplified appliance]. Nobody else has that. [1:19]

    Rafael Colorado from Dell talks about Dell’s Desktop Virtualization Strategy from Citrix Synergy 2012 in San Francisco.

    Feeling the Energy at Synergy by Janet Diaz, Solutions Communications Manager, Desktop Virtualization Solutions – End User Computing at Dell [Inside Enterprise IT blog from Dell, May 10, 2012]

    After viewing a live demo of our Dell Desktop Virtualization Solutions (DVS) Simplified appliance featuring Citrix VDI in a Box software coupled with a Wyse zero client in action, testing out our DVS Simplified Desktop as a Service (DaaS), or seeing how our Dell Virtual Labs solution is purpose-built to solve the specific IT problems of the education field, our customers came away impressed that Dell’s transformation into a solutions-focused company is gaining major traction.
    As part of the DVS Simplified demo, we are also excited to be showcasing Dell’s partnerships with both Citrix and Wyse, which give our customers a truly end-to-end VDI solution that is easy to buy, easy to deploy, easy to manage and easy to scale. Dell worked closely with Citrix to develop DVS Simplified, incorporating Citrix’s VDI-in-a-Box, to deliver VDI as an appliance. By adding Wyse to the partnership, Dell can now deliver a wide array of plug-and-play, automatically managed thin clients to further extend that simplicity to the end points. We are very excited to be demonstrating this end-to-end solution in our booth for all Synergy attendees to see first-hand.

    What the new release of [Citrix] VDI-in-a-Box 5.2 means to you by Rafael Colorado Marketing Director, Desktop Virtualization Solutions [Inside Enterprise IT blog from Dell, Oct 18, 2012]
    – see also: Accelerating desktop virtualization gains [Dell Power Solutions, 2012 Issue 2, May 16, 2012] discussing the issues which led to the creation of the Dell desktop virtualization portfolio of end-to-end solutions—available in Simplified and Enterprise segments—in order to effectively address the diversity of organizations
    – see also: Thin/Zero Client and Virtual Desktop Futures [this same ‘Experiencing the Cloud’ blog, May 30, 2012]
    BYOD: A Love Story by Ann Newman, a technology writer, blogger and editor for Digital Online Marketing at Dell with specialties in BYOD, desktop virtualization, Windows 8 and other high-technology topics. Follow Ann on Twitter at @DellWebWoman  [DellWorld 2012 blog, Oct 26, 2012]

    At Dell, over 15,000 employees use their iOS®-, Android™- and Windows®-based devices at work, worldwide. The company is thriving because the BYOD strategy is built on a solid foundation of mobile device management, application modernization and end-to-end security and networking IT.

    Dell Cloud Client Computing Solutions Support Citrix HDX 3D by Dan O’Farrell Director of Product Marketing, Dell Wyse [Direct2Dell blog, Oct 17, 2012]

    Dell Wyse Cloud Client Manager Eases Consumerization of IT and BYOD Challenges by Rami Karam Product Marketing Manager, Dell Cloud Client Computing [Direct2Dell blog, Nov 7, 2012]
    Release of Dell Quickstart Data Warehouse 2000 Hits Sweet Spot for Mid Market by Matt Wolken [Direct2Dell blog, Oct 17, 2012]
    Unveiling Dell’s next generation converged infrastructure solutions — Active System 800 by Ganesh Padmanabhan [Direct2Dell blog, Oct 18, 2012]
    Converged Infrastructure without the Compromise: Introducing Dell Active Infrastructure and Dell Active System by Dario Zamarian [Direct2Dell blog, Oct 18, 2012]
    Dell developed and acquired IP converge in Active System by Ben Tao [Direct2Dell blog, Oct 22, 2012]
    Taking a more “Active” approach to delivering applications and IT services by Marc Stitt [Direct2Dell blog, Oct 25, 2012]
    One Million Reasons to Celebrate – DCS [Dell Data Center Solutions] Ships its One Millionth Server by Tracy Davis, VP/ GM—Dell DCS Team [Direct2Dell blog, Oct 30, 2012]
    Dell and SAP Hana, or how organizations can harness the power of in memory databases and analytics with joint solutions from Dell and SAP, by Kay Somers  discussing with Mike Lampa, Global Practice Lead for Dell Services Business Intelligence practice and Jeffrey Word, Vice President of Product Strategy at SAP on Direct2Dell blog:

    Part 1, Oct 30: about in memory databases, SAP HANA and how it can dramatically alter organization responsiveness and performance … the capabilities and performance of the SAP HANA platform.

    Part 2, Nov 5: the various ways to add SAP HANA to your database and analytics environment

    Part 3, Nov 11: building the business case for an SAP HANA installation or migration

    Dell Speeds Path to SAP HANA with New Service Offerings in Europe by Andreas Stein [Direct2Dell blog, Nov 12, 2012]
    The Year of the Virtual Desktop- really! by Eric Selken [Direct2Dell blog, Oct 31, 2012]
    Dell Services Introduces New Microsoft Dynamics Solution for Manufacturers by M J Gauthier [Direct2Dell blog, Nov 6, 2012]

    Our manufacturing customers will benefit from the best practices Dell learned from implementing Microsoft Dynamics AX in its own manufacturing supply chain in 2010. Dell’s own implementation generated a 75% reduction in factory IT footprint, 50% reduction in server downtime and a 40% decrease in the IT cost of goods.

    What you may not know about Dell SonicWALL by John van Son [Direct2Dell blog, Nov 13, 2012]
    Dell Acquires Gale Technologies, a Leading Provider of Infrastructure Automation Solutions to help accelerate the momentum of Dell’s converged infrastructure family, Active Infrastructure [Dell press release, Nov 16, 2012]
    Enterprise Business Momentum and Major Milestones by Jai Menon CTO of Dell’s Enterprise Solutions Group [Inside Enterprise IT blog from Dell, Dec 3, 2012]
    Project RIPTide: Business Analytics meets innovation at Dell by Shree Dandekar Director BI Strategy [Direct2Dell blog, Dec 21, 2012]

    Real-time analytics solution for midsized customers is enabled by Dell Boomi and real-time business intelligence capabilities

    Imagine a midsized company collecting data in real time from different sources. Of course they’ll want to convert this data into meaningful insights to improve their business, also in real time. There’s a catch, though: they don’t have the IT resources or, necessarily, the expertise to extract those meaningful insights, much less in real time or in plain English.
    Sounds like the right kind of challenge to tackle for Dell’s incubation program.
    With RIPTide, we designed a solution that can assemble relevant data sets (structured and unstructured) on-the-fly, using real-time data integration enabled by Dell Boomi and real-time business intelligence capabilities for reports, dashboards, analytics, and services for easy deployment.
    And it gets even better. This solution simply scales – it can be delivered on a laptop, a server, or an enterprise class platform depending on the customer’s size and needs. A customer also has the option to start off with the Dell Quickstart Data Warehouse and then build the solution on top of it. As part of this project, we’re also exploring offering this capability as a service for customers to use within their private cloud environment, using Dell managed services.
    We wanted to help customers simplify interpretation of their data – ask a question, get an answer. What is my sales pipeline in real-time? What is my account status with a given customer? What are they saying about me in social media? What does my retail stock look like? Is my fall collection trending on Pinterest?
    We put our project to task, just in time for the two major shopping days of the year – Black Friday and Cyber Monday – with Team Express, a San Antonio-based sporting goods retailer with a small IT staff responsible for maintaining their legacy SQL-based transaction system as well as reporting on daily business activities. Team Express, just like other midsized companies, is challenged with assembling data from various sources, including Salesforce.com and their legacy transaction system, to glean actionable business insights, quickly and easily.

    With the RIPTide solution running on a PowerEdge R720xd 12th generation server, Team Express is now able to capture key business metrics along with new insights, including:

    • Top-performing products by region, customer, and revenue
    • Close-rate per salesperson
    • Sales team productivity
    • Opportunity and lead conversion rates
    Here’s what Brian Garcia, CIO of Team Express … has to say about his experience with this project, “This solution will transform the way almost all of our departments think about how our business is behaving. Now we can see more, we can do more and we will get more with less effort.”
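
    To make the kind of metrics listed above concrete, here is a minimal, hypothetical sketch of how close rates and top products might be computed from raw deal records. The dataset and field names are invented for illustration; the actual RIPTide solution assembles such figures in real time via Dell Boomi integration and BI tooling.

    ```python
    # Toy illustration of the sales metrics named above, computed from an
    # in-memory dataset. All records and field names are assumptions.
    from collections import defaultdict

    deals = [
        # (salesperson, region, product, revenue, closed)
        ("Ana",  "West", "Bats",    1200, True),
        ("Ana",  "West", "Gloves",   300, False),
        ("Ben",  "East", "Bats",     900, True),
        ("Ben",  "East", "Helmets",  450, True),
        ("Cruz", "West", "Gloves",   700, False),
    ]

    def close_rate_per_salesperson(rows):
        """Fraction of each rep's deals that closed."""
        won, total = defaultdict(int), defaultdict(int)
        for rep, _, _, _, closed in rows:
            total[rep] += 1
            won[rep] += closed
        return {rep: won[rep] / total[rep] for rep in total}

    def top_products_by_revenue(rows):
        """Products ranked by revenue from closed deals."""
        revenue = defaultdict(int)
        for _, _, product, amount, closed in rows:
            if closed:
                revenue[product] += amount
        return sorted(revenue.items(), key=lambda kv: kv[1], reverse=True)

    print(close_rate_per_salesperson(deals))  # {'Ana': 0.5, 'Ben': 1.0, 'Cruz': 0.0}
    print(top_products_by_revenue(deals))     # [('Bats', 2100), ('Helmets', 450)]
    ```

    In a production deployment the rows would stream in from the transaction system and Salesforce.com rather than sit in a list, but the aggregation logic is the same idea.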

    Dell Retail Announces Industry-Leading Solution to Help Retailers Move to the Cloud by Mike Adams [Direct2Dell blog, Jan 14, 2013]
    2012 – The Channel Perspective by James Wright EMEA Channel Marketing Director at Dell Europe [Direct2Dell blog, Dec 21, 2012]

    It’s almost five years since we started selling through the channel in Europe with Dell PartnerDirect, and it’s safe to say that, while the previous four years were headline years, 2012 has also been outstanding for both Dell and our partners; I want to talk about some of the great highlights that have come out of the Dell PartnerDirect program this year.  Three things really stick out for me – more partners (and more partners growing their Dell business), our continued move from pure PC sales to a far more comprehensive solutions offering for partners and customers, and a steady stream of acquisitions helping to build out our end-to-end solutions portfolio.

    • More than half of Dell’s European sales now go through indirect channels. We’ve now got over 900 Certified Partners in Western Europe. Many are seeing their Dell businesses growing by 30 per cent or more. Now, growth is nothing without volume, but this shows that you can use Dell to survive and thrive in your business despite the current economic climate.
    • We’re building far more complex, integrated solutions. Both server and networking businesses within Dell grew by 14 per cent in Q2. A third of Dell’s revenue, and over half of our profit comes from data centre solutions. In fact, we’re the only major computer vendor to increase server sales in the third quarter, according to both Gartner and IDC. We’re also seeing revenue growth year-on-year in this market. Let’s not forget about the other areas, too. Storage is a big deal for us – and the latest European event proved that it’s a big deal for the channel, too.
    • Thirdly (and this is linked to the point above), we’re acquiring organizations that give us – and our partners – significantly more scope, breadth and reach. Here’s a quick run-down for 2012. While it’s worth understanding what each business does, that is less important than understanding the bigger picture – what we are building in conjunction with partners:
      • Quest – scalable systems management, security, data protection and workplace management.
      • AppAssure – streamlined datacentre operations with backup and recovery software
      • Wyse – client cloud computing. See our earlier blog on what this means for partners here.
      • SonicWALL – network security and data protection – and one of the most recognised firewall and unified threat management brands in the business.
    What of next year? If anything, it’s likely to be just as eventful for the industry as this and previous years. From my perspective, I’m looking forward to carrying on the great work we began five years ago with our partners; we’ve come an awful long way, but there are also plenty of great places we can go to. One thing I do know: it’s never going to be dull. Here’s to a fantastic, profitable 2013!

    Interview Marius Haas, Dell, about its enterprise strategy [Marco van der Hoeven YouTube channel, Feb 6, 2013]

    Witold Kepinski, editor in chief of Dutch IT Channel, speaks with Marius Haas, president, Enterprise Solutions, at Dell Technology Camp 2013, Amsterdam.

    Marius A. Haas [Dell Executive Leadership Team]

    Marius Haas serves as president, Enterprise Solutions, for Dell. In this role, he is responsible for worldwide engineering, design, development and marketing of Dell enterprise products, including servers, networking and storage systems.
    Marius came to Dell in 2012 from Kohlberg Kravis Roberts & Co. L.P. (KKR) [a leader of the 1980s leveraged buyout boom, whose RJR Nabisco deal remains one of the largest LBOs in history, documented in both the book and the film Barbarians at the Gate: The Fall of RJR Nabisco] where he was responsible for identifying and pursuing new investments, particularly in the technology sector, while also supporting existing portfolio companies with operational expertise. Prior to KKR, Marius was senior vice president and worldwide general manager of the Hewlett-Packard (HP) Networking Division, and also served as senior vice president of Strategy and Corporate Development. During his tenure at HP, Marius led initiatives to improve efficiency and drive growth, including the execution and integration of all acquisitions, and he also managed the company’s strategic planning process, new business incubation and strategic alliances.
    Earlier in his career, Marius held a wide range of senior operations roles at Compaq and Intel Corporation. He also served as a member of the McKinsey & Company CSO Council, the Ernst & Young Corporate Development Leadership Network and as a board member of the Association of Strategic Alliance Professionals.
    Marius has a bachelor’s degree from Georgetown University and a master’s degree in International Management from the American Graduate School of Integration Management (Thunderbird) in Glendale, Arizona.

    Dell sets out enterprise solutions strategy [Tech Central, Feb 4, 2013]

    New software group integrates acquisitions to offer end-to-end solutions

    Dell has set out its strategy to offer end to end enterprise solutions.
    At the Technology Camp 2013 in Amsterdam, Tom Kendra, vice president and general manager of the newly formed Dell Software Group, said the company was “steadily executing the strategy of becoming a full service solution provider to enterprise”.
    Software is the next step in Dell’s evolution, said Kendra in a presentation. Leveraging its core strengths, Dell will provide solutions in the client, services and enterprise spaces, with an emphasis on adding value, differentiation and a focus on growth.
    “Software’s intersection with our core strengths, combined with disruptive market trends, allow us to create relevant solutions for today’s, and tomorrow’s, challenges,” said Kendra.
    Under the headings of data centre and cloud management, information management and mobile workforce, Dell will provide software solutions in Windows Server management, performance monitoring, virtualisation management, data protection and management, application and data integration, business analytics and intelligence, bring your own device (BYOD) and endpoint management.
    The newly formed software group brings together elements from Dell’s recent acquisitions, Kace, SecureWorks, SonicWall, Quest, Gale and Wyse.
    A “tough, rapidly changing market fosters transformation,” said Aongus Hegarty, president, Dell EMEA. “All these capabilities from the acquisitions are coming together to form integrated strategies.”
    Hegarty said that Dell is now established as a key player in enterprise technology, as it boasts more than $1.5 billion (€1.1 billion) in software revenue, a 6,000 member software team, of which some 1,600 are engineers, with a 2 million user community from 100,000 customers.
    Kendra cited an EMA Radar report that classed Boomi as a value leader for cloud integration, an NSS Labs highest overall protection award for SonicWall and 9 software Magic Quadrant appearances from Gartner.
    “Customers are asking for end-to-end solutions, right from SME to mid-market and enterprise,” said Hegarty.
    Dell has clearly stated a position of open standards for its solutions. Stephen Davies, Services Solutions Group EMEA, Dell, said that its cloud offerings would be based on OpenStack. With the aim of protecting customers from vendor lock-in, the approach allows for elements of any solution to come from other vendors or providers, without any loss of capability or performance. Where a customer may have a significant investment in one area, Dell’s approach would be to have its solutions work wherever possible with existing implementations.
    Dell launched two new offerings as part of integrated enterprise strategy, Active System Manager 7.0 and new workload solutions optimised for the SAP HANA platform.
    Active System Manager 7.0 is based on Gale Technologies applications and extends the management capabilities of Active System beyond the physical infrastructure to the virtualised infrastructure and workloads. It will be embedded into an Active System 800 and its associated reference architecture.
    Dell has said that it has certified the first of its server, storage and networking technologies in its pre-integrated systems to run SAP HANA. The systems are high-availability configurations that scale from 1 terabyte to more than 4 terabytes and are based on the same architecture found in its single-server appliances.
    For full product details see the February issue of ComputerScope, available 8 February.

    What Dell Is Doing Today [VideoLifeWorld YouTube channel, Feb 6, 2013]

    Dell Tech Camp 2013 – Tom Kendra VP & GM SW Group at Dell – Key Themes For What Dell Is Doing Today. Dell’s latest technologies and solutions that address customer issues and challenges around Cloud Computing, Data Insights, Mobility and Converged Infrastructure. Video by Dell’s official Flickr page http://www.flickr.com/photos/dellphotos/8450786781/ creativecommons.org/licenses/by/2.0/deed.en

    Dell Acquisition Strategy [DellVlog YouTube channel, Oct 25, 2012]

    Dave Johnson VP of Strategy demonstrates how Dell’s recent acquisitions all fit together

    Conversation with John Swainson, President of Dell’s Software Group [DellVlog YouTube channel, Oct 2, 2012]

    On Friday September 28, 2012, Dell announced that we completed the acquisition of Quest Software, an award-winning IT management software provider offering a broad selection of solutions that solve the most common and most challenging IT problems. John Swainson, President of Dell’s Software Group joined us on DellShares to discuss the importance of Quest to Dell’s Software strategy. We invite you to listen to John as he provides perspective on the following:
    • Quest fit within Dell’s Software strategy
    • Synergies between Quest portfolio and existing Dell solutions
    • Platform nature of Quest acquisition and what that means
    Thanks and we look forward to your thoughts and feedback.

    Dell Completes Acquisition of Quest Software by Tom Kendra [Direct2Dell blog, Sept 28, 2012]

    If you haven’t already heard, I am excited to announce that Dell has completed the acquisition of Quest. This is an important acquisition for Dell Software because Quest helps extend our capabilities in systems management, security and business intelligence software, and it also strengthens our ability to bring industry-leading, differentiated, and easy to manage solutions to our customers around the globe.
    With Quest, Dell will be able to deliver a broad selection of software solutions that will help simplify and solve our customers’ everyday problems and tackle their most challenging IT needs. Quest also brings with it critical mass and key talent. Quest currently has more than 100,000 customers worldwide, 5,000 partners worldwide, 1,500 sales and marketing resources, and 1,300 software engineers. As a relatively young and growing organization, these resources are invaluable to the Dell Software Group.
    The acquisition of Quest is a critical step forward for Dell Software because, with Quest, Dell is better able to provide end-to-end solutions that help our customers simplify their operations, maximize workforce productivity, and deliver results faster. Quest supports heterogeneous and next-generation virtualized and cloud environments which is complementary to Dell’s design approach to develop solutions that scale with our customers’ needs. But most importantly, Quest’s software solutions and key technologies are strongly aligned with Dell’s software strategy to expand, enhance and simplify our capabilities and enterprise solutions in four focus areas: Systems Management, Security, Business Intelligence and Applications.
    Quest will be joining other Dell Software assets Dell KACE, Dell SonicWALL, Boomi, Dell Cloud Business Applications and AppAssure as part of the Dell Software Group. Dell Software helps customers of every size take advantage of new technologies and address organizational challenges to grow their businesses and remain competitive. For more than a decade, Dell has been making strategic software acquisitions and partnering in the industry to support and enable the hardware and services solutions that we provide to our customers.  Our Software Group, now including Quest, will continue to extend Dell’s capabilities in software IP and total solutions offerings, and draw on the strength of Dell’s distribution capabilities and reputation to help clients in every industry achieve better business outcomes.
    Please join me in welcoming Quest to Dell Software, and I look forward to the many opportunities we will have to demonstrate that Quest and Dell are truly “Better Together.”
    For more information about Quest software, go to: www.dell.com/quest

    Software strategy and innovation related excerpts from Cover story: Piloting innovation [Dell Power Solutions Magazine 2012 Issue 4, Dec 7, 2012] the executive Q&A by John Swainson

    make the cloud more accessible
    My vision for the cloud is an intelligent technology that organizations can literally just plug into without the need for excessive configuration, security measures, and other manual interventions. All of these things need to be automated and policy-based, but making this vision a reality will take a lot of invention, systems work, and integration. But, that’s the direction we need to take if cloud computing is to achieve its full potential.
    Cloud environments today, in general, are far too siloed, complex, and inefficient to really deliver on their full potential. But as we move forward in time, the cloud can become so much easier to use and so much more automated than it is today. We want to give customers the best of both worlds—on-premises access to resources when they want it and access to the public cloud when they need it—seamlessly.
    security solutions
    Right now, our particular focus is on securing the pieces in the middle of the security equation. How can we secure data center access through a firewall? That’s Dell SonicWALL™ software. How can we secure access to applications and databases? That’s where the Quest™ identity and access management solutions come in. How can we measure and monitor all of these parts to build confidence that security has not been breached? Dell SecureWorks provides security monitoring and risk remediation services. And finally, how can we enforce security policies on the endpoints of the data environment? Dell AppAssure™ and Dell KACE™ software address that area. Dell Software is all about making sure that the right people get access to the right data, and that the wrong people do not get access. Risk management and secure access to information are at the core of all of these solutions.
    It’s a big, complicated world out there. A threat environment that once comprised casual hackers has evolved into a complex landscape of advanced persistent threats—including industrialized espionage, or cyber-espionage—in many places around the world. One important aspect of Dell’s comprehensive approach to security is the SonicWALL consulting service, which helps organizations safeguard their valuable data and protect the productivity of their workforce.
    big data analytics
    To help improve efficiency, the Dell Quickstart Data Warehouse Appliance provides a prepackaged solution that combines Dell PowerEdge™ 12th-generation servers, the Microsoft SQL Server® database, Dell Boomi™ cloud-based data integration software, and Dell-provided consulting and training services.
    We also offer database tools that allow organizations to go back and forth between conventional data sources and open source solutions such as the Dell | Cloudera Apache Hadoop solution. Our Dell Toad™ family of products has been enhanced to support big data as well as conventional relational data management tasks. On the services side, we have created Hadoop offerings that enable organizations to gain access to the power of Hadoop without having to set it up themselves. They can deploy Hadoop in production environments quickly and transform large data sets into intelligent information. And our Dell Boomi solution makes it easy for organizations to integrate data from various sources within a single data warehouse for analysis.
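
    What such a data-integration layer does conceptually can be sketched in a few lines: normalize records from two differently shaped sources into one warehouse-style table. This is a hypothetical illustration only; the source names, field names, and logic are invented and stand in for what Dell Boomi automates at scale.

    ```python
    # Hypothetical sketch: merge a CRM export and a transaction feed into one
    # unified table keyed on customer ID. All data and field names are invented.

    crm_accounts = [
        {"AccountId": "C1", "Name": "Acme"},
        {"AccountId": "C2", "Name": "Globex"},
    ]
    transactions = [
        {"cust": "C1", "amount": 250.0},
        {"cust": "C1", "amount": 120.0},
        {"cust": "C2", "amount": 80.0},
    ]

    def merge_sources(accounts, txns):
        """Join accounts with summed transaction amounts per customer."""
        totals = {}
        for t in txns:
            totals[t["cust"]] = totals.get(t["cust"], 0.0) + t["amount"]
        return [
            {"customer_id": a["AccountId"],
             "customer_name": a["Name"],
             "total_spend": totals.get(a["AccountId"], 0.0)}
            for a in accounts
        ]

    for row in merge_sources(crm_accounts, transactions):
        print(row)
    ```

    The point of an integration platform is that this mapping is configured once and then runs continuously against live sources, rather than being hand-coded per report.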
    And, we have only scratched the surface. We can do so many other things to make it easy for people without data science skill sets to collect and analyze data for enhanced decision making in business settings. This data analysis area is where we are going to see a lot of investment from Dell over the next couple of years.
    bring-your-own-device (BYOD)
    Looking ahead, the BYOD trend presents an enormous opportunity for Dell to offer additional products that manage personal and mobile devices. It also provides the software and services that help organizations simplify IT and derive added value from their systems. The cloud, mobile devices, converged infrastructure, social media—all of these trends have very positive implications for our customers if they have the tools to manage them securely. And that’s obviously where we at Dell Software come in.

    More information:

    Dell Targeting $5 Billion in Software Sales, Swainson Says [Bloomberg, July 20, 2012]

    Dell plans to build or acquire software in areas including computer security, PC and server management, data analysis and business applications for midmarket customers, he said. … It may also compete with SAP AG (SAP) and Oracle Corp. (ORCL) in some segments of the business-applications market, said Swainson. … “Companies like IBM, HP and Dell have to provide a computing platform with the server and the software as a service,” he said. “That’s what all these acquisitions and vertical integration are about.”

    Dell Outlines Big Software Ambitions [InformationWeek, July 20, 2012]

    Its target buyer is the often overlooked small to medium-sized company with 215-2,000 employees, said Swainson. These companies have small IT staffs with large responsibilities. “The sweet spot for Dell is the mid-market…We want to produce a set of solutions designed for that market,” Swainson declared. … Dell will also get into business applications but it has no intention of going head to head with Oracle or SAP, two of the largest application suppliers. Both tend to address customers above the mid-market and both are key Dell business partners, he noted. … Dell faces a formidable task in training its large direct salesforce and many channel partners to add software products to the long list of Dell hardware they are already trying to sell, said Swainson. IBM spent 20 years converting itself from primarily a hardware company into a server company that also sold services and software. … To get to $5 billion, “it won’t take us 20 years, but it will take us longer than a year and half,” he noted.

    Dell Power Solutions Magazine 2012 Issue 4, Dec 7, 2012

    Special section: Dell Software

      • Unfolding strategic new dimensions [Jan 27, 2013] excerpts giving a brief overview of the article describing the current software portfolio:

        – The Quest™ Identity and Access Management family adds to the solid set of Dell SonicWALL™ and Dell SecureWorks assets.
        – Dell AppAssure. From data centers to the cloud, Dell AppAssure™ software is a backup solution well suited for virtual, physical, and cloud environments.
        – Dell Boomi. Organizations can deploy Dell Boomi AtomSphere™ software to connect any combination of cloud, software-as-a-service (SaaS), or applications on-premises without requiring appliances, additional software, or coding.
        – Dell Clerity Solutions provides application modernization, legacy system rehosting, and capabilities that enable Dell Services to help organizations reduce the cost of transitioning business-critical applications and data from legacy computing systems to innovative architectures—including cloud computing.
        – Dell KACE. Servers, desktops, and laptops can be managed cost-effectively with Dell KACE™ systems management appliances, which provide time-savings benefits for systems management professionals and their organizations. The Dell KACE appliance-based architecture provides easy-to-use, comprehensive, and end-to-end systems management.
        – Dell Make Technologies. Application reengineering is a key capability in the growing field of application modernization and an important area of investment for Dell Services. Dell Make Technologies offers application modernization software and services that help reduce the cost, risk, and time required to reengineer applications.
        – Dell SecureWorks provides automated malware detection and analysis with real-time protection, 24/7 monitoring and response by security experts as needed, and security consulting and intelligence to identify gaps or respond to incidents.
        – Dell SonicWALL dynamic network security and data protection enable Dell to provide comprehensive Dell next-generation firewall and unified threat management solutions as well as secure remote access, e-mail security, backup and recovery, and management and reporting. Its Global Management System (GMS) enables network administrators to centrally manage and provision thousands of security appliances across a widely distributed network.
        – Dell Wyse desktop and mobile thin clients provide low-energy, highly secure, cost-effective access to data. Dell Wyse PocketCloud™ software—a remote desktop client—provides enterprise-grade access to cloud services along with desktop and enterprise applications, and it helps extend the benefits and security of virtual desktop infrastructure (VDI) environments to mobile phones and tablets. In addition, organizational and end user–owned devices can be managed from profiles that are set up using a single, cloud-based console in Dell Wyse Cloud Client Manager.
        – Dell OpenManage Essentials. Centralized monitoring of Dell servers, networking, storage, and client systems is available in Dell OpenManage™ Essentials (OME) version 1.1 software—a complimentary download from the Dell Support site. This one-to-many hardware management console helps reduce the complexity of common management tasks.
      • Defending against advanced persistent threats
      • Gaining holistic insight into enterprise networks
      • Boosting virtual desktop performance with compact cloud clients
      • Business analytics: Gaining a competitive edge from the data deluge
      • Migrating to Windows 8 for heightened productivity
      • Accelerating the benefits of Windows Server 2012

    BYOD Reality Check: Focusing on users keeps companies ahead of the game by Tom Kendra Vice President and General Manager, Dell Software Group [Direct2Dell blog, Jan 28, 2013]

    If you are involved in the Systems Management business or follow it, you can’t help thinking about the incredible rate of change going on! Advances in Virtualization, Converged Infrastructures, Cloud Computing and an explosion in end-user devices are driving the need for a new generation of management and operations solutions. At Dell, we intend to lead in defining and delivering on that next generation of solutions.

    It is impossible to discuss all of these trends and what they mean in a single article. Over the next couple of months, we will provide points of view on each. Today, let’s start with the trend that many of us actually participate in—bringing our own laptops, phones and smart devices into our work environments.  This is commonly referred to as Bring Your Own Device, or BYOD. Many companies are actively working on their BYOD strategies and we recently conducted a study to get some insight on their approaches.
    The results of our recent global BYOD survey confirm what we have long suspected: organizations that build their BYOD strategies around the users realize a higher sustainable business benefit than those that focus their strategies solely on devices, or are slow to adopt BYOD at all. Survey responses indicate that three-quarters of organizations deploying a mature, user-centric approach to BYOD have seen improvements in employee productivity, customer response times and work processes, giving them a secure competitive advantage over those that don’t.
    We weren’t surprised by this. We know that early on, our customers’ first reaction to employee requests to use their own devices for work produced a scramble to figure out how to manage all those devices. Security was, and still is, of paramount importance. Over time, though, as their BYOD strategies matured, some IT organizations began to realize that by focusing on the users, they could respond quicker to the changing demands of the organization. They didn’t have to address those changes on every smartphone, tablet, laptop and any other device their employees bring to work, and, by focusing their BYOD strategy on managing user identities, they could resolve their concerns about security and other issues like access rights and data leakage, and still give employees everything they need to do their jobs.
    Our survey polled almost 1,500 IT decision-makers across the United States, United Kingdom, France, Germany, Spain, Italy, Australia, Singapore, India and the Beijing region. The results showed that more than 70 percent of those companies have realized benefits to their corporate bottom lines. Even more significantly, 59 percent say that without BYOD, they would be at a competitive disadvantage. Two-thirds of the companies surveyed said the only way BYOD can deliver significant benefits is if each user’s specific rights and needs are understood. Among respondents that both encourage BYOD and deploy a mature, user-centric strategy, this number jumped to three-quarters. They also reported that BYOD provides their employees the benefits of more flexible working hours, and increases morale and provides better opportunities for teamwork and collaboration. Overall, survey respondents with a user-centric BYOD strategy reported significant, positive improvements in data management and security, in addition to increased employee productivity and customer satisfaction.
    The survey results have confirmed for us – without a doubt – that organizations still trying to address BYOD by managing devices, or that have been slow to adopt BYOD at all, risk competitive disadvantage. The highest competitive edge, in terms of the increased business value gained from greater efficiency, productivity and customer satisfaction, goes to those embracing user-centric BYOD.
    We invite you to explore the key findings of Dell’s survey in our whitepaper, and if you want to “see” how this data reinforces our perspective on the importance of a user-centric management strategy for BYOD, take a look at our new infographic (Note: click on the image below to see a larger version of it, or you can download a copy of the PDF here).

    [Infographic: key findings of Dell’s global BYOD survey]

    Dell Names John Swainson President of New Software Group [Dell press release, Feb 2, 2012]

    • Software Group created to enhance solutions capabilities
    • Expanded software focus will extend Dell ability to improve customers’ productivity
    Dell today announced the appointment of John Swainson to serve as President, Software Group, effective March 5, 2012. Mr. Swainson will report to Michael Dell, chairman and CEO of Dell.

    The Software Group will build on Dell’s software capabilities and provide greater innovation and organizational support to create a more competitive position in delivering end-to-end IT solutions to customers. The organization will add to Dell’s enterprise solutions capability, accelerate profitable growth and further differentiate the company from competitors by increasing its solutions portfolio with Dell-owned intellectual property.

    “John is an outstanding leader with an unparalleled record of achievement,” said Mr. Dell. “He brings to Dell extensive experience in leading and growing software businesses, unique expertise in managing complex software organizations, and a passion for listening to and serving customers. I look forward to working with John as he expands our enterprise solutions and builds on our software capabilities.”
    “This is an exciting time to join Dell,” said Mr. Swainson. “As a leading IT solutions provider, Dell brings key assets and advantages to the software sector, including a strong global brand, a diverse global customer base and customer loyalty that creates opportunities to expand relationships with software.”

    The Software Group will bolster Dell’s ability to execute in several strategic areas critical to its customers. The combination of strong internal development capabilities in hardware, software and services gives Dell the ability to serve the largest possible group of customers within the $3 trillion technology industry.

    “The addition of software, both within the Software Group and across all of Dell, will help catalyze our transformation,” Mr. Dell said. “As software will be a part of all of our products and services, the group’s success will largely be measured by the success of Dell overall.”

    Most recently, Mr. Swainson was senior advisor to Silver Lake, a global private equity firm. Prior to Silver Lake, he was CEO and director of CA, Inc. from early 2005 through 2009. Under his leadership at CA, the company significantly increased customer satisfaction, its operating margins, and revenue.

    Prior to CA, John worked for IBM Corp for more than 26 years, holding various management positions in the U.S. and Canada, including seven years as general manager of the Application Integration Middleware Division, a business he founded in 1997. During that period, he and his team developed the WebSphere family of middleware products and the Eclipse open source tools project. He also led the IBM worldwide software sales organization, and held numerous senior leadership roles in engineering, marketing and sales management.
    Mr. Swainson holds a bachelor’s degree in engineering from the University of British Columbia, Canada.

    John Swainson [Forbes profile, Aug 10, 2010]

    … Mr. Swainson is also a Senior Advisor to Silver Lake Partners, a global private equity firm, which he joined in June, 2010. Mr. Swainson advises Silver Lake’s portfolio companies on value creation activities. …


    The Indian case as a proof point of readiness

    ‘Software’s becoming key to our biz, and so is Bangalore’ [The Times of India, Jan 9, 2013]

    Marius Haas President, enterprise solutions, Dell
    As Dell works to transform itself into an enterprise solutions and services company, Marius Haas has a pivotal role. He heads the $63-billion company’s enterprise solutions business. He joined Dell last year from investment firm Kohlberg Kravis Roberts & Co. Prior to that, he was senior VP in Hewlett-Packard. Haas was recently in India, where Dell has a quarter of its 1.1 lakh (110,000) employees, and spoke exclusively to TOI.
    How important is the India enterprise market for Dell?
    The top ten markets in the world represent 70% of the total spend in the enterprise space for the things that we do. Out of the top ten markets, three markets represent 60% of the incremental spend over the next three years. And those three are India, China, and the US. So the India market is very, very important to us. You can imagine that we are gonna be focused quite a bit on what we can do for this market.
    What segments of industry do you see demand coming from?
    In India I think 80% of the growth comes from customers that have 500 employees or less. So clearly we need a small-business-led market strategy, and for the solutions we create. You will see us with solutions that bring together server, storage, networking in a very scalable way, so that you buy what you need, at the scale that you need, at the price points that you need. They are pre-integrated, pre-configured, and designed to run specific workloads. For small businesses, it will save a lot of bother in trying to put together systems from different components.
    Several IT vendors today talk of pre-integrated stacks. Do you see customers opting for such stacks?
    The estimate is that 30% of the enterprise purchases in 2016 will be with a systems view (pre-integrated, pre-configured stacks). There will be cannibalization of the traditional silo selling mode – of buying servers, storage, networking separately. All of a sudden a big part of how people are thinking is, I want to buy the cloud solution that enables me to run application X, Y and Z. So we recently announced our Active Systems infrastructure family that brings together server, storage, networking all in one chassis with one common management capability. It requires 75% fewer steps from the time you receive it to the time you are actually running workloads. We have optimized all components to work together for specific workloads in such a way that it generates 45% better performance per watt than what’s out there from the competition. Saves money for our customers.
    Is your India R&D contributing to these systems?
    Clearly if you are going to go towards a more systems view, there will be a lot more focus on software. Software provides the value add to servers, storage and networking coming together. Our Bangalore team has capabilities in servers and specifically around software. A big part of the management capabilities built into the system is done by a team here in India. The skill sets and capabilities in India are part of the core competency that we need today. Indeed, one of every four of our servers sold worldwide is sold with work done in Bangalore. And that’s what gives us the confidence to do more here.

    SME Channels : Ajay Kaul, Head, GCC Dell India talking about the company’s growth strategy [smechannels YouTube channel, Feb 6, 2013]

    Watch Ajay Kaul, Head, GCC Dell India talking about the company’s growth strategy … interview taken by Sanjay Mohapatra, Editor, SME Channels

    + [8:39] I believe Dell is moving to the services business …
    + [10:38] How would you help partners create their own brands?
    + [12:20] How fast are you in integrating all the products and go to market?
    + [13:58] How do you engage your finance arm to enable the partners?
    + [16:30] What is your strategy around cloud computing for the partners?
    + [17:36] What is your investment roadmap in terms of technology for this year?

    Dell’s 7 strategies to stay top of mind for channel partners By Ajay Kaul [The DQ Week, Feb 5, 2013]

    What are the strategies that the companies can adopt to ensure that they keep their channel partner programs alive and thriving?
    Putting together an effective channel partnership program to take the company’s products and services to market can be just as challenging as it is rewarding. A good channel partner program does not end with identifying and enrolling like-minded and trustworthy resellers. It goes on to nurture and nourish these relationships through a host of incentives, training initiatives and many long-term measures.
    Those who recognize the economies of scale that such programs bring are also aware of how vital it is to stay top of mind at all times. In order to leverage the considerable boost that these can bring to revenues and sales, companies need to ensure that their resellers acknowledge them as a priority over the competition. This is easier said than done. Channel partners sell what they know best and in today’s competitive landscape, where resellers have the choice of dozens of brands, it becomes imperative to stay top of mind at all times.
    What strategies can companies adopt to ensure that they keep their channel partner programs alive and thriving? While most dealers and distributors will always be more attracted to methods that help them boost margins, they are also enthusiastic about measures that will help them address their challenges of training and retention of sales staff, competition, product and service expertise or growing consumer loyalty.

    Here are seven strategies from Dell that can help ensure a win-win environment for both reseller and your company:

    Invest in your channel partner’s success: Channel partners need to know that they are an important part of your company strategy and they need to feel the benefits of their association with you, through better margins, training and other initiatives that create success opportunities for them.
    Focus on their profitability and they will focus on yours: The conditions you create for your partners need to be win-win for both sides. Last year, Dell announced a new GCC (Global Commercial Channel) structure, which is a single point of contact for partners, with an aim to increase productivity, improve time cycles and enable more customized programs for partner support. The new structure protects partner profitability by bringing consistent pricing across different Dell commercial businesses and offers the partners growth opportunities with solution-centric offerings and a broader end-customer base.
    Provide Product Support: The more your partners know of your products and services the easier they will find it to sell. Partners who have access to information and the means to understand your company offerings are more likely to push your products with their customers. Structured programs to boost product knowledge and bring to the forefront product and service USPs will equip partners with the right knowledge to sell your products.
    Continuous education programs for channel partners: Channel partners need to be constantly reminded about your product or service. What better way than through education programs? Dell offers over 100,000 training sessions a year to all partners globally and Dell’s Engineers Club further invests in the development of individual engineers and partners by bringing together technical experts and pre-sales and post-sales engineers across the IT industry to network, exchange ideas, and share industry trends and best practices with the channel partners.
    Listen to your partners: They can keep you in-tune with the pulse of the market. Structured listening programs will give partners a platform to voice recommendations and act as an additional source of market information.
    Incentivise your partners: Create exciting incentives for sales, profits, rewards & recognition. Dell’s PartnerDirect program features a structure which rewards certification and training, including new rebates for premier partners, expanded deal registration terms, financial incentives, and marketing and technical assistance. Dell has 115,000 partners globally, in its highly successful PartnerDirect model. Dell has also doubled its channel sales force and has added more enterprise specialists enabling and supporting the partners to address customer needs and optimally provide solutions within limited IT budgets.
    Make sure your program is high visibility and high impact: Don’t forget that your competition may be wooing your partners away from you. Your partner program needs to be more visible, more impactful and needs to give your partners what they need to sell for you.
    A satisfied channel partner will push your brand with their customers, protect your margins and will also be more accommodating to your needs. Needless to say, a poor channel relations strategy will have just the opposite impact on your company margins and sales.

    Dell GCC Engineers’ Club Now in India [SME Channels, Jan 11, 2013]

    To build on existing GCC initiatives to strengthen and showcase its commitment to its partner community
    Dell’s Global Commercial Channel (GCC) has launched the Dell Engineers Club in India, as part of their long-term commitment to channel partners in the country. The platform will enable technical experts across the IT industry to network, exchange ideas, and share industry trends and best practices.
    This club will also help train channel partners and their engineers to be knowledgeable in Dell’s advanced server, storage, security, networking and cloud solutions, announced the company’s press release.
    The company further announced that Dell’s long-term aim is to qualify its partners to become not just solutions providers but IT consultants for their end-customers. Dell believes in empowering its customers with the ‘Power to do more’, and therefore aims to create and offer real solutions with the intention of making technology smarter, more effective, and in service of its end-customers.
    Ajay Kaul, Director & GM (Global Commercial Channel), Dell India, said, “Dell’s GCC business is very committed to the Indian market and the Engineers Club aims to strengthen the enterprise knowledge of our partner community, helping them become consultants for their end-customers.”
    Dell offers over 100,000 training sessions a year to all partners globally and the Dell’s Engineers Club will further build on this initiative to invest in the development of individual engineers and partners.
    Dell’s Global Commercial Channel (GCC) division maintains around 1,700 commercial relationships in India. The division takes care of programs and policies relevant to channels, which cover all types of business entities such as public companies and large-/medium-sized companies.

    See also:
    Dell Global Commercial Channel Launches Dell Engineers Club in India [Dell India press release on BusinessWire India, Jan 10, 2013]
    After China, Dell introduces Engineers Club in India [The DQ Week, Jan 10, 2013] from which the following excerpt adds to the above important information:

    Ajay Kaul, director and GM, global commercial channel, Dell India, informed, “This program has been extended by Dell to the Indian market to cater to the market potential in India and we feel it is important for us to bring the Indian channel partners at par with their global counterparts. As a start, the Dell’s Engineers Club is by invitation only. Partners with a certain level of certification already attained from us through the Partner Direct program will be sent an invitation to join this club. In that invitation, we will include details on where and how to sign up. Once their registration is approved, they will have access to all the programs and activities under this initiative. At the start of the program, we will be looking at a limited number of partners from the top 8 cities and will expand the program to more partners from the top 11 cities by the end of the month.”
    With the recent acquisitions of companies like Quest Software, SonicWALL and Wyse, Dell has been able to add extensively to its solutions portfolio with leading management, security, virtualization and cloud capabilities. Hence, the focus on these enterprise solutions and services creates tremendous opportunity for its channel partners and therefore the necessity to ensure that partners receive the required training to help them understand the extended portfolio of solutions and services and provide customers with the right solutions and advice. The Dell Engineers Club is designed to provide maximum training about datacenter solutions so that the partners are better informed and can rise up to becoming IT consultants to the end-customers rather than just being a solutions provider.
    “Our channel partners play a significant role in our business, 25 to 50 percent of our commercial business, depending on country to country. In some countries, it’s 100 percent and we see it growing further. India is a very important market as far as our partner community is concerned. We engage with our partners in this region at the highest level ensuring that the programs and policies designed are favorable to their benefits which leads to their overall growth,” said Kaul.

    See also:
    DELL Partners with HCL Infosystems for Distribution of Enterprise Products [HCL Infosystems Ltd. press release on BusinessWire India, Jan 10, 2013]

      • DELL enters into a strategic partnership with Digilife Distribution and Marketing Services (DDMS), distribution arm of HCL Infosystems
      • DELL takes the next leap in enhancing its commercial and enterprise solutions offering through this new distribution partnership, a further expansion of Dell’s PartnerDirect program, which has developed a significant number of the commercial channel partners in India
      • Partnership to target Mid-Market customers

    Dell’s Global Commercial Channel (GCC) division maintains around 19,000 commercial partners in the Asia Pacific region. The division takes care of programs and policies relevant to channels, which cover all types of business entities such as public companies and large-/medium-sized companies. In India, Dell currently engages with 1,700 commercial channel partners, and this agreement will further strengthen the reach of its enterprise solutions to key markets.

    The partnership will enable DDMS to supply the complete range of Dell Enterprise Products and Services. HCL’s DDMS will help boost Dell’s growth through the distribution providers in the market. HCL Infosystems’ widespread network of distributors will further ensure a robust funnel for Dell products and services.
    In the past two years, Dell has made 15 strategic acquisitions to enhance its capabilities as an end-to-end solutions provider and has carefully aligned its channel program with the acquisitions it makes. To enhance Dell’s security capabilities, the company recently acquired SonicWALL, Inc. With an immense focus on the distribution of its products and on its channel partners, Dell has offered SonicWALL’s existing channel partners an opportunity to join the company’s current PartnerDirect program, which will enable them to preserve the investments made with SonicWALL. Also, in order to offer the best to the channel partner community, the company will take the best of SonicWALL’s channel programs and model and combine it with Dell’s PartnerDirect program. This move has not only provided the best for the channel partners; Dell has also expanded its own channel team’s customer relationships by further enabling its existing partners to sell SonicWALL solutions.

    Ajay Kaul to head Dell’s Global Commercial Channel biz in India [exchange4media News Service, Nov 8, 2012]

    Dell India has announced that Ajay Kaul, Director & General Manager, will lead the Global Commercial Channel (GCC) business for Dell in India. Kaul’s focus as business leader will be to oversee the expansion of Dell’s partner community and its growth in the upcountry markets. As the GCC Business Head, Kaul will also focus on strengthening the company’s relations with its partner community.
    During his seven-year tenure at Dell India, Kaul was Director – Sales for the Public, Education and Healthcare business from February 2009 to August 2012 in the South & West Region, across Central/State Government, PSU and defense, covering all Dell products for revenue, margin and market share growth. As the Regional Enterprise Manager from 2007 to 2009, he headed the pre-sales team and managed the servers and storage business in the North and East region across large enterprises and the government segment. Kaul joined Dell in May 2005 and managed key global accounts to grow revenue and profitability covering all products.
    Dell’s Global Commercial Channel (GCC) division was created in early 2011 with an aim to be a single contact point for its commercial channel partners, thereby leading to higher productivity and improved time cycles and enabling more customised programmes to support the partners in the market. The GCC team is responsible for designing and implementing profitable schemes and policies for Dell’s channel partners and collecting and using channel feedback to execute best structures for its channel partners.
    Dell currently engages with 1,700 commercial channel partners in India, which cover all types of business entities such as public companies and large-/medium-sized companies.

    Dell’s position on the Indian market two years ago, and the approach taken by the company to achieve it, is well described in How Dell conquered India [CNNMoney, Feb 10, 2011], at the end of which the position is summarized as:

    For Dell, India has emerged as a local and global service delivery hub. It is the only market outside the U.S. with all business functions—customer care, financial services, manufacturing, R&D, and analytical services—operational at the local level and giving global support. “We evaluated market trends and growth potential, enabling us to invest ahead of the curve in India, resulting in our phenomenal growth,” says Midha. It is a growth story that resonates around the world.

    Dell India has made big progress not only relative to that position but in the enterprise business as well. See CIO CHOICE 2013 Awards Recognizes Dell for its Outstanding Performance in Server, Storage and Data Center [Dell India press release on BusinessWire India, Feb 4, 2013]

    Dell’s commitment to addressing CIO needs with their best in class technology and customer commitment wins them accolades

    Bangalore, Karnataka, India, Monday, February 04, 2013 (Business Wire India)
    Dell India has been awarded the CIO CHOICE 2013 award for their solutions in Server, Storage – Hardware, Data Centre Consultant and Data Centre Transformation Services categories. The CIO Choice Awards is a B2B platform positioned to recognize and honour products, services and solutions on the back of stated preferences of CIOs and ICT decision makers. These awards demonstrate Dell’s “best-in-class” ability and commitment to meeting CIOs evolving needs in today’s dynamic business environment.
    The process for the “CIO CHOICE award” is conducted via an independent advisory panel of eminent CIOs and an independent survey voting from across the country with CIOs and ICT decision makers.
    Sameer Garde, President and MD, India Commercial Business, said “Dell has been investing in its enterprise capabilities and building solutions that address the business goals of customers. Being honoured by the CIO Choice award so early in our transformation into an end-to-end solution provider is truly a cherished achievement and a testimony to the efforts of the Dell India team. It shows that our open, scalable and affordable solutions have resonated well with customers and that we are well on our way to becoming the preferred choice for enterprise solutions.”
    Commenting on Dell’s success in the enterprise space, Venu Reddy, Research Director, IDC India, said, “The infrastructure market has been showing some positive signs in the current marketplace. This is due to some segment-specific traction and focus by key vendors like Dell. In the server market the stabilization and growth is driven by key industries like Finance & Insurance, Distribution, and Manufacturing which have driven a 12% growth year-on-year for the first 3 quarters of 2012. While in the storage market the additional momentum has come from mid-size organizations which have started investing in key infrastructure that is helping them drive faster growth and better ROI.”
    With the strongest-ever enterprise product line-up, Dell today is innovating and expanding its enterprise offerings to customers. Moving out of their legacy systems is one of the biggest challenges most Indian CIOs are faced with. Dell works closely with customers to help them move from their existing applications to newer platforms without hurting their IT budgets.
    “Dell has been our partner in data centre management and has helped us focus our resources on our business and customers instead of worrying about our IT infrastructure. Dell’s solutions in storage, servers and data centre bring more flexibility and resilience, and optimize security and costs while lowering downtime. We would like to congratulate Dell on winning the CIO award, which is a demonstration of Dell’s ability to understand and deliver on CIO needs in these changing markets.” – Rinosh Jacob Kurian, Enterprise Architect, UST Global
    “In today’s always-on marketplace and turbulent business environment, a partner like Dell is truly an asset. Dell helps us manage our datacenter and server and storage requirements to deliver better business results and market success. Over the past years of our association with Dell, they have demonstrated a strategic insight into the emerging global business scenario and have been instrumental in helping our IT department gear up to meet these challenges. Dell is truly deserving of the CIO Choice award, and we extend our congratulations and best wishes to the team at Dell.” – Subodh Dubey, Group CIO, Usha International

    BYOD trends vs. Mobile enterprise platform trends

    With the explosion of mobile computing devices there is a huge challenge on both the enterprise computing vendor and customer sides. The easiest way of looking at those challenges is to analyze the so-called BYOD (bring your own device) and mobile enterprise platform trends in the market where customers and suppliers meet each other.

    Note as well that these are all parts of a bigger trend, the so-called “consumerization of IT”, which I already covered from an overall leading-vendor point of view in the Pre-Commerce and the Consumerization of IT [Sept 10, 201] post on this site. Please read that before looking at the current trends discussed in the detailed sections below. Then I recommend reading The Changing Of The Enterprise Guard [TechCrunch, Jan 19, 2013] article by the CEO of Box.com, the most successful rising star in the enterprise IT vendor space. Even the ex-Microsoft leader Steven Sinofsky recommended it in his Twitter message as:

    Interesting thoughts on enterprise computing http://techcrunch.com/2013/01/19/the-changing-of-the-enterprise-guard/ … from Aaron @levie

    Note that I will present the BYOD trend mostly through the Middle East region, where solving the BYOD issue properly for the true enterprise space is the most pressing need in the world.


    BYOD trends

    Bring your own device [Wikipedia article, started on Jan 1, 2012]

    History

    The term BYOD first entered use in 2009, courtesy of Intel, when it recognized an increasing tendency among its employees to bring their own devices to work and connect them to the corporate network.[5] However, it took until early 2011 before the term achieved any real prominence, when IT services provider Unisys and software vendor Citrix Systems started to share their perceptions of this emergent trend.

    In 2012 the Equal Employment Opportunity Commission adopted a BYOD policy, but many employees continued to use their government-issued BlackBerrys because of concerns about billing, and the lack of alternative devices.[6]

    Issues

    BYOD has resulted in data breaches.[citation needed] For example, if an employee uses a smartphone to access the company network and then loses that phone, any unsecured data stored on the phone could potentially be retrieved by untrusted parties.[7]

    It is important to consider damage liability issues when considering BYOD. If an employee brings their personal device to work, and it is physically damaged through no fault of their own it is unclear whether the company is responsible for repair or replacement.[citation needed]

    Pros

    Business

    A business that adopts a BYOD policy allows itself to save money on high-priced devices that it would normally be required to purchase for its employees. Employees may take better care of devices that they view as their own property.[citation needed] Companies can take advantage of newer technology faster.[citation needed]

    Employees

    Employees who work for a business with a BYOD policy are able to decide on the technology that they wish to use for work rather than being assigned a company device. This is thought to improve morale and productivity.[8] Exclusive control of features is given to the employee.

    Cons

    Business

    Company information will often not be as secure as it would be on a device exclusively controlled by the company.[citation needed] (Security professionals have termed it ‘Bring Your Own Danger‘ and ‘Bring Your Own Disaster‘.[9]) The company may have to pay for employee devices’ phone service, which they use outside company time. BYOD is an extreme case of the end node problem.

    Employees

    Due to security issues, the employees often do not have true full control over their devices[citation needed], as the company they work for would need to ensure that proprietary and private information is secure at all times. It is an out-of-pocket expense for the employees. They would be responsible for repairs if their devices were damaged or broken at work.[citation needed]

    Businesses that fall under compliance rules such as PCI or HIPAA must still comply when using BYOD.[citation needed]

    Prevalence

    The Middle East was reported to have one of the highest adoption rates of the practice worldwide in 2012.[10]

    [10] El Ajou, Nadeen (24 September 2012). “Bring Your Own Device trend is ICT industry’s hottest talking point at GITEX Technology Week”. AMEinfo.com. Retrieved 26 September 2012.

    Frost & Sullivan: Consumerisation of Smart Phones and Bring Your Own Device (BYOD) are the biggest trends driving the Network Security Market in the Middle East [Frost & Sullivan press release, Nov 12, 2012]

    Dubai, the U.A.E., 21 November, 2012 – With an increase in the number of Advanced Persistent Threats (APTs), information security risks are becoming a major concern for organisations globally. Enterprises are swiftly adopting and deploying applications and new services to combat these threats. In their quest to obtain high levels of security assurance and develop advanced intelligence technologies, organisations in the Middle East are increasingly adopting methods such as virtualisation and cloud computing. Over the past few years, this has led to increased Government investment in information and communication technology (ICT)-related projects in the Middle East, and this is expected to proliferate further in the future. To address these threats to enterprise security and brainstorm best-in-class Enterprise Security Solutions and Strategies, Frost & Sullivan convened the best minds in enterprise security at its Middle East Enterprise Security Summit 2012 on November 21, at Habtoor Grand Beach Resort, Dubai, U.A.E.

    Held for the first time in the Middle East, the Summit was attended by CIOs, CISOs, CTOs, Vice Presidents, General Managers, Network Managers, Enterprise Security Architects, Internet Security Architects, Compliance Officers, and Department Heads from across a variety of industry sectors such as Banking, Finance & Insurance (BFSI); Telecom; IT; Manufacturing; Government; Education; Healthcare; Media and Entertainment; Retail; and Automotive and Logistics.

    According to Frost & Sullivan, consumerisation of smart phones and Bring Your Own Device (BYOD) are the biggest trends driving network security issues in the Middle East today. The network security market is in a high-growth stage. Frost & Sullivan anticipates that technology convergence, regulatory compliance, and continuous growth of network infrastructure will continue to drive up sales for security suppliers in the Middle East during the period 2012-2018.

    Frost & Sullivan’s Middle East Enterprise Security Summit 2012 Summit began with an inaugural address by Andy Baul-Lewis, Director, ICT Practice, Frost & Sullivan, describing the prevalent enterprise security landscape in the Middle East. “Building security for electronic assets is one of the most critical tasks facing organisations today. In a converged world, where the threats of each system are multiplied; getting advice, sharing best practice, and talking to partners is a vital part of the construction process. This is what Frost & Sullivan endeavours to provide through this interactive Summit,” stated Mr Baul-Lewis at the Summit.

    The Summit included in-depth discussions and case studies on enterprise security management. The first of these was ‘The Evolving Role of a Chief Information Security Officer’ by Roshan Daluwakgoda, Senior Director – Security Strategy Planning Risk Assessment and DR at Emirates Integrated Telecommunication Company, du, Dubai, the U.A.E. This was followed by a thought-provoking presentation on ‘Information Security Management – When the Going Gets Tough,’ by Kamran Ahsan, Head of Information Security, Injazat Data Systems, the U.A.E. Bashar Bashaireh, Regional Director, the Middle East, Fortinet, gave a presentation ‘How to Make your Security Aware in a BYOD World’. Thameem Rizvon, IT Director, Kamal Osman Jamjoom Group LLC (KOJ) presented ‘Learn from your Peers: Security Implementation in a Retail Environment’. The session ‘Securing the Cloud’ by Joe So, VP Business Sales, Huawei, was followed by a panel discussion on ‘Security Convergence and its Impact on Business.’

    Speaking on the occasion, Kamran Ahsan stated, “Information security is increasingly emerging as a critical concern in today’s modern business environment. This trend is very much evident in the Middle East, where enterprises have experienced information-related threats such as infiltration, data leakage, and cyber warfare among others. Injazat Data Systems will highlight how enterprises can proactively address these challenges and mitigate risks associated with business assets and services of enterprises. Moreover, with the best minds in enterprise security attending this Event, we expect to have an in-depth discussion of new trends and developments in information security in the Middle East.”

    Sharing his views on the Summit, Bashar Bashaireh said, “Information Technology has become central in driving the business processes of enterprises. However, as trends such as mobility, cloud computing, and BYOD are fast gaining momentum in the U.A.E., helping drive business profit and innovation; they are also bringing forth new challenges to IT security. Organisations in the U.A.E. should act now to regain control of their IT infrastructure by strongly securing their network and applying granular control over users, devices, and applications. The Summit organised by Frost & Sullivan is a great platform for us to share with end customers our insights on the new approach aimed towards IT security.”

    Talking about Securing the Cloud, Dong Wu, Vice President, Huawei Enterprise Middle East said, “As organisations roll out cloud-based models into their business infrastructure, the issue of security becomes an ever increasing concern.  The Middle East Enterprise Security Summit is a way for Huawei and other industry leaders to come together and discuss how businesses can be better secured and protected from the fast-evolving cyber threats that exist today. At the summit, we look forward to sharing our insights on how organisations can improve their planning processes before making their move into the cloud.”

    The Summit was supported by Injazat as Platinum Partner, while Fortinet and Huawei were the Event’s Silver Partners. Telecom Review, Teknotel and Connect-World Magazine supported the Summit as Media Partners; with Tech Channel MEA as the Online Partner for the event.

    If you are interested in learning more about insights shared at the Middle East Enterprise Security Summit 2012, send an e-mail to Tanu Chopra/Deepshri Iyer, Corporate Communications, at tanu.chopra@frost.com/deepshrii@frost.com, with your full name, company name, title, telephone number, company e-mail address, company website, and country.

    For more information on the Summit, please visit: http://www.frost.com/EnterpriseSecurityMiddleEast

    About Frost & Sullivan

    Frost & Sullivan, the Growth Partnership Company, works in collaboration with clients to leverage visionary innovation that addresses the global challenges and related growth opportunities that will make or break today’s market participants.

    Our “Growth Partnership” supports clients by addressing these opportunities and incorporating two key elements driving visionary innovation: The Integrated Value Proposition and The Partnership Infrastructure.

    • The Integrated Value Proposition provides support to our clients throughout all phases of their journey to visionary innovation including research, analysis, strategy, vision, innovation, and implementation.
    • The Partnership Infrastructure is entirely unique as it constructs the foundation upon which visionary innovation becomes possible. This includes our 360-degree research, comprehensive industry coverage, career best practices, as well as our global footprint of more than 40 offices.

    For more than 50 years, we have been developing growth strategies for the global 1000, emerging businesses, the public sector, and the investment community. Is your organisation prepared for the next profound wave of industry convergence, disruptive technologies, increasing competitive intensity, Mega Trends, breakthrough best practices, changing customer dynamics, and emerging economies?

    Mobile application management [Wikipedia article, started on Oct 17, 2011]

    Mobile Application Management (MAM) describes software and services responsible for provisioning and controlling access to internally developed and commercially available mobile apps used in business settings on both company-provided and “bring your own” smartphones and tablet computers.

    Mobile application management differs from mobile device management (MDM) in the degree of control it has over the managed device. As the names suggest, MAM focuses on managing applications but stops short of managing the entire device. MDM solutions manage the device down to firmware and configuration settings, and can include management of all applications and application data.[1]

    History

    Enterprise mobile application management has been driven by the widespread adoption and use of mobile devices in business settings. In 2010 IDC reported that smartphone use in the workplace would double between 2009 and 2014.[2]

    The BYOD (“Bring Your Own Device”) phenomenon is a factor behind mobile application management, with personal PC, smartphone and tablet use in business settings (vs. business-owned devices) rising from 31 percent in 2010 to 41 percent in 2011.[3] When an employee brings a personal device into an enterprise setting, mobile application management enables the corporate IT staff to download required applications, control access to business data, and remove locally cached business data from the device if it is lost, or when its owner no longer works with the company.[4]

    Use of mobile devices in the workplace is also being driven from above. According to Forrester Research, businesses now see mobile as an opportunity to drive innovation across a wide range of business processes.[5] Forrester issued a forecast in August 2011 predicting that the “mobile management services market” would reach $6.6 billion by 2015 – a 69 percent increase over a previous forecast issued six months earlier.[5]

    Citing the plethora of mobile devices in the enterprise – and a growing demand for mobile apps from employees, line-of-business decision-makers, and customers – the report states that organizations are broadening their “mobility strategy” beyond mobile device management to “managing a growing number of mobile applications.”[5]

    MAM system features

    An end-to-end MAM solution provides the ability to: control the provisioning, updating and removal of mobile applications via an enterprise app store, monitor application performance and usage, and remotely wipe data from managed applications. Core features of mobile application management systems include:

    • App delivery (Enterprise App Store)
    • App updating
    • App performance monitoring
    • User authentication
    • Crash log reporting
    • User & group access control
    • App version management
    • App configuration management
    • Push services
    • Reporting and tracking
    • Usage analytics
    • Event management
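To make the feature list above concrete, here is a minimal, hypothetical sketch (not any vendor's actual API; all names are invented) of how an enterprise app-store entry might combine two of these features, user & group access control and app updating:

```python
from dataclasses import dataclass

@dataclass
class ManagedApp:
    """One entry in a hypothetical enterprise app store."""
    name: str
    version: str
    allowed_groups: frozenset

    def can_install(self, user_groups: set) -> bool:
        # User & group access control: the app is visible only to users
        # belonging to at least one group it is published to.
        return bool(self.allowed_groups & user_groups)

    def needs_update(self, installed_version: str) -> bool:
        # App updating: flag devices running anything but the current version.
        return installed_version != self.version

# Example entry: an app published only to the finance and sales groups.
expenses = ManagedApp("ExpenseTracker", "2.1", frozenset({"finance", "sales"}))
```

A real MAM system would layer the remaining features (performance monitoring, remote wipe, push services) on top of a catalog of such records.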

    The Middle East angle #1:
    Mitigating the Risks of BYOD with MAM [ITP.net, Nov 14, 2012]

    Organizations need to decide how to manage BYOD, says Johnny Karam, Regional Director, Middle East and French Speaking Africa, Symantec

    According to a recent Symantec survey, 59% of enterprises are making line-of-business applications accessible from mobile devices in an effort to increase efficiency, increase workplace effectiveness and reduce time required to accomplish tasks.

    The average annual cost of mobile incidents for enterprises, including data loss, damage to the brand, productivity loss, and loss of customer trust, was $429,000. The average annual cost of mobile incidents for small businesses was $126,000.

    According to Symantec’s State of Mobility Survey, 67% of companies are concerned with malware attacks spreading from mobile devices to internal networks. In addition, Symantec’s latest Internet Security Threat Report highlighted that mobile vulnerabilities increased by 93% in 2011.

    To manage or not to manage:

    The first question every business must ask around BYOD is: How much management of user-owned devices connecting to corporate resources does the company want? This is critical because the degree to which an enterprise manages various aspects of user-owned mobile devices has consequences. For example, a key anticipated benefit of BYOD is no longer having to fully manage employees’ mobile devices, which in turn should reduce support costs.

    However, fully managing user-owned devices often results in intruding on the personal information and activity of those devices. This might include enforcing device-level authentication and encryption policies and complete device remote locking or wiping, including users’ personal content.

    Delivering corporate [apps and] resources

    Securing corporate [apps and] resources once they are delivered

    The Middle East angle #2:
    BYOD is not a new problem
    [Gitex Review 2012 published on ITP.net, Nov 18, 2012]

    Cloud and big data were the big talking points during GITEX Technology Week 2012. Leading UAE and global companies discuss those trends.

    Florian Malecki, head of product marketing at Dell SonicWALL, says that enterprises need to be prepared to allow employees to use their toys.

    I like to be a bit controversial about the growing BYOD trend. If you listen to the analysts (IDC, Gartner, Forrester), they are all predicting that the number of smartphones sold by 2014-2015 will outgrow the number of laptops sold.

    We all say that employees want to use their own device, but if you look at what they want to use, it is either a tablet or a smartphone, so companies and IT managers have to accommodate all users’ needs.

    We did a survey and we looked at what devices our customers are supporting or are open to support, and there is no clear winner. If you look at it from a device point of view, there are people who want to use tablets (about 60%), people who want to use smartphones and people who want to use laptops.

    How to start
    A good way to start BYOD and minimise risks is by using an SSL VPN gateway. The beauty of an SSL VPN gateway is that you are able to identify the user and the user profile, as well as identify the device and set up a profile for it. You could have a profile for a managed device, or for a personal device that is registered within the corporate ID system. Any organisation, whether an SMB or an enterprise, that doesn’t know where to start its BYOD journey will probably meet 90% of employees’ requirements by implementing an SSL VPN solution like Dell SonicWALL’s.
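As a hedged illustration of the gateway logic Malecki describes (a sketch of the general technique, not Dell SonicWALL's actual implementation; the profile names are invented), the user-plus-device-profile decision could look like this:

```python
def vpn_access_level(user_authenticated: bool, device_profile: str) -> str:
    """Decide an access tier from a user check and a device profile."""
    if not user_authenticated:
        return "deny"
    # A managed corporate device gets full access; a personal device that is
    # registered in the corporate ID system gets a restricted profile; any
    # unknown device is denied.
    tiers = {"managed": "full", "registered-personal": "limited"}
    return tiers.get(device_profile, "deny")
```

The key point of the SSL VPN approach is precisely this two-dimensional decision: identity and device posture are evaluated together before any corporate resource is exposed.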

    How to control BYOD
    The threat of personal devices on a corporate network is a big problem, according to Darren Gross, EMEA senior sales director, Centrify, and companies must be able to control information on those devices.

    Security compliance experts Centrify have released mobile device management software that assigns a single identity to each employee within an organisation, so wherever they go the company can control, through policies and security settings, where they are going and what they are doing.

    “There is a lot of competition in that space, but we are quite unique because we come from an angle of joining the system to Active Directory, so if I leave my iPad on the train, help desk can go and remotely wipe that device so there is no threat to the enterprise,” says Darren Gross, EMEA senior sales director, Centrify.

    Enterprises also need to look at mobile device configuration to prevent viruses from accessing the corporate network.

    … <LONG>

    People that use mobile devices tend to have no passcodes on them. Centrify is able to enforce passwords and encryption on a personal device accessing the corporate network.

    Cloud
    The company is also developing authentication for off-premises, cloud-based software-as-a-service applications such as Salesforce and WebEx.

    “Users will be able to sign on with one identity within Active Directory, so you control what a user is doing and see where they are going; there is full accountability for what individuals are doing within the organisation,” said Gross.

    Disaster recovery in the region
    Yasser Zeineldin, CEO, eHosting DataFort, says the company is offering regional enterprises the opportunity to develop DR sites.

    We offer clients both in UAE and the Middle East region the ability to have a hot disaster recovery site where data is replicated between their production system and the disaster recovery system that is hosted with us. This means that in real time if there is a failure in the primary system they can switch over to the secondary system.

    <BIG DATA PART OF THE REPORT>


    Mobile enterprise computing platform

    Hal’s (Im)Perfect Vision on a possible (and much needed) further direction by Microsoft :
    There is no ARM in Windows RT [Jan 2, 2013]

    Windows RT is the name of Microsoft’s version of Windows 8 for ARM processors, right?  It’s aimed primarily at Consumers, right?  Its role in business is primarily in the BYOD realm, right?  That’s so 2012!  Let’s talk about strategy and where I think Microsoft will go with Windows and particularly Windows RT.  And how their strategy may become more obvious in 2013.

    The name Windows RT wasn’t chosen to convey a message about Windows moving to ARM processors.  Nor was it chosen to convey that it was a Tablet OS.  The name appears to have been chosen primarily for one reason: it is an operating system devoted to running Windows RunTime apps.  It splits the mainstream Windows product into two families: Windows, for running Win32 “desktop” and Windows RunTime applications, and Windows RT, which drops legacy Win32 application support.  Windows RT is Microsoft’s go-forward client operating system, while Windows is the operating system Microsoft will need to keep selling and enhancing for a transition that will last a decade or more, but it will eventually be considered legacy.

    I know I just sent a lot of people’s blood pressure through the roof because today they either (a) dislike Metro/Modern/whatever-you-call-it, Windows RunTime, or the Start Screen and/or (b) the new environment isn’t really suitable for their usage scenario.  But keep in mind I’m talking about where things are going over several releases of the re-imagined Windows.  There will be many refinements, improvements, and changes before Windows RT replaces Windows as Microsoft’s primary client operating system offering.

    The desktop lives forever, right?  Well, on Windows yes, but not on Windows RT.  Today Windows RT only needs the desktop for two reasons.  First, many traditional utilities from the File Explorer to much of system management are only available as desktop apps.  Second, Microsoft Office is only available as desktop apps.  But in each release going forward this will become less true.  A Metro File Explorer will become standard.  More and more system management will move to the new model.  And eventually Microsoft will remove the desktop from Windows RT.  Then it will be able to remove many pieces of legacy (including Win32), making Windows RT smaller, faster, and more secure (via a smaller attack surface) than its Windows sibling.

    Microsoft started the ball rolling with Windows RT on ARM because that was the most practical thing to do.  With ARM unable to run existing x86 apps, Microsoft had to decide if it would evangelize conversions of existing applications to ARM or put the energy into getting developers to write new Metro/Modern apps.  And without a library of Modern apps it was unlikely that any of the x86-oriented OEMs would create an x86 Windows RT system.  No rational amount of pricing difference on Microsoft’s part would encourage an OEM to use an operating system with no applications when they could just as simply use one with a huge, if aging, library.  ARM thus became the obvious place to introduce Windows RT.

    As the library of applications in the Windows Store grows it becomes more and more likely that Microsoft will introduce Windows RT for x86 systems.  Will that happen in 2013?  By the end of 2013 the Windows Store will likely have in excess of 150,000 Apps.  Perhaps in excess of 200,000.  Assuming that the quality is there (meaning they are the apps people want and are equal to their iPad and Android equivalents) the market for systems with no need to run legacy desktop apps will have grown dramatically.  Microsoft, many of its OEMs, and Intel (of course) will want the option of using Clover Trail (and its follow-ons) in those systems.  So it is quite possible that Microsoft makes Windows RT available for Clover Trail-based systems in 2013, and it seems a certainty for 2014.

    As a side note this is something that Paul Thurrott will probably not be happy about.  Paul has called on Microsoft to use Clover Trail in its next generation of the Surface so that it would have the full Windows experience.  But I expect that if Microsoft did use Clover Trail in a Surface (as opposed to Surface Pro) replacement that system would still run Windows RT.  Sorry Paul :-)

    If Windows RT for x86 is speculative in 2013 here is something I think is a surer bet.  Windows RT will expand into a family that mirrors the editions of Windows.  I expect that in 2013 we will see a Windows RT Enterprise (and perhaps Pro as well) edition.  Why?  Well the current edition of Windows RT is missing some key functionality that would accelerate its adoption within Enterprises.  And I’m not even talking about UI or Windows RunTime changes that would increase the application space it was applicable to.  I’m talking purely about lower level operating system features.

    Being able to participate in a domain is part of Microsoft’s secret sauce for enterprises, and today Windows RT can’t do that.  A Windows RT Enterprise edition would bring the ability to join a domain, use DirectAccess, use BitLocker, fully participate in Microsoft’s management capabilities, etc.  Whereas the solutions introduced in 2012 are acceptable for BYOD situations and some limited application scenarios, an Enterprise edition would allow Windows RT systems to participate as full members of the enterprise computing environment.

    Windows RT Enterprise will not allow side-loading of desktop applications, but it may allow side-loading of limited types of system software.  As great as DirectAccess is (and given my involvement in it I’m biased, but then I also lived with it as my “VPN” for a year, so I know how fantastic the user experience is), most enterprises use Cisco VPNs.  And while Windows RT is certainly adequately protected with Windows Defender, IE SmartScreen, etc., most enterprises will want at least the management capabilities of enterprise-oriented security products, and probably the ability to use their corporate standard (i.e., Symantec, McAfee, etc.) products and infrastructure.  Unless Microsoft addresses these issues, adoption of Windows RT will be much slower than desired.

    And what about requirements for access to desktop applications on Windows RT systems?  Many, perhaps most, enterprises are fine with using VDI to allow users of these systems to access desktop applications.  Some are downright enthusiastic.  But many do not want that access occurring off their corporate network.  Hence the need for the ability to join a domain, and use DirectAccess or VPNs when users need remote access.  You then run VDI over the corporate network.

    Now we get to another wildcard in all of this, Office.  Today’s situation with Office being a desktop Win32 application on Windows RT, and only being available in the Home and Student edition, represents a major drag on Microsoft’s ability to move Windows RT forward.  Microsoft needs to either allow upgrade of the edition of Office on Windows RT to an Enterprise edition (including, for example, making Outlook available) or to move Office fully to Metro/Modern (likely in multiple editions).  They may do both given the time it could take to create a true Office RT.

    An Office RT would benefit the entire Windows RT and Windows 8 market and is the logical direction for Office to go.  But I find it hard to believe they can get to full equivalence with the Win32 Office apps in a year, let alone in a traditional longer release cycle.  We’ll see some, perhaps substantial, movement in this direction in 2013, but I don’t know how far Microsoft will get.  In the meantime they may find it prudent to release Office 2013 Enterprise (standalone and/or as part of Office 365) for Windows RT systems.  However this rolls out, Microsoft will substantially improve the Office for Windows RT situation in 2013.

    Finally, let me reinforce a point I’ve blogged about before.  Microsoft is moving to annual (or more frequent) updates as a (at least unofficial) corporate standard for release cycles.  There may be exceptions from time to time, but I’d expect pretty much every actively developed product to have annual releases.  That means faster evolution in smaller chunks is the norm.  You don’t like how the Start Screen works today?  By the end of the year there will no doubt be improvements that address major complaints.  Windows RunTime missing an API that keeps you from creating a Metro/Modern version of your App?  You might have it later this year.  Can’t stand that the Share contract doesn’t work with Outlook?  Again, a solution may appear faster than Microsoft customers have ever imagined possible.

    2012 was an exciting year for Microsoft and its customers.  2013 may be even more exciting, and delightful.

    But there are new contenders for the enterprise IT space that are not based on either earlier paradigm: neither on the enterprise desktop and notebook evolved from the PC platform (like Microsoft’s Professional and especially Enterprise editions of Windows), nor on the web-browser-based enterprise thin client evolved from the web platform (from the Java-like, Apex-code-programmable Force.com PaaS platform, usable along with standard HTML, JavaScript and CSS in the browser, to a wide range of “enterprise quality” JavaScript frameworks, some with versions even for mobile browsers).

    A typical new contender, which differs from both earlier platforms in that cloud-based file sharing by its very nature can best exploit the power of new mobile computing devices, is the Box (service) [Wikipedia article, started on Nov 15, 2006]

    Box (formerly Box.net) is an online file sharing and cloud content management service for enterprise companies. … A mobile version of the service is available for Android, BlackBerry, iPhone, iPad, WebOS, and Windows Phone devices.[4]

    Products

    The core of the service is based around sharing, collaborating, and working with files that are uploaded to Box. Box offers three account types: Enterprise, Business and Personal.[12] Depending on the type of account, Box has a number of features such as unlimited storage, custom branding, administrative controls and third-party integrations with applications like Google Apps, Gmail, NetSuite and Salesforce. The service also has a variety of social features such as discussions, groups and an update feed.

    Sharing

    Box is a file sharing network that saves and stores the information uploaded by the customer to its web site. Box has the full legal right to disclose demographic information about its customers, sales, and traffic to its partners and advertisers. Even though the company does not have the right to give, sell, rent, share or trade any personal information uploaded to its web site by its customers unless the account owner consents, a third party may be able to view some information. Terms and policies have been set forth to protect the web site as well as its customers, and to establish a fully functioning, informative and well organized sharing network.[22]

    With the user’s consent, they can choose to share their private details with other customers, such as:[22]

    Your name, email address, photo, and profile information

    Chosen files to share, where comments can be made and others can contact the user by email. People you invite as editors can also edit your shared files, upload documents and photos to your shared files, share those documents outside of Box, and give other users rights to view your shared files.[22]

    On the website its platform services for Enterprise IT are described in the following framework:

    Consolidate File Services: Consolidate All Your Content Services on Box

    Box – the single, secure solution for content access, sharing and collaboration – lets you replace a myriad of file transfer systems and unsecured, consumer-focused tools like YouSendIt and Dropbox. Bottom line: You reduce content silos, lower costs and give users the simplicity and functionality they want with the security IT requires. Learn more

    • Replace NFS, FTP, MFT and consumer file-sharing and sync tools
    • Streamline system administration and reporting
    • Reduce IT resource requirements while effortlessly meeting increasing storage needs

    Enterprise Mobility: Support Mobile Content Management

    Box works with any mobile device, giving remote workers access to critical content they need to succeed. Simultaneously, Box features a comprehensive and sophisticated security suite, and its seamless integration with third-party mobile device management tools like Good Technology and MobileIron provides an additional layer of data protection. Learn More

    • Users get anywhere, anytime access to critical content; and that content is synced across all their devices
    • IT enjoys remote device management coupled with auto logout and locking while sanctioning the use of specific mobile devices and apps
    • IT also gains a new level of content visibility, with insight into how content is managed and accessed in the organisation – and beyond

    Cloud Content Management: Discover Content Management in the Cloud

    As a Web-based service, Box is up and running in minutes and deployed in days. There’s no hardware to maintain or software to update, and it complements existing content management platforms. Learn more

    • Start working in the cloud immediately: no on-premise installation, provisioning, maintenance or DMZ setup
    • Enable employees to access and share enterprise content quickly and securely, both internally and with external partners and vendors
    • Significantly lower hardware and storage costs

    Security and Architecture: Ensure Your Corporate Information is Secure

    It’s true: Box is a leader in content management security and makes ongoing investments in the safety of our data centres and corporate operations. Box has been issued an SSAE 16 Type II report, and our solution also features Safe Harbor certification and provides easy-to-use configuration tools, so you can tailor Box to meet your security requirements. Learn More

    • Global permission controls and detailed audit trails
    • Full data encryption plus data centre backups and redundancy
    • Guaranteed 99.9% uptime

    The Box Platform: Extend Box With Our Platform and Integrations

    Box is more than just a Web application; our comprehensive yet flexible platform lets you easily integrate, extend and customise your cloud deployment. Connect Box to the leading SaaS applications you already use, integrate it into your IT infrastructure or build apps designed to do whatever your business needs. Learn more

    • Easily connect to other business applications like Salesforce, NetSuite and Google Apps
    • Extend Box to meet additional needs with our 120+ Box Apps including eFax, DocuSign, FedEx and mobile Box Apps like Quickoffice
    • Create custom mobile, Web and desktop applications powered by Box

    Professional Services: Deploy Easily With Professional Services

    Our Customer Success team offers a comprehensive range of professional and client support services, from end-user training to systems integration and performance tuning. Learn more

    • Content migration services transfer your existing data to Box quickly and securely
    • Custom implementation road maps streamline deployment across the enterprise
    • A dedicated Customer Success representative gives you the responsive, personalised support you deserve

    The current state was described in Box Platform: Announcing v2 API in GA and Year in Review [on box blog by Chris Yeh, VP of Platform, Dec 14, 2012]

    2012 has been an amazing ride for the Box platform, and I’m excited to announce that we’re ending the year on a high note with the general availability of the Box v2 API. Since its first beta release back in April, we’ve made tremendous strides to bring our partners, developers and customers a simple, elegant and intuitive API that will power the next generation of business collaboration.

    Our v2 API represents a major step forward for Box. It is RESTful, implements the OAuth2 spec to standardize user authentication, has much improved error handling and is well documented. Our Platform Manager Peter Rexer has a deep dive into all the details of the v2 API here. We’re also introducing Box developer accounts, which offer developers access to all of Box’s enterprise features through both the Box web app and the API. In celebration of our new API, we’re offering 25GB of Box free for any developer account created before January 18, 2013.
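    As a rough illustration of the RESTful, OAuth2-based style described above, here is a minimal Python sketch that builds an authenticated request for the root folder of a Box account. The access token is a placeholder, and the snippet deliberately stops short of actually sending the request; treat it as a shape-of-the-API sketch, not Box's official client code.

```python
import urllib.request

API_BASE = "https://api.box.com/2.0"

def make_box_request(path, access_token):
    """Build an authenticated request against the Box v2 REST API.

    The v2 API authenticates with an OAuth2 bearer token carried in the
    Authorization header of each request.
    """
    return urllib.request.Request(
        url=f"{API_BASE}{path}",
        headers={"Authorization": f"Bearer {access_token}"},
    )

# Example: a request for the root folder ("0" is Box's id for "All Files").
req = make_box_request("/folders/0", access_token="YOUR_ACCESS_TOKEN")
print(req.full_url)                     # https://api.box.com/2.0/folders/0
print(req.get_header("Authorization"))  # Bearer YOUR_ACCESS_TOKEN
```

    Sending the request with `urllib.request.urlopen(req)` would return the folder's JSON representation, with the improved, structured error responses the post mentions arriving as HTTP error codes plus a JSON body.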

    API Momentum in 2012

    Our new API is being launched at a time of tremendous platform growth for Box. In 2012, every API metric that we tracked grew significantly. Here’s just a sample of some of the massive traction we’re seeing with the Box API:

    • 129%: growth in third party developers using Box
    • 140%: growth in number of third party API calls per month
    • 133%: growth in apps in the Box Apps Marketplace
    • 200%: growth in number of weekly users of third party apps on Box

    Of course, we wouldn’t have seen such strong platform growth and API engagement without the efforts and work driven by the amazing Box platform team and our ecosystem of third party developers. We built industry-first products including Box OneCloud and Box Embed, travelled the world meeting amazing companies along the way and got together as a community to hack some pretty cool projects. Here’s a brief look back at an amazing 2012.

    Box OneCloud

    In April, we introduced Box OneCloud for iOS, the first mobile cloud for the enterprise. OneCloud helps you discover useful apps that are deeply integrated with Box for common business productivity tasks like document editing, PDF annotation and e-signature. We launched on iOS with 50 apps and shortly thereafter brought OneCloud to Android. By year’s end we’ll have nearly 300 OneCloud app integration partners across both iOS and Android, and 40% of Box’s Fortune 500 customers are using Box OneCloud.

    Box Platform on the Road – New York & London

    In New York this spring, we announced our v2 API in beta, 100 new OneCloud apps and partnerships with General Assembly and TechStars. We welcomed over 650 attendees to Skylight West to hear from Box CEO Aaron Levie, Take Two Interactive CEO Strauss Zelnick and former Editor-in-Chief of Wired Chris Anderson. Later, everyone danced to cool tunes spun by Elijah Wood. Our friends in New York include the Bizodo team, which makes a great form-filling app that puts content into Box. We also hung out with the Handshake team, which created a rich order-taking app useful in many business and retail settings. When the Handshake logo appeared on our OneCloud billboard on the 101, they tweeted that it was the startup equivalent of your voice dropping. One of the most interesting things about New York is the concentration of enterprise-focused startups. For example, we’re really pleased to support Jonathan Lehr’s NY Enterprise Technology Meetup and Nick Gavronsky’s New York City Startup Weekend, which just occurred last weekend.

    In late August, we parachuted into the middle of Carnival week in London to talk to analysts, press, London-based startups and supporting government organizations. We hosted a developer meet-up at Shoreditch House and were awestruck by the energy in London, particularly in Tech City. We spent time in Google’s shared space in London, where we first met Ben Wirtz, CEO of Unifyo, which brings together multiple sources of customer data to provide enterprises with a singular view of customers. We wandered down to Chelsea to meet Will Lovelace, CEO of Datownia, a company that allows the easy translation of Excel spreadsheets into APIs for external consumption. And we visited the lofty digs of the Chelsea Apps Factory, a super high quality app consultancy and production company.

    It’s great to meet with so many wonderful people and even better when you can get together and build some really cool things.

    Box Hack Event

    Full disclosure, our first public hack event at Box HQ was not intended to be thematically linked to astronauts shooting each other, but that’s another story. At this event, called “Redefine Work,” 150 hackers stayed overnight creating more than 40 contest entries. Participating technology partners included TokBox, Firebase, Mashery, Twilio, Parse, Iron.io and SendGrid. Our winning hack, called OMGHelp, is an application that improves the technical support experience by allowing a customer to use a smartphone camera to show a technical support person what they’re doing. If you’re interested, here is a really nice recap of our event that was created by Mashery’s Neil Mansilla on Storify.

    We closed out our active year in October with…

    BoxWorks Dev Day

    At BoxWorks, we announced a brand new technology that lets you quickly and easily extend the full Box experience anywhere you work. We call it Box Embed, our robust HTML5 framework for adding Box directly into the user interfaces of other applications. We launched with ten partners, including NetSuite, Jive, Concur, Oracle and others, and we plan to keep adding to that number. Box Embed is particularly exciting to us because it’s one of the easiest ways for our partners to help make the content you have stored on Box accessible from anywhere.

    We also ran an un-conference-like Developer Day where hundreds of developers joined us to hear about the latest web development technologies and learn about enterprise development. We ran a well-attended startup camp with Boxers from various departments (design, sales, marketing in addition to developer evangelists) providing consulting. And we concluded with one of my favorite reporters/writers, Drew Olanoff of TechCrunch, interviewing one of my favorite “startup” CEOs, Jeff Lawson of Twilio, about the ways that developers should think about using APIs in their apps.

    We were fortunate to have many of our platform partners join us at BoxWorks this year. Jesse Miller and the attachments.me team met with Box customers on the main show floor. David Klein and the SlideShark team presented in one of our sessions, as did Milind Gadekar from CloudOn.

    As you can see, we’ve had an amazing year. Thanks to all of our platform partners, big and small, for working with us. We look forward to reaching the next level in the new year.

    2013: Looking Forward

    As 2013 approaches, we’re working on making it even easier for developers to work with Box by focusing on our SDKs and other developer tools. We’re also excited to be building new platform products. On one front, we’re working on new developer-focused metadata tools. On another, we’re looking at allowing developers to hook into workflow products that will allow content to move through Box in various business flows.

    We’re sure that it will be a fun ride. Happy holidays to all and we’ll look forward to working with you in 2013!

    Regarding the most demanding enterprise customers of Box.com, here are a few excerpts from Why Box.com is king of enterprise cloud storage [CNET, May 15, 2012]

    It may be known to some as the Dropbox-for-the-enterprise, but Box.com could be forgiven for insisting on its own identity.

    With more than 120,000 customers, including 82 percent of the Fortune 500, the company has made a name for itself as one of the leaders in the enterprise cloud storage and data management space. And though Box.com has Microsoft, and more recently, Google breathing down its neck, CEO Aaron Levie doesn’t appear the least bit nervous.

    That may be because the company has spent seven years building its business and solidifying a technology platform that gets more sophisticated — and cost-effective — every day. And as it has evolved into occupying a sizable Silicon Valley building, and employing more than 400 people, Box is now setting its sights on new businesses, including providing customers with the infrastructure on which to build cloud-based applications.

    Last week, the 27-year-old Levie sat down with CNET in a conference room at Box.com headquarters for an interview about the state of his company, the competitive landscape in the cloud storage and service space, and even the value of wearing a hoodie in a meeting with potential investors.

    How do you pitch Box.com to customers?
    Levie: So many different kinds of businesses out there are all going through the exact same challenge and transition. It’s almost counterintuitive how predictable everybody’s situation is. Because whether you’re in construction or finance or real estate or consumer or media tech, every CIO we talk with, and these are companies of 5,000, or 10,000, or 50,000 employees, is going through the same kind of transition, and they’re at the same junctures as organizations, where they have decades of legacy technologies that they’re still managing. And it’s: how am I going to build an IT and technology strategy for the next five to ten years? And often, if you look at how vast the change in the landscape has been, the technology strategy they’re going to end up with is very different from the one they just came from.

    So what is Box.com?
    Levie: The vision of Box is to make it easy for customers to share, manage, and access information from anywhere. That means we need lots of different kinds of technologies to make that happen, including technology that will sit on your iPhone, your Mac, your Android device or your Blackberry. And we just announced something with Nokia with their Windows Phones and tablets. We’re a 100 percent enterprise-focused company, and all the technology we’re building goes towards asking how do we make it easier or more scalable, or simpler, and just a better way for businesses to share and manage and access this data.

    Any regrets on being 100 percent enterprise?
    Levie: God, no. Our thesis is basically that if you look at the cost of storage, it goes down roughly about 50 percent every 18 to 24 months. So our hard costs are about a tenth of what they were when we started the company seven years ago. And you can predict that in the next five to ten years, we’ll have another 10x improvement in storage density and performance. Eventually you’ll get to a point where storage is infinite and free, because companies like Google, and Microsoft, and Apple can essentially subsidize the cost of storage for their consumers because it’s so cheap and the value of keeping people locked into their system is so great for them. But in the enterprise, storage is critically important, so we had to give people lots of space, but what you pay for is the security, the platform value, the collaboration, and the integration into your enterprise, and this is where we can build differentiated technology instead of just being measured on how much storage we give you and at what price.
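    Levie's arithmetic checks out. As an editorial back-of-the-envelope verification (the halving periods are his stated range, not Box figures), halving hardware cost every 18 to 24 months over the company's seven years does land at roughly a tenth of the original cost:

```python
# If storage hardware cost halves every 18-24 months, what fraction of the
# original cost remains after Box's 7 years in business?
months = 7 * 12  # 84 months since the company was started

for halving_period in (18, 21, 24):  # months per halving, spanning Levie's range
    remaining = 0.5 ** (months / halving_period)
    print(f"halving every {halving_period} months -> {remaining:.3f} of original cost")
```

    The three scenarios give roughly 0.04, 0.06 and 0.09 of the original cost, so "about a tenth" is, if anything, conservative.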

    Who poses the biggest threat to your business?
    Levie: I would say Microsoft knows the most about the enterprise of any of these players. Google has a phenomenal brand, but it’s getting to be a broader brand, because it’s everything from your wallet to your car to your TV to your phone. The other thing that gets lost in the entire conversation because Google and Microsoft and Apple are so aggressive about this space, is the big transition companies are going to do from Oracle, IBM, EMC, and a lot of these traditional enterprise infrastructure players. Because as these dollars, and as your computing goes to the cloud, it moves away from implementing on-premise systems. It’s not going to be that Dropbox or Apple or Google loses. It’s going to be a lot of the legacy systems that we were spending lots of money on. As the $290 billion enterprise software market moves to the cloud, an entire new landscape of players and vendors are going to be the beneficiaries of that, unless these legacy vendors really get their act together.

    Exynos 5 Octa, flexible display enhanced with Microsoft vision et al. from Samsung Components: the only valid future selling at CES 2013

    [5:30 – 5:39] of the video embedded in ‘Details’ section below:
    Samsung Components [the proper name is Device Solutions Division, Samsung Electronics]: a $16B operation just for Q3 2012 alone.

    WTF are 8 cores for? And how will the mobile battery cope with that? The fundamental (purely technical) answers to both questions (objections) are:
    [24:00 – 24:50] of the video embedded in ‘Details’ section below:
    demo and illustration of the big.LITTLE
    Warren East, CEO, ARM:

    [24:57] It is providing roughly twice the performance of today’s leading edge smartphones at half the power consumption when running common workloads [25:07]

    The following illustration should prevent the (unfortunately) quite typical misunderstanding about the Exynos 5 Octa having 8 cores: in fact only 4 cores are active at any time, with the two quads used for different workloads:

    WTF is a flexible display for?
    [48:53 – 54:00] of the video embedded in ‘Details’ section below:
    How Microsoft is using Samsung components to enhance their solutions, Eric Rudder, chief technical strategy officer, Microsoft:

    [51:37] We actually have a prototype of Windows Phone and how it would look on one of those screens [51:41]

    [51:41] And Microsoft’s vision is that sensors like Kinect, combined with flexible, transparent and projected displays, will bring us to a point where any object can be a surface and can be a computer. I’d like to close with a short video from Microsoft Research which extends interactivity to every surface in your living room. Last year you may have seen some videos with precomputed projections. What we’re demoing today is both real-time and fully interactive. And while you may find it hard to believe, the footage shown here is exactly what appeared in the lab, without any special effects being added. Some companies talk about a reality distortion field; we’ve actually built one. [52:32]

    [52:35 – 53:20] IllumiRoom Projects Images Beyond Your TV for an Immersive Gaming Experience [MicrosoftResearch YouTube channel, Jan 8, 2013]

    IllumiRoom is a proof-of-concept Microsoft Research project designed to push the boundary of living room immersive entertainment by blending our virtual and physical worlds with projected visualizations. The effects in the video are rendered in real time and are captured live — not special effects added in post processing. IllumiRoom project was designed by: Brett Jones, Hrvoje Benko, Eyal Ofek and Andy Wilson

    [53:24] This is just a glimpse of what our future may hold in store for us. We’re excited that this technology can be used in many different ways: to enhance a TV or movie experience, or increase the reality of a flight simulator, or make educational scenarios more exciting. We look forward to our continued partnership with Samsung to deliver the next generation of devices and services. [53:49]


    Details

    <CES 2013 “warm-up” clips, worth skipping> [3:10]
    <Gary Shapiro intro, might be skipped> [6:00]

    Samsung Exynos 5 Octa & Flexible Display at CES 2013 Keynote [SamsungTomorrow YouTube channel, Jan 9, 2013]

    Samsung introduced its Exynos 5 Octa, Green Memory Solution, Flexible OLED and Green LCD at CES 2013. This is the CES 2013 keynote speech, themed ‘Mobilizing Possibility’, presented by Dr Stephen Woo, President of Device Solutions Business for Samsung Electronics. He talks about how Samsung’s innovative components technology has been bringing the future into the present at CES 2013.

    Samsung Highlights Innovations in Mobile Experiences Driven by Components, in CES Keynote [Samsung press release, January 9, 2013]

    Samsung’s President Introduces Broader Partnerships, New Products and the Possibilities They Enable

    LAS VEGAS–(BUSINESS WIRE)–Samsung Electronics Co., Ltd., a world leader in advanced semiconductor solutions, today redefined the story of consumer electronics from its perspective beneath the surface of mobile devices at the 2013 International CES keynote address.

    “When you want multiple applications to perform at their best, you want the best application processor currently available—the Exynos 5 Octa.”

    Dr. Stephen Woo, president of System LSI Business, Device Solutions Division, Samsung Electronics, shared the company’s vision of “Mobilizing Possibility,” highlighting the role of components as the engine behind innovation across the mobile landscape. The keynote event illustrated possibilities that Samsung envisions offering through its component solutions, and introduced new products that will herald such expectations.

    “We believe the right component DNA drives the discovery of what’s possible,” said Woo. “Components are building blocks—the foundations on which devices are built. We at Samsung’s component solutions are creating new, game-changing components across all aspects of devices.”

    Guests from partnering companies, such as Warren East, chief executive officer, ARM; Eric Rudder, chief technical strategy officer, Microsoft; Trevor Schick, senior vice president, enterprise group supply chain procurement, HP; and Glenn Roland, vice president and head of new platforms and OEM, EA; also took part in the event, echoing Samsung’s mission to offer breakthrough products and create shared value (CSV) for both manufacturers and end-users.

    Woo opened by presenting Samsung’s goal for Mobilizing Possibility that takes big ideas off the drawing board and brings them to life for end-users, especially in the areas of processing performance, energy-efficient memory solutions and display technology. He emphasized that the limitless possibilities presented by consumer electronics will be based on component innovations by the company.

    Processing Power

    The first of Samsung’s new products announced at the keynote was the Exynos 5 Octa, the world’s first mobile application processor to implement the ARM® big.LITTLE™ processing technology based on the Cortex™-A15 CPU. Following the Exynos 5 Dual, which is already on board market-leading products such as the Google Chromebook and Nexus 10, the successor is the newest addition to the Exynos family of application processors.

    “The new Exynos 5 Octa introduces a whole new concept in processing architecture…designed for high-end smartphones and tablets,” said Woo. “When you want multiple applications to perform at their best, you want the best application processor currently available—the Exynos 5 Octa.”

    To expand on the big.LITTLE concept, Warren East, chief executive officer, ARM, joined Woo on stage and introduced the new technology that has just become available in silicon through the Exynos 5 Octa. Housing a total of eight cores to draw from—four powerful Cortex-A15™ processors to handle processing-intense tasks along with four additional Cortex-A7™ cores for lighter workloads—the application processor offers maximum performance and up to 70 percent higher energy efficiency compared to the previous quad-core Exynos.
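    The 4+4 arrangement described above can be caricatured in a few lines: a deliberately simplified routing policy that sends a task to the Cortex-A15 or Cortex-A7 cluster depending on its recent load. The threshold value and function names are illustrative assumptions for this sketch only, not ARM's actual migration logic or any kernel tunable.

```python
LITTLE, BIG = "Cortex-A7 cluster", "Cortex-A15 cluster"
MIGRATION_THRESHOLD = 0.6  # illustrative load level, chosen for the sketch

def pick_cluster(recent_load):
    """Route a task to the power-efficient or the high-performance cluster.

    With cluster migration the switch happens transparently below the OS,
    so software sees at most four cores at a time -- which is why the
    'octa' in Exynos 5 Octa is best read as 4+4, not 8 concurrent cores.
    """
    return BIG if recent_load > MIGRATION_THRESHOLD else LITTLE

print(pick_cluster(0.2))  # light task (e.g. email sync) -> Cortex-A7 cluster
print(pick_cluster(0.9))  # heavy task (e.g. 3D game) -> Cortex-A15 cluster
```

    Running light workloads on the small cores and reserving the big cores for bursts is exactly where the quoted "twice the performance at half the power" figure comes from.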

    Glenn Roland, vice president and head of new platforms and OEM, EA [Electronic Arts], helped Woo demonstrate the processing power of the Exynos 5 Octa by showing off one of EA’s latest 3D racing games, Need for Speed™ Most Wanted. Atop the reference device, the application processor delivered an elevated real-life gaming experience within the mobile platform, rendering stunning graphics performance and real-time response speed.

    Green Memory Capabilities

    As the advanced processing power of mobile devices makes it ever easier for the masses to create data, the mobile experience will increasingly depend on the datacenters largely responsible for carrying the proliferating data traffic. Growing in size and capacity, IT systems face challenges in both performance and power savings to remain sustainable moving forward. Memory devices, the main products for the servers that make up these datacenters, can deliver substantial gains by adopting cutting-edge technology available from Samsung.

    Woo pointed out that managing the power consumption of these datacenters has become crucial, and that Samsung’s green memory solutions with solid state drives (SSD) and advanced DRAM (dynamic random access memory) address this key issue with their powerful yet energy-efficient processing capabilities. Compared to traditional datacenters that incorporate hard disk drives (HDD), server and storage solutions equipped with green memory speed up data processing six-fold while operating with 26 percent less electricity.
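    Taking the press-release figures at face value, the two claims combine into a performance-per-watt gain that is worth spelling out (an editorial back-of-the-envelope check, not a number Samsung quotes):

```python
speedup = 6.0            # "six-fold" data processing speed vs. HDD-based setups
power_ratio = 1 - 0.26   # "26 percent less electricity"

# 6x the work for 0.74x the power:
perf_per_watt_gain = speedup / power_ratio
print(f"~{perf_per_watt_gain:.1f}x performance per watt")  # ~8.1x performance per watt
```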

    Display Technology

    As components on the surface that interact directly with users, display solutions bring the technology advancements to life and make them tangible through the device interface. Woo presented the future possibilities of Samsung’s displays along with Brian Berkeley, senior vice president of Samsung Display. While crystal-clear picture qualities become a reality, the two Samsung speakers were pleased to share that the innovations do not sacrifice energy efficiency.

    Woo and Berkeley described the 10.1-inch liquid crystal display (LCD) panel that is currently adopted by the Nexus 10. With a 2560×1600 resolution and 300 pixels per inch (ppi), the panel renders stunning picture qualities while consuming only 75 percent of the energy used in previous display solutions.

    Using Samsung’s energy-efficient green LCD technology, the company is currently developing a 10.1-inch model that would lower power consumption by a further 25 percent, while offering the same resolution as its predecessor.

    Prototypes and real-life scenarios for Samsung’s line of flexible organic light emitting diode (OLED) displays were also showcased, promising various mobile application opportunities for consumer electronics manufacturers. Dubbed “YOUM,” the flexible display line-up uses extremely thin plastic instead of glass, making it bendable and virtually “unbreakable.” Berkeley featured a smartphone prototype equipped with a curved edge that showed contiguous content along the side of the device.

    “Our team was able to make a high-resolution display on extremely thin plastic instead of glass, so it won’t break even if it’s dropped,” said Berkeley. “This new form factor will really begin to change how people interact with their devices, opening up new lifestyle possibilities … [and] allow our partners to create a whole new ecosystem of devices.”

    One of Samsung’s partners that bring the company’s state-of-the-art components together is Microsoft, adding more layers of value to the final product with its software solutions, devices and services. Eric Rudder, chief technical strategy officer, Microsoft, took the complete ATIV family of devices as an example through which Samsung’s component solutions and Windows 8 together present new potential in user interfaces. Rudder reported that Microsoft Research has been continuing its work on next-generation display technologies, enabling new modes of human-computer interaction.

    Possibility for All

    Creating a better world with its resources is one of Samsung’s core values. Samsung’s flagship corporate social responsibility initiative, Samsung Hope for Children, was launched in this spirit; through it the company provides its products, expertise and financial support to tackle children’s needs for education and healthcare around the world. Woo emphasized that Samsung’s innovations in components share the same thread, as a driver that truly mobilizes possibility without boundaries or barriers.

    “When [Samsung’s] technologies harmonize, amazing things happen. Advances in components are giving rise to a whole new era of possibility,” said Woo. “At Samsung, we are passionate about Mobilizing Possibility. Not just for the privileged few, but possibility for all.”

    For more information about Samsung’s 2013 International CES keynote, visit www.samsung.com/2013ceskeynote or www.samsungces.com.

    About Samsung Electronics Co., Ltd.

    Samsung Electronics Co., Ltd. is a global leader in consumer electronics and the core components that go into them. Through relentless innovation and discovery, we are transforming the worlds of televisions, smartphones, personal computers, printers, cameras, home appliances, medical devices, semiconductors and LED solutions. We employ 227,000 people across 75 countries with annual sales exceeding US$143 billion. Our goal is opening new possibilities for people everywhere. To discover more, please visit www.samsung.com.

    ARM TechCon 2012 – Warren East, CEO ARM Keynote [ARMflix, Nov 2, 2012]

    Warren East, CEO of ARM gives industry keynote at TechCon 2012 Presentation Title: Low-Power Leadership for a Smarter Future

    More essential details:
    Cortex-A7 OR Low-Power Leadership for A Smarter Future – The Legend of ARM Cortex-A7 [USD 99 Allwinner, Jan 7, 2013]
    Fast 3rd party IP OR the external Intellectual Property which makes Allwinner’s unprecedented pace of next-gen SoC introductions possible despite the company’s size of only 500 employees [USD 99 Allwinner, Dec 28, 2012]
    Samsung Exynos 5250 [Dec 6, 2011]
    – for Samsung’s semiconductor foundry operation: see inside Qualcomm’s critical reliance on supply-constrained 28nm foundry capacity [this same ‘Experiencing the ‘Cloud’ blog, July 27 – Nov 13, 2012]
    Intel targeting ARM based microservers: the Calxeda case [this same ‘Experiencing the ‘Cloud’ blog, Dec 14, 2012]
    Intel’s biggest flop: at least 3-month delay in delivering the power management solution for its first tablet SoC [this same ‘Experiencing the ‘Cloud’ blog, Dec 20, 2012]

    Windows RT must work with more chips to take off, ARM CEO says [CNET, Jan 9, 2013]

    LAS VEGAS — Microsoft’s newest operating system that runs on cell phone chips is off to a slow start, but it’s only a matter of time before it gains more traction, the chief executive of chip technology designer ARM Holdings said.

    Warren East, speaking today in an interview with CNET at the Consumer Electronics Show in Las Vegas, said that for that to happen, Microsoft needs to make its software, dubbed Windows RT, work with more ARM-based processors. He said it eventually will do so, but it’s unclear when that will be.

    Currently, Windows RT runs only on Qualcomm and Nvidia chips (it also used to work with Texas Instruments’ processors, but that company decided to move away from providing chips for mobile devices). And only four PC makers ultimately built Windows RT products.

    “If Microsoft wants to benefit from the ARM business model and the ARM world, then they’ll have to support multiple players,” East said. “Otherwise, there’s no real advantage for them in working with ARM.”

    East today noted that when Microsoft first started talking with ARM about making a tablet/PC operating system that works with its processors, Microsoft wanted to work with only one ARM-based chip partner.

    “We said, ‘no, no, you need to work with a few, because we have found over the years it helps to work with a few, or otherwise you end up getting too channeled into the requirements of one customer,’” he said.

    Microsoft Research at CES: IllumiRoom [Next at Microsoft blog, Jan 9, 2013]

    Earlier this morning at CES, Eric Rudder, Microsoft’s Chief Technology Strategy Officer, joined the Samsung keynote to share Microsoft’s vision for extending computing interactions to any surface in your home. This wasn’t a product launch but I’m excited by the potential shown in the research that we shared.

    Imagine a space like your kitchen or a classroom achieving that same level of interactivity as your phone – this will happen through a combination of embedded devices and sensors such as Kinect for Windows. Our research demo only covers educational and entertainment scenarios but the possibilities are endless.

    It’s rare for a company to pull back the curtain and share research in such raw form at the world’s largest technology tradeshow. However, we think it’s vitally important to get the next generation of students excited about Computer Science – and what better way than to show off research that makes gaming more fun!

    While magicians never share their secrets, researchers have to publish, so, a bit of explanation about the demo is in order. You may have seen interesting 3D-mapped projections over the past few years – Microsoft partners like Nokia and Samsung have both used pre-rendered footage in recent marketing efforts. What’s new in this work is that our researchers used Kinect for Windows to map the room in real-time in order to make projected illusions fully interactive. Most importantly, the effects shown in the video were captured live as they appeared in the living room environment and are not the result of special effects added in post processing.

    For more on the science behind this demo, check out the MSR IllumiRoom project site from Hrvoje Benko, Andrew Wilson, Eyal Ofek, and Brett Jones – they’ll have more to come at CHI 2013 in April.

    IllumiRoom: Peripheral Projected Illusions for Interactive Experiences [Microsoft Research, Jan 9, 2013]


    IllumiRoom is a proof-of-concept system from Microsoft Research. It augments the area surrounding a television screen with projected visualizations to enhance the traditional living room entertainment experience.

    IllumiRoom uses a Kinect for Windows camera and a projector to blur the lines between on-screen content and the environment we live in allowing us to combine our virtual and physical worlds. For example, our system can change the appearance of the room, induce apparent motion, extend the field of view, and enable entirely new game experiences.

    Our system uses the appearance and the geometry of the room (captured by Kinect) to adapt the projected visuals in real-time without any need to custom pre-process the graphics. What you see in the videos below has been captured live and is not the result of any special effects added in post production.
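    One way to picture the real-time adaptation described above: each frame, the projector output is corrected against the room's captured appearance so the illusion looks right even on non-white surfaces. The sketch below shows only the simplest piece of that idea, per-pixel radiometric compensation against a captured albedo map. It illustrates the general projection-mapping technique, not IllumiRoom's actual pipeline; the function and its parameters are this sketch's own.

```python
def compensate(desired, albedo, eps=1e-3):
    """Per-pixel radiometric compensation for projection onto real surfaces.

    A surface patch with albedo a reflects roughly a * projected, so to make
    the viewer perceive `desired` we project desired / a, clamped to what the
    projector can physically emit (intensities normalized here to [0, 1]).
    """
    return [min(d / max(a, eps), 1.0) for d, a in zip(desired, albedo)]

# A mid-grey couch (albedo 0.5) needs twice the projector output of a white
# wall (albedo 1.0) to appear equally bright -- until the projector saturates.
print(compensate([0.3, 0.3, 0.9], [1.0, 0.5, 0.5]))  # [0.3, 0.6, 1.0]
```

    In the real system the albedo and geometry maps come from the Kinect capture each frame, which is what lets the effects adapt live with no pre-processing of the graphics.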

    Stay tuned for more information and a paper explaining all the details coming up at ACM CHI 2013.