Updates as of Dec 6, 2013 (8 months after the original post):
Martin Fink, CTO and Director of HP Labs, Hewlett-Packard Company [Oct 29, 2013]:
This Cloud, Social, Big Data and Mobile we are referring to as this “New Style of IT” [when talking about the slide shown above]
HIGHLY RECOMMENDED READING:
– HP Offers Exclusive Peek Inside Impending Moonshot Servers [Enterprise Tech, Nov 26, 2013]: “The company is getting ready to launch a bunch of new server nodes for Moonshot in a few weeks”.
– So far, the simplest and most understandable info is provided in the Visual Configuration Moonshot diagram set: http://www.goldeneggs.fi/documents/GE-HP-MOONSHOT-A.pdf The site also includes full visualizations of all x86 rack, desktop and blade servers.
From HP Launches Investment Solutions to Ease Organizations’ Transitions to “New Style of IT” [press release, Dec 6, 2013]
The HP accelerated migration program for cloud—helps …
The HP Pre-Provisioning Solution—lets …
New investment solutions for HP Moonshot servers and HP Converged Systems—provide customers and channel partners with quick access to the latest HP products through a simple, scalable and predictable monthly payment that aligns technology and financial requirements to business needs.
Access the world’s first software defined server [HP offering, Nov 27, 2013]
With predictable and scalable monthly payments
HP Moonshot Financing
Cloud, Mobility, Security and Big Data require a different level of technology efficiency and scalability. Traditional systems may no longer be able to handle the increasing internet workloads with optimal performance. Having an investment strategy that gives you access to newer technology such as HP Moonshot allows you to meet the requirements of the New Style of IT.
A simple and flexible payment structure can help you access the latest technology on your terms.
Why leverage a predictable monthly payment?
• Provides financial flexibility to scale up your business
• May help mitigate the financial risk of your IT transformation
• Enables IT refresh cycles to keep up with latest technology
• May help improve your cash flow
• Offers predictable monthly payments which can help you stay within budget
How does it work?
• Talk to your HP Sales Rep about acquiring HP Moonshot using a predictable monthly payment
• Expand your capacity easily with a simple add-on payment
• Add spare capacity needed for even greater agility
• Set your payment terms based on your business needs
• After an agreed term, you’ll be able to refresh your technology
From The HP Moonshot team provides answers to your questions about the datacenter of the future [The HP Blog Hub, as of Aug 29, 2013]
Q: WHAT IS THE FUNDAMENTAL IDEA BEHIND THE HP MOONSHOT SYSTEM?
A: The idea is simple—use energy-efficient CPUs attuned to a particular application to achieve radical power, space and cost savings. Stated another way: creating software-defined servers for specific applications that run at scale.
Q: WHAT IS INNOVATIVE ABOUT THE HP MOONSHOT ARCHITECTURE?
A: The most innovative characteristic of HP Moonshot is the architecture. Everything that is a common resource in a traditional server has been converged into the chassis. The power, cooling, management, fabric, switches and uplinks are all shared across 45 hot-pluggable cartridges in a 4.3U chassis.
Q: EXPLAIN WHAT IS MEANT BY “SOFTWARE DEFINED” SERVER
A: Software defined servers achieve optimal useful work per watt by specializing for a given workload: matching a software application with available technology that can provide the most optimal performance. For example, the first Moonshot server is tuned for the web front-end LAMP (Linux/Apache/MySQL/PHP) stack. In the most extreme case of a future FPGA (Field Programmable Gate Array) cartridge, the hardware truly reflects the exact algorithm required.
Q: DESCRIBE THE FABRIC THAT HAS BEEN INTEGRATED INTO THE CHASSIS
A: The HP Moonshot 1500 Chassis has been built for future SOC designs that will require a range of network capabilities including cartridge to cartridge interconnect. Additionally, different workloads will have a range of storage needs.
There are four separate and independent fabrics that support a range of current and future capabilities: 8 lanes of Ethernet; a storage fabric (6Gb SATA) that enables shared storage amongst cartridges or storage expansion to a single cartridge; a dedicated iLO management network to manage all the servers as one; and a cluster fabric with point-to-point connectivity and low-latency interconnect between servers.
Martin Fink, CTO and Director of HP Labs, Hewlett-Packard Company [Oct 29, 2013]:
We’ve actually announced three ARM-based cartridges. These are available in our Discovery Labs now, and they’ll be shipping next year with new processor technology. [When talking about the slide shown above.]
Details about the latest and future Calxeda SoCs can be found in the closing part of this Dec 7 update.
@SC13: HP Moonshot ProLiant m800 Server Cartridge with Texas Instruments [Janet Bartleson YouTube channel, Nov 26, 2013]
Details about the latest Texas Instruments DSP+ARM SoCs can be found after the Calxeda section in the closing part of this Dec 7 update.
The New Style of IT & HP Moonshot: Keynote by HP’s Martin Fink at ARM TechCon ’13 [ARMflix YouTube channel, recorded on Oct 29, published on Nov 11, 2013]
From Big Data and the future of computing – A conversation with John Sontag [HP Enterprise 20/20 Blog, October 28, 2013]
20/20 Team: Where is HP today in terms of helping everyone become a data scientist?
John Sontag: For that to happen we need a set of tools that allow us to be data scientists in more than the ad hoc way I just described. These tools should let us operate productively and repeatably, using vocabulary that we can share – so that each of us doesn’t have to learn the same lessons over and over again. Currently at HP, we’re building a software tool set that is helping people find value in the data they’re already surrounded by. We have HAVEn for data management, which includes the Vertica data store, and Autonomy for analysis. For enterprise security we have ArcSight and ThreatCentral. We have our work around StoreOnce to compress things, and Express Query to allow us to consume data in huge volumes. Then we have hardware initiatives like Moonshot, which is bringing different kinds of accelerators to bear so we can actually change how fast – and how effectively – we can chew on data.
20/20 Team: And how is HP Labs helping shape where we are going?
John Sontag: One thing we’re doing on the software front is creating new ways to interrogate data in real time through an interface that doesn’t require you to be a computer scientist. We’re also looking at how we present the answers you get in a way that brings attention to the things you most need to be aware of. And then we’re thinking about how to let people who don’t have massive compute resources at their disposal also become data scientists.
20/20 Team: What’s the answer to that?
John Sontag: For that, we need to rethink the nature of the computer itself. If Moonshot is helping us make computers smaller and less energy-hungry, then our work on memristors will allow us to collapse the old processor/memory/storage hierarchy, and put processing right next to the data. Next, our work on photonics will help collapse the communication fabric and bring these very large scales into closer proximity. That lets us combine systems in new and interesting ways. And then we’re thinking about how to package these re-imagined computers into boxes of different sizes that match the needs of everyone from the individual to the massive, multinational entity. On top of all that, we need to reduce costs – if we tried to process all the data that we’re predicting we’ll want to at today’s prices, we’d collapse the world economy – and we need to think about how we secure and manage that data, and how we deliver algorithms that let us transform it fast enough so that you, your colleagues, and partners across the world can conduct experiments on this data literally as fast as we can think them up.
About John Sontag:
John Sontag is vice president and director of systems research at HP Labs. The systems research organization is responsible for research in memristor, photonics, physical and system architectures, storing data at high volume, velocity and variety, and operating systems. Together with HP business units and partners, the team reaches from basic research to advanced development of key technologies.
With more than 30 years of experience at HP in systems and operating system design and research, Sontag has had a variety of leadership roles in the development of HP-UX on PA-RISC and IPF, including 64-bit systems, support for multiple input/output systems, multi-system availability and Symmetric Multi-Processing scaling for OLTP and web servers.
Sontag received a bachelor of science degree in electrical engineering from Carnegie Mellon University.
From Meet the innovators behind the design and development of Project Moonshot [The HP Blog Hub, June 6, 2013]
This video introduces key HP team members who helped bring you the innovative technology that fundamentally changes how hyperscale servers are built and operated, including:
• Chandrakant Patel – HP Senior Fellow and HP Labs Chief Engineer
• Paul Santeler – Senior Vice President and General Manager of the HyperScale Business Unit
• Kelly Pracht – Moonshot Hardware Platform Manager, HyperScale Business Unit
• Dwight Barron – HP Fellow, Chief Technologist, HyperScale Business Unit
From Six IT technologies to watch [HP Enterprise 20/20 Blog, Sept 5, 2013]
1. Software-defined everything
Over the last couple of years we have heard a lot about software defined networks (SDN) and more recently, software defined data center (SDDC). There are fundamentally two ways to implement a cloud. Either you take the approach of the major public cloud providers, combining low-cost skinless servers with commodity storage, linked through cheap networking. You establish racks and racks of them. It’s probably the cheapest solution, but you have to implement all the management and optimization yourself. You can use software tools to do so, but you will have to develop the policies, the workflows and the automation.
Alternatively you can use what is becoming known as “converged infrastructure,” a term originally coined by HP, but now used by all our competitors. Servers, storage and networking are integrated in a single rack, or a series of interconnected ones, and the management and orchestration software included in the offering provides an optimal use of the environment. You get increased flexibility and are able to respond faster to requests and opportunities.
We all know that different workloads require different characteristics. Infrastructures are typically implemented using general purpose configurations that have been optimized to address a very large variety of workloads. So, they do an average job for each. What if we could change the configuration automatically whenever the workload changes to ensure optimal usage of the infrastructure for each workload? This is precisely the concept of software defined environments. Configurations are no longer stored in the hardware, but adapted as and when required. Obviously this requires more advanced software that is capable of reconfiguring the resources.
A software-defined data center is described as a data center where the infrastructure is virtualized and also delivered as a service. Control of the data center is automated by software – meaning hardware configuration is maintained through intelligent software systems. Three core components comprise the SDDC: server virtualization, network virtualization and storage virtualization. It remains to be said that some workloads still require physical systems (often referred to as bare metal), hence the importance of projects such as OpenStack’s Ironic, which could be defined as a hypervisor for physical environments.
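The core idea – configurations live in software and are re-applied whenever the workload changes – can be sketched in a few lines of Python. This is a hypothetical illustration, not HP or OpenStack code; the workload names and resource settings are invented for the example.

```python
# Hypothetical sketch of a software-defined environment: the configuration
# is held in software, not fixed in the hardware, and is swapped out
# whenever the workload changes.
workload_profiles = {
    "web-frontend": {"cores": 4,  "memory_gb": 8,  "network_gbps": 10},
    "analytics":    {"cores": 16, "memory_gb": 64, "network_gbps": 40},
}

class SoftwareDefinedNode:
    """A node whose configuration is looked up and applied on demand."""
    def __init__(self):
        self.config = {}

    def reconfigure(self, workload: str) -> dict:
        # A real controller would push this profile down to hypervisors,
        # switches and storage arrays; here we just record it.
        self.config = dict(workload_profiles[workload])
        return self.config

node = SoftwareDefinedNode()
print(node.reconfigure("analytics"))  # resources now match the workload
```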
2. Specialized servers
As I mentioned, all workloads are not equal, but they run on the same general-purpose servers (typically x86). What if we create servers that are optimized for specific workloads? In particular, when developing cloud environments delivering multi-tenant SaaS services, one could well envisage the use of servers specialized for a specific task, for example video manipulation or dynamic web service management. Developing efficient, low-energy specialized servers that can be configured through software is what HP’s Project Moonshot is all about. The project is still in its infancy today, and there is much more to come. Imagine about 45 server/storage cartridges linked through three fabrics (for networking, storage and high-speed cartridge-to-cartridge interconnections), sharing common elements such as network controllers, management functions and power management. If you then build the cartridges using low-energy servers, you reduce energy consumption by nearly 90%. If you build SaaS-type environments, using multi-tenant application modules, do you still need virtualization? This simplifies the environment, reduces the cost of running it and optimizes the use of server technology for every workload.
Particularly for environments that constantly run certain types of workloads, such as analyzing social or sensor data, the use of specialized servers can make the difference. This is definitely an evolution to watch.
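As a rough sanity check of the “nearly 90%” figure above, here is a back-of-the-envelope comparison; the wattages are assumptions chosen for illustration, not HP measurements.

```python
# Back-of-the-envelope check of the "nearly 90%" energy-reduction claim.
# Both wattages below are hypothetical, picked only to illustrate scale.
traditional_server_w = 250   # assumed conventional 1U x86 server
cartridge_node_w = 30        # assumed low-energy cartridge server node

reduction = 1 - cartridge_node_w / traditional_server_w
print(f"Energy reduction per node: {reduction:.0%}")  # prints 88%
```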
Let’s now complement those specialized servers with photonic based connections enabling flat, hyper-efficient networks boosting bandwidth, and we have an environment that is optimized to deliver the complex tasks of analyzing and acting upon signals provided by the environment in its largest sense.
But technology is going even further. I talked about the three fabrics; over time, why not use photonics to improve the speed of the fabrics themselves, increasing the overall compute speed? We are not there yet, but early experiments with photonic backplanes for blade systems have shown overall compute speed increased by up to a factor of seven. That should be the second step.
The third step takes things further. The specialized servers I talked about are typically system on a chip (SoC) servers, in other words, complete computers on a single chip. Why not use photonics to link those chips with their outside world? On-chip lasers have been developed in prototypes, so we are not that far out. We could even bring things one step further and use photonics within the chip itself, but that is still a little further out. I can’t tell you the increase in compute power that such evolutions will provide you, but I would expect it to be huge.
Storage is at a crossroads. On the one hand, hard disk drives (HDD) have improved drastically over the last 20 years, both in reading speed and in density. I still remember the 20MB hard disk drive, weighing 125 kg, of the early ’80s. When I compare that with the 3TB drive I bought a couple of months ago for my home PC, I can easily depict this evolution. But then the SSD (solid state disk) appeared. Where an HDD read will take you 4 ms, the SSD read is down at 0.05 ms.
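The latency gap quoted above is easy to quantify:

```python
# Comparing the random-read latencies quoted above.
hdd_read_ms = 4.0    # typical HDD read (seek + rotational latency)
ssd_read_ms = 0.05   # typical SSD read

speedup = hdd_read_ms / ssd_read_ms
print(f"SSD reads are ~{speedup:.0f}x faster than HDD reads")  # ~80x
```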
Using nanotechnologies, HP Labs has developed prototypes of the memristor, a new approach to data storage that is faster than Flash memory and consumes far less energy. Such a device could store up to 1 petabit of information per square centimeter and could replace both memory and storage, speeding up access to data and allowing an order-of-magnitude increase in the amount of data stored. Since then, HP has been busy preparing production of these devices. The first production units should be available towards the end of 2013 or early in 2014. This will transform our storage approaches completely.
Details about the latest and future Calxeda SoCs:
Calxeda EnergyCore ECX-2000 family – ARM TechCon ’13 [ARMflix YouTube channel, recorded on Oct 30, 2013]
From ECX-2000 Product Brief [October, 2013]
The Calxeda EnergyCore ECX-2000 Series is a family of SoC (Server-on-Chip) products that delivers the power efficiency of ARM® processors, and the OpenStack, Linux, and virtualization software needed for modern cloud infrastructures. Using the ARM Cortex A15 quad-core processor, the ECX-2000 delivers roughly twice the performance, three times the memory bandwidth, and four times the memory capacity of the ground-breaking ECX-1000. It is extremely scalable due to the integrated Fleet Fabric Switch, while the embedded Fleet Engine simultaneously provides out-of-band control and intelligence for autonomic operation.
In addition to enhanced performance, the ECX-2000 provides hardware virtualization support via KVM and Xen hypervisors. Coupled with certified support for Ubuntu 13.10 and the OpenStack Havana release, this marks the first time an ARM SoC is ready for cloud computing. The Fleet Fabric enables the highest network and interconnect bandwidth in the MicroServer space, making this an ideal platform for streaming media and network-intensive applications.
The net result of the EnergyCore SoC architecture is a dramatic reduction in power and space requirements, allowing rapidly growing data centers to quickly realize operating and capital cost savings.
Scalability you can grow into. An integrated EnergyCore Fabric Switch within every SoC provides up to five 10 Gigabit lanes for connecting thousands of ECX-2000 server nodes into clusters capable of handling distributed applications at extreme scale. Completely topology agnostic, each SoC can be deployed to work in a variety of mesh, grid, or tree network structures, providing opportunities to find the right balance of network throughput and fault resiliency for any given workload.
Fleet Fabric Switch
• Integrated 80Gb (8×8) crossbar switch with through-traffic support
• Five (5) 10Gb external channels, three (3) 10Gb internal channels
• Configurable topology capable of connecting up to 4096 nodes
• Dynamic Link Speed Control from 1Gb to 10Gb to minimize power and maximize performance
• Network Proxy Support maintains network presence even with node powered off
• In-order flow delivery
• MAC learning provider support for virtualization
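The figures in the bullet points above are self-consistent, as a quick check shows: five external plus three internal 10Gb channels account for the eight ports of the 8×8 crossbar and its 80Gb aggregate.

```python
# Cross-checking the Fleet Fabric Switch figures from the product brief.
external_channels = 5   # 10Gb channels leaving the SoC
internal_channels = 3   # 10Gb channels inside the SoC
channel_gbps = 10

ports = external_channels + internal_channels   # 8 ports -> 8x8 crossbar
aggregate_gbps = ports * channel_gbps           # 80Gb aggregate
print(ports, aggregate_gbps)                    # prints 8 80
```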
ARM Servers and Xen — Hypervisor Support at Hyperscale – Larry Wikelius, [Co-Founder of] Calxeda [TheLinuxFoundation YouTube channel, Oct 1, 2013]
Calxeda Launches Midway ARM Server Chips, Extends Roadmap [EnterpriseTech, Oct 28, 2013]
ARM server chip supplier Calxeda is just about to ship its second generation of EnergyCore processors for hyperscale systems and most of its competitors are still working on their first products. Calxeda is also tweaking its roadmap to add a new chip to its lineup, which will bridge between the current 32-bit ARM chips and its future 64-bit processors.
There is going to be a lot of talk about server-class ARM processors this week, particularly with ARM Holdings hosting its TechCon conference in Santa Clara.
A month ago, EnterpriseTech told you about the “Midway” chip that Calxeda had in the works, as well as its roadmap to get beefier 64-bit cores and extend its Fleet Services fabric to allow for more than 100,000 nodes to be linked together.
The details were a little thin on the Midway chip, but we now know that it will be commercialized as the ECX-2000, and that Calxeda is sending out samples to server makers right now. The plan is to have the ECX-2000 generally available by the end of the year, and that is why the company is ready to talk about some feeds and speeds. Karl Freund, vice president of marketing at Calxeda, walked EnterpriseTech through the details.
The Midway chip is fabricated in the same 40 nanometer process as the existing “High Bank” ECX-1000 chip that Calxeda first put into the field in November 2011 in the experimental “Redstone” hyperscale servers from Hewlett-Packard. That 32-bit chip, based on the ARM Cortex-A9 core, was subsequently adopted in systems from Penguin Computing, Boston, and a number of other hyperscale datacenter operators who did proofs of concept with the chips. The ECX-1000 has four cores and was somewhat limited in its performance and was definitely limited in its main memory, which topped out at 4 GB across the four-core processor. But the ECX-2000 addresses these issues.
The ECX-2000 is based on ARM Holdings’ Cortex-A15 core and has the 40-bit physical memory extensions, which allow for up to 16 GB of memory to be physically attached to each socket. With the 40-bit physical addressing added with the Cortex-A15, the memory controller can, in theory, address up to 1 TB of main memory; this is called Large Physical Address Extension (LPAE) in the ARM lingo, and it maps the 32-bit virtual addressing on the core to a 40-bit physical address space. Each core on the ECX-2000 has 32 KB of L1 instruction cache and 32 KB of L1 data cache, and ARM licensees are allowed to scale the L2 cache as they see fit. The ECX-2000 has 4 MB of L2 cache shared across the four cores on the die. These are exactly the same L1 and L2 cache sizes as used in the prior ECX-1000 chips.
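The addressing arithmetic works out as stated: 40-bit physical addressing covers 2^40 bytes, i.e. 1 TB, versus 4 GB for plain 32-bit addressing.

```python
# 32-bit vs 40-bit (LPAE) physical address space.
addr_32_bit = 2 ** 32   # bytes addressable with 32 bits
addr_40_bit = 2 ** 40   # bytes addressable with 40 bits (LPAE)

print(addr_32_bit // 1024**3, "GiB")   # prints 4 GiB
print(addr_40_bit // 1024**4, "TiB")   # prints 1 TiB
```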
The Cortex-A15 design was created to scale to 2.5 GHz, but as you crank up the clocks on any chip, the amount of energy consumed and heat radiated grows progressively larger. At a certain point, it just doesn’t make sense to push clock speeds. Moreover, every drop in clock speed gives a proportionately larger increase in thermal efficiency, and this is why, says Freund, Calxeda is making its implementation of the Cortex-A15 top out at 1.8 GHz. The company will offer lower-speed parts running at 1.1 GHz and 1.4 GHz for customers that need an even better thermal profile or a cheaper part where low cost is more important than raw performance or thermals.
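The “proportionately larger” claim follows from the usual dynamic-power model, P ≈ C·V²·f, in which supply voltage typically has to rise with frequency. The operating voltages below are invented for illustration; Calxeda has not published them.

```python
# Why backing off the clock pays off more than linearly in power.
# Dynamic power scales roughly as P ~ C * V^2 * f.
# The voltage values are hypothetical, for illustration only.
def relative_power(freq_ghz: float, volts: float) -> float:
    return volts ** 2 * freq_ghz   # capacitance C dropped (constant factor)

p_25 = relative_power(2.5, 1.10)   # assumed top-speed operating point
p_18 = relative_power(1.8, 0.95)   # assumed reduced-speed operating point

clock_drop = 1 - 1.8 / 2.5
power_drop = 1 - p_18 / p_25
print(f"clock drop: {clock_drop:.0%}, power drop: {power_drop:.0%}")
# prints: clock drop: 28%, power drop: 46%
```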
What Calxeda and its server and storage array customers are focused on is the fact that the Midway chip running at 1.8 GHz has twice the integer, floating point, and Java performance of a 1.1 GHz High Bank chip. That is possible, in part, because the new chip has four times the main memory and three times the memory bandwidth of the old chip, in addition to a 64 percent boost in clock speed. Calxeda is not yet done benchmarking systems using the chips to get a measure of their thermal efficiency, but is saying that there is as much as a 33 percent boost in performance per watt comparing old to new ECX chips.
The new ECX-2000 chip has a dual-core Cortex-A7 chip on the die that is used as a controller for the system BIOS as well as a baseboard management controller and a power management controller for the servers that use them. These Fleet Engines, as Calxeda calls them, eliminate yet another set of components, and therefore their cost, in the system. These engines also control the topology of the Fleet Services fabric, which can be set up in 2D torus, mesh, butterfly tree, and fat tree network configurations.
The Fleet Services fabric has 80 Gb/sec of aggregate bandwidth and offers multiple 10 Gb/sec Ethernet links coming off the die to interconnect server nodes on a single card, multiple cards in an enclosure, multiple enclosures in a rack, and multiple racks in a data center. The Ethernet links are also used to allow users to get to applications running on the machines.
Freund says that the ECX-2000 chip is aimed at distributed, stateless server workloads, such as web server front ends, caching servers, and content distribution. It is also suitable for analytics workloads like Hadoop and distributed NoSQL data stores like Cassandra, all of which tend to run on Linux. Both Red Hat and Canonical are cooking up commercial-grade Linuxes for the Calxeda chips, and SUSE Linux is probably not going to be far behind. The new chips are also expected to see action in scale-out storage systems such as OpenStack Swift object storage or the more elaborate Gluster and Ceph clustered file systems. The OpenStack cloud controller embedded in the just-announced Ubuntu Server 13.10 is also certified to run on the Midway chip.
Hewlett-Packard has confirmed that it is creating a quad-node server cartridge for its “Moonshot” hyperscale servers, which should ship to customers sometime in the first or second quarter of 2014. (It all depends on how long HP takes to certify the system board.) Penguin Computing, Foxconn, Aaeon, and Boston are expected to get beta systems out the door this year using the Midway chip and will have them in production in the first half of next year. Yes, that’s pretty vague, but that is the server business, and vagueness is to be expected in such a young market as the ARM server market is.
Looking ahead, Calxeda is adding a new processor to its roadmap, code-named “Sarita.” Here’s what the latest system-on-chip roadmap looks like now:
The future “Lago” chip is the first 64-bit chip that will come out of Calxeda, and it is based on the Cortex-A57 design from ARM Holdings – one of several ARMv8 designs, in fact. (The existing Calxeda chips are based on the ARMv7 architecture.)
Both Sarita and Lago will be implemented in TSMC’s 28 nanometer processes, and that shrink from the current 40 nanometer to 28 nanometer processes is going to allow for a lot more cores and other features to be added to the die and also likely a decent jump in clock speed, too. Freund is not saying at the moment which way it will go.
But what Freund will confirm is that Sarita will be pin-compatible with the existing Midway chip, meaning that server makers who adopt Midway will have a processor bump they can offer in a relatively easy fashion. It will also be based on the Cortex-A57 cores from ARM Holdings, and will sport four cores on a die that deliver about a 50 percent performance increase compared to the Midway chips.
The Lago chips, we now know, will scale to eight cores on a die and deliver about twice the performance of the Midway chips. Both Lago and Sarita are on the same schedule, in fact, and they are expected to tape out this quarter. Calxeda expects to start sampling them to customers in the second quarter of 2014, with production quantities being available at the end of 2014.
Not Just Compute, But Networking, Too
As important as the processing is to a system, the Fleet Services fabric interconnect is perhaps the key differentiator in its design. The current iteration of that interconnect, which is a distributed Layer 2 switch fabric that is spread across each chip in a cluster, can scale across 4,096 nodes without requiring top-of-rack and aggregation switches.
Both the Lago and Sarita chips will be using the Fleet Services 2.0 interconnect that is now being launched with Midway. This iteration of the interconnect has all kinds of tweaks and nips and tucks but no scalability enhancements beyond the 4,096 nodes in the original fabric.
Freund says that the Fleet Services 3.0 fabric, which allows the distributed switch architecture to scale above 100,000 nodes in a flat network, will probably now come with the “Ratamosa” chips in 2015. It was originally – and loosely – scheduled for Lago next year. The circuits that make up the fabric interconnect are not substantially different, says Freund, but the scalability is enabled through software. It could be that customers are not going to need such scalability as rapidly as Calxeda originally thought.
The “Navarro” kicker to the Ratamosa chip is presumably based on the ARMv9 architecture, and Calxeda is not saying anything about when we might see that and what properties it might have. All that it has said thus far is that it is aimed at the “enterprise server era.”
Details about the latest Texas Instruments DSP+ARM SoCs:
From Imagine the impact…TI’s KeyStone SoC + HP Moonshot [TI’s Multicore Mix Blog, April 19, 2013]
TI’s participation in HP’s Pathfinder Innovation Ecosystem is the first step towards arming HP’s customers with optimized server systems that are ideally suited for workloads such as oil and gas exploration, Cloud Radio Access Networks (C-RAN), voice over LTE and video transcoding. This collaboration between TI and HP is a bold step forward, enabling flexible, optimized servers to bring differentiated technologies, such as TI’s DSPs, to a broader set of application providers. TI’s KeyStone II-based SoCs, which integrate fixed- and floating-point DSP cores with multiple ARM® Cortex™-A15 MPCore™ processors, packet and security processing, and high-speed interconnect, give customers the performance, scalability and programmability needed to build software-defined servers. HP’s Moonshot system integrates storage, networking and compute cards with a flexible interconnect, allowing customers to choose the optimized ratio, enabling the industry’s first software-defined server platform. Bringing TI’s KeyStone II-based SoCs into HP’s Moonshot system opens up several tantalizing possibilities for the future. Let’s look at a few examples:
Think about the number of voice conversations happening over mobile devices every day. These conversations are independent of each other, and each will need transcoding from one voice format to another as voice travels from one mobile device, through the network infrastructure and to the other mobile device. The sheer number of such conversations demands that the servers used for voice transcoding be optimized for this function. Voice is just one example. Now think about video and music, and you can imagine the vast amount of processing required. Using TI’s KeyStone II-based SoCs with DSP technology provides an optimized server architecture for these applications because our SoCs are specifically tuned for signal processing workloads.
Another example is C-RAN. We have seen a huge push for mobile operators to move most of the mobile radio processing to the data center. There are several approaches to achieve this goal, and each has pros and cons. But one thing is certain – each approach has to do wireless symbol processing to achieve optimum 3G or 4G communications with smart mobile devices. TI’s KeyStone II-based SoCs are leading the wireless communication infrastructure market and combine key accelerators such as BCP (Bit Rate Co-Processor), VCP (Viterbi Co-Processor) and others to enable standards-compliant 3G/4G wireless processing. These key accelerators offload standards-based wireless processing from the ARM and/or DSP cores, freeing the cores for value-added processing. The combination of ARM/DSP with these accelerators provides an optimum SoC for 3G/4G wireless processing. By combining TI’s KeyStone II-based SoCs with HP’s Moonshot system, operators and network equipment providers can now build customized servers for C-RAN to achieve higher-performance systems at lower cost and ultimately provide better experiences to their customers.
A better way to cloud: TI’s new KeyStone multicore SoCs [embeddednewstv YouTube channel, published on Jan 12, 2013 (YouTube: Oct 21, 2013)]
Texas Instruments Offers System on a Chip for HPC Applications [RichReport YouTube channel, Nov 20, 2012]
A better way to cloud: TI’s new KeyStone multicore SoCs revitalize cloud applications, enabling new capabilities and a quantum leap in performance at significantly reduced power consumption
- Industry’s first implementation of quad ARM® Cortex™-A15 MPCore™ processors in infrastructure-class embedded SoC offers developers exceptional capacity & performance at significantly reduced power for networking, high performance computing and more
- Unmatched combination of Cortex-A15 processors, C66x DSPs, packet processing, security processing and Ethernet switching, transforms the real-time cloud into an optimized high performance, power efficient processing platform
- Scalable KeyStone architecture now features 20+ software compatible devices, enabling customers to more easily design integrated, power and cost-efficient products for high-performance markets from a range of devices
ELECTRONICA – MUNICH (Nov. 13, 2012) /PRNewswire/ — To most technologists, cloud computing is about applications, servers, storage and connectivity. To Texas Instruments Incorporated (TI) (NASDAQ: TXN) it means much more. Today, TI is unveiling a BETTER way to cloud with six new multicore System-on-Chips (SoCs). Based on its award winning KeyStone architecture, TI’s SoCs are designed to revitalize cloud computing, inject new verve and excitement into pivotal infrastructure systems and, despite their feature rich specifications and superior performance, actually reduce energy consumption.
To TI, a BETTER way to cloud means:
- Safer communities thanks to enhanced weather modeling;
- Higher returns from time sensitive financial analysis;
- Improved productivity and safety in energy exploration;
- Faster commuting on safer highways in safer cars;
- Exceptional video on any screen, anywhere, any time;
- More productive and environmentally friendly factories; and
- An overall reduction in energy consumption for a greener planet.
TI’s new KeyStone multicore SoCs are enabling this – and much more. These 28-nm devices integrate TI’s fixed- and floating-point TMS320C66x digital signal processor (DSP) generation cores – yielding the best performance per watt ratio in the DSP industry – with multiple ARM® Cortex™-A15 MPCore™ processors – delivering unprecedented processing capability combined with low power consumption – facilitating the development of a wide range of infrastructure applications that can enable more efficient cloud experiences. The unique combination of Cortex-A15 processors and C66x DSP cores, with built-in packet processing and Ethernet switching, is designed to efficiently offload and enhance the cloud’s first-generation general-purpose servers; servers that struggle with big data applications like high performance computing and video processing.
“Using multicore DSPs in a cloud environment enables significant performance and operational advantages with accelerated compute intensive cloud applications,” said Rob Sherrard, VP of Service Delivery, Nimbix. “When selecting DSP technology for our accelerated cloud compute environment, TI’s KeyStone multicore SoCs were the obvious choice. TI’s multicore software enables easy integration for a variety of high performance cloud workloads like video, imaging, analytics and computing and we look forward to working with TI to help bring significant OPEX savings to high performance compute users.”
TI’s six new high-performance SoCs include the 66AK2E02, 66AK2E05, 66AK2H06, 66AK2H12, AM5K2E02 and AM5K2E04, all based on the KeyStone multicore architecture. With KeyStone’s low-latency, high-bandwidth multicore shared memory controller (MSMC), these new SoCs yield 50 percent higher memory throughput compared to other RISC-based SoCs. Together, these processing elements, with the integration of security processing, networking and switching, reduce system cost and power consumption, supporting the development of more cost-efficient, green applications and workloads, including high performance computing, video delivery, and media and image processing. With the combination TI has integrated into its newest multicore SoCs, developers of media and image processing applications can also create highly dense media solutions.
“Visionary and innovative are two words that come to mind when working with TI’s KeyStone devices,” said Joe Ye, CEO, CyWee. “Our goal is to offer solutions that merge the digital and physical worlds, and with TI’s new SoCs we are one step closer to making this a reality by pushing state-of-the-art video to virtualized server environments. Our collaboration with TI should enable developers to deliver richer multimedia experiences in a variety of cloud-based markets, including cloud gaming, virtual office, video conferencing and remote education.”
Simplified development with complete tools and support
TI continues to ease development with its scalable KeyStone architecture, comprehensive software platform and low-cost tools. In the past two years, TI has developed over 20 software-compatible multicore devices, including variations of DSP-based solutions, ARM-based solutions and hybrid solutions with both DSP and ARM-based processing, all based on two generations of the KeyStone architecture. With compatible platforms across TI’s multicore DSPs and SoCs, customers can more easily design integrated, power- and cost-efficient products for high-performance markets from a range of devices, with prices starting at just $30 and performance spanning from a single 850 MHz core all the way to 15 GHz of total processing power.
TI is also making it easier for developers to quickly get started with its KeyStone multicore solutions by offering easy-to-use, evaluation modules (EVMs) for less than $1K, reducing developers’ programming burdens and speeding development time with a robust ecosystem of multicore tools and software.
In addition, TI’s Design Network features a worldwide community of respected and well established companies offering products and services that support TI multicore solutions. Companies offering supporting solutions to TI’s newest KeyStone-based multicore SoCs include 3L Ltd., 6WIND, Advantech, Aricent, Azcom Technology, Canonical, CriticalBlue, Enea, Ittiam Systems, Mentor Graphics, mimoOn, MontaVista Software, Nash Technologies, PolyCore Software and Wind River.
Availability and pricing
TI’s 66AK2Hx SoCs are currently available for sampling, with broader device availability in 1Q13 and EVM availability in 2Q13. AM5K2Ex and 66AK2Ex samples and EVMs will be available in the second half of 2013. Pricing for these devices will start at $49 for 1 KU.
66AK2H14 (ACTIVE) Multicore DSP+ARM KeyStone II System-on-Chip (SoC) [TI.com, Nov 10, 2013]
The same as below for 66AK2H12 SoC with addition of:
- The Case for 10G Ethernet in Embedded Processing (PDF, 189 KB) 13 Nov 2013
From that the below excerpt is essential to understand the added value above 66AK2H12 SoC:
Figure 1. TI’s KeyStone™ 66AK2H14 SoC
The 66AK2H14 SoC shown in Figure 1, with the raw computing power of eight C66x processors and quad ARM Cortex-A15s at over 1GHz performance, enables applications such as very large fast Fourier transforms (FFTs) in radar and multiple-camera image analytics, where a 10Gbit/s networking connection is needed. There are, and have been, several sophisticated technologies that have offered the bandwidth and additional features to fill this role. Some, such as Serial RapidIO® and InfiniBand, have been successful in application domains that Gigabit Ethernet could not address, and continue to make sense, but 10Gbit/s Ethernet will challenge their existence.
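To see why Gigabit Ethernet runs out of headroom here, a rough back-of-the-envelope sketch helps. The frame size and frame rate below are illustrative assumptions (not figures from TI's paper); the point is only that streaming large FFT frames quickly exceeds 1GbE but fits comfortably in 10GbE:

```python
# Back-of-the-envelope check: stream large complex-sample FFT frames
# and see which Ethernet class can carry them. All workload numbers
# here are assumptions for illustration, not TI specifications.

FRAME_POINTS = 2 ** 20      # 1M-point FFT frame (assumed)
BYTES_PER_SAMPLE = 8        # complex single-precision: 4-byte real + 4-byte imag
FRAMES_PER_SECOND = 100     # assumed radar / imaging frame rate

bits_per_second = FRAME_POINTS * BYTES_PER_SAMPLE * 8 * FRAMES_PER_SECOND
gbps = bits_per_second / 1e9

print(f"required throughput: {gbps:.2f} Gbit/s")  # ~6.71 Gbit/s
print("fits 1 GbE: ", gbps < 1.0)                 # False
print("fits 10 GbE:", gbps < 10.0)                # True
```

Even this modest assumed workload needs several gigabits per second of sustained throughput, which is exactly the gap the 66AK2H14's 10GigE interface addresses.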
66AK2H12 (ACTIVE) Multicore DSP+ARM KeyStone II System-on-Chip (SoC) [TI.com, created on Nov 8, 2012]
Datasheet/manual [351 pages]:
- 66AK2H14/12/06 Multicore DSP+ARM KeyStone II System-on-Chip (SoC) (Rev. E) (PDF, 8763 KB) 14 Nov 2013
- [Datasheet:] 66AK2Hx KeyStone Multicore DSP+ARM System-on-chips (Rev. A) (PDF, 193 KB) 8 Nov 2013
- Multicore DSPs for High-Performance Video Coding (PDF, 245 KB) 22 Jan 2013
- Video Infrastructure – Applications of the K2E, K2H platforms (PDF, 199 KB) 9 Nov 2012
- Industrial Imaging: Applications of the K2H and K2E platforms (PDF, 515 KB) 9 Nov 2012
The 66AK2Hx platform is TI’s first to combine the quad ARM® Cortex™-A15 MPCore™ processors with up to eight TMS320C66x high-performance DSPs using the KeyStone II architecture. Unlike previous ARM Cortex-A15 devices that were designed for consumer products, the 66AK2Hx platform provides up to 5.6 GHz of ARM and 11.2 GHz of DSP processing, coupled with security and packet processing and Ethernet switching, all at lower power than multi-chip solutions, making it optimal for embedded infrastructure applications like cloud computing, media processing, high-performance computing, transcoding, security, gaming, analytics and virtual desktop. Using TI’s heterogeneous programming runtime software and tools, customers can easily develop differentiated products with 66AK2Hx SoCs.
Taking Multicore to the Next Level: KeyStone II Architecture [Texas Instruments YouTube channel, Feb 26, 2012]
Kick start development of high performance compute systems with TI’s new KeyStone™ SoC and evaluation module [TI press release, Nov 14, 2013]
Combination of DSP + ARM® cores and high-speed peripherals offer developers an optimal compute solution at low power consumption
DALLAS, Nov. 14, 2013 /PRNewswire/ — Further easing the development of processing-intensive applications, Texas Instruments (TI) (NASDAQ: TXN) is unveiling a new system-on-chip (SoC), the 66AK2H14, and evaluation module (EVM) for its KeyStone™-based 66AK2Hx family of SoCs. With the new 66AK2H14 device, developers designing high-performance compute systems now have access to a 10Gbps Ethernet switch-on-chip. The inclusion of the 10GigE switch, along with the other high-speed, on-chip interfaces, saves overall board space, reduces chip count and ultimately lowers system cost and power. The EVM enables developers to evaluate and benchmark faster and more easily. The 66AK2H14 SoC provides industry-leading computational DSP performance at 307 GMACS/153 GFLOPS and 19600 DMIPS of ARM performance, making it ideal for a wide variety of applications such as video surveillance, radar processing, medical imaging, machine vision and geological exploration.
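The headline performance numbers can be reproduced from commonly cited per-core figures. A sketch, assuming 1.2 GHz DSP and 1.4 GHz ARM clocks and the usual per-cycle rates quoted for the C66x core and Cortex-A15 (these are assumptions from public core specs, not taken from this press release):

```python
# Reproduce the press release's headline figures from per-core numbers.
# Clock rates and per-cycle throughputs below are assumptions based on
# commonly quoted C66x / Cortex-A15 specifications.

DSP_CORES, DSP_GHZ = 8, 1.2
MACS_PER_CYCLE = 32          # 16x16-bit MACs per C66x core per cycle
SP_FLOPS_PER_CYCLE = 16      # single-precision FLOPs per C66x core per cycle

ARM_CORES, ARM_MHZ = 4, 1400
DMIPS_PER_MHZ = 3.5          # commonly quoted Cortex-A15 figure

gmacs  = DSP_CORES * DSP_GHZ * MACS_PER_CYCLE        # ~307 GMACS
gflops = DSP_CORES * DSP_GHZ * SP_FLOPS_PER_CYCLE    # ~153 GFLOPS
dmips  = ARM_CORES * ARM_MHZ * DMIPS_PER_MHZ         # 19600 DMIPS

print(round(gmacs, 1), round(gflops, 1), round(dmips))
```

The results (307.2 GMACS, 153.6 GFLOPS, 19600 DMIPS) line up with the 307/153/19600 figures TI quotes above.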
“Customers today require increased performance to process compute-intensive workloads using less energy in a smaller footprint,” said Paul Santeler, vice president and general manager, Hyperscale Business, HP. “As a partner in HP’s Moonshot ecosystem dedicated to the rapid development of new Moonshot servers, we believe TI’s KeyStone design will provide new capabilities across multiple disciplines to accelerate the pace of telecommunication innovations and geological exploration.”
Meet TI’s new 10Gbps Ethernet DSP + ARM SoC
TI’s newest silicon variant, the 66AK2H14, is the latest addition to its high-performance 66AK2Hx SoC family which integrates multiple ARM Cortex™-A15 MPCore™ processors and TI’s fixed- and floating-point TMS320C66x digital signal processor (DSP) generation cores. The 66AK2H14 offers developers exceptional capacity and performance (up to 9.6 GHz of cumulative DSP processing) at industry-leading size, weight, and power. In addition, the new SoC features a wide array of unique high-speed interfaces, including PCIe, RapidIO, Hyperlink, 1Gbps and 10Gbps Ethernet, achieving total I/O throughput of up to 154Gbps. These interfaces are all distinct and not multiplexed, allowing designers tremendous flexibility with uncompromising performance in their designs.
Ease development and debugging with TI’s tools and software
TI helps simplify the design process by offering developers highly optimized software for embedded HPC systems along with development and debugging tools for the EVMK2H – all for under $1,000. The EVMK2H features a single 66AK2H14 SoC, a status LCD, two 1Gbps Ethernet RJ-45 interfaces and on-board emulation. An optional EVM breakout card (available separately) also provides two 10Gbps Ethernet optical interfaces for 20Gbps backplane connectivity and optional wire rate switching in high density systems.
The EVMK2H is bundled with TI’s Multicore Software Development Kit (MCSDK), enabling faster development with production ready foundational software. The MCSDK eases development and reduces time to market by providing highly-optimized bundles of foundational, platform-specific drivers, optimized libraries and demos.
Complementary analog products to increase system performance
TI offers a wide range of power management and analog signal chain components to increase the system performance of 66AK2H14 SoC-based designs. For example, the TPS53xx integrated FET DC/DC converters provide the highest level of power conversion efficiency even at light loads, while the LM10011 VID converter with dynamic voltage control helps reduce system power consumption. The CDCM6208 low-jitter clock generator also eliminates the need for external buffers, jitter cleaners and level translators.
Availability and pricing
TI’s EVMK2H is available now through TI distribution partners or TI.com for $995. In addition to TI’s Linux distribution provided in the MCSDK, Wind River® Linux is available now for the 66AK2Hxx family of SoCs. Green Hills® INTEGRITY® RTOS and Wind River VxWorks® RTOS support will each be available before the end of the year. Pricing for the 66AK2H14 SoC will start at $330 for 1 KU. The 10Gbps Ethernet breakout card will be available from Mistral.
Ask the Expert: How can developers accelerate scientific computing with TI’s multicore DSPs? [Texas Instruments YouTube channel, Feb 7, 2012]
End of Updates as of Dec 6, 2013
The original post (8 months ago):
HP Moonshot: Designed for the Data Center, Built for the Planet [HP press kit, April 8, 2013]
On April 8, 2013, HP unveiled the world’s first commercially available HP Moonshot system, delivering compelling new infrastructure economics by using up to 89 percent less energy, 80 percent less space and costing 77 percent less, compared to traditional servers. Today’s mega data centers are nearing a breaking point where further growth is restricted due to the current economics of traditional infrastructure. HP Moonshot servers are a first step organizations can take to address these constraints.
For more details on the disruptive potential of HP Moonshot, visit TheDisruption.com
This is an exciting time to be in the IT industry right now. For those of you who have been around for a while — as I have — there have been dramatic shifts that have changed how businesses operate.
From the early days of the mainframes, to the explosion of the Internet and now social networks, every so often very important game-changing innovation comes along. We’re in the midst of another sea change in technology.
Inside HP IT, we are testing the company’s Moonshot servers. With these servers running the same chips found in smart phones and tablets, they are using incredibly less power, require considerably less cooling and have a smaller footprint.
We currently are running some of our intensive hp.com applications on Moonshot and are seeing very encouraging results. Over half a billion people will visit hp.com this year, and the new Moonshot technology will run at a fraction of the space, power and cost – basically we expect to run HP.com off of the same amount of energy needed for a dozen 60-watt light bulbs.
This technology will revolutionize data centers.
Within HP IT, we are fortunate in that over the past several years we have built a solid data center foundation to run our company. Like many companies, we were a victim of IT sprawl — with more than 85 data centers in 29 countries. We decided to make a change and took on a total network redesign, cutting our principal worldwide data centers down to six and housing all of them in the United States.
With the addition of four new EcoPODs to our infrastructure and these new Moonshot servers, we are in the perfect position to build out our private cloud and provide our businesses with the speed and quality of innovation they need.
Moonshot is just the beginning. The product roadmap for Moonshot is extremely promising and I am excited to see what we can do with it within HP IT, and what benefits our customers will see.
What Calxeda is saying about HP Moonshot [HewlettPackardVideos YouTube channel, April 8, 2013], which is best to start with for its simple and efficient message, as well as Intel targeting ARM based microservers: the Calxeda case [‘Experiencing the Cloud’ blog, Dec 14, 2012], already published earlier on this blog:
Then we can turn to the Moonshot product launch by HP 2 days ago:
Note that the first three videos following here were released 3 days later, so don’t be surprised by the YouTube dates; in fact the same 3 videos (as well as the “Introducing HP Moonshot” embedded above) were delivered on the April 8 live webcast. See the first 18 minutes of that, and then follow HP’s flow of the presentation if you like. I would certainly recommend my own presentation as compiled here.
HP president and CEO Meg Whitman on the emergence of a new style of IT [HewlettPackardVideos YouTube channel, April 11, 2013]
EVP and GM of HP’s Enterprise Group Dave Donatelli discusses HP Moonshot [HewlettPackardVideos YouTube channel, April 11, 2013]
Tour the Houston Discovery Lab — where the next generation of innovation is created [HewlettPackardVideos YouTube channel, April 11, 2013]
A new era of accelerated innovation [HP Moonshot minisite, April 8, 2013]
Cloud, Mobility, Security, and Big Data are transforming what the business expects from IT, resulting in a “New Style of IT.” The result of alternative thinking from a proven industry leader, HP Moonshot is the world’s first software defined server that will accelerate innovation while delivering breakthrough efficiency and scale.
On the right is the Moonshot System with the very first Moonshot servers (“microservers/server appliances” as called by the industry) based on Intel® Atom S1200 processors and for supporting web-hosting workloads (see also on right part of the image below). Currently there is also a storage cartridge (on the left of the below image) and a multinode for highly dense computing solutions (see in the hands of presenter on the image below). Many more are to come later on.
With up to 180 servers inside the box (45 now), it was necessary to integrate network switching. There are two sockets (see left) for the network switch, so you can configure for redundancy. The downlink module, which talks to the cartridges, is on the left of the image below. This module is paired with an uplink module (shown in the middle of the image below as taken out, and then with the uplink module on the right) that sits in the back of the server. More options will become available.
– Enterprise Information Library for Moonshot
– HP Moonshot System [Technical white paper from HP, April 5, 2013] from which I will include here the following excerpts for more information:
HP Moonshot 1500 Chassis
The HP Moonshot 1500 Chassis is a 4.3U form factor and slides out of the rack on a set of rails like a file cabinet drawer. It supports 45 HP ProLiant Moonshot Servers and an HP Moonshot-45G Switch Module that are serviceable from the top.
It is a modern architecture engineered for the new style of IT that can support server cartridges, server-and-storage cartridges, storage-only cartridges and a range of x86, ARM or accelerator-based processor technologies.
As an initial offering, the HP Moonshot 1500 Chassis is fully populated with 45 HP ProLiant Moonshot Servers and one HP Moonshot-45G Switch Module; a second HP Moonshot-45G Switch Module can be purchased as an option. Future offerings will include quad server cartridges, resulting in up to 180 servers per chassis. The 4.3U form factor allows for 10 chassis per rack, which with the quad server cartridge amounts to 1,800 servers in a single rack.
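The density claims follow directly from the chassis numbers in the white paper; a quick sanity check:

```python
# Server-density arithmetic using the white paper's own figures.
SERVERS_PER_CARTRIDGE_NOW = 1     # current single-node cartridge
SERVERS_PER_CARTRIDGE_QUAD = 4    # future quad-server cartridge
CARTRIDGES_PER_CHASSIS = 45
CHASSIS_PER_RACK = 10             # 4.3U chassis per rack

per_chassis_now  = CARTRIDGES_PER_CHASSIS * SERVERS_PER_CARTRIDGE_NOW   # 45
per_chassis_quad = CARTRIDGES_PER_CHASSIS * SERVERS_PER_CARTRIDGE_QUAD  # 180
per_rack_quad    = per_chassis_quad * CHASSIS_PER_RACK                  # 1800

print(per_chassis_now, per_chassis_quad, per_rack_quad)  # 45 180 1800
```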
The Moonshot 1500 Chassis simplifies management with four iLO processors that share management responsibility for the 45 servers, power, cooling, and switches.
Highly flexible fabric
Built into the HP Moonshot 1500 Chassis architecture are four separate and independent fabrics that support a range of current and future capabilities:
• Network fabric
• Storage fabric
• Management fabric
• Integrated cluster fabric
The Network fabric provides the primary external communication path for the HP Moonshot 1500 Chassis.
For communication within the chassis, the network switch has four communication channels to each of the 45 servers. Each channel supports a 1-GbE or 10-GbE interface. Each HP Moonshot-45G Switch Module supports 6 channels of 10GbE interface to the HP Moonshot-6SFP network uplink modules located in the rear of the chassis.
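These channel counts let us sanity-check the fabric's capacity per switch module. The sketch below assumes each of the 45 servers drives a single active 1GbE channel to the switch (as with the initial Atom-based cartridge); other cartridge configurations would change the numbers:

```python
# Hedged capacity check for one HP Moonshot-45G Switch Module:
# 45 servers downlink vs. six 10GbE uplink channels (per the white paper).
# The one-active-1GbE-channel-per-server assumption is illustrative.

SERVERS = 45
DOWNLINK_GBPS_PER_SERVER = 1.0     # one active 1GbE channel (assumption)
UPLINK_CHANNELS, UPLINK_GBPS = 6, 10.0

downlink_total = SERVERS * DOWNLINK_GBPS_PER_SERVER   # 45 Gbit/s
uplink_total = UPLINK_CHANNELS * UPLINK_GBPS          # 60 Gbit/s
oversubscription = downlink_total / uplink_total      # 0.75

print(downlink_total, uplink_total, oversubscription)
```

Under this assumption the uplinks (60 Gbit/s) exceed the aggregate downlink demand (45 Gbit/s), so the initial configuration is not oversubscribed.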
The Storage fabric provides dedicated SAS lanes between server and storage cartridges. We utilize HP Smart Storage firmware found in the ProLiant family of servers to enable multiple core to spindle ratios for specific solutions. A hard drive can be shared among multiple server cartridges to enable low cost boot, logging, or attached to a node to provide storage expansion.
The current HP Moonshot System configuration targets light scale-out applications. To provide the best operating environment for these applications, it includes HP ProLiant Moonshot Servers with a hard disk drive (HDD) as part of the server architecture. Shared storage is not an advantage for these environments. Future releases of the servers that target different solutions will take advantage of the storage fabric.
We utilize the Integrated Lights-Out (iLO) application-specific integrated circuit (ASIC) standard in the HP ProLiant family of servers to provide the innovative management features in the HP Moonshot System. To handle the range of extreme low energy processors we provide a device neutral approach to management, which can be easily consumed by data center operators to deploy at scale.
The Management fabric enables management of the HP Moonshot System components as one platform with a dedicated iLO network. Benefits of the management fabric include:
• The iLO Chassis Manager aggregates data to a common set of management interfaces.
• The HP Moonshot 1500 Chassis has a single Ethernet port gateway that is the single point of access for the Moonshot Chassis manager.
• Intelligent Platform Management Interface (IPMI) and Serial Console for each server
• True out-of-band firmware update services
• SL-APM Rack Management spans rack or multiple racks
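Since the management fabric exposes IPMI for each server through a single gateway, a data center operator could script against it with a standard tool such as ipmitool. The helper below only composes command lines; the gateway address, credentials, and per-node target addressing are hypothetical placeholders, as the real bridging parameters come from HP's iLO Chassis Manager documentation:

```python
# Illustrative sketch: compose ipmitool invocations to query the power
# state of Moonshot cartridge slots via the chassis's single management
# gateway. GATEWAY, USER, and the 0x80+node target addressing are
# assumptions for illustration, not documented HP values.

GATEWAY = "10.0.0.100"   # hypothetical iLO Chassis Manager address
USER = "admin"           # hypothetical management user

def power_status_cmd(node: int) -> list[str]:
    """Build an ipmitool command line for one cartridge slot (1-45)."""
    assert 1 <= node <= 45, "Moonshot 1500 holds 45 single-node cartridges"
    return [
        "ipmitool", "-I", "lanplus",   # IPMI-over-LAN v2.0
        "-H", GATEWAY, "-U", USER,
        "-t", str(0x80 + node),        # hypothetical bridged target address
        "chassis", "power", "status",
    ]

print(" ".join(power_status_cmd(1)))
```

The same pattern would extend to `power on`/`power off` or `sol activate` for the per-server serial console mentioned above.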
Integrated Cluster fabric
The Integrated Cluster fabric provides a high-speed interface among future server cartridge technologies that will benefit from high bandwidth node-to-node communication. North, south, east, and west lanes are provided between individual server cartridges.
The current HP ProLiant Moonshot Server targets light scale-out applications. These applications do not benefit from node-to-node communications, so the Integrated Cluster fabric is not utilized. Future releases of cartridges that target workloads requiring low-latency interconnects will take advantage of the Integrated Cluster fabric.
HP ProLiant Moonshot Server
HP will bring a growing library of cartridges, utilizing cutting-edge technology from industry leading partners. Each server will target specific solutions that support emerging Web, Cloud, and Massive-Scale Environments, as well as Analytics and Telecommunications. We are continuing server development for other applications, including Big Data, High-Performance Computing, Gaming, Financial Services, Genomics, Facial Recognition, Video Analysis, and more.
Figure 4. Cartridges target specific solutions
The first server cartridge now available is the HP ProLiant Moonshot Server, which includes the Intel® Atom Processor S1260. This is a low-power processor that is right-sized for light workloads. It has dedicated memory and storage, with discrete resources. This server design is ideal for light scale-out applications, which require relatively little processing but moderately high I/O and include environments that perform the following functions:
• Dedicated web hosting
• Simple content delivery
The HP ProLiant Moonshot Server can be hot-plugged in the HP Moonshot 1500 Chassis. If service is necessary, a server can be removed without affecting the other servers in the chassis. Table 1 defines the HP ProLiant Moonshot Server specifications.
Table 1. HP ProLiant Moonshot Server specifications
One Intel® Atom Processor S1260
8 GB DDR3 ECC 1333 MHz
Integrated dual-port 1Gb Ethernet NIC
500 GB or 1 TB HDD or SSD, non-hot-plug, small form factor
• Canonical Ubuntu 12.04
• Red Hat Enterprise Linux 6.4
• SUSE Linux Enterprise Server 11 SP2
With that, see HP CEO Seeks Turnaround Unveiling ‘Moonshot’ Super-Server: Tech [Bloomberg, April 2013] as well as HP Moonshot: Say Goodbye to the Vanilla Server [Forbes, April 8, 2013]. HP, however, has its eye much more on the ARM-based Moonshot servers expected to come later, because of the trends reflected on the left (source: HP). The software defined server concept is very general.
A number of quite different server cartridges are expected to come, each specialised by the server software installed on it. Typical specialised servers, for example, are the ones CyWee from Taiwan is working on with Texas Instruments’ new KeyStone II architecture, which features both ARM Cortex-A15 CPU cores and TI’s own C66x DSP cores for a mixture of up to 32 DSP and RISC cores in TI’s new 66AK2Hx family of SoCs, the first of which is the TMS320TCI6636, implemented in 28nm foundry technology. Based on that, CyWee will deliver multimedia Moonshot server cartridges for cloud gaming, virtual office, video conferencing and remote education (see even the first KeyStone announcement). This CyWee involvement in the HP Moonshot effort is part of HP’s Pathfinder Partner Program, which Texas Instruments also joined recently to exploit a larger opportunity as:
TI’s 66AK2Hx family and its integrated C66x multicore DSPs are applicable for workloads ranging from high performance computing, media processing, video conferencing, off-line image processing and analytics, video recorders (DVR/NVR), gaming and virtual desktop infrastructure to medical imaging.
But Intel was able to win the central piece of the Moonshot System launch (an effort originally initiated by HP as the “Moonshot Project” in November 2011 to disrupt server power and TCO, actually with a Calxeda board used for research and development with other partners), at least as it was productized just two days ago:
Raejeanne Skillern from Intel – HP Moonshot 2013 – theCUBE [siliconangle YouTube channel]
However, ARM was not left out either; it was just relegated in the beginning to highly advanced and/or specialised server roles with its SoC partners, coming later in the year:
- Applied Micro, with a networking and connectivity background, now has the X-Gene ARM 64-bit Server on a Chip platform, which features 8 high-performance ARM 64-bit cores developed from scratch under an architecture license (i.e. not ARM’s own Cortex-A50 series core), clocked at up to 2.4GHz, plus 4 smaller cores for network and storage offloads (see AppliedMicro on the X-Gene ARM Server Platform and HP Moonshot [SiliconANGLE blog, April 9, 2013]). Sample reference boards were shipped to key customers in March (see Applied Micro’s cloud chip is an ARM-based, switch-killing machine [GigaOM, April 3, 2013]). The latest X-Gene Arrives in Silicon [Open Compute Summit Winter 2013 presentation, Jan 16, 2013] video has the most recent strategic details (up to 2014, with a FinFET implementation of “Software defined X-Gene based data center components”, presumably at 16nm). Here I will include a more product-oriented overview video, AppliedMicro Shows ARM 64-bit X-Gene Server on a Chip Hardware and Software [Charbax YouTube channel, Nov 3, 2012]:
Vinay Ravuri, Vice President and General Manager, Server Products at AppliedMicro gives an update on the 64bit ARM X-Gene Server Platform. At ARM Techcon 2012, AppliedMicro, ARM and several open-source software providers gave updates on their support of the ARM 64-bit X-Gene Server on a Chip Platform.
More information: A 2013 Resolution for the Data Center [Applied Micro on Smart Connected Devices blog from ARM, Feb 4, 2013] about “plans from Oracle, Red Hat, Citrix and Cloudera to support this revolutionary architecture … Dell’s “Iron” server concept with X-Gene … an X-Gene based ARM server managed by the Dell DCS Software suite …” etc.
- Texas Instruments with digital signal processing (DSP) background, as it was already presented above.
- Calxeda, with a background in integrated storage fabric and Ethernet switching, with details coming later, etc.:
This is what is emphasized by Lakshmi Mandyam from ARM – HP Moonshot 2013 – theCUBE [siliconangle YouTube channel, April 8, 2013]
She also mentions in the talk the achievements that could put ARM and its SoC partners into a role like the one Intel now has with its general Atom S1200-based server cartridge product fitting into the Moonshot system. Perspective information on that is already available on my ‘Experiencing the Cloud’ blog here:
– The state of big.LITTLE processing [April 7, 2013]
– The future of mobile gaming at GDC 2013 and elsewhere [April 6, 2013]
– TSMC’s 16nm FinFET process to be further optimised with Imagination’s PowerVR Series6 GPUs and Cadence design infrastructure [April 8, 2013]
– With 28nm non-exclusive in 2013 TSMC tested first tape-out of an ARM Cortex™-A57 processor on 16nm FinFET process technology [April 3, 2013]
The absence of Microsoft is even more interesting as AMD is also on this Moonshot bandwagon: Suresh Gopalakrishnan from AMD – HP Moonshot 2013 – theCUBE [siliconangle YouTube channel, April 8, 2013]
already showing a Moonshot-fitting server cartridge with AMD’s four next-generation SoCs (while Intel’s already-productized cartridge is not yet at an SoC level). We know from CES 2013 that AMD Unveils Innovative New APUs and SoCs that Give Consumers a More Exciting and Immersive Experience [press release, Jan 7, 2013] with the:
“Temash” … elite low-power mobility processor for Windows 8 tablets and hybrids … to be the highest-performance SoC for tablets in the market, with 100 percent more graphics processing performance2 than its predecessor (codenamed “Hondo.”)
“Kabini” [SoC which] targets ultrathin notebooks with exceptional battery life and offers impressive levels of performance in both dual- and quad-core options. “Kabini” is expected to deliver an increase of more than 50 percent in performance3 over the previous generation of AMD essential computing APUs (codenamed “Brazos 2.0.”)
Both APUs are scheduled to ship in the first half of 2013
so AMD is really close to a server SoC to be delivered soon as well.
The “more information” sections which follow here are:
- The Announcement
- Software Partners
- Hardware Partners
1. The Announcement
HP Launches New Class of Server for Social, Mobile, Cloud and Big Data [press release, April 8, 2013]
Software defined servers designed for the data center and built for the planet
… Built from HP’s industry-leading server intellectual property (IP) and 10 years of extensive research from HP Labs, the company’s central research arm, HP Moonshot delivers a significant improvement in energy, space, cost and simplicity. …
The HP Moonshot system consists of the HP Moonshot 1500 enclosure and application-optimized HP ProLiant Moonshot servers. These servers will offer processors from multiple HP partners, each targeting a specific workload.
With support for up to 1,800 servers per rack, HP Moonshot servers occupy one-eighth of the space required by traditional servers. This offers a compelling solution to the problem of physical data center space.(3) Each chassis shares traditional components including the fabric, HP Integrated Lights-Out (iLO) management, power supply and cooling fans. These shared components reduce complexity as well as add to the reduction in energy use and space.
The first HP ProLiant Moonshot server is available with the Intel® Atom S1200 processor and supports web-hosting workloads. HP Moonshot 1500, a 4.3U server enclosure, is fully equipped with 45 Intel-based servers, one network switch and supporting components.
HP also announced a comprehensive roadmap of workload-optimized HP ProLiant Moonshot servers incorporating processors from a broad ecosystem of HP partners including AMD, AppliedMicro, Calxeda, Intel and Texas Instruments Incorporated.
Scheduled to be released in the second half of 2013, the new HP ProLiant Moonshot servers will support emerging web, cloud and massive scale environments, as well as analytics and telecommunications. Future servers will be delivered for big data, high-performance computing, gaming, financial services, genomics, facial recognition, video analysis and other applications.
The HP Moonshot system is immediately available in the United States and Canada and will be available in Europe, Asia and Latin America beginning next month.
Pricing begins at $61,875 for the enclosure, 45 HP ProLiant Moonshot servers and an integrated switch.(4)
(4) Estimated U.S. street prices. Actual prices may vary.
– HP Moonshot System [Family data sheet, April 8, 2013]
– HP Moonshot – The Disruption [HP Event registration page at ‘thedisruption.com’ with embedded video gallery, press kit and more, originally created on April 12, 2010, obviously updated for the April 8, 2013 event]
Alert for Microsoft:
[4:42] We defined the industry standard server market [reference to HP’s Compaq heritage] and we’ve been the leader for years. With Moonshot we redefine the market and take it to the next level. [4:53]
Alert for Microsoft: how and when will you have a system like this with all the bells and whistles as presented above, as well as the rich ecosystem of hardware and software partners given below?
Alert for Microsoft:
[0:11] In HP’s approach, Linaro is about forming an enterprise group. What they were hoping for, and what has happened, is getting a bunch of companies together who are interested in taking the ARM architecture into the server space. [0:26]
Canonical joins Linaro Enterprise Group (LEG) and commits Ubuntu Hyperscale Availability for ARM V8 in 2013 [press release, Nov 1, 2012]
- Canonical continues its leadership of commercial deployment for ARM-based servers through membership of Linaro Enterprise Group (LEG)
- Ubuntu, the only commercially supported OS for ARM v7 today, commits to support ARM v8 server next year
- Ubuntu extends its position as the natural choice for hyperscale server computing with long term support
… “Canonical has been supporting our work optimising and consolidating the Linux kernel since our founding in June 2010”, said George Grey, CEO of Linaro. “We’re very happy to welcome them as a member of the Linaro Enterprise Group, building on our relationship to help accelerate development of the ARM server software ecosystem.” …
… “Calxeda has been thrilled with Canonical’s leadership in developing the ARM ecosystem”, said Karl Freund, VP marketing at Calxeda. “These guys get it. They are driving hard and fast, already delivering enterprise-class code and support for Calxeda’s 32-bit product today to our mutual clients. Working together in LEG will enable us to continue to build on the momentum we have already created.” …
What Canonical is saying about HP Moonshot [HewlettPackardVideos YouTube channel, April 8, 2013]
HP Moonshot and Ubuntu work together [Ubuntu partner site, April 9, 2013]
… Ubuntu, as the lead operating system platform for x86 and ARM-based HP Moonshot Systems, featured extensively at the launch of the program in April 2013. …
… Ubuntu Server is the only OS fully operational today across HP Moonshot x86 and ARM servers, launched in April 2013.
Ubuntu is recognised as the leader in scale out and Hyperscale. Together, Canonical and HP are delivering massive reductions in data-center energy, space and costs. …
“Canonical has been working with HP for the past two years
on HP Moonshot, and with Ubuntu, customers can achieve higher performance with greater manageability across both x86 and ARM chip sets” Paul Santeler, VP & GM, Hyperscale Business Unit, HP
Ubuntu & HP’s project Moonshot [Canonical blog, Nov 2, 2011]
Today HP announced Project Moonshot – a programme to accelerate the use of low power processors in the data centre.
The three elements of the announcement are the launch of Redstone – a development platform that harnesses low-power processors (both ARM & x86), the opening of the HP Discovery lab in Houston and the Pathfinder partnership programme.
Canonical is delighted to be involved in all three elements of HP’s Moonshot programme to reduce both power and complexity in data centres.
The HP Redstone platform unveiled in Palo Alto showcases HP’s thinking around highly federated environments and Calxeda’s EnergyCore ARM processors. The Calxeda system on chip (SoC) design is powered by Calxeda’s own ARM based processor and combines mobile phone like power consumption with the attributes required to run a tangible proportion of hyperscale data centre workloads.
The promise of server grade SoCs running at less than 5W and achieving per rack density of 2800+ nodes is impressive, but what about the software stacks that are used to run the web and analyse big data – when will they be ready for this new architecture?
Ubuntu Server is increasingly the operating system of choice for web, big data and cloud infrastructure workloads. Films like Avatar are rendered on Ubuntu, Hadoop is run on it and companies like Rackspace and HP are using Ubuntu Server as the foundation of their public cloud offerings.
The good news is that Canonical has been working with ARM and Calxeda for several years now and we released the first version of Ubuntu Server ported for ARM Cortex A9 class processors last month.
The Ubuntu 11.10 release (download) is a functioning port, and over the next six months we will be working hard to benchmark and optimize Ubuntu Server and the workloads that our users prioritize on ARM. This work, by us and by upstream open source projects, is going to be accelerated by today’s announcement and access to hardware in the HP Discovery lab.
As HP stated today, this is the beginning of a journey to re-invent a power efficient and less complex data center. We look forward to working with HP and Calxeda on that journey.
The biggest enterprise alert for Microsoft, given what was discussed in Will Microsoft Stand Out In the Big Data Fray? [Redmondmag.com, March 22, 2013]: What NuoDB is saying about HP Moonshot [HewlettPackardVideos YouTube channel, April 9, 2013], especially as it is a brand new offering; see NuoDB Announces General Availability of Industry’s First & Only Cloud Data Management System at Live-Streamed Event [press release, Jan 15, 2013], now available in archive at this link: http://go.nuodb.com/cdms-2013-register-e.html
Extreme density on HP’s Project Moonshot [NuoDB Techblog, April 9, 2013]
A few months ago HP came to us with something very cool. It’s called Project Moonshot, and it’s a new way of thinking about how you design infrastructure. Essentially, it’s a composable system that gives you serious flexibility and density.
A single Moonshot System is 4.3u tall and holds 45 independent servers connected to each other via 1-Gig Ethernet. There’s a 10-Gig Ethernet interface to the system as a whole, and management interfaces for the system and each individual server. The long-term design is to have servers that provide specific capabilities (compute, storage, memory, etc.) and can scale to up to 180 nodes in a single 4.3u chassis.
The initial system, announced this week, comes with a single server configuration: an Intel Atom S1260 processor, 8 Gigabytes of memory and either a 200GB SSD or a 500GB HDD. On its own, that’s not a powerful server, but when you put 45 of these into a 4.3 rack-unit space you get something in aggregate that has a lot of capacity while still drawing very little power (see below). The challenge, then, is how to really take advantage of this collection of servers.
NuoDB on Project Moonshot: Density and Efficiency
We’ve shown how NuoDB can scale a single database to large transaction rates. For this new system, however, we decided to try a different approach. Rather than make a single database scale to large volume, we decided to see how many individual, smaller databases we could support at the same time. Essentially, could we take a fully-configured HP Project Moonshot System and turn it into a high-density, low-power, easy-to-manage hosting appliance?
To put this in context, think about a web site that hosts blogs. Typically, each blog is going to have a single database supporting it (just like this blog you’re reading). The problem is that while a few blogs will be active all the time, most of them see relatively light traffic. This is known as a long-tail pattern. Still, because the blogs always need to be available, the backing databases always need to be running too.
This leads to a design trade-off. Do you map the blogs to a single database (breaking isolation and making management harder) or somehow try to juggle multiple database instances (which is hard to automate, expensive in resource-usage and makes migration difficult)? And what happens when a blog suddenly takes off in popularity? In other words, how do you make it easy to manage the databases and make resource-utilization as efficient as possible so you don’t over-spend on hardware?
As I’ve discussed on this blog NuoDB is a multi-tenant system that manages individual databases dynamically and efficiently. That should mean that we’re a perfect fit for this very cool (pun intended) new system from HP.
After some initial profiling on a single server, we came up with a goal: support 7,200 active databases. You can read all about how we did the math, but essentially this was a balance between available CPU, Memory, Disk and bandwidth. In this case a “database” is a single Transaction Engine and Storage Manager pair, running on one of the 45 available servers.
When we need to start a database, we pick the server that’s least-utilized. We choose this based on local monitoring at each server that is rolled up through the management tier to the Connection Brokers. It’s simple to do given all that NuoDB already provides, and because we know what each server supports it lets us calculate a single capacity percentage.
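The least-utilized placement described above can be sketched roughly as follows. The metric names, the max-of-resources roll-up, and the server list are illustrative assumptions for this post, not NuoDB's actual implementation:

```python
# Hypothetical sketch of capacity-based placement; names and the
# roll-up rule are assumptions, not NuoDB's real management code.

def capacity_pct(cpu, mem, disk, net):
    """Roll per-server metrics into a single utilization percentage.

    Each argument is a 0-100 utilization figure from local monitoring;
    the bottleneck resource defines the server's effective capacity.
    """
    return max(cpu, mem, disk, net)

def pick_server(servers):
    """Pick the least-utilized server to host the next database."""
    return min(servers, key=lambda s: capacity_pct(**s["metrics"]))

servers = [
    {"name": "cartridge-01", "metrics": {"cpu": 70, "mem": 55, "disk": 40, "net": 20}},
    {"name": "cartridge-02", "metrics": {"cpu": 30, "mem": 45, "disk": 35, "net": 15}},
    {"name": "cartridge-03", "metrics": {"cpu": 90, "mem": 60, "disk": 50, "net": 25}},
]

print(pick_server(servers)["name"])  # cartridge-02 (bottleneck 45% vs 70% and 90%)
```

Rolling everything up to one percentage, as the post says, is what makes the placement decision a simple minimum over the fleet.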
It gets better. Because a NuoDB database is made of an agile collection of processes, it’s very inexpensive to start or stop a database. So, in addition to monitoring for server capacity we also watch what’s going on inside each database, and if we think it’s been idle long enough that something else could use the associated resources more effectively we shut it down. In other words, if a database isn’t doing anything active we stop it to make room for other databases.
When an SQL client needs to access that database, we simply re-start it where there are available resources. We call this mechanism hibernating and waking a database. This on-demand resource management means that while there are some number of databases actively running, we can really support a much larger number in total (remember, we’re talking about applications that exhibit a long-tail access pattern). With this capability, our original goal of 7,200 active databases translates into 72,000 total supported databases. On a single 4.3u System.
The final piece we added is what we call database bursting. If a single database gets really popular it will start to take up too many resources on a single server. If you provision another server, separate from the Moonshot System, then we’ll temporarily “burst” a high-activity database to that new host until activity dies down. It’s automatic, quick and gives you on-demand capacity support when something gets suddenly hot.
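A minimal sketch of the hibernate/wake/burst lifecycle just described; the state names, thresholds, and structure are assumptions for illustration (NuoDB's real management tier is of course far more involved):

```python
# Illustrative model of hibernating, waking, and bursting a database.
# Thresholds below are invented for the example, not NuoDB defaults.

IDLE_HIBERNATE_SECS = 300   # hibernate after 5 idle minutes (assumed)
BURST_UTILIZATION = 80      # burst when the host passes 80% (assumed)

class Database:
    def __init__(self, name):
        self.name = name
        self.state = "hibernated"   # hibernated | running | burst
        self.last_access = 0.0

    def on_client_connect(self, now, host_utilization):
        """An SQL client arrived: wake on demand, burst if the host is hot."""
        self.last_access = now
        if self.state == "hibernated":
            self.state = "running"          # re-start where resources allow
        if host_utilization > BURST_UTILIZATION:
            self.state = "burst"            # move to a standby host

    def on_idle_check(self, now):
        """Periodic sweep: free resources held by idle databases."""
        if self.state != "hibernated" and now - self.last_access > IDLE_HIBERNATE_SECS:
            self.state = "hibernated"
```

For example, a blog database wakes when a reader hits the site, falls back to hibernation after the idle window, and bursts to a separate host only if traffic spikes while its home server is already busy.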
I’m not going to repeat too much here about how we drove our tests. That’s already covered in the discussion on how we’re trying to design a new kind of benchmark focused on density and efficiency. You should go check that out … it’s pretty neat. Suffice it say, the really critical thing to us in all of this was that we were demonstrating something that solves a real-world problem under real-world load.
You should also go read about how we setup and ran on a Moonshot System. The bottom-line is that the system worked just like you’d expect, and gave us the kinds of management and monitoring features to go beyond basic load testing.
We were really lucky to be given access to a full Moonshot System. It gave us a chance to test out our ideas, and we actually were able to do better than our target. You can see this in the view from our management interface running against a real system under our benchmark load. You can see there that when we hit 7,200 active databases we were only at about 70% utilization, so there was a lot more room to grow. Huge thanks to HP for giving us time on a real Moonshot System to see all those ideas work!
Something that’s easy to lose track of in all this discussion is the question of power. Part of the value proposition from Project Moonshot is in energy efficiency, and we saw that in spades. Under load a single server only draws 18 Watts, and the system infrastructure is closer to 250 Watts. Taken together, that’s a seriously dense system that is using very little energy for each database.
We were psyched to have the chance to test on a Moonshot System. It gave us the chance to prove out ideas around automation and efficiency that we’ll be folding into NuoDB over the next few releases. It also gave us the perfect platform to put our architecture through its paces and validate a lot about the flexibility of our core architecture.
We’re also seriously impressed by what we experienced from Project Moonshot itself. We were able to create something self-contained and easy to manage that solves a real-world problem. Couple that with the fact that a Moonshot System draws so little power, the Total Cost of Ownership is impressively low. That’s probably the last point to make about all this: the combination of our two technologies gave us something where we could talk concretely about capacity and TCO, something that’s usually hard to do in such clear terms.
In case it’s not obvious, we’re excited. We’ve already been posting this week about some ideas that came out of this work, and we’ll keep posting as the week goes on. Look for the moonshot tag and please follow-up with comments if you’re curious about anything specific and would like to hear more!
Project Moonshot by the Numbers [NuoDB Techblog, April 9, 2013]
To really understand the value from HP Project Moonshot you need to think beyond the list price of one system and focus instead on the Total Cost of Ownership. Figuring out the TCO for a server running arbitrary software is often a hard (and thankless?) task, so one of the things we’ve tried to do is not just demonstrate great technology but something that naturally lets you think about TCO in a simple way. We think the final metrics are pretty simple, but to get there requires a little math.
If you’re a CIO, and just want to know the bottom line, then we’ll ruin the suspense and cut to the chase. It will cost you about $70,500 up-front, $1,800 in your first year’s electricity bills and take 8.3 rack-units to support the web-front end and database back-end for 72,000 blogs under real-world load.
Cost of a Single Database
Recall that we set the goal at 72,000 databases within a single system. At launch the list price for a fully-configured Moonshot System is around $60,000, so we start out at 83 cents per database. In practice we’re seeing much higher capacity in our tests, but let’s start with this conservative number.
Now consider the power used by the system. From what we’ve measured through the iLO interfaces a single server draws no more than 18 Watts at peak load (measured against CPU and IO activity). The System itself (fans, switches etc.) draws around 250 Watts in our tests. That means that under full load each database is drawing about .015 Watts.
NuoDB is a commercial software offering, which means that you pay up-front to deploy the software (and get support as part of that fee). For anyone who wants to run a Moonshot System in production as a super-dense NuoDB appliance we’ll offer you a flat-rate license.
Put together, we can say that the cost per database-watt is 1.22 cents. That’s on a 4.3 rack-unit system. Awesome.
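The per-database arithmetic above can be checked directly from the figures quoted in the post:

```python
# Reproducing the per-database cost and power arithmetic quoted above.
system_price = 60_000          # fully configured Moonshot System (list, USD)
databases    = 72_000          # total supported databases

cost_per_db = system_price / databases
print(f"${cost_per_db:.2f} per database")          # $0.83

server_watts  = 18             # peak draw per server, measured via iLO
chassis_watts = 250            # fans, switches, shared infrastructure
total_watts   = 45 * server_watts + chassis_watts  # 1060 W for a full chassis

watts_per_db = total_watts / databases
print(f"{watts_per_db:.4f} W per database")        # ~0.0147 W, i.e. about .015 W
```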
Quantify the Supported Load
As we discussed in our post on benchmarking, we’re trying to test under real-world load. As a simple starting-point we chose a profile based on WordPress because it’s fairly ubiquitous and has somewhat serious transactional requirements. In our benchmarking discussion we explain that a typical application action (post, read, comment) does around 20 SQL operations.
Given 72,000 databases, most of which are fairly inactive, on average we’ll say that each database gets about 250 hits a day (generous by most reports I’ve seen). That’s 18,000,000 hits a day, or 208 hits per second. 4,166 SQL statements a second isn’t much for a single database, but it’s pretty significant given that we’re spreading it across many databases, some of which might have to be “woken” on-demand.
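The aggregate-load numbers follow straightforwardly from the per-database assumptions:

```python
# Checking the aggregate-load arithmetic from the paragraph above.
databases    = 72_000
hits_per_day = 250             # average per database (long-tail assumption)
sql_per_hit  = 20              # SQL operations per application action

total_hits_per_day = databases * hits_per_day          # 18,000,000
hits_per_second    = total_hits_per_day / 86_400       # ~208
sql_per_second     = hits_per_second * sql_per_hit     # ~4,166

print(int(hits_per_second), int(sql_per_second))       # 208 4166
```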
HP was generous enough not only to give us time on a Moonshot System but also access to some co-located servers for driving our load tests. In this case, 16 lower-powered ARM-based Calxeda systems that all went through the same 1-Gig ethernet connection to our Moonshot System. These came from HP’s Discovery Lab; check out our post about working with the Moonshot System for more details.
From these load-drivers we were able to run our benchmark application with up to 16 threads per server, simulating 128 simultaneous clients. In this case a typical “client” would be a web server trying to respond to a web client request. We averaged around 320 hits per-second, well above the target of 208. From what we could observe, we expect that given more capable network and client drivers we would be able to get 3 or 4 times that rate easily.
We have the cost of the Moonshot System itself. We also know that it can support expected load from a fairly small collection of low-end servers. In our own labs we use systems that cost around $10,000, fit in 3 rack-units and would be able to drive at least the same kind of load we’re citing here. Add a single switch at around $500 and you have a full system ready to serve blogs. That’s $70,500 total in 8.3 rack units, still under $1 per database.
I don’t know what power costs you have in your data center, but I’ve seen numbers ranging from 2.5 to 25 cents per Kilowatt-Hour. In our tests, where we saw .015 Watts per-database, if you assume an average rate of 13.75 cents per KwH that comes out to .00020625 cents per-hour per-database in energy costs. In one year, with no down-time, that would cost you $1,276.77 in total electricity fees.
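Working the electricity numbers through at the assumed mid-range rate of 13.75 cents per kWh reproduces the $1,276.77 figure from the measured chassis draw:

```python
# The yearly electricity figure above, derived from the measured draw.
total_watts    = 45 * 18 + 250          # 1060 W for a loaded chassis
rate_cents_kwh = 13.75                  # assumed mid-range utility rate

cents_per_hour   = (total_watts / 1000) * rate_cents_kwh   # 14.575 cents/h
dollars_per_year = cents_per_hour * 8760 / 100             # no down-time

print(f"${dollars_per_year:.2f}")   # $1276.77
```

Spread across 72,000 databases, that hourly figure matches the roughly .0002 cents per-hour per-database quoted above.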
Just as an aside, according to the New York Times, Facebook uses around 60,000,000 Watts a year!
One of the great things about a Moonshot System is that the 45 servers are already being switched inside the chassis. This means that you don’t need to buy switches & cabling, and you don’t need to allocate all the associated space in your racks. For our systems administrator that alone would make him very happy.
What I haven’t been talking about in all of this are the intangible costs. This is where figuring out TCO becomes harder.
For instance, one of the value-propositions here is that the Moonshot System is a self-contained, automated component. That means that systems administrators are freed up from the tasks of figuring out how to allocate and monitor databases, and how to size the data-center for growth. Database developers can focus more easily on their target applications. CIOs can spend less time staring at spreadsheets … or, at least, can allocate more time to spreadsheets on different topics.
Providing a single number in terms of capacity makes it easy to figure out what you need in your datacenter. When a single server within a Moonshot System fails you can simply replace it, and in the meantime you know that the system will still run smoothly just with slightly lower capacity. From a provisioning point of view, all you need to figure out is where your ceiling is and how much stand-by capacity you need to have at the ready.
NuoDB by its nature is dynamic, even when you’re doing upgrades. This means that you can roll through a running Moonshot System applying patches or new versions with no down-time. I don’t know how you calculate the value in saved cost here, but you probably do!
Comparisons and Planned Optimizations
It’s hard to do an “apples-to-apples” comparison against other database software here. Mostly, this is because other databases aren’t designed to be dynamic enough to support hibernation, bursting and capacity-based automated balancing. So, you can’t really get the same levels of density, and a lot of the “intangible” cost benefits would go away.
Still, to be fair, we tried running MySQL on the same system and under the same benchmarks. We could indeed run 7200 instances, although that was already hitting the upper-bounds of memory/swap. In order to get the same density you would need 10 Moonshot Systems, or you would need larger-powered expensive servers. Either way, the power, density, automation and efficiency savings go out the window, and obviously there’s no support for bursting to more capable systems on-demand.
Unsurprisingly, the response time was faster on-average (about half the time) from MySQL instances. I say “unsurprisingly” for two reasons. First, we tried to use schema/queries directly from WordPress to be fair in our comparison, and these are doing things that are still known to be less-optimized in NuoDB. They’re also in the path of what we’re currently optimizing and expect to be much faster in the near-term.
The second is that NuoDB clients were originally designed assuming longer-running connections (or pooled connections) to databases that always run with security & encryption enabled. We ran all of our tests in our default modes to be fair. That means we’re spending more time on each action setting up & tearing down a connection. We’ve already been working on optimizations here that would shrink the gap pretty substantially.
In the end, however, our response time is still on the order of a few hundred milliseconds worst-case, and is less important than the overall density and efficiency metrics that we proved out. We think the value in terms of ease of use, density, flexibility on load spikes and low-cost speaks for itself. This setup is inexpensive by comparison to deploying multiple servers and supports what we believe is real-world load. Just wait until the next generation of HP Project Moonshot servers roll out and we can start scaling out individual databases at the same time!
– Benchmarking Density & Efficiency [NuoDB Techblog, April 9, 2013]
– Database Hibernation and Bursting [NuoDB Techblog, April 8, 2013]
– An Enterprise Management UI for Project Moonshot [NuoDB Techblog, April 9, 2013]
Regarding the cloud based version of NuoDB see:
– NuoDB Partners with Amazon [press release, March 26, 2013]
– NuoDB Extends Database Leadership in Scalability & Performance on a Private Cloud [press release, March 14, 2013] “… the industry’s first and only patented, elastically scalable Cloud Data Management System (CDMS), announced performance of 1.84 million transactions per second (TPS) running on 32 machines. … With NuoDB Starlings release 1.0.1, available as of March 1, 2013, the company has made advancements in performance and scalability and customers can now experience 26% improvement in TPS per machine.”
– Google Compute Engine: interview with NuoDB [GoogleDevelopers YouTube channel, March 21, 2013]
Actually Calxeda was best to explain the preeminence of software over the SoC itself:
Karl Freund from Calxeda – HP Moonshot 2013 – theCUBE [siliconangle YouTube channel, April 8, 2013], see also HP Moonshot: It’s a lot closer than it looks! [Calxeda’s ‘ARM Servers, Now!’ blog, April 8, 2013]
as well as ending with Calxeda’s very practical, gradual approach to ARM based served market with things like:
[16:03] Our 2nd generation platform called Midway, which will be out later this year [in the 2nd half of the year], that’s probably the target for Big Data. Our current product is great for web serving, it’s great for media serving, it’s great for storage. It doesn’t have enough memory for Big Data … in a large. So we’ll be getting that 2nd generation product out, and that should be a really good Big Data platform. Why? Because it’s low power, it’s low cost, but it’s also got a lot of I/O. Big Data is all about moving a lot of data around. And if you do that more cost effectively you save a lot of money. [16:38]
mentioning also that their strategy is to use standard ARM cores like the Cortex-A57 for their H1 2014 product, and to focus on things like the fabric and the management, which actually allows them to work with a streamlined staff of around 150 people.
Detailed background about Calxeda in a concise form:
– Redeﬁning Datacenter Efficiency: An Overview of Calxeda’s architecture and early performance measurements [Karl Freund, Nov 12, 2012] from where the core info is:
- Founded in 2008
- $103M Funding
- 1st Product Announced with HP, Nov 2011
- Initial Shipments in Q2 2012
- Volume production in Q4 2012
* The power consumed under normal operating conditions
under full application load (ie, 100% CPU utilization)
A small Calxeda Cluster: a Simple Example
• Start with four ServerNodes
• Consumes only 20W total power
• Connected via distributed fabric switches
• Connect up to 4 SATA drives per node
• Then scale this to thousands of ServerNodes
EnergyCard: a Quad-Node Reference Design
- Four-node reference platform from Calxeda
- Available as product and/or design
- Plugs into OEM system board with passive fabric, no additional switch HW
EnergyCard delivers 80Gb Bandwidth to the system board. (8 x 10Gb links)
It is also important to have a look at what were the Open Source Software Packages for Initial Calxeda Shipments [Calxeda’s ‘ARM Servers, Now!’ blog, May 24, 2012]
We are often asked what open-source software packages are available for initial shipments of Calxeda-based servers.
Here’s the current list (changing frequently). Let us know what else you need!
Then Perspectives From Linaro Connect [Calxeda’s ‘ARM Servers, Now!’ blog, March 20, 2013] sheds more light on the recent software alliances which help Calxeda deliver:
– From Larry Wikelius, Co-Founder and VP Ecosystems, Calxeda:
The most recent Linaro Connect (Linaro Connect Asia 2013 – LCA), held in Hong Kong the first week of March, really put a spotlight on the incredible momentum around ARM based technology and products moving into the Data Center. Yes – you read that correctly – the DATA CENTER!
When Linaro was originally launched almost three years ago the focus was exclusively on the mobile and client market – where ARM has been and continues to be dominant. However, as Calxeda has demonstrated, the opportunity for the ARM architecture goes well beyond devices that you carry in your pocket. Calxeda was a key driver in the formation of the Linaro Enterprise Group (LEG), which was publicly launched at the previous Linaro Connect event in Copenhagen in early November, 2012.
LEG has been an exciting development for Linaro and now has 13 member companies that include server vendors such as Calxeda, Linux distribution companies Red Hat and Canonical, OEM representation from HP and even Hyperscale Data Center end user Facebook. There were many sessions throughout the week that focused on Server specific topics such as UEFI, ACPI, Virtualization, Hyperscale Testing with LAVA and Distributed Storage. Calxeda was very active throughout the week with the team participating directly in a number of roadmap definition sessions, presenting on Server RAS and providing guidance in key areas such as application optimization and compiler focus for Servers.
Linaro Connect is proving to be a tremendous catalyst for the growing eco-system around the ARM software community as a whole and the server segment in particular. A great example of this was the keynote presentation given jointly by Mark Heath and Lars Kurth from Citrix on Tuesday morning. Mark is the VP of XenServer at Citrix and Lars is well known in the Open Source community for his work with Xen. The most exciting announcement coming out of Mark’s presentation is that Citrix will be joining Linaro as a member of LEG. Citrix will certainly prove to be another valuable member of the Linaro team, and during the week attendees were able to appreciate how serious Citrix is about supporting ARM servers. The Xen team has not only added full support for ARM V7 systems in the Xen 4.3 release but has also accomplished some very impressive optimizations for the ARM platform. The Xen team has leveraged Device Tree for optimal device discovery. Combined with a number of other code optimizations, they showed a dramatically smaller code base for the ARM platform. We at Calxeda are thrilled to welcome Citrix into LEG!
As an indication of the draw that the Linaro Connect conference is already having on the broader industry, the Open Compute Project (OCP) held its first International Event coincident with LCA at the same venue. The synergy between Linaro and OCP is significant, with the emphasis in both organizations on Open Source development (one software and one hardware) along with the dramatically changing design points for today’s Hyperscale Data Center. In fact the keynote at LCA on Wednesday morning really put a spotlight on how significant this is likely to be. Jason Taylor, Director of Capacity Engineering and Analysis at Facebook, presented on Facebook’s approach to ARM based servers. Facebook’s consumption of Data Center equipment is quite stunning – Jason quoted from Facebook’s 10-Q filed in October 2012, which stated that “The first nine months of 2012 … $1.0 billion for capital expenditures” related to data center equipment and infrastructure. Clearly, with this level of investment, Facebook is extremely motivated to optimize where possible. Jason focused on the strategic opportunity for ARM based servers in a disaggregated Data Center of the future to provide lower cost computing capabilities with much greater flexibility.
Calxeda has been very active in building the Server Eco-System for ARM based servers. This week in Hong Kong really underscored how important that investment has become – not just for Calxeda but for the industry as a whole. Our commitment to Open Source software development in general and Linaro in particular has resulted in a thriving Linux Infrastructure for ARM servers that allows Calxeda to leverage and focus on key differentiation for our end users. The Open Compute Project, which we are an active member in and have contributed to key projects such as the Knockout Storage design as well as the Open Slot Specification, demonstrates how the combination of an Open Source approach for both Software and Hardware can complement each other and can drive Data Center innovation. We are early in this journey but it is very exciting!
Calxeda will continue to invest aggressively in forums and industry groups such as these to drive the ARM based server market. We look forward to continue to work with the incredibly innovative partners that are members in these groups and we are confident that more will join this exciting revolution. If you are interested in more information on these events and activities please reach out to us directly at email@example.com.
The next Linaro Connect is scheduled for early July in Dublin. We expect more exciting events and topics there and hope to see you there!
They also refer on their blog to Mobile, cloud computing spur tripling of micro server shipments this year [IHS iSuppli press release, Feb 6, 2013], which shows the general market situation well into the future:
Driven by booming demand for new data center services for mobile platforms and cloud computing, shipments of micro servers are expected to more than triple this year, according to an IHS iSuppli Compute Platforms Topical Report from information and analytics provider IHS (NYSE: IHS).
Shipments this year of micro servers are forecast to reach 291,000 units, up 230 percent from 88,000 units in 2012. Shipments of micro servers commenced in 2011 with just 19,000 units. However, shipments by the end of 2016 will rise to some 1.2 million units, as shown in the attached figure.
The penetration of micro servers compared to total server shipments amounted to a negligible 0.2 percent in 2011. But by 2016, the machines will claim a penetration rate of more than 10 percent—a stunning fiftyfold jump.
Micro servers are general-purpose computers, housing single or multiple low-power microprocessors and usually consuming less than 45 watts in a single motherboard. The machines employ shared infrastructure such as power, cooling and cabling with other similar devices, allowing for an extremely dense configuration when micro servers are cascaded together.
“Micro servers provide a solution to the challenge of increasing data-center usage driven by mobile platforms,” said Peter Lin, senior analyst for compute platforms at IHS. “With cloud computing and data centers in high demand in order to serve more smartphones, tablets and mobile PCs online, specific aspects of server design are becoming increasingly important, including maintenance, expandability, energy efficiency and low cost. Such factors are among the advantages delivered by micro servers compared to higher-end machines like mainframes, supercomputers and enterprise servers—all of which emphasize performance and reliability instead.”
Server Salad Days
Micro servers are not the only type of server that will experience rapid expansion in 2013 and the years to come. Other high-growth segments of the server market are cloud servers, blade servers and virtualization servers.
The distinction of fastest-growing server segment, however, belongs solely to micro servers.
The compound annual growth rate for micro servers from 2011 to 2016 stands at a remarkable 130 percent—higher than that of the entire server market by a factor of 26. Shipments will rise by double- and even triple-digit percentages for each year during the period.
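The growth figures quoted above can be sanity-checked with a few lines of arithmetic. The sketch below uses only the shipment numbers given in the press release and the standard compound-annual-growth-rate formula; the computed values round to the ~230% 2013 jump and ~130% 2011-2016 CAGR that IHS cites.

```python
# Shipment figures (units) quoted in the IHS iSuppli press release.
shipments = {2011: 19_000, 2012: 88_000, 2013: 291_000, 2016: 1_200_000}

# 2013 year-over-year growth: "up 230 percent from 88,000 units in 2012"
yoy_2013 = shipments[2013] / shipments[2012] - 1  # ≈ 2.31, i.e. ~230%

# 2011-2016 CAGR over 5 periods: (end / start)^(1/years) - 1
cagr = (shipments[2016] / shipments[2011]) ** (1 / 5) - 1  # ≈ 1.29, i.e. ~130%

print(f"2013 YoY growth: {yoy_2013:.0%}")
print(f"2011-2016 CAGR:  {cagr:.0%}")
```

The same arithmetic confirms the penetration claim: 0.2 percent in 2011 rising to more than 10 percent in 2016 is the "fiftyfold jump" the report describes.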
Key Players Stand to Benefit
Given the dazzling outlook for micro servers, makers with strong product portfolios of the machines will be well-positioned during the next five years—as will their component suppliers and contract manufacturers.
A slew of hardware providers are in line to reap benefits, including microprocessor vendors like Intel, ARM and AMD; server original equipment manufacturers such as Dell and Hewlett-Packard; and server original development manufacturers including Taiwanese firms Quanta Computer and Wistron.
Among software providers, the list of potential beneficiaries from the micro server boom extends to Microsoft, Red Hat, Citrix and Oracle. For the group of application or service providers that offer micro servers to the public, entities like Amazon, eBay, Google and Yahoo are foremost.
The most aggressive bid for the micro server space comes from Intel and ARM.
Intel first unveiled the micro server concept and reference design in 2009, ostensibly to block rival ARM from entering the field.
ARM, the leader for many years in the mobile world with smartphone and tablet chips because of the low-power design of its central processing units, has been just as eager to enter the server arena—dominated by x86 chip architecture from the likes of Intel and a third chip player, AMD. ARM faces an uphill battle, as the majority of server software is written for x86 architecture. Shifting from x86 to ARM will also be difficult for legacy products.
ARM, however, is gaining greater support from software and OS vendors, which could potentially put pressure on Intel in the coming years.
Read More > Micro Servers: When Small is the Next Big Thing
Then there are a number of Intel competitive posts on Calxeda’s ‘ARM Servers, Now!’ blog:
– What is a “Server-Class” SOC? [Dec 12, 2012]
– Comparing Calxeda ECX1000 to Intel’s new S1200 Centerton chip [Dec 11, 2012]
which you can also find in my Intel targeting ARM based microservers: the Calxeda case [‘Experiencing the Cloud’ blog, Dec 14, 2012], with significantly wider additional information, up to binary translation from x86 to ARM with Linux
– ARM Powered Servers: 2013 is off to a great start & it is only March! [Smart Connected Devices blog of ARM, March 6, 2013]
– Moonshot – a shot in the ARM for the 21st century data center [Smart Connected Devices blog of ARM, April 9, 2013]
– Are you running out of data center space? It may be time for a new server architecture: HP Moonshot [Hyperscale Computing Blog of HP, April 8, 2013]
– HP Moonshot: the HP Labs team that did some of the groundbreaking research [Innovation @ HP Labs blog of HP, April 9, 2013]
– HP Moonshot: An Accelerator for Hyperscale Workloads [Moor Insights White Paper, April 8, 2013]
– Comparing Pattern Mining on a Billion Records with HP Vertica and Hadoop [HP Vertica blog, April 9, 2013], in which a team of HP Labs researchers shows how the Vertica Analytics Platform can be used to find patterns in a billion records in a couple of minutes, about 9x faster than Hadoop.
PCs and cloud clients are not parts of Hewlett-Packard’s strategy anymore [‘Experiencing the Cloud’, Aug 11, 2011 – Jan 17, 2012] see the Autonomy IDOL related content there
– ENCO Systems Selects HP Autonomy for Audio and Video Processing [HP Autonomy press release, April 8, 2013]
HP Autonomy today announced that ENCO Systems, a global provider of radio automation and live television audio solutions, has selected Autonomy’s Intelligent Data Operating Layer (IDOL) to upgrade ENCO’s latest-generation enCaption product.
ENCO Systems provides live automated captioning solutions to the broadcast industry, leveraging technology to deliver closed captioning by taking live audio data and turning it into text. ENCO Systems is capitalizing on IDOL’s unique ability to understand meaning, concepts and patterns within massive volumes of spoken and visual content to deliver more accurate speech analytics as part of enCaption3.
“Many television stations count on ENCO to provide real-time closed captioning so that all of their viewers get news and information as it happens, regardless of their auditory limitations,” said Ken Frommert, director, Marketing, ENCO Systems. “Autonomy IDOL helps us provide industry-leading automated closed captioning for a fraction of the cost of traditional services.”
enCaption3 is the only fully automated speech recognition-based closed captioning system for live television that does not require speaker training. It gives broadcasters the ability to caption their programming, including breaking news and weather, any time, day or night, since it is always on and always available. enCaption3 provides captioning in near real time (with only a 3 to 6 second delay) in nearly 30 languages.
“Television networks are under increasing pressure to provide real-time closed captioning services; they face fines if they don’t, and their growing and diverse viewers demand it,” said Rohit de Souza, general manager, Power, HP Autonomy. “This is another example of a technology company integrating Autonomy IDOL to create a stronger, faster and more accurate product offering, and demonstrates yet another powerful way in which IDOL can be applied to help organizations succeed in the human information era.”
– Using Big Data to change the game in the Energy industry [Enterprise Services Blog of HP, Oct 24, 2012]
… Tools like HP’s Autonomy that analyzes the unstructured data found in call recordings, survey responses, chat logs, e-mails, social media posts and more. Autonomy’s Intelligent Data Operating Layer (IDOL) technology uses sophisticated pattern-matching techniques and probabilistic modeling to interpret information in much the same way that humans do. …
– Stouffer Egan turns the tables on computers in keynote address at HP Discover [Enterprise Services Blog of HP, June 8, 2012]
For decades now, the human mind has adjusted itself to computers by providing and retrieving structured data in two-dimensional worksheets with constraints on format, data types, lists of values, etc. But this is not the way the human mind has been architected to work. Our minds have the uncanny ability to capture the essence of what is being conveyed by a facial expression in a photograph, the tone of voice or inflection in an audio clip, and the body language in a video. At the HP Discover conference, Autonomy’s VP for the United States, Stouffer Egan, showed the audience how software can begin to do what the human mind has been doing since the dawn of time. In a demonstration where Iron Man came alive out of a two-dimensional photograph, Egan turned the tables on computers. It is about time computers started thinking like us rather than forcing us to think like them.
Egan states that the “I” in IT is where the change is happening. We have a newfound wealth of data through various channels including video, social, click stream, audio, etc. However, data unprocessed without any analysis is just that — raw data. For enterprises to realize business value from this unstructured data, we need tools that can process it across multiple media. Imagine software that recognizes the person in a photograph and searches for a video matching that person. The cover page of a newspaper showing a basketball star doing a slam dunk suddenly turns live, pulling up the video of this superstar’s winning shot in last night’s game. …
2. Software Partners
HP Moonshot is setting the roadmap for next generation data centers by changing the model for density, power, cost and innovation. Ubuntu has been designed to meet the needs of Hyperscale customers and, combined with its management tools, is ideally suited to be the operating system platform for HP Moonshot. Canonical has been working with HP since the beginning of the Moonshot Project, and Ubuntu is the only OS integrated and fully operational across the complete Moonshot System covering x86 and ARM chip technologies.
What Canonical is saying about HP Moonshot
As mobile workstyles become the norm, the scalability needs of today’s applications and devices are increasingly challenging what traditional infrastructures can support. With HP’s Moonshot System, customers will be able to rapidly deploy, scale, and manage any workload with dramatically lower space and energy constraints. The HP Pathfinder Innovation Ecosystem is a prime opportunity for Citrix to help accelerate the development of innovative solutions that will benefit our enterprise cloud, virtualization and mobility customers.
We’re committed to helping enterprises achieve the most from their Big Data initiatives. Our partnership with HP enables joint customers to keep and query their data at scale so they can ask bigger questions and get bigger answers. By using HP’s Moonshot System, our customers can benefit from the improved resource utilization of next generation data center solutions that are workload optimized for specific applications.
Today’s interactive applications are accessed 24×365 by millions of web and mobile users, and the volume and velocity of data they generate is growing at an unprecedented rate. Traditional technologies are hard pressed to keep up with the scalability and performance demands of these new applications. Couchbase NoSQL database technology combined with HP’s Moonshot System is a powerful offering for customers who want to easily develop interactive web and mobile applications and run them reliably at scale.
Our partnership with HP facilitates CyWee’s goal of offering solutions that merge the digital and physical worlds. With TI’s new SoCs, we are one step closer to making this a reality by pushing state-of-the-art video to specialized server environments. Together, CyWee and HP will deliver richer multimedia experiences in a variety of cloud-based markets, including cloud gaming, virtual office, video conferencing and remote education.
HP’s new Moonshot System will enable organizations to increase the energy efficiency of their data centers while reducing costs. Our Cassandra-based database platform provides the massive scalability and multi-datacenter capabilities that are a perfect complement to this initiative, and we are excited to be working with HP to bring this solution to a wide range of customers.
Big data comes in a wide range of formats and types, and is a result of the connected-everything world we live in. Through Project Moonshot, HP has enabled a new class of infrastructure to run workloads, like Apache Hadoop, more efficiently and meet the market demand of more performance for less.
The unprecedented volume and variety of data introduces unique challenges to organizations today… By combining the HP Moonshot system with Autonomy IDOL’s unique ability to understand concepts in information, organizations can dramatically reduce the cost, space, and energy requirements for their big data initiatives, and at the same time gain insights that grow revenue, reduce risk, and increase their overall Return on Information.
Big Data is not just for Big Companies – or Big Servers – anymore – it’s affecting all sectors of the market. At HP Vertica we’re very excited about the work we’ve been doing with the Moonshot team on innovative configurations and types of analytic appliances which will allow us to bring the benefits of real-time Big Data analytics to new segments of the market. The combination of the HP Vertica Analytics Platform and Moonshot is going to be a game-changer for many.
HP worked closely with Linaro to establish the Linaro Enterprise Group (LEG). This will help accelerate the development of the software ecosystem around ARM Powered servers. HP’s Moonshot System is a great platform for innovation – encouraging a wide range of silicon vendors to offer competing ‘plug-and-play’ server solutions, which will give end users maximum choice for all their different workloads.
What Linaro is saying about HP Moonshot [HewlettPackardVideos YouTube channel, April 8, 2013]
Organizations are looking for ways to rapidly deploy, scale, and manage their infrastructure, with an architecture that is optimized for today’s application workloads. HP Moonshot System is an energy efficient, space saving, workload-optimized solution to meet these needs, and HP has partnered with MapR Technologies, a Hadoop technology leader, to accelerate innovation and deployment of Big Data solutions.
NuoDB and HP are shattering the scalability and density barriers of a traditional database server. NuoDB on the HP Moonshot System delivers unparalleled database density, where customers can now run their applications across thousands of databases on a single box, significantly reducing the total cost across hardware, software, and power consumption. The flexible architecture of HP Moonshot coupled with NuoDB’s hyper-pluggable database design and its innovative “database hibernation” technology makes it possible to bring this unprecedented hardware and software combination to market.
What NuoDB is saying about HP Moonshot [HewlettPackardVideos YouTube channel, April 9, 2013]
As the leading solution provider for the hosting market, Parallels is excited to be collaborating in the HP Pathfinder Innovation Ecosystem. The HP Moonshot System in concert with Parallels Plesk Panel and Parallels Containers provides a flexible and efficient solution for cloud computing and hosting.
Red Hat Enterprise Linux on HP’s converged infrastructure means predictability, consistency and stability. Companies around the globe rely on these attributes when deploying applications every day, and our value proposition is just as important in the Hyperscale segment. When customers require a standard operating environment based on Red Hat Enterprise Linux, I believe they will look to the HP Moonshot System as a strong platform for high-density Hyperscale implementations.
What Red Hat is saying about HP Moonshot [HewlettPackardVideos YouTube channel, April 8, 2013]
HP Project Moonshot’s promise of extreme low-energy servers is a game changer, and SUSE is pleased to partner with HP to bring this new innovation to market. For more than twenty years, SUSE has adapted its enterprise-grade Linux operating system to achieve ever-increasing performance needs that succeed both today and tomorrow in areas such as Big Data and cloud computing.
What SUSE is saying about HP Moonshot [HewlettPackardVideos YouTube channel, April 8, 2013]
3. Hardware Partners
AMD is excited to continue our deep collaboration with HP to bring extreme low-energy, ultra dense, specialized server solutions to the market. Both companies share a passion to bring innovative workload optimized solutions to the market, enabling customers to scale-out to new levels within existing energy and space constraints. The new low-power x86 AMD Opteron™ APU is optimized in the HP Moonshot System to dramatically lower TCO in quickly emerging media oriented workloads.
What AMD is saying about HP Moonshot
It is exciting to see HP take the lead in innovating low-energy servers for the cloud. Applied Micro’s ARM 64-bit X-Gene Server on a Chip will enable performance levels seen in today’s deployments while offering higher densities, greatly improved I/O, and substantial reductions in the total cost of ownership. Together, we will unleash innovation unlike anything we’ve seen in the server market for decades.
In the current economic and power realities, today’s server infrastructure cannot meet the needs of the next billion data users, or the evolving needs of currently supported users. Customers need innovative SoC solutions which deliver more integration and optimization than has historically been required by traditional enterprise workloads. HP’s Moonshot System is a departure from the one-size-fits-all approach of the traditional enterprise and embraces a range of ARM partner solutions that address different performance, workload and cost points.
What ARM is saying about HP Moonshot
Calxeda and HP’s new Moonshot System are a powerful combination that sets a new standard for ultra-efficient web and application serving. Fulfilling a journey started together in November 2011, Project Moonshot creates the foundation for the new age of application-specific computing.
What Calxeda is saying about HP Moonshot
HP Moonshot System is a game changer for delivering optimized server solutions. It beautifully balances the need for mixing different processor solutions optimized for different workloads under a standard hardware and software framework. Cavium’s Project Thunder will provide a family of 64-bit ARM v8 processors with dense and scalable server-class performance at extremely attractive power and cost metrics. We are doing this by blending performance and power efficient compute, high performance memory and networking into a single, highly integrated SoC.
What Cavium is saying about HP Moonshot
Intel is proud to deliver the only server class, 64-bit SoC technology that powers the first and only production-shipping HP ProLiant Moonshot Server today. The 64-bit Intel Atom processor S1200 family features extreme low power combined with the required datacenter-class capabilities for lightweight web-scale workloads, such as low-end dedicated hosting and static web serving. In collaboration with HP, we have a strong roadmap of additional server solutions shipping later this year, including Intel’s 2nd-generation 64-bit SoC, “Avoton,” based on leading 22nm manufacturing technology, that will deliver best-in-class energy efficiency and density for the HP Moonshot System.
What Intel is saying about HP Moonshot
What Marvell is saying about HP Moonshot
HP Moonshot System’s high density packaging coupled with integrated network capability provides the perfect platform to enable HP Pathfinder Innovation Ecosystem partners to deliver cutting edge technology to the hyper-scale market. SRC Computers is excited to bring its history of delivering paradigm shifting high-performance, low-power, reconfigurable processors to HP Project Moonshot’s vision of optimizing hardware for maximum application performance at lowest TCO.
What SRC Computers is saying about HP Moonshot
The scalability and high performance at low power offered through HP’s Moonshot System gives customers an unmatched ability to adapt their solutions to the ever-changing and demanding market needs in the high performance computing, cloud computing and communications infrastructure markets. The strong collaboration efforts between HP and TI through the HP Pathfinder Innovation Ecosystem ensure that customers understand and get the most benefit from the processors at a system-level.
What TI is saying about HP Moonshot
Linux client market share gains outside of Android? Or, instead of gains, will it shrink to 5% in the next 3 years?
The Linux Foundation quite proudly referred to ReadWriteMobile: The ‘Year of the Linux Desktop’? That’s So 2012 [Feb 3, 2013]
For those Linux enthusiasts still pining for the mythical “Year of the Linux Desktop,” the wait is over. In fact, it already happened. In 2012 Microsoft’s share of computing devices fell to 20% from a high of 97% as recently as 2000, as a Goldman Sachs report reveals [”Clash of the titans” downloadable from here, dated Dec 7, 2012]. While Apple has taken a big chunk of Microsoft’s Windows lead, it’s actually Google that plays Robin Hood in the operating system market, now claiming 42% of all computing devices with its free “Linux desktop” OS, Android.
Read more at ReadWriteMobile.
from which I will include here the following chart:
on which Goldman Sachs commented:
The compute landscape has undergone a dramatic transformation over the last decade with consumers responsible for the massive market realignment. While PCs were the primary internet connected device in 2000 (139mn shipped that year), today they represent just 29% of all internet connected devices (1.2bn devices to ship in 2012), while smartphones and tablets comprise 66% of the total. Further, although Microsoft was the leading OS provider for compute devices in 2000 at 97% share, today the consumer compute market (1.07bn devices) is led by Android at 42% share, followed by Apple at 24%, Microsoft at 20% and other vendors at 14%.
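The share percentages in the excerpt above can be translated into absolute device counts. The sketch below assumes only the 1.07bn-unit consumer compute market size and the four vendor shares quoted by Goldman Sachs; the per-vendor unit figures are derived, not quoted.

```python
# 2012 consumer compute market size per the Goldman Sachs excerpt above.
total_devices = 1.07e9

# OS vendor shares quoted in the report excerpt.
shares = {"Android": 0.42, "Apple": 0.24, "Microsoft": 0.20, "Other": 0.14}

# The four shares should cover the whole market.
assert abs(sum(shares.values()) - 1.0) < 1e-9

for vendor, share in shares.items():
    # Convert each percentage share into an approximate device count.
    print(f"{vendor:9s} {share:4.0%}  ≈ {share * total_devices / 1e6:5.0f}M devices")
```

Android's 42% share, for instance, works out to roughly 450 million devices of the 1.07 billion total.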
Note from Goldman Sachs: Microsoft has gone from 97 percent share of compute market to 20 percent [The Seattle Times, Dec 7, 2012]:
I asked Goldman Sachs about what happened in the 2004-2005 time frame — as seen in the above chart — that made Apple’s vendor share jump, Microsoft’s share plummet and the “other” category to go from zero to 29 percent. Goldman Sachs replied that it has to do with more mainstream adoption of non-PC consumer computing devices but declined to elaborate beyond that.
Microsoft was put into the “Challenged” category (along with Google BTW) by Goldman Sachs, noting that:
… we estimate that Microsoft would have to sell roughly 5 Windows Phones or roughly two Windows 8 RT tablets to offset the loss of one traditional Windows PC sale, which we estimate has an overall blended selling price of $60 for business and consumer.
but a somewhat more positive than negative outlook was predicted for the company:
… we expect the recent launches of Windows Phone 8 and Windows 8 tablets to help the company reclaim some share in coming years.
Apple, at the same time, was put into the “Beneficiaries” category (along with Facebook and Samsung BTW) by Goldman Sachs for the reason that:
… we believe loyalty to the company’s ecosystem is only increasing and this should translate into continued growth going forward. In particular, we see the potential for Apple to capture additional growth as existing iOS users move to multiple device ownership and as the company penetrates emerging regions with new devices such as the iPad mini and lower priced iPhones. As a result, we believe Apple’s market share in phones has room to rise much further, and that its dominant tablet market share appears to be more resilient than most expect. We expect these factors to continue to drive the stock higher.
This is, however, not going to happen judging from the stock market’s reaction since then, with a 13.7% drop in Apple’s share price vs. that of Dec 7 (the report’s publication date) and a whopping 34.5% drop vs. its last peak on Sept 19, 2012 (at $702.10):
source: Yahoo! Finance
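The two percentage drops cited above imply specific price levels, which the sketch below derives. Only the $702.10 peak and the 34.5% and 13.7% figures come from the text; the computed mid-February and Dec 7 prices are implied values, not independently sourced quotes.

```python
# Figures quoted in the text above.
peak = 702.10            # AAPL's last peak, Sept 19, 2012
drop_vs_peak = 0.345     # 34.5% drop vs. the peak
drop_vs_report = 0.137   # 13.7% drop vs. Dec 7, 2012 (report publication date)

# Price implied at the time of writing (mid-February 2013).
implied_current = peak * (1 - drop_vs_peak)            # ≈ $459.88

# Working backwards, the implied Dec 7 price the 13.7% drop was measured from.
implied_dec7 = implied_current / (1 - drop_vs_report)  # ≈ $532.88

print(f"Implied mid-Feb 2013 price: ${implied_current:.2f}")
print(f"Implied Dec 7, 2012 price:  ${implied_dec7:.2f}")
```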
Why Did $AAPL Stock Go Down After Beating Earnings Estimates And $AMZN Stock Go Up After Missing? [Techcrunch, Jan 29, 2013] had the following explanation:
The moves in different directions for Amazon and Apple have been about expectations and guidance. Wall Street has higher expectations for Apple and ‘different’ expectations for Amazon. Wall Street wants Apple’s ‘gross margins’ to grow. They don’t expect Amazon’s ‘profits’ to grow. It sounds silly, but if Apple has reported lower profits and a huge gross margin increase the stock might have shot up. If Amazon had reported record profits today on decreasing margins, Wall Street might have panicked.
Wall Street has stopped caring about Apple’s profits today. They were displeased with forward guidance. Growth rates have slowed measurably at Apple, which is understandable for a company of its size. Wall Street is worried that growth is slowing and that competition from Google and Samsung is taking a toll. Apple has given Wall Street so many wonderful surprises that magic has become the norm. Now that Apple is boring, they have run for the hills.
That mood didn’t change even after Apple CEO Tim Cook tried to reassure investors at the Goldman Sachs Internet and Technology Conference on Feb 12, just a week ago. Read the Wrap up: Apple CEO Tim Cook’s Goldman Sachs Conference keynote [AppleInsider, Feb 13, 2013], from which I will quote only the most notable excerpts:
Cook went on to say that introducing a “budget device” was not something Apple would be comfortable with, and instead pointed to the strategy seen with the iPhone lineup. In that model, new variants like the iPhone 5 are sold at the highest price while preceding versions like the iPhone 4S and iPhone 4 are sold at discounted rates.
According to Cook, the iPad is “the poster child of the post-PC revolution” and has driven the push to tablets since its introduction in 2010.
While Apple’s tablet has been the downfall for a number of PC alternatives, such as netbooks, the device is also said to be hurting the company’s own Mac computer sales. During the last quarter of 2012, Mac sales dropped 22 percent year-to-year on low demand and supply constraints. Apple’s iPad business, however, grew by nearly 50 percent over the same period.
“The cannibalization question raises its head a lot,” Cook said. “The truth is: we don’t really think about it that much. Our basic belief is: if we don’t cannibalize, someone else will. In the case of iPad particularly, I would argue that the Windows PC market is huge and there’s a lot more there to cannibalize than there is of Mac, or of iPad.”
Cook noted that burgeoning markets like China and Brazil will be major players in future growth, and the company is banking on its ability to draw customers in to the Apple ecosystem with “halo products.”
“Through the years, we’ve found a very clear correlation between people getting in and buying their first Apple product and some percentage of them buying other Apple products.”
At the same conference Microsoft, similarly to Apple, declared a ‘no change’ strategy despite the obvious failure of its Windows 8 and Windows Phone efforts so far. In the No “Plan B” for Microsoft’s mobile ambitions: CFO [Reuters, Feb 13, 2013] report one can read:
“We’re very focused on continuing the success we have with PCs and taking that to tablets and phones,” Microsoft’s Chief Financial Officer Peter Klein said.
“It’s less ‘Plan B’ than how you execute on the current plan,” said Klein. “We aim to evolve this generation of Windows to make sure we have the right set of experiences at the right price points for all customers.”
Gartner estimates that Microsoft sold fewer than 900,000 Surface tablets in the fourth quarter, which is a fraction of the 23 million iPads sold by Apple. Microsoft has not released its own figures but has not disputed Gartner’s.
Windows phones now account for 3 percent of the global smartphone market, Gartner says, which is almost double their share a year ago but way behind Google’s Android with 70 percent and Apple with 21 percent.
To grab more share, Klein said Microsoft was working with hardware makers to make sure Windows software is available on devices ranging from phones to tablets to larger all-in-one PCs.
“It’s probably more nuanced than just you lower prices or raise prices,” said Klein. “It’s less a Plan B and more, how do you tweak your plan, how do you bring these things to market to make sure you have the right offerings at the right price points?”
So the last 3 months went against Goldman Sachs’ November 2012 predictions. The only question that remains now is whether those 3 months brought any changes in the non-Apple and non-Microsoft territories that would call other parts of the Goldman Sachs forecast into question as well.
There were no negative changes, just a strengthening of the already established dominant positions against both Apple and Microsoft:
1. Mainstream tablets 7-inch at US$199, say Taiwan makers [DIGITIMES, Feb 19, 2013]
Google’s Nexus 7 and Amazon’s Kindle Fire HD have reshuffled the global tablet market, and consequently the 7-inch size with a price cap of US$199 has become the mainstream standard for tablets, according to Taiwan-based supply chain makers.
Cumulative sales of the Nexus 7 have reached six million and are expected to reach eight million units before the expected launch of the second-generation model in June 2013, the sources said. The Nexus 7 and Kindle Fire have driven vendors to develop inexpensive 7-inch tablet models instead of 10-inch ones, the sources indicated.
In order to reach US$199, 7-inch tablets are equipped with only basic required functions such as Internet access and video playback, the sources noted. While Google, Amazon, Samsung Electronics and Asustek Computer are competitive at US$199 for 7-inch tablets, white-box or other vendors need to launch 7-inch models at lower prices such as US$149, the sources said. For example, China-based graphics card vendor Galaxy Microsystems has cooperated with Nvidia to launch a 7-inch tablet in the China market at CNY999 (US$160).
2. Digitimes Research: 68.6% of touch panels shipped in 4Q12 from the Greater China area [DIGITIMES, Feb 19, 2013] meaning that in supply chain terms there is a growing concentration on suppliers not only from Greater China but especially from mainland China:
Taiwan- and China-based touch panel makers held a 68.6% global market share for touch panels shipped during the fourth quarter of 2012, according to Digitimes Research.
China-based panel makers saw the biggest share in the handset touch panel market during the fourth quarter due to smartphone demand in China, while Taiwan-based panel makers only held a 27.5% share in the market largely due to lower-than-expected sales of the iPhone 5, said Digitimes Research.
In terms of touch panels used in tablets, Taiwan-based panel makers saw a drop in their global market share to 59.9% during the period largely due to the iPad mini using DITO thin-film type touch screens provided from Japan-based touch panel makers. China-based panel makers meanwhile held 18.6% in the market due to demand for white-box tablets in China, added Digitimes Research.
Meanwhile, Digitimes Research found that Taiwan-based TPK provided 70.9% of all touch panels used in notebook applications in 2012.
3. Touch Panel Market Projected for a 34% Growth in 2013 from 2012 [Displaybank, sent in a newsletter form, Feb 19, 2013] published to promote Touch Panel Market Forecast and Cost/Issue/Industry Analysis for 2013 [Jan 30, 2013]
The touch panel market is growing rapidly due to the increasing sale of smartphones and tablet PCs. The touch panel market size in 2012 was 1.3 billion units, a 39.4% growth over 2011. The market is projected to grow 34% in 2013, growing to more than 1.8 billion units.
Smartphones and tablet PCs, the major applications that use touch panels, are expected to continue to grow at a high rate. In addition, most IT devices that use display panels have either switched to or will soon start using touch panels. Therefore the touch panel market will show double-digit growth annually until 2016, by unit. The market size is expected to reach more than 2.75 billion units by 2016.
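As a quick sanity check, the annual growth rate implied by these projections can be computed from the 2012 and 2016 unit figures above (a minimal sketch; the figures are Displaybank's, the calculation is ours):

```python
def implied_cagr(start_units, end_units, years):
    """Compound annual growth rate between two shipment volumes."""
    return (end_units / start_units) ** (1 / years) - 1

units_2012 = 1.30e9   # touch panels shipped in 2012 (Displaybank)
units_2016 = 2.75e9   # Displaybank's projection for 2016
cagr = implied_cagr(units_2012, units_2016, years=4)
print(f"Implied 2012-2016 CAGR: {cagr:.1%}")  # roughly 21% -- double-digit, as stated
```

The result is consistent with the "double digit growth annually until 2016" claim above.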
With the explosion in the sale of smartphones and tablet PCs during the past few years, our lives have changed dramatically. These devices are now commonplace, and have a huge influence on the IT industry in general. With the introduction of the Windows 8 OS in October 2012, the upsizing of touch panels has begun. The impact of this event on the immediate growth of the touch panel market, and its long-term effect, is so immense that it cannot be estimated at the moment.
The financial crisis that started in 2008 left much of the IT industry hobbling worldwide, yet the touch panel market is enjoying a boom. Many new players are pouring into the industry, and those on the sidelines are waiting for the opportune moment to enter. As more players enter the competitive landscape, touch panel prices are falling rapidly. In addition, the drive to gain competitiveness and to differentiate in the market has led players to develop and improve structures, techniques and processes, and to seek out new materials.
The introduction of Windows 8 is leading the increase in touch-capable notebook and AIO PCs. It is still too early for the touch interface to completely displace the keyboard and mouse, but touch functionality does add convenience to some operations. We are sure to see an increase in specialized apps that capitalize on such functions. Therefore, touch functions will complement traditional input methods. As the technology is still in its early implementation stages, it is used only in select high-end Ultrabooks, but it is only a matter of time before touch functions make their way to mid-range products.
Forecasting the future of the touch panel industry is not only difficult but outright confusing in the current landscape, due to the industry's rapid expansion, the growing number of devices that use touch panels, more players in the market, and the rapid development of new products and processes. In serving clients, Displaybank has released “Touch Panel Market Forecast and Cost/Issue/Industry Analysis for 2013” to provide an industry outlook by application, product, and capacitive touch structure. The report also includes the supply chain of set makers and touch panel manufacturers, and a cost analysis of major capacitive touch panels by size and type. This report will serve as a guide, bringing clarity and understanding to the rapidly transforming touch panel industry.
4. Cheaper components could allow 7-inch tablets priced below US$150, says TrendForce [DIGITIMES, Dec 14, 2012]
Viewing that Google and Amazon have launched 7-inch tablets at US$199, other vendors can offer 7-inch tablets at below US$150 only by adopting cheaper components, according to Taiwan-based TrendForce.
As panels and touch modules together account for 35-40% of the total material costs of a 7-inch tablet, replacing the commonly used 7-inch FFS panels with 7-inch TN LCD panels accompanied by additional wide-viewing-angle compensation could save over 50% in panel costs, TrendForce indicated. In addition, replacing a G/G (glass/glass) or OGS (one glass solution) touch module with a G/F/F (glass/film/film) one, although inferior in terms of transmittance and touch sensitivity, can cut costs by about 70%. Thus, the adoption of a TN LCD panel and a G/F/F touch module for a 7-inch tablet could reduce material costs by about US$25, TrendForce said.
Since the type of DRAM affects only standby time as far as user experience is concerned, costs can be reduced by replacing 1GB of mobile DRAM priced at about US$10 with 1GB of commodity DRAM priced at about US$3.50, TrendForce noted. As for NAND flash, 8GB and 4GB eMMC cost US$6 and US$4 respectively, so the latter should be the preferred choice to save costs.
For CPUs, China-based IC design houses, including Allwinner Technology, Fuzhou Rockchip Electronics, Ingenic Semiconductor, Amlogic and Nufront Software Technology (Beijing), provide 40-55nm-based processors at about US$12 per chip which could be alternatives to chips used in high-end tablets which cost about US$24, TrendForce indicated.
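The component substitutions TrendForce lists can be tallied into a rough per-unit savings estimate. This is a simplified sketch using only the four substitutions quoted above; all other BOM items are ignored:

```python
# Per-unit savings from the substitutions described above (US$).
savings = {
    "TN panel + G/F/F touch module (vs FFS panel + G/G or OGS)": 25.00,
    "1GB commodity DRAM (US$3.50) vs 1GB mobile DRAM (US$10)": 6.50,
    "4GB eMMC (US$4) vs 8GB eMMC (US$6)": 2.00,
    "40-55nm China-designed CPU (US$12) vs high-end chip (US$24)": 12.00,
}
total = sum(savings.values())
print(f"Total modeled savings per unit: US${total:.2f}")  # US$45.50
print(f"Achievable price from a US$199 design: about US${199 - total:.0f}")
```

About US$25 of that total comes from the panel and touch module alone, matching TrendForce's figure, and the overall savings are broadly consistent with the sub-US$150 scenario.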
While the sales performance of tablets below US$150 is yet to be seen, such cheap models are expected to put pressure upon China-based white-box vendors, and in turn intensify price competition in the tablet market in 2013, TrendForce commented.
5. Strong demand from non-iPad tablet sector to boost short-term performance of IC vendors [DIGITIMES, Jan 28, 2013]
Demand for IC parts from the tablet industry in China has been stronger than expected in the first quarter of 2013, which could help boost the short-term performance of IC design houses, while offsetting the impact of slow demand from China’s smartphone sector caused by high inventory levels, according to industry sources.
Entry-level tablets meet market demand in terms of pricing and functionality, particularly in China, said the sources, adding that demand for entry-level tablets in China and other emerging markets could top 4-5 million a month in 2013 compared to 2-3 million in the second half of 2012.
MediaTek, while seeing demand for its handset solutions from China decrease in the first quarter of 2013, has also enjoyed emerging IC demand from the tablet sector, with plans to release chipset solutions for the segment in the second quarter of the year, the sources revealed.
Since the growth momentum for tablets in 2013 is expected to come from non-iPad vendors in China and other emerging markets, Taiwan-based suppliers of LCD driver, analog and touch-controller ICs as well as those of Wi-Fi, audio and Bluetooth chips will benefit from the trend thanks to cost advantages and strong business ties in these markets, the sources commented.
6. Allwinner A31 SoC is here with products and the A20 SoC, its A10 pin-compatible dual-core is coming in February 2013 [Dec 10, 2012] and The upcoming Chinese tablet and device invasion led by the Allwinner SoCs [Dec 4, 2012], both from my own separate trend-tracking site devoted to the ‘Allwinner phenomenon’ coming from mainland China, which has the potential of drastically altering the 2013 device market (not taken into account at all by the Goldman Sachs report):
that has already resulted in huge growth of mainland China Android tablet manufacturing in 2012, as well shown by this chart, and which has already fundamentally affected the worldwide tablet market in 2012:
7. What Allwinner started in 2012 with the single-core A10/A13 SoCs, and what was further boosted by the quad-core Cortex-A7 A31 SoC on Dec 5, 2012 with the release of the Onda V972 and V812 tablets (for US$208 and US$144 respectively), is an incredible strategic inflection point for the whole ICT industry, with which ALL SoC vendors will have to compete. Rockchip, shown as the #2 on the mainland China market, just followed suit:
8. Now the most ambitious external challenger: Marvell Announces Industry’s Most Advanced Single-chip Quad-core World Phone Processor to Power High-performance Smartphones and Tablets with Worldwide Automatic Roaming on 3G Networks [press release, Feb 19, 2013], which will add SoC-integrated 3.5G modems to the competition:
Marvell’s PXA1088 is the industry’s most advanced single-chip solution to feature a quad-core processor with support for 3G field-proven cellular modems including High Speed Packet Access Plus (HSPA+), Time division High Speed Packet Access Plus (TD-HSPA+) and Enhanced Data for GSM Environment (EDGE).
The Marvell PXA1088 solution incorporates the performance of a quad-core ARM Cortex-A7 with Marvell’s mature and proven WCDMA and TD-SCDMA modem technology to provide a low-cost [elsewhere Marvell stated that this SoC is for phones in the “$100 range”] 3G platform for both smartphones and tablets. The advanced application processor technology of the PXA1088 enables a breakthrough end-user experience for multimedia and gaming applications with universal connectivity. Marvell’s complete mobile platform solution includes the Avastar® 88W8777 WLAN + Bluetooth 4.0 + FM single-chip SoC, the L2000 GNSS Hybrid Location Processor, and an integrated power management and audio codec IC.
Marvell’s PXA1088 is backward pin-to-pin compatible with its dual-core single-chip Unified 3G Platform, the PXA988/PXA986, enabling device partners to upgrade their next-generation mobile devices to quad-core without additional design cost.
Currently, the PXA1088 platform is sampling with leading global customers. Products based on this platform are expected to be commercially available in 2013 [elsewhere stated by Marvell that “We’ll start seeing PXA1088-based phones in the first half of this year”].
9. Yesterday we had two significant advancements described in the Ubuntu and HTC in lockstep [Feb 19, 2013] post here. The Ubuntu-related part is especially remarkable, as for the first time we have a new platform that can span the whole spectrum of devices: from smartphones to tablets, desktops and TVs – actually all from a smartphone whose capabilities are expanded via docking and other means to a screen, a TV, a keyboard and a mouse. This is certainly an extreme case of the new Ubuntu capability, which can have implementations in different devices as well; even in that case, however, the source and binary code could be the same. It also cleverly uses the already well-established Android drivers and Android Board Support Package (BSP) infrastructure of the most cost-efficient ARM SoC vendors. Note that this is the furthest possible from any “license violation” attacks, as the original OHA terms and conditions specify Apache V2 licensing, which:
The Apache license allows manufacturers and mobile operators to innovate using the platform without the requirement to contribute those innovations back to the open source community. Because these innovations and differentiated features can be kept proprietary … Because the Apache license does not have a copyleft clause, industry players can add proprietary functionality to their products based on Android without needing to contribute anything back to the platform. As the entire platform is open, companies can remove functionality if they choose.
10. Finally, today came Google Glass, showing how radically the user experience might change in the next 2-3 years:
Conclusion: There are even more advancements not accounted for by Goldman Sachs in the non-Apple and non-Microsoft spaces than in the Apple and Microsoft ones – and just in these 3 months! It would therefore be ridiculous if Goldman Sachs’ “consumer compute platform share” forecast, as shown in the chart above, were fulfilled!
26 years of Wyse and Citrix collaboration have resulted in an advanced infrastructure solution that brings the Windows desktop into a virtualised cloud environment, accessible from any cloud computing client device, including thin clients, zero clients, and devices offering only HTML5 browser functionality. The infrastructure is gaining a universal device management capability as well. Its most important hallmark is complete security, meaning immunity from viruses and the like. In addition to Windows desktop applications, the next wave of web applications as well as SaaS applications (such as those provided by Salesforce.com) is made easily accessible and usable from any of those devices and access points. The hallmark here is the ability to continue usage at the point where it was left off on another device or access point – true flexibility from the user’s point of view.
For more introductory information please watch these two videos:
The detailed elaboration of the “Thin/Zero Client and Virtual Desktop Futures” topic will go through the following sections of the post:
- Wyse entry-level solution for education
- A glimpse into the Wyse portfolio and their large public / enterprise markets
- Essential technology and market information
A highly important preview from it:
XenDesktop and Metro Receiver [CitrixTV YouTube channel, May 9, 2012]
- Note: after that video it is absolutely important to watch the SYN229: What’s new with Citrix Receiver for desktop users video next to it, in order to understand the virtual desktop future assured by the upcoming Citrix Receiver universal client, as best represented by the following image:
delivered in the [18:53 – 23:05] timeframe of the video.
Finally to understand the whole picture from/through a very practical demonstration of the whole range of possibilities watch these videos:
– The Future is Now (17 minutes – part 1 of 2)
– The Future is Now (28 minutes – part 2 of 2)
– Citrix Receiver on the Wyse Xenith, connecting to a XenDesktop virtual desktop
- Wyse Product/Technology Details
- Dell Wyse (i.e. the Dell acquisition of Wyse)
– for introduction to that see: Dell Completes Acquisition of Cloud Client Computing Leader Wyse Technology [Dell press release, May 25, 2012]
Before going into those detailed sections here is a highly important introduction as well (in order to understand the future potential of this advanced infrastructure solution):
Wyse Technology’s President and CEO Tarkan Maner speaks with Edie Lush at Hub Davos [hubculture YouTube channel, Jan 26, 2012]
– [00:40] Presumably the entry-level zero client (which has no OS – see much below), the Wyse E01, is shown as “working on only 2 watts” (the spec much below says up to 3 watts) and “costing less than $50, start at $35” (the current single-unit retail price of the E01 is however $76, while the list price is $99 – see much below).
– That device is even presented as needing only the data center. Currently, however, entry-level zero client devices such as the E01 (and the latest E02) require Microsoft MultiPoint Server (see much below). So he is definitely pointing to an upcoming solution.
– [03:00] He mentions South Africa, with “10 million devices this year”, as an educational example. So that kind of upcoming solution could definitely be in the works already. The power consumption difference might also indicate such a new entry-level device.
Management team [Wyse webpage, April 2, 2012]:
President, CEO and Chief Customer Advocate, Tarkan Maner
Tarkan Maner is the President and CEO at Wyse Technology, the global leader in Cloud Client Computing. Cloud Client Computing is the ultimate end user computing solution for our time, replacing the outdated, unsecure, unreliable, un-green and expensive client/server-centric systems. Cloud Client Computing delivers the security, manageability, availability, reliability, scalability, flexibility, and user experience with the lowest energy usage and total cost of ownership. Cloud Client Computing simply connects all the dots: Cloud client software, hardware and services.
Wyse provides its customers and partners with the broadest and deepest portfolio of Cloud Clients, including Thin, Zero and Cloud PC clients, supported by the leading cloud-centric firmware, virtualization, management and mobility software in the industry. Wyse independently partners with the leading data center, networking and collaboration solution providers within its global partner ecosystem to help organizations and people reach the clouds – in a private, public, government or, even in a personal cloud. Wyse’s mission is to enable any user, anywhere, to connect to any content via any app in any work environment without constraints, conflicts or compromises.
Tarkan believes that Cloud Client Computing not only drives better economic and productivity results for organizations, but, also drives societal change throughout the world. Cloud Client Computing reduces the cost, eliminates the complexity and enables the reach of computing to the next six billion users via billions of devices pervasive in every aspect of our lives.
Tarkan in the news
- Forbes OpEd – Cloud Computing for Public Sector
- Top Five Cloud Myths, Trends, and Recommendations
- How to Succeed at Innovation and Differentiation
- Opinion: Seeking ‘game changers’ that will create jobs
- Tarkan at WEF 2011 in Davos – Future of Manufacturing
- Tarkan at WEF 2012 in Davos – Cloud Client Computing for a New World
- Voices from the New Generation: The Explorer
Wyse entry-level solution for education
Post-PC Era Expands as Wyse and Serbian Government Partner for Nation-wide Cloud Client Computing Deployment in Education [Wyse press release, Sept 28, 2011]
More than 30,000 Students Gain Access to Latest Learning Technology with Wyse and Microsoft Solutions in Schools across Serbia
LONDON, UK and SAN JOSE, Calif. – 09/28/2011 – Wyse Technology, the global leader in cloud client computing, today announced a major implementation of its zero client technology in the Digital School project to transform classroom teaching in Serbia. In one of the largest projects of its kind in Europe, all elementary schools in Serbia will be outfitted with a new IT infrastructure based on Wyse zero clients and Microsoft Windows MultiPoint Server 2010, enabling every student to have access to the latest computing software, educational applications and online resources.
Committed to modernizing the country’s educational system, among other reforms, the Ministry of Telecommunications and Information Society, identified the need for a better information technology and communications infrastructure to support teaching and learning in classrooms.
The solution, developed with Wyse’s technology partner ComTrade, is based on Windows MultiPoint Server 2010 and enables multiple users to simultaneously share a single computer while each using their own monitor, keyboard and mouse. This is an ideal solution for educational customers that want to extend IT access to more students, easily and affordably. The solution is designed for simple implementation and ease-of-use for teachers, provides the familiar Windows 7 desktop experience, and requires no advanced IT expertise.
The ministry selected Wyse E01 zero clients because they maximize the advantages of Windows MultiPoint Server. The zero clients simply plug into the host computer, which automatically configures them and enables a student to start work immediately. Unlike comparable devices for Windows MultiPoint Server, the Wyse E01 zero client supports USB peripherals such as webcams and USB flash drives, allowing a more flexible computer-based teaching and learning experience.
Jasna Matic, State Secretary for the Digital Agenda and former Minister for Telecommunications and Information Society, said, “Enhancing ICT for education is a major goal of the Government, with this programme delivering on our promise to give every student access to their own computer at school. With cutting edge technology from Microsoft and Wyse, our schools have a solid foundation for delivering education to the highest standards.”
Deployment of the Microsoft and Wyse education solution started in December 2010 and will be completed this year.
For more information about Wyse E01 zero clients, please visit, http://www.wyse.com/products/hardware/zeroclients/E01/index.asp
Windows MultiPoint Server 2011 is a low-cost computing solution that creates a 1:1 user-to-computer experience built on Windows Server. With MultiPoint Server 2011, one PC can provide up to 20 computing sessions at a fraction of the cost.
Wyse® E class™ – Affordable computing for education [Wyse brochure, Jan 23, 2012]
2. Features Wyse E class zero clients, one per desktop and each one linked by a USB [E01] or Ethernet [E02] cable.
3. Low cost, fast and simple to set up delivery of Windows desktops.
Windows MultiPoint Server 2011 Quick configuration guide
Recommended host configuration for 4 ~ 6 users:
- CPU: Intel i5/i7
- Memory: 4 GB
- Hard drive: 250 GB
- Graphics/1: on-board Intel HD Graphics 2000 or similar
- Graphics/2: PCI-Express card – ATI Radeon™ HD 4600 / 4770 / 5750, or nVidia GeForce 8x / 9x Series / GT220 / GT240
Recommended host configuration for 8 ~ 20 users:
- CPU: Intel i5/i7
- Memory: 8 GB
- Hard drive: 500 GB
- Graphics/1 and /2: same as above
Software: Microsoft Windows MultiPoint Server 2011
Zero clients: Wyse E01 [retail: $76+] and E02 [$99]
Licenses (Microsoft Academic VL): Microsoft MultiPoint Server License [$115]; Microsoft MultiPoint CAL License per device [$29]
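Using the list prices quoted in this configuration guide, a rough per-seat cost for a MultiPoint classroom can be sketched. Note that the US$600 host-PC price is our own illustrative assumption, not a figure from the guide:

```python
def per_seat_cost(seats, zero_client=99.0, cal=29.0,
                  server_license=115.0, host_pc=600.0):
    """Hardware + license cost per student station, amortizing shared items."""
    shared = server_license + host_pc   # paid once per classroom
    return zero_client + cal + shared / seats

print(f"20-seat lab: US${per_seat_cost(20):.2f} per seat")  # US$163.75
print(f" 6-seat lab: US${per_seat_cost(6):.2f} per seat")
```

The larger the classroom, the more the shared server and host costs are diluted, which is why the 20-session ceiling of MultiPoint Server matters for school budgets.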
Technical specifications Wyse E01
[E02 differences: Ethernet networking; 2 USB 2.0 ports instead of the E01’s 4; 98 x 98 x 20 mm dimensions; 128 g weight; standing position]
Server OS: Windows MultiPoint Server 2011
I/O peripheral support: one VGA (DB-15); four USB 2.0 ports (1 on the left side, 3 on the right side); one Mic In / one Line Out; USB keyboard (not included); USB mouse (not included)
Networking: one USB in to connect to the host computer (cable included); the maximum distance between each Wyse E01 zero client and the host computer is 5 meters (16 feet 5 inches)
Display: up to 1680 x 1050 @ 60Hz / 32 bits or 1600 x 1200 @ 60Hz / 32 bits
Audio: output: 1/8-inch mini jack, full 16-bit stereo; input: 1/8-inch mini jack, 8-bit microphone
Physical characteristics: height 21.5 mm (0.85 inches); width 132 mm (5.20 inches); depth 87 mm (3.43 inches)
Shipping weight: 145 g (0.32 lbs)
Power: worldwide auto-sensing 100-240 VAC, 50/60 Hz power supply; average power usage with the device connected to 1 keyboard, 1 mouse and 1 monitor: less than 3 watts
Temperature range: vertical position: 50° to 104° F (10° to 40° C)
Humidity: 20% to 80% condensing; 10% to 95% non-condensing
- Wyse Extends Client Virtualization Leadership in Education Market with the Introduction of a New Zero Client for Schools [Wyse press release, Feb 24, 2010]
$99 Wyse E01 Zero Client and Windows MultiPoint Server 2010 Optimize IT and Financial Resources for Schools in Tough Economy
“We’re happy to be launching with strong support from Wyse, which has committed to developing innovative and effective solutions like the Wyse E01 Zero Client for the MultiPoint platform,” said Ira Snyder, general manager, Windows MultiPoint Server at Microsoft Corp. “MultiPoint Server can deliver a familiar Windows computing experience to educational institutions around the world, helping them get the best value out of technology investments while providing the very best education for their students.”
- The New $99 Wyse Zero Client Provides Simple and Cost-Effective Computing Access for Education and SMBs Worldwide [Wyse press release, Jan 11, 2012]
Wyse Expands E Class Zero Client Offering for Windows MultiPoint Server
Wyse Technology … today announced the introduction of the Wyse E02 zero client in support of Microsoft’s Shape the Future program
The Wyse E01 zero client and the Wyse E02 zero client work with Windows MultiPoint Server 2011 to enable multiple students or SMB users to share a single server. The E02 is easy for teachers to set up and use in the classroom, providing an excellent Windows 7 desktop experience for their students. While the Wyse E01 zero client provides students access to the shared server via USB cabling up to 5 meters, the E02 goes a step further to provide access via Ethernet, at a distance of up to 100 meters from the Windows MultiPoint Server.
“Providing students with affordable access to technology is one way Microsoft is helping to ultimately create greater opportunities and more enriched lives for youth around the world. The Wyse E02 zero client, combined with Windows MultiPoint Server, is an excellent example of how we are working to deliver on this mission,” said Microsoft’s Shape the Future Senior Director, Joice Fernandes.
Appropriate and sustainable technology solutions for education in Africa [in The eLearning Africa 2012 Report (p. 17), May 23, 2012]
Widening access to reliable information technology is key to how we can help our children develop educationally. This is especially true in the fast developing economies of Africa where the expectation for access to ICT in the school has increased as more citizens use information technology like mobile phones in their everyday lives.
However, in our view, the ambitious eLearning goals in Africa can only be achieved with classroom technology that is intrinsically sustainable. But, in the African context, what do I mean by sustainability? First of all this is not about ticking the box of some green IT policy set by a government. The reality of extending digital classrooms into urban or rural Africa is that IT provision must take account of the absence of reliable power supplies. Any interruptions can be managed with novel solutions around battery back-ups or solar energy to power a classroom in a remote setting.
Even when reliable power supplies are available, low power consumption is going to remain important in how schools manage their budgets. This makes thin or zero client computers very attractive as they typically only use between 3 and 15 watts of power.
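To illustrate why the 3-15 watt figure matters for school budgets, here is a back-of-the-envelope comparison for a hypothetical 30-seat classroom. The 150 W desktop draw, 6-hour school day, 200 school days and US$0.15/kWh tariff are all assumptions for illustration, not figures from the article:

```python
def annual_kwh(watts_per_seat, seats=30, hours_per_day=6, days=200):
    """Yearly energy draw of a classroom's client devices, in kWh."""
    return watts_per_seat * seats * hours_per_day * days / 1000

thin = annual_kwh(15)       # worst case of the 3-15 W thin/zero client range
desktop = annual_kwh(150)   # assumed conventional desktop PC
print(f"Thin clients: {thin:.0f} kWh/yr vs desktops: {desktop:.0f} kWh/yr")
print(f"Saving at US$0.15/kWh: about US${(desktop - thin) * 0.15:.0f} per year")
```

Even under these rough assumptions the order-of-magnitude difference in draw translates directly into running costs, before counting the reduced need for battery or solar backup capacity.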
Sustainability in African eLearning is much more than about energy efficiency. It also refers to how IT in schools needs to be easy to set up and manage because it is unrealistic to expect a school to always have access to IT management skills on the ground. As African educators plan their expansion of eLearning, they need to ensure the classroom technology is largely self-sufficient and simple to set up, manage and use in the classroom. The centralised management and robust plug-and-play functionality of classroom labs that use virtualisation technology answers this requirement, ensuring that investments in school classroom labs deliver the maximum educational benefit over a long period of time.
In investing in digital classrooms African educators are demonstrating incredible foresight in what new generations of Africans need to improve their lives. They need to guard against making ICT decisions that trap them in the past. While budgets are always going to be tight, African educators must be ambitious about ICT in education and take advantage of the latest 21st century thinking on virtualised and cloud computing.
Another important dimension of sustainability is the degree to which the ICT is future-proofed in how it can keep pace with future developments in applications and data. Educators are already using solutions like this to transform ICT in their schools and colleges. In South Africa more than 1.5 million students already have ICT access thanks to classroom labs that utilise Wyse cloud computing technology.
Sustainability in African eLearning is vitally important in making ICT widely accessible to students across the Continent. Indeed, African countries look set to trail-blaze other economies in their innovative use of cloud client computing on a massive scale.
David Angwin is Vice President, Field Marketing for Wyse Technology,
and based in the United Kingdom
Wyse Cloud Client Computing Highlights Sustainable E-Learning for Students at eLearning Africa 2012 [Wyse press release, May 23, 2012]
Showcases Latest Digital Classroom Solutions to Widen Availability of School Labs and One-to-One Computing for High Quality IT Enhanced Teaching and Learning in African Schools and Colleges
SAN JOSE, CA and COTONOU, Benin – 05/23/2012 – Wyse, the global leader in cloud client computing, today announced its participation in the eLearning Africa conference and exhibition. As the event’s platinum sponsor for the second year running, Wyse will discuss how advanced cloud client computing can help African educators meet their goals for widening access to technology-enhanced education, development and training. eLearning Africa runs from 23rd – 25th May 2012 in Cotonou, Benin, under the patronage of the Government of Benin.
Working across the continent with its local technology partners, Wyse has developed and deployed a range of solutions that are ideally suited to widening access to IT-enhanced education and training in Africa. The technologies involved are tailored to the continent’s requirements for classroom ICT that is exceptionally reliable, affordable and energy efficient while not compromising on access to the latest applications and data for teaching and learning.
Delegates to eLearning Africa will have the opportunity to see the latest in digital classroom solutions co-developed by Wyse and Microsoft. This includes an entry level shared computing solution for school IT labs that combines Wyse E01 and Wyse E02 zero clients with Microsoft Windows MultiPoint Server 2011; and the Wyse WSM cloud software solution, which offers a centrally-managed, scalable one-to-one computing environment for students that scales across classrooms, labs and schools. Both solutions address the requirement for classroom IT that is secure and easy to set up and run, while delivering a great desktop experience for the students.
Mark Jordan, vice president and general manager, EMEA Sales, Wyse Technology, will deliver a keynote in the opening plenary session on 23rd May 2012. He will address how cloud solutions can play a pivotal role in helping IT-enhanced education transform the prospects of African students. Jordan will be speaking alongside S.E. Max Ahouêkê, Ministère de la Communication et des Technologies de l’Information et de la Communication (MCTIC), Benin; and Prof Sugata Mitra, Professor of Educational Technology, Newcastle University, UK and Visiting Professor, MIT Media Lab, Cambridge, USA.
The event will be an ideal opportunity to get an update on how African customers are advancing their e-learning strategies with Wyse cloud client computing solutions. For example, in South Africa more than 1.5 million students already have ICT access thanks to classroom labs that utilize Wyse cloud computing technology. In Nigeria, a new network of examination centers relies on a Wyse cloud client computing infrastructure to enable examinations to be delivered, taken and scored entirely electronically, saving time and money while also improving reliability and service, with accurate results delivered in hours rather than months.
Education is Wyse’s second largest market, with ten of the world’s top fifteen universities using Wyse solutions to reduce costs and improve learning. They and other educational institutions benefit from Wyse’s position as the only cloud vendor to offer desktop virtualization solutions for every budget and scale of implementation, ranging from ten to upwards of ten thousand units.
A glimpse into the Wyse portfolio
and their large public / enterprise markets
Health care with Citrix and Wyse Xenith next-generation zero-client devices at Seattle Children’s Hospital [WyseTechnology YouTube Channel, May 23, 2011]
Microsoft HIMSS 2011 – Interview with Andre Beuchat of Wyse Technology [WyseTechnology YouTube Channel, May 10, 2011]
Japan’s Largest Bank Turns to Wyse for VDI and Mobility [Wyse blog, April 10, 2012]
Today, Wyse announced that Bank of Tokyo-Mitsubishi is deploying 50,000 Wyse devices. The combination of Wyse’s desktop and mobile hardware, virtualization software and overall Wyse domain expertise in cloud and virtualization is the reason why the Bank of Tokyo-Mitsubishi selected Wyse for its VDI implementation. Bank of Tokyo executive Mizuhiko Tokunaga commented that “… the deciding points were the technological edge of their unique software, Wyse ThinOS, their specialization in VDI, and the sense of trust we felt toward Wyse as a company. Wyse has been a global market leader for a long time, and it shows.”
The Bank of Tokyo-Mitsubishi, the largest bank in Japan and eighth largest in the world, began what was considered the largest systems integration project in the world in 2008 when it started this ambitious project to strengthen security across all 773 branches in Japan and 73 abroad. For more information on this initiative and how Bank of Tokyo is using Wyse, visit: http://www.wyse.com/about/press/release/1917
Cloud Computing involves using information technology as a service over the network.
- Services with an API accessible over the Internet
- Using compute and storage resources as a service
- Built on the notion of efficiency above all
- Using your own datacenter servers, or renting someone else's in granular increments, or a combination
We at Wyse believe cloud computing has the potential to change how we invent, develop, deploy, scale, update, maintain, and pay for applications and the infrastructure on which they run.
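The "granular increments" point above is the economic core of the model: compute and storage are metered and billed per unit consumed rather than bought outright. A minimal sketch of that pay-as-you-go arithmetic (all rates and names here are hypothetical, not any vendor's actual pricing):

```python
# Illustrative sketch of "granular increment" cloud billing.
# All rates and names are hypothetical, not a real provider's pricing.

def rental_cost(hours: float, hourly_rate: float, gb_stored: float,
                gb_month_rate: float, months: float = 1.0) -> float:
    """Pay-as-you-go cost: compute billed per hour, storage per GB-month."""
    return hours * hourly_rate + gb_stored * gb_month_rate * months

# A server used 100 hours/month at $0.10/hour, plus 50 GB of storage
# at $0.05/GB-month, costs only for what was actually consumed.
monthly = rental_cost(hours=100, hourly_rate=0.10,
                      gb_stored=50, gb_month_rate=0.05)
print(f"${monthly:.2f}")  # → $12.50
```

The contrast with an owned, mostly idle machine is the point: cost tracks actual usage, which is what makes combining owned datacenter capacity with rented capacity attractive.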
Essential technology and market information
SYN229: What’s new with Citrix Receiver for desktop users [CitrixTV YouTube channel, May 10, 2012] — essential viewing to understand how the virtual desktop future will be assured by the upcoming Citrix Receiver universal client experience across different end-user access points (PC, Mac, tablets, smartphones, thin clients and web browsers) for Windows, web and SaaS applications (at a minimum, watch the [18:53 – 23:05] segment of the video).
Wyse, Marvell, and the Citrix System-on-Chip Initiative [Wyse blog, May 10, 2012]
Yesterday Marvell announced its participation in the Citrix System-on-Chip (SoC) initiative with the Marvell® ARMADA® 510 SoC, which integrates seamlessly with Citrix HDX in a complete silicon solution. The ARMADA 510 combines a high-performance, low-power processor with a hardware graphics processing unit and video decoding acceleration hardware. The end result is excellent processing power for high-end apps like HD multimedia in a very efficient, cost-effective footprint.
Wyse already uses the Marvell ARM SoC in our industry-leading T class thin clients. Combining Marvell’s high performance SoC with software optimized for Citrix HDX enables Wyse to offer compact, efficient, and powerful thin clients like the Linux-based T50 thin client and the super-secure T10 thin client based on Wyse ThinOS. In addition, our newly announced Xenith 2 zero client for Citrix XenDesktop and HDX is also based on the ARM SoC, and sets a new price/performance standard for Citrix zero clients in its class.
Wyse Zero [Engine] and Wyse ThinOS [Wyse webpage, Feb 24, 2012]
• Built for VDI. Optimized for Citrix XenApp, Citrix XenDesktop, Microsoft Terminal Server and VMware View virtual desktop environments
• Lightning fast. Super-fast start-up provides access to virtual desktops in under 20 seconds
• Super secure. No attack surface provides immunity to viruses and malware
• Easy to manage. Hands-off, scalable device management with Wyse Device Manager; easy FTP-based configuration and automatic updates
• Smart card support. Seamless smart-card roaming ideal for workstation-based environments
• Rich user experience. Integrated Wyse TCX Suite for enhanced audio, video and multimedia
Wyse ThinOS is the most optimized, management-free solution for Citrix XenApp, Citrix XenDesktop, Microsoft Terminal Server and VMware View virtual desktop environments. With an unpublished API and no attack surface, Wyse ThinOS is immune to the malware and viruses that make other operating systems vulnerable to attack. This super-fast, purpose-built thin computing OS boots up in seconds, updates itself automatically and delivers simple, scalable administration to eliminate time-consuming maintenance tasks related to configuration, management and updates. With full support for Wyse Virtual Desktop Accelerator (VDA), ThinOS neutralizes the effects of network latency and packet loss, even in remote-branch and field-based applications.
- What’s new in Wyse ThinOS with David Angwin, Wyse Technology Watch video »
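The FTP-based configuration and automatic updates mentioned above are driven by a central INI file that each ThinOS client pulls from a file server at boot. A minimal illustrative fragment is sketched below; the parameter names follow the publicly documented wnos.ini conventions, but the values and server address are hypothetical, so the ThinOS INI reference should be consulted for an actual deployment:

```ini
; Illustrative wnos.ini fragment, served from the file server's /wnos directory.
; Parameter names follow ThinOS conventions; all values here are hypothetical.
AutoLoad=1                      ; pull firmware updates automatically at boot
SignOn=Yes                      ; require user sign-on
Privilege=None                  ; lock down local device configuration
TimeZone="GMT - 08:00"
ConnectionBroker=Citrix         ; hand sessions to a Citrix broker
PnliteServer=broker.example.com ; hypothetical XenDesktop broker address
```

Because the device is stateless, changing this one central file reconfigures every client at its next boot, which is what makes the management "hands-off".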
Wyse Zero [Engine]
Already used in millions of thin clients, zero clients, and handheld smart devices, Wyse Zero [Engine] simplifies the development of cloud-connected smart devices, enabling seamless user access to cloud computing services and virtual desktops. Wyse Zero [Engine] addresses limitations with current embedded options, such as the typical security vulnerabilities of Windows and Linux-based operating systems, and slow initialization due to their large size. With a rich array of networking, management and protocol technology packaged in an engine less than 4MB in size, Wyse Zero reduces costs and simplifies management and updates. With no underlying OS to slow it down, it starts up instantly for a more satisfying user experience. And unlike Windows or Linux-based embedded products that require extensive protection, Wyse Zero [Engine] is original technology and therefore virtually immune to malware, viruses and hackers.
Wyse Announces Private Beta of Cloud-Based Service to Secure and Simplify Corporate Access for Users Across All Devices [Wyse press release, May 8, 2012]
Project Stratus Directly Tackles Consumerization of IT Challenges with Intelligent, Integrated and Cross-Platform User and Device Management
05/08/2012 – Wyse Technology, the global leader in cloud client computing, today announced the Project Stratus private beta program. Project Stratus provides IT administrators with an intelligent and dynamic cloud-based console to securely manage and enable corporate access for any device, regardless of whether that device is owned by the company or by the individual. Initial support will focus on securing and provisioning corporate access for smartphones, tablets, thin clients, and zero clients, with plans to quickly expand support to additional devices used in the workplace.
Project Stratus delivers a unified console that goes beyond standard device management solutions by providing a complete view of the IT infrastructure serving end-users. The console provides visibility not only into users and their devices, but also into their relationship with the IT ecosystem. The result for IT is valuable insight into usage models, trends, and the means to identify areas of investment to more securely and effectively provide corporate services to end users.
“The biggest challenge to IT in a BYOD world has to do with securing corporate access for all devices being used by employees. With Project Stratus, our goal is to eliminate the need to have a separate, siloed console for each device type and instead allow IT admins to set an access policy for a user that will apply regardless of what device they are using—providing for the first time a one-stop shop for device and access management,” said Hector Angulo, Product Manager at Wyse.
“For a company such as ours that relies on a distributed and mobile workforce, the means to simplify and secure our mobile devices is very appealing,” according to Adam Bari, Managing Director at IPM. “We are very much looking forward to deploying Project Stratus to better manage our mobile computing infrastructure.”
Wyse will be showcasing Project Stratus at Citrix Synergy™ 2012 in San Francisco, May 9th – 11th, in Wyse Booth #206 at the Moscone Center. Companies interested in taking part in the private beta can sign up by going to http://www.wyse.com/stratus
Key features of Project Stratus include:
• Simplicity. Streamlined, discoverable interface with user-centric policy management to help automate user access regardless of what device they are using, including easy exception handling– natural and intuitive management for today’s dynamic IT world
• TCO Reduction. Cloud-hosted service eliminates costly on-premise servers and enables instant deployment and scaling — drastically reduces the total cost of operations and ownership
• Real-time Analytics. Dynamic and instantly personalized data feeds always present admins with the most relevant insight to help expedite the task at hand – powerful analytic engine exposes most important activities, events, and trends
• Actionable. Pro-active alerts notify admins about compliance violations and other potential issues, with the option to take contextual actions in place (e.g. warn user, block, ignore) or automate future mitigation (e.g. automatically approve roaming exception requests for all members of the ‘executive’ group)
• Time-Saving. User and device pages that provide instant visibility into any managed asset, including who is using the device, what it is interacting with, and any potential performance or security issues in order to expedite issue identification and resolution
• Unified Console. Visibility and management of all devices used in the enterprise, with support for smartphones, tablets, thin clients, and zero clients — one-stop shop for all devices, no more hassle of dealing with many consoles
• Security. Enterprise-ready, multi-tenant architecture with fully encrypted communication ensures only you have access to your data
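The user-centric policy model described above, one policy attached to the user and applied to whatever device that user brings, can be sketched as a small data structure. This is purely an illustrative sketch, not Project Stratus code; every name below is invented:

```python
# Illustrative sketch of user-centric access policy (not Project Stratus
# code; all names are hypothetical). One policy per user applies to every
# device type, replacing a separate siloed console per device class.

DEVICE_TYPES = {"smartphone", "tablet", "thin_client", "zero_client"}

class AccessPolicy:
    def __init__(self, allow_corporate_email: bool, require_encryption: bool):
        self.allow_corporate_email = allow_corporate_email
        self.require_encryption = require_encryption

    def evaluate(self, device_type: str, device_encrypted: bool) -> str:
        """Return the access decision for any device the user brings."""
        if device_type not in DEVICE_TYPES:
            return "block"      # unknown device class
        if self.require_encryption and not device_encrypted:
            return "warn"       # compliance violation -> proactive alert
        return "allow" if self.allow_corporate_email else "block"

# The same user-level policy yields consistent decisions across devices:
policy = AccessPolicy(allow_corporate_email=True, require_encryption=True)
print(policy.evaluate("tablet", device_encrypted=True))       # allow
print(policy.evaluate("smartphone", device_encrypted=False))  # warn
```

The design point is that supporting a new device class means extending one set, not standing up another management console.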
HDX Ready Thin Clients [Citrix microsite, May 9, 2012]
The HDX Ready designation is reserved for thin client devices that have been verified to work with all of the XenDesktop and XenApp HDX features. HDX refers to High Definition User eXperience – a term coined by Citrix to describe capabilities in XenDesktop that optimize the user experience when accessing hosted virtual desktops and applications. The HDX Ready category assists IT managers to easily identify thin client devices that deliver the best possible high definition user experience with XenDesktop and XenApp.
There is a trade-off between a thin client’s cost and its capabilities. Not all users require the functionality of all the HDX features of XenDesktop or XenApp. Devices that are not deemed HDX Ready may still be useful for certain user types and use cases, generally at a lower price point than HDX Ready devices. The Citrix Ready thin client designation exists for devices that support connectivity to XenDesktop or XenApp but only a subset of HDX functionality. Information regarding HDX feature coverage by a particular thin client device is available on the Citrix Ready website.
Citrix HDX SoC spurs innovation and cuts the cost of thin clients in half [The Citrix Blog, May 9, 2012]
Thousands of Citrix customers are already using thin client devices to access virtual desktops and apps delivered by Citrix infrastructure. Customers who have successfully deployed thin clients are reducing or even eliminating their device management footprint, decreasing their dependency on lifecycle management, and cutting their power consumption by efficiently leveraging computing resources in the datacenter or server room.
There are also many customers who look at the cost of desktop virtualization and can easily justify supporting mobile workers and BYO programs. However, when it comes to replacing desktops in their offices, they may find it harder to justify purchasing a thin client when, after all the dust settles, the price of the endpoint might be close to the replacement cost of a PC.
Delivering cost reduction
Last October, at Synergy Barcelona 2011, Citrix announced the HDX System on Chip initiative in partnership with Texas Instruments and NComputing to create new SoC reference designs based on ARM chipsets that accelerate HDX user experience technologies in silicon. By using optimized hardware-based acceleration rather than decoding and rendering virtual desktop traffic in software on a general-purpose processor, these SoCs can deliver the user experience of thin client devices costing twice as much or more, while reducing power consumption, heat, and footprint. However, don’t mistake hardware acceleration for “all-hardware.” Devices built on the HDX SoC initiative still run a Citrix Receiver in an embedded OS that permits updates, providing devices with new functionality over time and further extending the expected lifecycle.
Taking cues from the living room
This drive toward optimized delivery of a high-definition experience is no different from what many of us are seeing play out in our living rooms. Instead of storing massive collections of videos in cabinets or on home servers, cloud providers like Netflix, Amazon, Apple, Hulu, Pandora, and others store media for us, allowing us to stream content, in many cases in real time, to our homes. This media can be displayed on TVs using integrated “internet streaming,” from most any smartphone, tablet, or computer, or through the addition of a $50 appliance from companies like Roku that we plug into our TVs. It is this revolution in cloud entertainment services, and the drive for low-cost, low-powered, long-battery-life devices overtaking the consumer electronics industry, that Citrix can now leverage to optimize endpoint devices for desktop virtualization.
To learn more about these exciting, market-changing, transformative new devices being unveiled by HP, Atrust, Centerm, NComputing, and ThinLinX, please check out the HDX SoC 2012 partner page here.
Dell Wyse: acquisition of Wyse Technology by Dell
(a summary of the many original materials compiled in the closing part of this post)
- Wyse – a leader in Desktop Virtualization
- Wyse – ranking number one worldwide in thin client unit share in the fourth quarter of 2011
- Differentiated IP and device management, thin client operating systems, and mobility software that is customized to offer the best user experience with Microsoft, Citrix and VMware virtual desktop infrastructures.
- Much of their software value is captured in the hardware itself. Their ThinOS and the IP around the ThinOS has allowed them to drive greater performance using less memory. So Wyse solutions require less memory and processing power than other comparable thin client solutions, making them more cost competitive and effective for customers.
- Wyse as an independent entity has really been gaining momentum, growing into the number one market share position. In fact, their growth accelerated to 45 percent in their last fiscal year
- Dell’s view on that:
– The momentum around alternative computing is a trend that they see many customers continuing to experiment with and in many cases, beginning to deploy, although the adoption rates are still relatively low for desktop virtualization.
– They don’t see the entire world going to thin clients. They still think there’s a healthy PC demand in the industry and there’s a balance of alternative computing that allows people to take advantage of securing their information, managing the assets in a very differentiated way. Even a common thin client deployment today is on a standard PC that’s been virtualized.
– This is an opportunity, particularly in verticals such as financial services, government and healthcare, to really take a leadership position. These are really specific use cases. For example, in regulated industries like healthcare and financial services, the value of centralizing your data for better access and control is a specific use case that thin client desktop virtualization lends itself to.
– They needed it because it is also a different workload that moves their cloud computing strategy forward.
– Again, they don’t think a zero client or a thin client is the answer for all customers. In their mind, the bigger message here is that they now have a range of devices: an incredibly strong portfolio of thin client and zero client devices from Wyse, the standard Dell set of PCs, which do virtualization, and now the ability to manage those in a very differentiated way with the key software assets they’re bringing on board, which extend to tablets and mobile phones.
- Wyse portfolio includes a wide selection of industry-leading thin and zero client devices designed to integrate easily into a virtualized or web-based infrastructure
- It complements and extends the desktop virtualization capabilities that Dell has today.
- Also a big part of this transaction is the synergy that Dell would get from their datacenter solutions business, including servers, storage, networking services, and software. For every thin client hardware dollar that exists in the IT industry, there’s $5 of enterprise servers, storage, networking services that go along with that.
- This could also remove the barrier for some companies that did not have the right level of datacenter portfolio and datacenter ecosystem to exploit the thin client alternative of enterprise computing: i.e. deploying desktop virtualization centric cloud client portfolios and platforms.
- Wyse is a company with 31 years of experience. They have the intellectual property, they have the software, and they have 150 R&D engineers, of which 140 are in software. Wyse and one other competitor hold almost 50 percent of the market between them. Wyse are pretty close partners with Microsoft, and they do a lot of work with VMware and with Citrix as well. As these providers deliver desktop virtualization methodology and technology between the datacenter and end-user computing platforms, Wyse adds to that value and partners heavily with them, and obviously that’s going to continue.
- [Wyse:] And also, one other piece to add: some of the software we provide is differentiated in the marketplace and is the leader in this space from the cloud, both on the infrastructure management side, with a product called Wyse Stratus, and on the mobile side with Wyse PocketCloud, the market-leading product for content management from the cloud on any mobile device and from your web browser, connecting your apps and content (voice, data, video) from your choice of cloud, private or public.
- The software stack that brings together the edge device, the management software that manages that, that sits into the cloud or sits into the datacenter, and the ability to build that software from essentially ground zero to being able to acquire those capabilities and that experience and the technology with it, puts Dell in a leadership position. The differentiated technology that they are getting with the integration of Brad Anderson’s [Dell president, Enterprise Solutions] and Steve Schuckenbrock’s [Dell president of Services] businesses, allow them a unique position to do this for their customers. All this allows them to move quite quickly in the marketplace, much quicker than they could have done it on their own.
- IDC: worldwide thin client demand will grow 15 percent per year to approximately $3 billion by 2015
- IDC: the overall end-to-end solutions market with thin clients is expected to exceed $15 billion by 2015
Citrix Announces New Innovations in Desktop Virtualization Lowering Cost and Accelerating the Transformation to Virtual Desktops [Citrix press release, May 9, 2012]
New XenDesktop, VDI-in-a-Box & AppDNA capabilities drive adoption
San Francisco, CA » 5/9/2012 » Today, at Citrix Synergy™, the conference where mobile workstyles and cloud services meet, Citrix announced a set of new innovations that help organizations transform their Windows desktops and apps into a cloud-like service that can be managed centrally and delivered to any device in any location. New releases of Citrix desktop virtualization products and new game-changing Citrix HDX Ready SoC-based endpoint devices from key partners are helping to ease the transition to virtual desktops, drive down the acquisition costs and provide expanded capabilities targeting broad use cases from the call center, to high-end engineering and mobile workers in enterprises, the public sector and SMBs, enabling organizations of all sizes to deliver anywhere, anytime access to desktops, applications and data to users.
With the tremendous explosion of new devices, operating systems and applications, organizations are struggling to keep up with the challenge of managing desktops and applications in this new highly mobile world. At the same time, trends such as consumerization and bring your own device (BYOD) programs are putting added strain on IT resources. Citrix is raising the bar once again delivering new innovations across its desktop virtualization products and working with partners to drive down the costs of virtual desktops.
Easier On-ramp to Desktop Virtualization
- New Remote PC Option in XenDesktop FlexCast – The new Remote PC option is part of the FlexCast® delivery technology in the Citrix XenDesktop® product line. Using the new Remote PC capability, XenDesktop customers will be able to quickly turn existing office PCs into distributed VDI hubs without setting up additional servers and storage in the datacenter. This innovative new solution makes it easy for IT to give end users fast, secure remote access to all the apps and data on their office PC from any device. Once IT is ready to move to a more full-service VDI implementation, these distributed Remote PC images can be easily moved into the datacenter to run in a traditional hosted VDI model for better consolidation, security and management efficiency. Remote PC functionality will be included in XenDesktop 5.6 Feature Pack 1, which will ship in June, 2012.
- New AppDNA Software Release – To ease the transition to Windows 7 and a virtual desktop infrastructure, the new release of Citrix AppDNA software brings a simplified overall installation, setup and user environment to accommodate a broader range of enterprises, the channel and global SIs. Citrix AppDNA also provides even more in-depth application details so enterprises can accurately assess, rationalize and act on applications before a project begins. The AppDNA 6.1 software will be available in Q2, 2012. (see announcement blog for more detail)
Reducing the Acquisition Costs of Virtual Desktops
- First Wave of Game-changing Endpoints Arrives – The first results of the Citrix HDX System-on-Chip initiative announced at Citrix Synergy Barcelona are being delivered to the market. The initiative was designed to enable an entirely new generation of devices that deliver high-definition virtual desktops and apps at game-changing price points and form factors. These devices reduce the cost of high-performance HDX Ready thin clients by more than half, further driving down the cost of desktop virtualization. New devices from Atrust, Centerm, HP, NComputing and ThinLinX are being announced today at Citrix Synergy San Francisco, built for Citrix XenDesktop and Citrix VDI-in-a-Box. (See announcement blog for more detail)
- Personalized VDI for Less than the Cost of PCs – The Project Aruba technology preview delivers a cost-efficient yet complete VDI solution by extending the simple affordable Citrix VDI-in-a-Box™ with layering technology using personal vDisks to deliver highly personalized virtual desktops that retain the cost-efficiencies of pooled desktops. Project Aruba also provides a validated blueprint for service providers looking to deliver cost-effective VDI-based Desktops-as-a-Service.
Citrix has also made available a license migration path from VDI-in-a-Box to XenDesktop for customers that want to extend beyond VDI to leverage the full flexibility of XenDesktop while preserving their investment. The end-user experience is consistent across both products as both VDI-in-a-Box and XenDesktop use the same HDX stack and Citrix Receiver. (See announcement blog for more detail)
Delivering Expanded Functionality for Broad Use Cases
Citrix is delivering new innovations that create a very seamless experience for end-users, delivering a more complete solution than other alternatives on the market.
- Empowering Point-to-Point Unified Communications for Cisco and Microsoft – With the introduction of HDX Real Time technologies for voice and video collaboration, industry-leading unified communications (UC) solutions including Cisco VXI Unified Communications and Microsoft Lync 2010 can process voice and video locally and create a peer-to-peer connection for the ultimate user experience while taking the load off datacenter processing and bandwidth resources. XenDesktop delivers new levels of efficiency and quality of service for the most demanding use cases. HDX Real Time will be available with XenDesktop 5.6 Feature Pack 1 in June, 2012.
– Support for HDX Real-Time with select Cisco VXI clients was recently announced in April, 2012, representing the first optimized UC solution for desktop virtualization on the market. This solution represents one of the first deliverables from the recent collaboration agreement between Cisco and Citrix to optimize HDX for Cisco networks.
– The new Optimization Pack for Microsoft Lync 2010 will be included in XenDesktop 5.6 Feature Pack 1. This pack supports Microsoft Lync 2010 for point-to-point voice and video communications to Windows and Linux devices and will extend across all Citrix Receiver™-enabled devices over the coming months.
– Beyond traditional unified communications support, XenDesktop also optimizes voice and video collaboration for cloud-based solutions including Citrix GoToMeeting® by compressing voice and video traffic on the client before transmission over the network.
- Cutting Network Bandwidth for Demanding 3D Engineering Environments – Whether collaborating with design engineers across oceans using advanced CAD/CAM or GIS apps, or consulting medical imaging at a patient’s bedside with an iPad, the secure, high-performance delivery of GPU-accelerated 3D applications and desktops with XenDesktop has never been more powerful or efficient. Using new deep compression codec technology that reduces bandwidth requirements by 50 percent, XenDesktop with HDX 3D Pro technologies secures sensitive intellectual property and privacy-sensitive data while improving collaboration and performance, eliminating the need to synchronize and transfer massive data files. Meanwhile, users leverage state-of-the-art graphics processing hardware in the datacenter to access designs and images from any device, anywhere. HDX 3D Pro will be available with XenDesktop 5.6 Feature Pack 1 in June, 2012. (See the announcement blog for more detail)
- New XenClient Enterprise and Acquisition of Virtual Computer – Citrix announced the acquisition of Virtual Computer, provider of enterprise-scale management solutions for client-side virtualization. Citrix will combine the newly-acquired Virtual Computer technology with its market-leading XenClient® hypervisor to create the new Citrix XenClient Enterprise edition. The new XenClient Enterprise, available in Q2, 2012, will combine all the power of the XenClient hypervisor with a rich set of management functionality designed to help enterprise customers manage large fleets of corporate laptops across a distributed enterprise. The combined solution will give corporate laptop users the power of virtual desktops “to go”, while making it far more secure and cost-effective for IT to manage thousands of corporate laptops across today’s increasingly mobile enterprise.
- Simplifying Printing with New HDX Universal Print Server – Now, Citrix desktop virtualization products tame the complexity of printing by completing a universal printing architecture with the Citrix HDX Universal Print Server. Combined with the previously available Universal Print Driver, administrators may now install a single driver in the virtual desktop image or application server to permit local or network printing from any device, including thin clients and tablets, leveraging HDX optimization technology to reduce bandwidth load over wide area networks and manage printing communications outside of the virtual desktop channel for enhanced quality of service. HDX Universal Print server will be available with XenDesktop 5.6 Feature Pack 1 in June, 2012. (See the announcement blog for more details)
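As a rough illustration of what the 50 percent bandwidth reduction claimed for the HDX 3D Pro deep compression codec means in practice, the arithmetic below uses an entirely hypothetical per-session baseline; it is not Citrix's actual codec math:

```python
# Back-of-envelope sketch (hypothetical figures, not Citrix's actual
# codec arithmetic): effect of a fractional bandwidth reduction on a
# remoted 3D session.

def session_bandwidth_mbps(baseline_mbps: float, reduction: float) -> float:
    """Per-session bandwidth after applying a fractional codec reduction."""
    return baseline_mbps * (1.0 - reduction)

baseline = 8.0  # hypothetical 3D session baseline, Mbit/s
after = session_bandwidth_mbps(baseline, 0.50)
sessions_per_100mbps = int(100 // after)
print(after, sessions_per_100mbps)  # → 4.0 25
```

The practical consequence: halving per-session bandwidth doubles the number of concurrent 3D sessions a fixed-capacity WAN link can carry.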
“Citrix is helping to drive down the costs of virtual desktops, and advancing technology around user experience and manageability to move desktop virtualization adoption forward at a rapid pace. Through product innovation and strong partner ecosystems we are addressing barriers on all fronts, including acquisition costs, migration complexity and delivering complete solutions for all customer segments, from large enterprises to SMBs.”
– John Fanelli, Vice President of Product Marketing, Enterprise Desktops and Applications at Citrix
- Announcement: New XenDesktop Release Accelerates Migration to Windows 7 and Beyond
- Announcement: Dell and Citrix Deliver a Simple VDI Appliance for the Mass Market
Follow Us Online
NOW to understand the whole picture from/through a very practical demonstration of the whole range of possibilities watch these videos:
Citrix Receiver on the Wyse Xenith, connecting to a XenDesktop virtual desktop [citrixvideos YouTube channel, April 10, 2011]
Wyse Product/Technology Details
Wyse Changes Everything with Announcement of Xenith 2 Zero Client for Citrix VDI-Based Deployments [[Dell] Wyse press release, May 9, 2012]
Leading Zero Client Improves Performance for VDI Installations Using Citrix Desktop Virtualization Solutions
SAN JOSE, CA – 05/09/2012 –
Wyse Technology, the global leader in cloud client computing, today announced the Wyse Xenith 2, based on the ultra-secure Wyse zero framework. This breakthrough zero client was revealed today at Citrix Synergy™ 2012, the premier event on cloud computing, virtualization and networking. Wyse, the leading shipper of fixed and mobile desktop zero clients in the world, will be demonstrating the Xenith 2 at Wyse Booth #206 from May 9-10, 2012.
Following on the success of the Wyse Xenith and Wyse Xenith Pro, the Wyse Xenith 2 is the ideal Citrix zero client solution for both enterprise and SMB organizations. The Wyse Xenith 2 zero client is purpose-built for Citrix XenDesktop®, blending the amazing cost benefits of the ARM System-on-Chip (SoC) architecture with a non-Windows, Citrix Receiver-compatible client developed in cooperation with Citrix. Improving on the success of the Xenith with 30% faster performance and lower power consumption, the result is a super-secure, very affordable, true high-fidelity desktop experience. For users requiring a diverse variety of applications, including HD multimedia, the Wyse Xenith 2 sets a new standard in price and performance in a compact zero client and delivers an unprecedented combination of simplicity, performance and security for office-based workers.
The Wyse Xenith 2 requires no local configuration or management and can offer customers of all sizes a more secure client while helping reduce management and overall client cost. Full AES 128-bit encryption enables encryption of network certificates on the client, a truly ironclad level of security. Leveraging the Wyse zero framework, the Wyse Xenith 2 is able to provide a secure, ‘instant on’ experience for end users—booting up and logging into a Citrix XenDesktop® in less than 10 seconds. With no exposed APIs and no attack surface, the Wyse Xenith 2 zero client is immune to malware and viruses, removing client security concerns.
“Wyse Xenith has been a game-changer for us,” according to Wes Wright, Chief Technology Officer at Seattle Children’s Hospital. “Not only are we saving $6 million in hardware replacement costs, more than $1 million in staff time, and $300,000 per year in energy savings, we also have devices that are faster, more secure and more reliable than anything we had before. With Xenith 2, Wyse is simply adding more appeal to an endpoint device family that makes Citrix XenDesktop a great end-to-end VDI solution.”
Like the Wyse Xenith and Wyse Xenith Pro, the Wyse Xenith 2 changes everything, including the economics of desktop computing. Wyse Xenith 2 eliminates the complications of management and security issues associated with traditional client devices, while ensuring an unparalleled high-definition user experience, further lowering the barriers for mainstream adoption of desktop virtualization.
“As customers look to the flexibility of desktop virtualization, Citrix is enabling these enterprises to transform their traditional Windows computing environments into a cloud-like service, delivering anywhere, anytime access to desktops, applications and data. Through collaborative relationships like the one with Wyse, we are further driving down the costs of virtual desktop deployments and accelerating adoption. The Xenith 2 achieves this goal by providing a secure, affordable solution that is optimized to deliver a high-definition virtual desktop experience through Citrix Receiver,” said Sumit Dhawan, group vice president and general manager, Receiver and Gateways at Citrix Systems.
“By tightly-integrating with Citrix, we’re delivering a zero client that is second to none in performance, security, manageability, and ease of use for this class of VDI endpoint,” according to Param Desai, VP, Product Management at Wyse Technology. “All of this plus it is more affordable than ever before.”
“Vendors like Wyse continue to push the envelope in zero client technology,” according to Bob O’Donnell, Program VP, Clients and Displays at IDC. “The ability to improve device performance while adding additional functionality and reducing cost bodes well for future zero client customers.”
Top Product Benefits
• Secure. The stateless zero client presents zero attack surface for viruses and malware: no local disk and no APIs. Xenith 2 also offers single sign-on with integrated Imprivata support, and full AES 128-bit encryption protects network certificates on the device.
• Powerful. The Wyse Xenith 2 includes a Citrix Receiver client and delivers an unparalleled user experience, strong graphics performance and high-fidelity multimedia, thanks to Wyse’s ARM SoC performance optimizations available only on the Xenith 2 and T10. Xenith 2 starts up in 6 seconds.
• Affordable. Sets a new level of price / performance.
• Easy to manage. Integrated out of the box with the XenDesktop management console, and also manageable by Wyse Stratus as part of comprehensive device management from the cloud. Xenith 2 automatically detects its server and configuration and is a completely stateless device, always using the latest zero engine delivered directly from a central configuration file server and the XenDesktop server.
• Compact. Requires very little desk space, or none at all: an included VESA mount allows mounting behind the display. Xenith 2 is 30 percent smaller than the original Xenith and uses only 7 watts in full operation.
• Zero-compromise user experience. Network-based QoS ensures quality (HDX multi-stream). The device offers true 720p 25+ fps HD playback for WMV and H.264 with hardware decoding engines, plus dual-display support with rotation and L-shaped configurations [unique and essential for financial services environments that add a vertical screen for spreadsheet viewing]. New WAN support with local echo and bandwidth reporting gives remote and at-home users greater flexibility and performance.
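Because the device is stateless and re-reads its configuration from a central file server on every boot, the client-side logic reduces to fetching and parsing a simple key=value file (in the spirit of the INI-style files Wyse clients pull from a file server). The sketch below is a minimal illustration; the file contents, key names and parsing rules are hypothetical, not Wyse’s actual format.

```python
# Minimal sketch of applying a centrally served key=value configuration
# file, as a stateless zero client might on each boot. The file name,
# keys and syntax here are illustrative assumptions, not Wyse's format.

def parse_central_config(text):
    """Parse 'Key=Value' lines, ignoring blank lines and ';' comments."""
    config = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith(";"):
            continue
        if "=" in line:
            key, _, value = line.partition("=")
            config[key.strip()] = value.strip()
    return config

# Hypothetical file served from the central configuration file server:
sample = """
; re-fetched on every boot, so the device never holds stale state
ConnectionBroker=xendesktop.example.local
AutoSignoff=yes
Resolution=1920x1200
"""
cfg = parse_central_config(sample)
print(cfg["ConnectionBroker"])
```

The design point this illustrates is why the device needs no local management: since nothing persists on the client, updating one file on the server reconfigures every endpoint on its next boot.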
Pricing and Availability
The Wyse Xenith 2 will be available soon with an estimated customer price TBD. For more information, please visit:
Establishing a new price/performance standard for Citrix zero clients, the new Wyse Xenith 2 provides an exceptional user experience at a highly affordable price for Citrix XenDesktop and XenApp environments. With zero attack surface, the ultra-secure Xenith 2 gives network-borne viruses and malware no target for attack. Xenith 2 boots up in just seconds and delivers exceptional performance for Citrix XenDesktop and XenApp users while offering usability and management features found in premium Wyse cloud client devices. Xenith 2 delivers outstanding performance through its system-on-chip (SoC) design optimized with the Wyse Zero architecture, while a built-in media processor delivers smooth multimedia, bi-directional audio and Flash playback. Flexible mounting options let you position Xenith 2 vertically or horizontally on your desk, on the wall or behind your display. Using about 7 watts of power in full operation, the Xenith 2 creates very little heat for a greener, more comfortable working environment.
|Operating System:||Wyse Zero™ Engine|
|Processor:||Marvell® ARMADA™ PXA 510 v7 1.0 GHz system-on-chip (SoC)|
|Memory:||0MB Flash / 1GB RAM DDR3|
|I/O peripheral support:||• One DVI-I port, DVI to VGA (DB-15) adapter included
• Dual display support with optional DVI-I to DVI-D plus VGA-monitor splitter cable (sold separately)
• Four USB 2.0 ports
|Networking:||• 10/100/1000 Base-T Gigabit Ethernet
• Optional internal wireless 802.11 b/g
|Display:||• VESA monitor support with Display Data Control (DDC) for automatic setting of resolution and refresh rate
• Dual monitor supported with ‘L shaped’ display rotation
• Single: 1920×1200@60Hz; color depth: 32 bpp
• Dual: up to 1920×1080@60Hz; color depth: 32 bpp
|Audio:||• Output: 1/8-inch mini jack, full 16-bit stereo, 48KHz sample rate
• Input: 1/8-inch mini jack, 8-bit microphone
|Included:||• Enhanced USB keyboard with PS/2 mouse port and Windows keys
• PS/2 mouse
|Power:||• Worldwide auto-sensing 100-240 VAC, 50/60 Hz.
• Energy Star V5.0
• Phase V external and EuP compliant power adapter
|Power consumption:||Under 7.2 Watts (average)|
|Dimensions:||• Height: 1 inch (25mm)
• Width: 6.9 inches (177mm)
• Depth: 4.69 inches (119mm) Weight: 1 lb (450g)
|Shipping Weight:||1.003 lbs. (.455kg)|
|Mountings:||• Stand for horizontal use and VESA/wall mounting (included)
• Optional vertical stand
|Temperature Range:||• Operating
• Horizontal position: 50° to 95° F (10° to 35° C)
• Vertical position: Power button up: 50° to 104° F (10° to 40° C)
• Storage: 14° to 140° F (-10° to 60° C)
|Humidity:||• Operating: 20% to 80% non-condensing
• Storage: 10% to 95% non-condensing
|Security:||Built-in Kensington security slot (cable sold separately)|
|Safety Certifications:||• Ergonomics: German EKI-ITB 2000, ISO 9241-3/-8
• Safety: cULus 60950, TÜV-GS, EN 60950
• RF Interference: FCC Class B, CE, VCCI, C-Tick
• Environmental: WEEE, RoHS Compliant
|Warranty:||3-year limited hardware warranty|
Marvell Joins Citrix System-on-Chip Initiative to Bring Citrix HDX Technology for Thin Clients to Market [Marvell press release, May 9, 2012]
Santa Clara, California (May 9, 2012) – Marvell (Nasdaq: MRVL) today announced participation in the Citrix System-on-Chip (SoC) initiative to enable an entirely new generation of thin clients for high-definition virtual applications and desktops at a low cost. The Marvell® ARMADA® 510 SoC seamlessly integrates Citrix HDX capabilities into a complete silicon solution. The first of many ARMADA chips to be verified as part of the Citrix SoC initiative, the ARMADA 510 is a high-performance, highly integrated, low-power SoC comprised of an ARM v6/v7-compliant superscalar processor core, a hardware graphics processing unit, video decoding acceleration hardware and a broad range of peripherals, answering the need for fast processing and a rich multimedia user experience.
“The future of enterprise computing is in the convergence between mobile devices and digital content – it’s imperative that end users have access to the content they need from any device, whether it’s a thin client, tablet or smartphone. Citrix has been abreast of this monumental shift in the computing landscape for years – and now the Citrix SoC initiative makes it even easier for companies to deliver a new category of mobile-enterprise friendly devices to users quickly and affordably,” said Jack Kang, director of marketing for mobile at Marvell Semiconductor, Inc. “Working closely with Wyse, Marvell is proud to integrate the performance enhancements from Citrix SoC initiative onto Wyse’s performance rich Citrix HDX Ready T50 device based on Marvell’s ARMADA 510. Marvell is also working closely with Citrix to verify its full portfolio of highly scalable enterprise silicon solutions, from cloud servers to mobile and consumer end point devices, and we look forward to further collaborations with Citrix Ready partners to deliver new and exciting products throughout the enterprise.”
“Citrix XenDesktop delivers the capabilities to enable enterprise customers to begin or accelerate their migration to Windows 7 and beyond, while gaining the mobility, flexibility, and management benefits of desktop virtualization.” said Ankur Shah, principal product manager at Citrix Systems. “We welcome Marvell to the Citrix System-on-Chip initiative. Marvell’s broad portfolio of technology will enable a wide variety of devices to leverage the benefits of Citrix desktop virtualization technology.”
“Wyse is excited about Marvell’s partnership with Citrix on the Citrix SoC initiative,” said Kiran Rao, director of product management at Wyse Technology. “The end-to-end approach, incorporating Marvell’s high performance hardware with software optimized for HDX technology, enables Wyse to quickly bring innovative devices to market that provide a superior end user experience. Wyse’s compact, affordable Citrix HDX Ready T50 and T10 thin clients, as well as the new Xenith 2 zero client, powered by Marvell’s ARMADA 510 SoC will further expand access to cloud-based desktop virtualization using Citrix XenDesktop in the enterprise and beyond.”
Wyse and Microsoft discuss cloud PCs and OS licensing [WyseTechnology YouTube channel, May 19, 2011]
Comparison of the current Z class products: Wyse Z90DE7, Wyse Z90D7, Wyse Z90S7, Wyse Z50D, Wyse Z50S, Wyse Z90DW
All with dual-core AMD G-T56N. The 4 Windows® Embedded Standard 7 based ones at 1.6 or 1.65 GHz while the 2 Wyse-enhanced SUSE Linux based ones at 1.5 and 1.6 GHz respectively. Memory is 2/4/8GB Flash + 2/4GB RAM, DDR3, depending on the model. Memory on 3 models is expandable, and on 3 Windows® Embedded Standard 7 based ones SSD storage is also supported. Power consumption is under 15 Watts (average) for all. Dimensions are 200 x 47 x 225 millimeters. Weight is 1.1kg.
Wyse Introduces World’s Fastest Thin Client Family [Wyse press release, Aug 29, 2011]
SAN JOSE, Calif. – 08/29/2011 – Today at VMworld® 2011, Wyse Technology, the global leader in cloud client computing, announced that its fastest thin clients ever, the [Windows® Embedded Standard 7 based] Wyse Z90D7 and Z90DW, are now shipping. In addition, Wyse today introduced two new Linux-based members of its Z class family – the Wyse Z50S and Wyse Z50D. The Wyse Z50 is the high performance thin client family based on Wyse Enhanced SUSE Linux Enterprise, the industry’s only enterprise-quality Linux operating system, combining the security, flexibility and market-leading usability of SUSE Linux Enterprise from Novell with Wyse’s thin computing optimizations in management and user experience.
In connection with the availability of these breakthrough thin clients, Wyse also announced the results of independent testing, recently conducted by The Tolly Group, of the Wyse Z class versus the competition. Wyse made this announcement in connection with VMworld® 2011, the global conference for virtualization and cloud computing held in Las Vegas, August 29th through September 1st at The Venetian. As part of VMworld 2011, Wyse is demonstrating their award-winning virtualization, management, and cloud software and a wide range of thin, zero, mobile and cloud PC client hardware at Booth #1111.
At the heart of the Wyse Z class thin clients lies an entirely new engine, one where all the major system elements – CPU cores, vector engines, and a unified video decoder for HD decoding tasks – live on the same piece of silicon. This design eliminates one of the fundamental constraints that limit thin client performance.
The Wyse Z class delivers a combination of performance, simplicity, and connectivity never before seen in a thin client. With available dual-core AMD G-series Fusion accelerated processing units, the Wyse Z class is the world’s best-performing thin client, able to support the most processing-intensive applications including 3D solids modeling, HD graphics simulation, and unified communications with ease. They also include the first SuperSpeed USB 3.0 connectivity in a thin client, enabling the newest peripherals and speeds up to 10 times faster than USB 2.0. With Wyse Z class thin clients, users have more display options than ever before including DisplayPort and DVI.
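The "up to 10 times faster" figure follows directly from the nominal signaling rates of the two bus generations: USB 2.0 Hi-Speed signals at 480 Mbit/s, while USB 3.0 SuperSpeed signals at 5 Gbit/s. A quick check of the ratio:

```python
# Nominal signaling rates (not effective throughput, which is lower
# after protocol overhead and USB 3.0's 8b/10b line encoding).
USB2_HI_SPEED_MBPS = 480      # USB 2.0 Hi-Speed
USB3_SUPERSPEED_MBPS = 5000   # USB 3.0 SuperSpeed (5 Gbit/s)

speedup = USB3_SUPERSPEED_MBPS / USB2_HI_SPEED_MBPS
print(f"SuperSpeed is ~{speedup:.1f}x the USB 2.0 signaling rate")
```

The raw ratio is about 10.4×, which is why marketing rounds it to "up to 10 times"; real-world transfer speeds depend on the attached peripheral and protocol overhead.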
The Wyse Z class also includes advanced networking capabilities, with support for gigabit Ethernet, and available integrated A/B/G/N dual band Wi-Fi. They are compliant with the ENERGY STAR Version 5.0 Thin Client specification.
Independent testing by The Tolly Group recently confirmed the Z90D7’s substantial leadership in thin client performance compared to rival products. In support of rich video-based Web applications, for example, the Z90D7 boasted a clear advantage in video playback quality while using just a fraction of its processing and memory capacity. That equates to a clearly superior user experience on a much more energy-efficient platform. In addition, the Z90D7 scored up to five times higher in industry-standard performance ratings (CPU Mark, 3D Graphics Mark, and PassMark ratings) than the competition. Among secure, cost-effective, yet powerful thin clients, these independent tests confirmed that the Wyse Z class is the clear winner.
“Being able to combine power and performance in such an easily-managed device is something we are extremely proud of,” said Param Desai, Sr. Director, Product Management, with Wyse Technology. “With the availability of Wyse Z class we’ve more than doubled the performance capabilities of competing top-of-the-line thin clients with similar energy requirements.”
Built on the very same advanced single- and dual-core processor hardware platform as the Wyse Z90 thin clients, the upcoming Linux-based Wyse Z50 promises the same industry leading power and capability on an enterprise-class Linux operating system.
“We are very familiar with the performance of Wyse products having deployed several Z90 devices throughout our campus,” according to Ryan Foster, Network Engineer at Montgomery County Community College in Southeast Pennsylvania. “We were particularly impressed with the improvements to our desktop security, and by the capabilities of these devices handling multimedia files such as audio, video and Flash.”
“The Wyse Z Class and VMware View™ combine to take advantage of PCoIP® in ways that will enhance the end-user experience,” said Vittorio Viarengo, vice president, End-User Computing, VMware. “Better security, easier management and significant energy savings all combine in a high-performance thin client that will benefit both IT and end users.”
“Wyse has made innovative use of the AMD G-Series Accelerated Processing Unit, which combines a multi-core CPU, a discrete-class DirectX® 11 capable GPU and HD video decoding in one tiny piece of silicon,” said Buddy Broeker, director of embedded solutions at Advanced Micro Devices. “The Wyse Z class takes full advantage of the processor’s unprecedented level of graphics integration that delivers a unique combination of performance and efficiency.”
For more information on Wyse Z90, including independent report results, please visit: http://www.wyse.com/products/hardware/thinclients/Z90
The Wyse Z50 will be available later this year.
More videos about the PocketCloud:
- Wyse PocketCloud wins 2011 Appy Award for Productivity category [WyseTechnology YouTube channel, March 3, 2011]
- Wyse PocketCloud 2.0 Features for iOS Devices [WyseTechnology YouTube channel, Oct 13, 2010]
- Wyse PocketCloud demo on an iPad [WyseTechnology YouTube channel, Dec 14, 2010]
- Wyse PocketCloud Features for Android Devices [WyseTechnology YouTube channel, Dec 8, 2010]
- Introducing Wyse PocketCloud [WyseTechnology YouTube channel, Sept 15, 2009]
- Wyse PocketCloud [WyseTechnology YouTube channel, Sept 15, 2009]
Focus on Dell [May 24, 2012]
Dell Completes Acquisition of Cloud Client Computing Leader Wyse Technology [Dell press release, May 25, 2012]
- With Wyse, Dell assumes a leadership position in Thin Clients
- Dell’s new Desktop Virtualization capabilities, combined with Dell’s leadership position in Server, Storage and Networking solutions, successfully position the company as a true end-to-end IT vendor
Dell today announced it has completed its acquisition of Wyse Technology, the global leader in cloud client computing. The combination of Wyse’s capabilities with Dell’s existing desktop virtualization offerings positions the company as the leader in desktop virtualization, enabling it to offer true end-to-end IT solutions for customers and partners.
Dell has made significant strategic investments over the past three years to expand its enterprise technology and services capabilities. The Dell Wyse portfolio with current Dell desktop virtualization offerings, leading data center products such as servers and storage, and Dell’s services division, provides customers and partners with a single vendor that can match the full range of their cloud computing and desktop virtualization needs.
The Dell Wyse solution portfolio includes industry-leading thin, zero and cloud client computing solutions with advanced management, desktop virtualization and cloud software supporting desktops, laptops and next generation mobile devices. Dell Wyse has more than 180 patents, both issued and pending, covering its solutions, software and differentiated intellectual property. Dell’s existing offerings include Desktop Virtualization Solution Simplified and Desktop Virtualization Solution Enterprise.
Dell recognizes it’s critical for the desktop virtualization solutions strategy to embrace simple device management, enhance security, scale, and boost user productivity, while providing the flexibility to support anytime, anywhere access on any device.
Dell plans to preserve Wyse’s channel offerings and all existing Wyse channel partners will be eligible for our PartnerDirect Program. Dell will combine the best of both companies’ channel deal registration programs, extend this new deal registration program to all partners, and introduce a program in which partners can grow and nurture a customer relationship.
“We’re excited to officially welcome Wyse to Dell and help extend its industry-leading efforts to a broader range of customers and partners,” said Jeff Clarke, Dell vice chairman and president, Global Operations and End User Computing Solutions. “We believe the Dell Wyse capabilities, combined with our previous desktop virtualization offerings and the strength of the Dell enterprise portfolio, provides the most comprehensive and competitive DVS solution available today.”
“Wyse and Dell share the vision and passion in helping our customers and partners create a frictionless user experience via the cloud,” said Tarkan Maner, Vice President and General Manager Dell Wyse, Cloud Client Computing. “Combining our relentless IP innovation and tight operational skills, and most importantly our laser focus on customer and partner advocacy, Dell cloud client computing will develop and deliver the most advanced solutions globally, from the data center to the end user. We are and will be completely focused on the best user experience for any user, for any content, using any app, on any device, anytime, anywhere; without any conflict, compromise and constraint.”
“As a current customer who has deployed Wyse cloud client computing solutions with Dell PowerEdge servers and Dell EqualLogic storage, Western Wayne School District is excited about the combination of Dell and Wyse,” said Brian Seaman, Network Administrator at Western Wayne School District in Pennsylvania. “Like most school districts, Western Wayne operates in a budget constrained environment and our move to desktop virtualization technologies supported with strong enterprise infrastructure has enabled us to do more with less in service of our students and community. In working with Dell and Wyse to scope and deploy our computing environment, Western Wayne now has the right technology to help us achieve our vision of educating our students of today to become the productive citizens of tomorrow.”
“End point computing models continue to evolve and are accelerating tremendous innovation and efficiencies across enterprise desktop and personal computing,” said Bob O’Donnell, vice president, Clients and Displays, IDC. “One area of strong customer growth is in the desktop virtualization space and we expect to see adoption rates continue to grow over the next several years. As use models continue to mature, so do the vendors who offer solutions in this product space. Dell’s acquisition of Wyse results in an industry-leading solutions and services provider with a formidable end-to-end technology stack from the end point to the datacenter to the cloud.”
Dell to Acquire Wyse Technology Conference Call
Dave Johnson, Senior Vice President, Dell Corporate Strategy: We at Dell continue to execute on our strategy to develop and expand our solutions capability built on Dell’s intellectual property. These solutions are open, with a focus on enhancing customer productivity, delivering results faster and eliminating unnecessary complexity. We’re making great progress in delivering solid results on this strategy. Today’s announcement is an important next step in our end user computing strategy. It enhances our portfolio in the critical area of client computing and further supports our efforts to help our customers innovate end to end IT solutions from the edge to the core of the cloud. The acquisition of Wyse Technology complements and expands Dell’s existing desktop virtualization capabilities, allowing us to offer industry leading and differentiated solutions to a fast-growing segment of the end user computing space. In addition, it also provides synergies with our enterprise solutions business. Our ability to now offer an industry leading cloud client computing solution will provide opportunities for Dell to further accelerate the growth of our server, storage and networking portfolios. IDC estimates that worldwide thin client demand will grow 15 percent per year to approximately $3 billion by 2015, and that the end to end datacenter infrastructure stack for these solutions is expected to exceed $15 billion by 2015. And with Dell’s portfolio, we’ll be able to participate in this broader opportunity. Wyse Technology is a leader in the high growth and strategic area of cloud client computing, ranking number one worldwide in thin client unit share in the fourth quarter of 2011. Wyse delivered approximately $375 million in annual revenue over the trailing 12 months.
Wyse has approximately 500 employees, with 150 employees in research and development, most of whom are software engineers. In addition, it has approximately 250 sales specialists solely focused on selling Wyse cloud client computing end to end solutions. They have more than 3,000 channel partners that sell Wyse technology on a global basis. This transaction is expected to be accretive to Dell’s non-GAAP earnings in the second half of fiscal year 2013. Dell’s reputation as a trusted adviser to our customers and our distribution and sales capabilities, combined with Wyse’s innovative solutions in cloud computing, will help address customers’ needs and is a great strategic fit, both operationally and culturally, for Dell. Finally, Dell has a strong track record of integrating acquisitions of this size. Based on experience with similar acquisitions, we expect this transaction to be accretive to earnings on a non-GAAP basis in the second half of this year. We’re really excited about welcoming Wyse to Dell and even more excited about the opportunities for our customers. Jeff Clarke, Vice Chairman, Global Operations and End User Computing Solutions: We see a growing opportunity in cloud client computing. This includes thin and zero client hardware, client infrastructure management software, virtualization end user optimization software, datacenter networking, and implementation and managed services. It complements and extends the desktop virtualization capabilities that Dell has today. These solutions offer customers an alternative compute model and help enterprises enhance security, streamline desktop management and boost user productivity. We have discussed our strategy in end user computing: the first step was to strengthen our core business by implementing sustainable supply chain improvements, the results of which were evident in FY ’12.
Our next goal was to deliver solutions that include compelling devices plus the tools to secure and manage that hardware, software and data. You’ve seen the results of that with some of our recent product announcements, as well as the strong growth of our transactional services business in FY ’12. And finally, we indicated our intentions to expand our reach into new and fast-growing areas of end user computing. The acquisition of Wyse Technology and its portfolio of industry leading capabilities is the next step in our end user computing strategy. Wyse is a global leader in client – excuse me – in cloud client computing. Its portfolio includes a wide selection of industry leading thin and zero client devices designed to integrate easily into a virtualized or web based infrastructure, plus differentiated IP in device management, thin client operating systems, and mobility software that is customized to offer the best user experience with Microsoft, Citrix and VMware virtual desktop infrastructures. Wyse solutions require less memory and processing power than other comparable thin client solutions, making them more cost competitive and effective for customers. To date, Dell has relied on shared IP solutions to serve its thin client customers. With this transaction, we are moving to a more profitable, industry leading and complete end to end solution with Dell owned IP and the associated R&D capabilities. Wyse Technology’s portfolio complements and extends Dell’s vision of providing innovative and complete end to end solutions to our customers. In addition, the combination of Wyse Technology with Dell’s brand and customer reach presents a dramatic increase in Wyse’s addressable demand. I’d like to leave you with the following takeaways. Tarkan Maner, CEO of Wyse Technology: The entire team at Wyse is excited about joining the Dell team and becoming an integral part of enabling Dell’s end user computing vision.
This agreement is great news for our customers and channel members worldwide. We’ve been focused on delivering innovative solutions for our customers and channel members for the past 30 years now. To be exact, 31 years now. Dell and Wyse share a focus on delivering innovative IP, world class service support, and optimized overall value to our customers and channel members. Customers and channel members rely on Dell to provide comprehensive end to end IT solutions. Clearly, Dell’s distribution, reach and brand are well recognized across the industry, and it has industry leading capabilities across servers, storage, networking, services and end user computing solutions. Wyse has historically been recognized as a leader in cloud client computing, where our skills and capabilities in security, manageability, availability, reliability, lower total cost of ownership both in terms of CAPEX and OPEX, and scalability have been key differentiators in delivering the best value to our customers and channel members. Through the combination with Dell, we obviously see tremendous opportunities to grow our core desktop virtualization business, as well as to expand into new and fast growing market segments in mobility and cloud computing. These include infrastructure and content management as-a-service solutions from the cloud for large enterprises, for small and medium businesses, as well as consumers. We have extended our solutions into the unified communications space lately as well, providing voice, data and video (what we call triple play) content delivery from the cloud for any user, for any content, for any app, on any device, anywhere, anytime. And, as we like to say, without compromise, without constraint or conflict. Our strong alliance ecosystem will be able to benefit from the extensive solutions portfolio they can now provide to their customers by teaming up with Dell.
The Dell PartnerDirect program currently has 100,000 channel members and a proven track record of effectively onboarding and training channel members of acquired companies. This is exciting for us. Wyse has a history of innovation across all of our product lines and has recently introduced many new solutions for our customers and channel members, with more than 180 patents; to be exact, 182 patents in cloud client computing. We believe that taking the next step at Dell is a very natural progression for our business and offers our customers and channel members some great advantages that are not available to us today at our scale and size. It is exciting to think about the potential of integrating Wyse’s technology and R&D capabilities with Dell’s reach, existing solutions, capabilities and reputation. We believe our customers and channel members worldwide will benefit in a big way from this entire combination. … Q: … just some more detail on Wyse’s hardware/software mix and margin structure? And what growth assumptions did you guys make to justify the price, and over what time period, and did you make any assumptions about cross-selling Dell branded enterprise solutions when coming up with the price? Today, the majority of the revenue is from the thin client and zero client business, with a growing percentage of that revenue now starting to come from some other areas, including some of the things that Tarkan spoke about. … If we look at and project out a few years, clearly a big part of this transaction is the synergy that we would get from our datacenter solutions business, including servers, storage, networking, services, and software. We also would expect, you know, within the services space, maintenance and some ongoing hosting opportunities over time, and there are also opportunities even in software and peripherals (S&P) if you think about things like monitors and other items that you would sell in conjunction with a thin client solution.
… … Wyse as an independent entity has really been gaining momentum, growing into the number one market share position. In fact, their growth accelerated in their last fiscal year to 45 percent, far outstripping the mid-teens industry average growth, both historically and projected in the future for this segment. And that’s driven by the breadth of their portfolio and the differentiation that they bring to their customers. … the thin client portion of the entire stack is really a small piece. Our expectation and our experience has been that as we engage with our customers on helping them determine how to solve for this workload set of requirements – and it really is a workload that you’re talking about – you’re engaging at a much more comprehensive enterprise level about a solution. And if you move to a thin client solution, clearly the network, compute and storage moves, whether that’s into a private cloud or a public cloud; it’s part of the entire solution. Wyse as an independent entity didn’t have, of course, access to the broad portfolio that we do. … So, we believe the combination of our services and enterprise business with our capabilities, and the added capabilities of Wyse in the client space, is a great combination and will be extremely synergistic for us. … I think a key element is that much of their software value is captured in the hardware itself. So, for example, they build on top of the protocols in our industry and deliver features ahead of others, whether that’s multi-monitor support, the integration of voice, data and video, and/or USB redirect. Their ability to put those features into the platform ahead of the industry has allowed Wyse to extract value for that from its customers. Also, as we mentioned in our remarks, their thin OS and the IP around the thin OS have allowed them to drive greater performance using less memory, and they extract value for that in the industry.
And then, the bigger picture Dave hit on: for every thin client hardware dollar that exists in our industry, there’s $5 of enterprise servers, storage, networking and services that go along with it. So, our ability to really move into that $18 billion marketplace with an end to end set of solutions from Dell is certainly how we view this asset as a key piece. Q: Obviously, this is a capability that Dell could probably have developed internally. Does the fact that you decided to do this acquisition now suggest that Dell is seeing an inflection in the number of customers that are looking for these types of solutions? And maybe you could just give a little more detail on that and what you’re hearing from customers at this point on thin clients? … what we view is that the momentum around alternative computing is a trend that we see many customers continuing to experiment with and, in many cases, beginning to deploy. The adoption rates are still relatively low for desktop virtualization, but there clearly are a lot of customers out kicking the tires, very similar to maybe a decade ago around server virtualization. Not that I’m comparing the two, but more just the adoption rate. And we think this is an opportunity, particularly in verticals such as financial services, government and healthcare, to really take a leadership position. Wyse Technology does have a leadership position in the thin client itself. We have a very strong presence in the enterprise and each of those verticals, and Dell now being able to build end to end vertical solutions for these sets of customers where it makes sense is key. And again, I would emphasize we don’t see the entire world going to thin clients. We still think there’s healthy PC demand in the industry, and there’s a balance of alternative computing that allows people to take advantage of securing their information and managing their assets in a very differentiated way.
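The $18 billion figure is consistent with the numbers quoted earlier on the call: roughly $3 billion of thin client hardware demand by 2015, plus about $5 of attached datacenter spend (servers, storage, networking, services) per hardware dollar. A quick reconstruction, treating the round figures from the call as exact for illustration:

```python
# Rough reconstruction of the market sizing quoted on the call:
# ~$3B of thin client hardware by 2015, plus ~$5 of attached datacenter
# spend per thin client hardware dollar.
thin_client_hw_billions = 3.0
attach_ratio = 5.0            # datacenter dollars per hardware dollar

attached_billions = thin_client_hw_billions * attach_ratio
total_billions = thin_client_hw_billions + attached_billions
print(f"Attached stack: ${attached_billions:.0f}B, total: ${total_billions:.0f}B")
```

This yields the ~$15 billion infrastructure stack and ~$18 billion total opportunity cited by the Dell speakers.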
And as Dave said, which I think is key in our thinking here, this is a different workload. We look at this workload from the device out on the edge to what we do in the datacenter, providing a set of services and value offerings to our customers. … This is really specific use cases. For example, in regulated industries like healthcare and financial services, the value of centralizing your data to better have access and control is a specific use case that this thin client desktop virtualization lends itself to. And also, lends itself to environments in industries where, again, there’s a desire to simplify the endpoint and manage the application much more centrally. That is often the case in education and ever increasing in some of the emerging geographies. So, we see this as an opportunity, again, to provide specific solutions to specific customer problems and much more industry-centric approach to our business. Q: … do you have any specifics around what percentage of your VDI customers for Dell are incorporating a full PC versus a thin client? And then any thoughts as to whether there’s anything on the horizon that would, you know, increase the ratio of thin client penetration versus a full PC in virtualized installations? We don’t see any real dramatic change. The IDC forecast continues to project into the future a sort of steady 15 percent growth rate. So, there’s no apparent broad inflection point. And as we articulated a moment ago, these are mostly fairly specific situations where the value proposition applies. And so, today, the total opportunity is, you know, counting the entire stack is about $3 billion. And so that’s still a relatively de minimis piece of the overall PC industry. Q: But, just to be clear on that point, you do have customers who are virtualizing their desktop and still purchasing regular Dell PCs rather than thin client? … A common deployment today is on a standard PC that’s been virtualized. 
Yea, I mean, we’ve seen that business grow in demand through last year and expect it to grow in demand this year. … And again, I don’t think a zero client or a thin client is an answer for all customers. I think in our mind the bigger message here is we now have a range of devices, an incredibly strong portfolio of thin client devices and zero client devices from Wyse, the standard Dell set of PCs, which do virtualization, and now the ability to manage those in a very differentiated way with the key software assets that we’re bringing on board that expand themselves to tablets, expands itself to mobile phones. And the fact that in some cases these usage models are moving to the cloud and the ability to do client cloud computing, I think is key, and a key element of this acquisition. Q: … You mentioned earlier some of the verticals that have been early adopters for this type of technology, can you talk about what you think some of the remaining barriers to broader adoption may be and how, perhaps, Dell is still solving that and what this acquisition does to help you there? … from a vertical perspective … we see growth both in public sector and private sector, obviously, both in large enterprise and midmarket. And from a bigger perspective, we see from time to time, some companies do not have the right level of datacenter portfolio and datacenter ecosystem. Sometimes we see certain customers in certain – in vertical industries or geographies complain about the fact they don’t have the right networking systems in the backend. … these open up an opportunity, obviously. So, those two are mostly the biggest barriers for deploying desktop virtualization centric cloud client portfolios and platforms. … I think the key elements – one of the opportunities we have has changed the value proposition to make the total cost of ownership around manageability, securing the data and the devices much more efficient and attractive for our customers. 
I think the differentiated technology that we’re getting with the integration of Brad Anderson’s [Dell president, Enterprise Solutions] and Steve Schuckenbrock’s [Dell president of Services] businesses, allow us a unique position to do this for our customers. Q: … because you had mentioned seeing specific vertical opportunities, do you have any details on the split today of [Wyse] revenue by verticals or by geography? The geographic mix is roughly 40 percent U.S., 40 percent EMEA and 20 percent APJ. … from a vertical perspective, I would say 50 percent public sector, 50 percent private sector. When I say public sector, we mean, obviously, you know, state and local governments, healthcare, education, and federal government type of deployments and also private sector, you get the point. In terms of customer size segmentation, I would say about 50 percent large enterprise, 50 percent midmarket/small business is our business at very high level. Q: … if you expect to accelerate the growth rate actually from 45 percent, given synergies from Dell, and then, if you do or whatnot, is the revenue incremental or do you expect any substitutional revenue as well? Like, do you expect that maybe Dell client sales will be hurt by Wyse and then it wouldn’t be completely additive, we’d have to subtract a little from the client side? … our projection is that we will maybe conservatively grow with the industry relative to thin client. But, of course, as you’re pointing out, they didn’t have the ability to integrate the comprehensive solution with networking, storage, compute, as well as wrap all the services around it. So, much of the revenue acceleration is driven by those synergies that you’re pointing out and we expect that to be significant in terms of the growth rates that we’ll be able to achieve through the entire offering that we will provide. 
Q: … could you go back and speak to build versus buy because it seems to me that Dell would have had a fairly easy time replicating the thin clients from Wyse. … Getting to your point about internal versus external, a comment on this that this is one of the industries when you look at it where Wyse and one other competitor basically had almost 50 percent of the market and then it’s a tremendous drop off to the rest of the players, none greater than 10 percent. And so, the combination of Dell with Wyse will put us in a very dramatic number one – not dramatic, but clearly a number one market position. And so, there’s certain value, as you know, of being a significant player in that kind of an industry situation. … because one of the other elements of the question is Dell versus buy, could we have done this organically? And our view is, I think, very straightforward. This [Wyse] is a company that has 31 years of experience. They have the intellectual property, they have the software and as Dave mentioned earlier, 150 R&D engineers which 140 are in software. We think the stickiness and the solution in the stack that I showed on one of the earlier slides is the software stack that brings together the edge device, the management software that manages that, that sits into the cloud or sits into the datacenter, and the ability to build that software from essentially ground zero to being able to acquire those capabilities and that experience and the technology with it, puts in a, I think, a leadership position and in a position as we integrate this with Steve [Schuckenbrock’s] and Brad [Anderson]’s organizations and build out workloads and solutions to move quite quickly in the marketplace much quicker than we could have done it on our own. Q: … specifically, I noticed than one of your newer products is where the T10 is on an ARM based platform, so what type of ARM engineers are you bringing to Dell? 
… I’m just curious about ARM technology that’s being – will this further Dell’s ARM, I guess, initiatives? Well, the way that I’d like to answer that question is simply around we’re going to build client devices, both desktops, notebooks, tablets, smart phones, thin clients, zero clients at the appropriate hardware architecture. That will be a combination of x86 and ARM. Dell itself has a pretty strong capability around ARM processor architecture. And as we mentioned, there’s only a dozen or so hardware engineers inside Wyse technology that work on the hardware. So, us getting hardware competence or assets around the design of ARM from Wyse, that’s not the nature of this acquisition, it’s the 140 software engineers that were key. The hardware architects on the Dell side that are working on ARM implementation across the plethora of devices that I mentioned earlier would still be the core ARM architects and the knowledge based for our ARM implementations. The real question maybe lying in the fact, will we continue to support thin clients based on ARM architecture and this thin OS? Absolutely. We believe that’s part of the value proposition that Wyse has had in the marketplace today. It’s allowed them to move quite quickly in implementing new products to the marketplace, providing a performance advantage or a lower cost option because they’ve done a great job in designing for cost and providing comparable features in the marketplace that others do in a more costly way. And on top of that, they innovate the platform, as I mentioned earlier, around the management stack, and then the promise around the software engineer being able to take things like Stratus and PocketCloud and being able to build that around those platforms and integrate Dell’s services around that with the rest of our Dell client assets, we think is an opportunity for us to differentiate with this acquisition. Q: … how this sort of positions yourself with Citrix and the VMware’s of the world, i.e. 
you know, there’s not going to be any attempts to (inaudible) features and functionality you get with some of those software partners. … we have strong relationships with the key players in thin client computing and virtualization. Not only are we going to continue those partnerships, we’re going to grow those and foster even deeper relationships. … as you all know, we [Wyse] are pretty close partners with Microsoft, we do a lot of work with VMware, with Citrix. As these providers, you know, provide desktop virtualization methodology and technology between the datacenter and end use computing platforms. So, we add to that value and the partner heavily with them and obviously that’s going to continue and the opportunity now, obviously as Jeff said earlier, now we’re bringing the datacenter, the network and end user platform all in an integrated way to our customers for more value. So, we’re going to have more opportunities to partner with Microsoft, with VMware, with Citrix and others in that space. And also, one other piece to add, we provide some of the software we provide is differentiated in the marketplace, is the leader in this space also from the cloud, both on the infrastructure management side from the cloud, with a product called Wyse Stratus. So, many of you on the phone are using today, Wyse PocketCloud, the market leading product for content management from the cloud on any mobile device and also from your web browser, connecting your apps and content inside the content voice data video from your choice of your cloud, private or public. So, these are all opportunities for us to do more with Microsoft, with VMware and Citrix as they move forward. And that’s a big differentiator.
Dave Johnson, Senior Vice President, Dell Corporate Strategy:
We at Dell continue to execute on our strategy to develop and expand our solutions capability built on Dell’s intellectual property. These solutions are open with a focus on enhancing customer productivity, delivering results faster and eliminating unnecessary complexity. We’re making great progress in delivering solid results on this strategy.
Today’s announcement is an important next step in our end user computing strategy. It enhances our portfolio in the critical area of client computing and further supports our efforts to help our customers innovate end to end IT solutions, from the edge to the core of the cloud. The acquisition of Wyse Technology complements and expands Dell’s existing desktop virtualization capabilities, allowing us to offer industry leading and differentiated solutions to a fast-growing segment of the end user computing space.
In addition, it also provides synergies with our enterprise solutions business. Our ability to now offer an industry leading cloud client computing solution will provide opportunities for Dell to further accelerate the growth of our servers, storage and networking portfolios. IDC estimates that worldwide thin client demand will grow 15 percent per year to approximately $3 billion by 2015, and that the end to end datacenter infrastructure stack for these solutions is expected to exceed $15 billion by 2015. And with Dell’s portfolio, we’ll be able to participate in this broader opportunity.
Wyse Technology is a leader in the high growth and strategic area of cloud client computing, ranking number one worldwide in thin client unit share in the fourth quarter of 2011. Wyse delivered approximately $375 million in annual revenue over the trailing 12 months.
Wyse has approximately 500 employees with 150 employees in research and development, most of which are software engineers.
In addition, it has approximately 250 sales specialists that are solely focused on selling Wyse cloud client computing end to end solutions. They have more than 3000 channel partners that sell Wyse technology on a global basis.
This transaction is expected to be accretive to Dell’s non-GAAP earnings in the second half of fiscal year 2013.
Dell’s reputation as a trusted adviser to our customers, and our distribution and sales capabilities combined with Wyse’s innovative solutions in cloud computing, will help address customers’ needs. This is a great strategic fit for Dell, both operationally and culturally.
Finally, Dell has a strong track record of integrating acquisitions of this size. Based on experience with similar acquisitions, we expect this transaction to be accretive to earnings on a non-GAAP basis in the second half of this year.
We’re really excited about welcoming Wyse to Dell and even more excited about the opportunities for our customers.
Jeff Clarke, Vice Chairman, Global Operations and End User Computing Solutions:
We see a growing opportunity in cloud client computing. This includes thin and zero client hardware, client infrastructure management software, virtualization end user optimization software, datacenter networking and implementation and managed services.
It complements and extends the desktop virtualization capabilities that Dell has today. These solutions offer customers an alternative compute model and help enterprises enhance security, streamline desktop management and boost user productivity.
Examples of the benefits that a cloud client computing solution can provide include:
We have discussed our strategy in end user computing. The first step was to strengthen our core business by implementing sustainable supply chain improvements, the results of which were evident in FY ’12.
Our next goal was to deliver solutions that include compelling devices plus the tools to secure and manage that hardware, software and data. You’ve seen the results of that with some of our recent product announcements, as well as the strong growth of our transactional services business in FY ’12.
And finally, we indicated our intentions to expand our reach into new and fast-growing areas of end user computing. The acquisition of Wyse Technology and its portfolio of industry leading capabilities is the next step in our end user computing strategy.
Wyse is a global leader in client – excuse me – in cloud client computing. Its portfolio includes a wide selection of industry leading thin and zero client devices designed to integrate easily into a virtualized or web based infrastructure.
Wyse solutions require less memory and processing power than other comparable thin client solutions, making them more cost competitive and effective for customers.
To date, Dell has relied on shared IP solutions to serve its thin client customers. With this transaction, we are moving to more profitable, industry leading and complete end to end solutions with Dell owned IP and the associated R&D capabilities.
Wyse Technology’s portfolio complements and extends Dell’s vision of providing innovative and complete end to end solutions to our customers. In addition, the combination of Wyse Technology with Dell’s brand and customer reach presents a dramatic increase in Wyse’s addressable demand.
I’d like to leave you with the following takeaways:
Tarkan Maner, CEO of Wyse Technology:
The entire team at Wyse is excited about joining the Dell team and becoming an integral part of enabling Dell’s end user computing vision. This agreement is great news for our customers and channel members worldwide. We’ve been focused on delivering innovative solutions for our customers and channel members for the past 30 years now. To be exact, 31 years now.
Dell and Wyse share a focus on delivering innovative IP, world class service support, and optimized overall value to our customers and channel members.
Customers and channel members rely on Dell to provide comprehensive end to end IT solutions. Clearly, Dell distribution, reach and brand are well recognized across the industry and it has industry leading capabilities across servers, storage, networking services and end user computing solutions.
Wyse has historically been recognized as a leader in cloud client computing where our skills and capabilities in security, manageability, availability, reliability, lower total cost of ownership both in terms of CAPEX and OPEX, and scalability have been key differentiators in delivering the best value to our customers and channel members.
Through the combination with Dell, we obviously see tremendous opportunities to grow our core desktop virtualization business, as well as to expand into new and fast growing market segments in mobility and cloud computing.
These include infrastructure and content management as-a-service solutions from the cloud for large enterprises, for small and medium businesses, as well as consumers.
We have extended our solutions into the unified communications space lately as well, providing voice, data, and video (what we call triple play) type of content delivery from the cloud for any user, for any content, for any app on any device, anywhere, anytime. And we would like to say, without compromise, without constraint or conflict.
Our strong alliance ecosystem will be able to benefit from the extensive solutions portfolio they can now provide to their customers in teaming up with Dell. The Dell PartnerDirect program currently has 100,000 channel members and a proven track record of effectively onboarding and training channel members of acquired companies.
This is exciting for us. Wyse has a history of innovation across all of our product lines and has recently introduced many new solutions for our customers and channel members, with more than 180 patents (182, to be exact) in cloud client computing.
We believe that taking the next step at Dell is a very natural progression for our business and offers our customers and channel members some great advantages that are not available to us today at our scale and size.
It is exciting to think about the potential of integrating Wyse’s technology and R&D capabilities with Dell’s reach, existing solutions, capabilities and reputation.
We believe our customers and channel members worldwide will benefit in a big way from this entire combination.
Q: … just some more detail on Wyse’s hardware/software mix and margin structure, and what growth assumptions did you guys make to justify the price and over what time period and did you make any assumptions about cross-selling Dell branded enterprise solutions when coming up with the price?
Today, the majority of the revenue is from the thin client and zero client business with the growing percentage of that revenue now starting to come from some other areas, including some of the things that Tarkan spoke about. … If we look at and project out a few years, clearly a big part of this transaction is the synergy that we would get from our datacenter solutions business, including servers, storage, networking services, and software.
We also would expect, you know, within the services space, maintenance and some ongoing hosting opportunities over time, and there are also opportunities even in software and peripherals (S&P) if you think about the things like monitors and other items that you would sell in conjunction with a thin client solution.
… Wyse as an independent entity has really been gaining momentum to grow into a number one market share position. In fact, their growth accelerated in their last fiscal year to 45 percent.
Far outstripping the mid-teens industry average growth, both historically and projected in the future for this segment. And that’s driven by the breadth of their portfolio and the differentiation that they bring to their customers.
… the thin client portion of the entire stack is really a small piece. Our expectation and our experience has been, as we engage with our customers on helping them determine how to solve for this workload set of requirements – and it really is a workload that you’re talking about – that you’re engaging at a much more comprehensive enterprise level about a solution.
And if you move to a thin client solution, clearly the network, compute and storage moves, whether that’s into a private cloud or a public cloud; it’s part of the entire solution.
Wyse is an independent entity that didn’t have, of course, access to the broad portfolio that we do. …
So, we believe the combination of our services and enterprise capabilities with the added capabilities of Wyse in the client space is a great combination and will be extremely synergistic for us.
I think a key element is that much of their software value is captured in the hardware itself. So, for example, they build on top of the protocols in our industry and invent features ahead of others, whether that’s multi-monitor support, the integration of voice, data and video, and/or USB redirect.
Their ability to put those features into the platform ahead of the industry has allowed Wyse to extract value for that from its customers.
Also, as we mentioned in our remarks, their thin OS and the IP around the thin OS have allowed them to drive greater performance using less memory, and they extract a value for that in the industry.
And then the bigger picture Dave hit on: for every thin client hardware dollar that exists in our industry, there’s $5 of enterprise servers, storage, networking and services that go along with that. So, our ability to really move into that $18 billion marketplace with an end to end set of solutions from Dell is certainly a key piece of how we view the asset.
Q: Obviously, this is a capability that Dell could have developed probably internally. Does the fact that you decided to do this acquisition now suggest that you’re – Dell is seeing an inflection in the number of customers that are looking for these types of solutions and maybe if you could just give a little more detail on that and what you’re hearing from customers at this point on thin client?
… what we view is the momentum around alternative computing is a trend that we see many customers continuing to experiment with and in many cases, beginning to deploy.
The adoption rates are still relatively low for desktop virtualization, but there clearly are a lot of customers out kicking the tires, very similar to maybe a decade ago around server virtualization. Not that I’m comparing the two, but more of just the adoption rate.
And we think this is an opportunity, particularly in verticals such as government, healthcare, and the financial services sector, to really take a leadership position. Wyse Technology does have a leadership position in the thin client itself.
We have a very strong presence in the enterprise and in each of those verticals, and Dell now being able to build end to end vertical solutions for these sets of customers where it makes sense is key.
And again, I would emphasize we don’t see the entire world going to thin clients. We still think there’s a healthy PC demand in the industry and there’s a balance of alternative computing that allows people to take advantage of securing their information, managing the assets in a very differentiated way.
And as Dave said, which I think is key in our thinking here, this is a different workload. We look at this workload from the device out on the edge to what we do in the datacenter, providing a set of services and value offerings to our customers.
… These are really specific use cases. For example, in regulated industries like healthcare and financial services, the value of centralizing your data for better access and control is a specific use case that this thin client desktop virtualization lends itself to.
It also lends itself to environments in industries where, again, there’s a desire to simplify the endpoint and manage the application much more centrally. That is often the case in education, and increasingly in some of the emerging geographies.
So, we see this as an opportunity, again, to provide specific solutions to specific customer problems and a much more industry-centric approach to our business.
Q: … do you have any specifics around what percentage of your VDI customers for Dell are incorporating a full PC versus a thin client? And then any thoughts as to whether there’s anything on the horizon that would, you know, increase the ratio of thin client penetration versus a full PC in virtualized installations?
We don’t see any real dramatic change. The IDC forecast continues to project into the future a sort of steady 15 percent growth rate. So, there’s no apparent broad inflection point.
And as we articulated a moment ago, these are mostly fairly specific situations where the value proposition applies. And so, today, the total opportunity is, you know, counting the entire stack is about $3 billion. And so that’s still a relatively de minimis piece of the overall PC industry.
Q: But, just to be clear on that point, you do have customers who are virtualizing their desktop and still purchasing regular Dell PCs rather than thin client?
… A common deployment today is on a standard PC that’s been virtualized. Yeah, I mean, we’ve seen that business grow in demand through last year and expect it to grow in demand this year.
… And again, I don’t think a zero client or a thin client is an answer for all customers. I think in our mind the bigger message here is we now have a range of devices – an incredibly strong portfolio of thin client devices and zero client devices from Wyse, the standard Dell set of PCs, which do virtualization – and now the ability to manage those in a very differentiated way with the key software assets that we’re bringing on board, which extend to tablets and to mobile phones.
And the fact that in some cases these usage models are moving to the cloud and the ability to do client cloud computing, I think is key, and a key element of this acquisition.
Q: … You mentioned earlier some of the verticals that have been early adopters for this type of technology, can you talk about what you think some of the remaining barriers to broader adoption may be and how, perhaps, Dell is still solving that and what this acquisition does to help you there?
… from a vertical perspective … we see growth both in public sector and private sector, obviously, both in large enterprise and midmarket.
And from a bigger perspective, we see from time to time that some companies do not have the right level of datacenter portfolio and datacenter ecosystem. Sometimes we see certain customers in certain vertical industries or geographies complain about the fact they don’t have the right networking systems in the backend.
… these open up an opportunity, obviously. So, those two are the biggest barriers for deploying desktop virtualization centric cloud client portfolios and platforms.
… I think one of the key opportunities we have is to change the value proposition to make the total cost of ownership around manageability, and securing the data and the devices, much more efficient and attractive for our customers.
I think the differentiated technology that we’re getting, with the integration of Brad Anderson’s [Dell president, Enterprise Solutions] and Steve Schuckenbrock’s [Dell president of Services] businesses, puts us in a unique position to do this for our customers.
Q: … because you had mentioned seeing specific vertical opportunities, do you have any details on the split today of [Wyse] revenue by verticals or by geography?
The geographic mix is roughly 40 percent U.S., 40 percent EMEA and 20 percent APJ. … from a vertical perspective, I would say 50 percent public sector, 50 percent private sector. When I say public sector, we mean, obviously, you know, state and local governments, healthcare, education, and federal government type of deployments and also private sector, you get the point.
In terms of customer size segmentation, I would say about 50 percent large enterprise, 50 percent midmarket/small business is our business at very high level.
Q: … if you expect to accelerate the growth rate actually from 45 percent, given synergies from Dell, and then, if you do or whatnot, is the revenue incremental or do you expect any substitutional revenue as well? Like, do you expect that maybe Dell client sales will be hurt by Wyse and then it wouldn’t be completely additive, we’d have to subtract a little from the client side?
… our projection is that we will maybe conservatively grow with the industry relative to thin client.
But, of course, as you’re pointing out, they didn’t have the ability to integrate the comprehensive solution with networking, storage, compute, as well as wrap all the services around it. So, much of the revenue acceleration is driven by those synergies that you’re pointing out and we expect that to be significant in terms of the growth rates that we’ll be able to achieve through the entire offering that we will provide.
Q: … could you go back and speak to build versus buy because it seems to me that Dell would have had a fairly easy time replicating the thin clients from Wyse.
Getting to your point about internal versus external, a comment on this: this is one of those industries where, when you look at it, Wyse and one other competitor basically had almost 50 percent of the market, and then there’s a tremendous drop off to the rest of the players, none greater than 10 percent.
And so, the combination of Dell with Wyse will put us in a very dramatic number one – not dramatic, but clearly a number one market position. And so, there’s certain value, as you know, of being a significant player in that kind of an industry situation.
… because one of the other elements of the question is build versus buy, could we have done this organically?
And our view is, I think, very straightforward. This [Wyse] is a company that has 31 years of experience. They have the intellectual property, they have the software and, as Dave mentioned earlier, 150 R&D engineers, of which 140 are in software.
We think the stickiness is in the solution and the stack that I showed on one of the earlier slides: the software stack that brings together the edge device and the management software that manages it, which sits in the cloud or in the datacenter. Compared to building that software from essentially ground zero, being able to acquire those capabilities, that experience and the technology with it puts us, I think, in a leadership position – and in a position, as we integrate this with Steve [Schuckenbrock]’s and Brad [Anderson]’s organizations and build out workloads and solutions, to move quite quickly in the marketplace, much quicker than we could have done it on our own.
Q: … specifically, I noticed that one of your newer products, the T10, is on an ARM based platform, so what type of ARM engineers are you bringing to Dell? … I’m just curious about ARM technology that’s being – will this further Dell’s ARM, I guess, initiatives?
Well, the way that I’d like to answer that question is simply this: we’re going to build client devices – desktops, notebooks, tablets, smart phones, thin clients, zero clients – at the appropriate hardware architecture. That will be a combination of x86 and ARM.
Dell itself has a pretty strong capability around ARM processor architecture. And as we mentioned, there are only a dozen or so hardware engineers inside Wyse Technology who work on the hardware. So, getting hardware competence or assets around ARM design from Wyse is not the nature of this acquisition; it’s the 140 software engineers that were key.
The hardware architects on the Dell side who are working on ARM implementations across the plethora of devices I mentioned earlier will remain the core ARM architects and the knowledge base for our ARM implementations.
The real question may be: will we continue to support thin clients based on ARM architecture and this thin OS? Absolutely. We believe that’s part of the value proposition Wyse has had in the marketplace today. It has allowed them to move quite quickly in bringing new products to the marketplace, providing a performance advantage or a lower cost option, because they’ve done a great job designing for cost and providing features comparable to what others offer in a more costly way.
And on top of that, they innovate the platform, as I mentioned earlier, around the management stack, and then the promise around the software engineer being able to take things like Stratus and PocketCloud and being able to build that around those platforms and integrate Dell’s services around that with the rest of our Dell client assets, we think is an opportunity for us to differentiate with this acquisition.
Q: … how this sort of positions yourself with Citrix and the VMware’s of the world, i.e. you know, there’s not going to be any attempts to (inaudible) features and functionality you get with some of those software partners.
… we have strong relationships with the key players in thin client computing and virtualization. Not only are we going to continue those partnerships, we’re going to grow those and foster even deeper relationships.
… as you all know, we [Wyse] are pretty close partners with Microsoft, and we do a lot of work with VMware and Citrix, as these providers deliver desktop virtualization methodology and technology between the datacenter and end-user computing platforms.
So, we add to that value and partner heavily with them, and obviously that’s going to continue. The opportunity now, as Jeff said earlier, is that we’re bringing the datacenter, the network and the end-user platform to our customers in an integrated way for more value. So, we’re going to have more opportunities to partner with Microsoft, VMware, Citrix and others in that space.
And one other piece to add: some of the software we provide is differentiated in the marketplace and leads this space from the cloud, on the infrastructure management side, with a product called Wyse Stratus. And many of you on the phone are using Wyse PocketCloud today, the market-leading product for content management from the cloud on any mobile device and from your web browser, connecting your apps and content (voice, data, video) from your cloud of choice, private or public.
So, these are all opportunities for us to do more with Microsoft, with VMware and Citrix as they move forward. And that’s a big differentiator.
Updates: Reflective Outlook: Shades of Gray or Colorful? [Touch and Display-Enhancement Issue of Information Display, Sept 21, 2012]
E Ink and SiPix
Meanwhile, could color have anything to do with E Ink’s recent announcement of its intention to acquire SiPix, whose microcup technology does show promise in that area? E Ink will certainly utilize SiPix’s color capabilities, says Sriram K. Peruvemba, Chief Marketing Officer for E Ink Holdings. Peruvemba characterizes that color as having “some of the same advantages as E Ink in that it is low power, sunlight readable, thin, light … .”
Beyond a doubt, one area of interest for E Ink is SiPix’s manufacturing capabilities. “SiPix’s factories, equipment, and infrastructure are relatively newer, which gives us greater flexibility and additional capacity as we seek new markets,” says Peruvemba. Among the markets that the potential acquisition will make more accessible, he says, are digital signage and smart cards.
When it comes to E Ink, it isn’t necessarily all about color, notes University of Cincinnati’s Jason Heikenfeld, who has served as a guest editor for Information Display (and is also a founder of e-Paper up-and-comer Gamma Dynamics, mentioned later on). “We should maintain excitement about the continued expansion of monochrome e-Paper products,” he says. “A quiet revolution continues to take place there. Color-video e-Paper will also have its day, but today we should be impressed with E Ink’s continued product growth and diversification.”
Any way you look at it, with E Ink, whose share of the e-Reader market is more than 90%, poised to acquire AUO subsidiary SiPix, further consolidation in the e-Paper market seems inevitable. At press time, E Ink had reached an agreement to acquire 82.7% of SiPix’s shares and was seeking to acquire up to a 100% stake, valued at approximately NT$1.5 billion [US$ 51.2 million]. [See: Complementary ePaper technology adds to E Ink’s portfolio of offerings [E Ink Holdings press release, Aug 3, 2012]] As DisplaySearch analyst Paul Semenza wrote in a recent blog, titled And Then There Was One – E Ink to Acquire SiPix, “Combined with Bridgestone’s exit [earlier this year] from the electrophoretic display (EPD) business, this means that E Ink, the first company to mass produce EPDs, will be the sole manufacturer of the technology.”
Yet, the e-Paper story isn’t all black and white. In the future, look for news from Liquavista (which Samsung acquired in January 2011) and Gamma Dynamics (a spinoff from the University of Cincinnati). Both companies have video-capable displays (Liquavista’s is based on electrowetting and Gamma Dynamics’s on electrofluidics) that are reported to show more vibrant color than previously available.
Meanwhile, innovation in “color inking” continues, as evidenced by Vivid e-ink makes ditching books a colourful choice [NewScientist, Sept 5, 2012]
… Naoki Hiji of Fuji Xerox in Kaisei, Japan, and colleagues have built a prototype system that uses tiny fluid-filled cells containing cyan, magenta, yellow and white particles to produce almost any colour.
Black-and-white e-ink displays work by having negatively charged black particles and positively charged white particles suspended in fluid inside a cell. Apply a negative electrical field to the cell, and white particles move to the top and become visible; flip the current, and black shows up.
Hiji’s display uses the same principle, but each colour particle responds to a certain intensity of electrical field, while the white particles are uncharged (see diagram). …
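The threshold-driven scheme Hiji describes can be illustrated with a toy model. This is purely an illustrative sketch: the pigment names match the article, but the charge convention and the field-strength thresholds below are invented for the example and are not taken from the Fuji Xerox prototype.

```python
# Toy model of a threshold-driven electrophoretic colour cell.
# NOTE: thresholds and the sign convention are invented for
# illustration; the real drive scheme is not public in this detail.

# Each charged pigment moves to the viewing surface once the applied
# field magnitude exceeds its own threshold; the uncharged white
# particles never move and simply scatter light.
PARTICLES = [
    # (name, charge sign, field-magnitude threshold, arbitrary units)
    ("cyan",    +1, 1.0),
    ("magenta", +1, 2.0),
    ("yellow",  +1, 3.0),
]

def visible_colour(field):
    """Return which pigments are driven to the viewing surface.

    A positively charged pigment is attracted toward the top of the
    cell when the field is negative (toy convention). An empty result
    means only the uncharged white particles are visible.
    """
    surfaced = [
        name for name, charge, threshold in PARTICLES
        if abs(field) >= threshold and charge * field < 0
    ]
    return surfaced or ["white"]

print(visible_colour(0.5))   # below every threshold: reads as white
print(visible_colour(-1.5))  # only cyan's threshold is exceeded
print(visible_colour(-3.5))  # all three pigments surface
```

The point of the model is the one the article makes: with a single electrode per cell, varying only the intensity (not just the polarity) of the field selects which pigment layer becomes visible, which is how one cell can render more than two states.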
No problem with reading on tablets over a long period of time [Eva Siegenthaler on IFeL bloggt, Sept 20, 2012]
“Tablets are not suited for reading over an extended period of time”; this statement is widespread. For example, Scott Liu, head of the American-Taiwanese company E Ink Holdings, states that reading over an extended period of time on a liquid crystal display leads to increased visual fatigue. “The iPad is a fascinating multifunctional device, but not intended for hour-long reading” (stern.de). In comparison, E-ink readers, with their paper-like displays, are regarded as an adequate replacement for a book.
But is it true that the tablet is an inadequate device for reading over an extended period of time? Critical statements against the tablet as a replacement for the book are widespread, but there is a lack of scientific evidence for these assumptions. For that reason, a study answering this question was conducted at the Institute for Research in Open-, Distance- and eLearning (IFeL).
In a laboratory study, the participants read for several hours on either E-ink (Sony PRS-600) or LCD-Tablet (Apple iPad), where different measures of reading behaviour and visual strain were regularly (after each hour) recorded. These dependent measures included subjective (visual) fatigue, a letter search task, reading speed, oculomotor behaviour, and pupillary light reflex.
The results of the study show that reading on both display types is good and very similar in terms of both subjective and objective measures. Participants did not have more visual fatigue when reading on a tablet than when reading on an E-ink device. We concluded from this study that it is not the technology itself, but rather the image quality that is crucial for reading. The study shows that compared to the visual display units of past decades, recent electronic displays allow good and comfortable reading, even for extended time periods.
A few critical remarks still need to be made though. This laboratory study was conducted under artificial light conditions. Therefore it is unclear if an experiment under daylight conditions would lead to the same results. Another interesting question is how the sleep quality is influenced by different display technologies.
But still, the result of the study is an important novelty in reading research, and it contradicts many statements from publishers and subjective user self-tests claiming that tablets are not appropriate for reading over a long period of time.
More information on the study is available online: http://onlinelibrary.wiley.com/doi/10.1111/j.1475-1313.2012.00928.x/abstract
Siegenthaler, E., Bochud, Y., Bergamin, P. and Wurtz, P. (2012), Reading on LCD vs e-Ink displays: effects on fatigue and visual strain. Ophthalmic and Physiological Optics, 32: 367–374. doi: 10.1111/j.1475-1313.2012.00928.x
Beyond the Kindle: what the future holds for E Ink [TechRadar, Sept 10, 2012]
IN DEPTH Ereaders for classrooms, smart locks and dual screen smartphones on the cards
“The ereader market has been tripling in volume since 2007 but not this year,” explained Sriram Peruvemba, E Ink’s chief marketing officer, to TechRadar.
“It is partly to do with tablets but the biggest reason is that the economy is off at the moment… we have also seen not as many product launches as last year and the year before.”
“We believe that E Ink will come to home appliances. We are thinking differently – we want E Ink on every surface.
“There are a lot of dumb surfaces around and by adding the E Ink technology we can transform them, by adding a display and making them smart.
“We are going to keep going in that direction, enhancing products. Whether it is animated shelf labels, USB keys… drills.”
“We create a lot of these concepts and some of them go nowhere, while some are picked up” – the company continues to create prototypes to show how versatile E Ink technology can be.
It also seems that sometimes an E Ink device created for one specific market may take on a wholly different guise when it is finally released in the wild.
“One concept that was picked up but not how we originally intended was our E Ink lock,” said Peruvemba.
“This was originally pitched as a bicycle lock, where it could tell you if your bike was locked properly or not. It’s very low powered, just an E Ink display with a hole in the middle. But it just didn’t get picked up; no one in the bicycle world wanted it.”
“And then a company called InVue decided to take it on and use it for cabinet displays, it’s virtually indestructible so no more broken keys – alleviating a problem that retailers have with their cabinets.”
This move away from ereaders doesn’t mean that E Ink is not innovating in the market it continues to dominate.
The latest Kindle to be launched, the Kindle Paperwhite, shows that E Ink can compete with tablets when it comes to displays.
Using E Ink’s Pearl technology and LED lighting, it lets you use your Kindle in the dark while still offering a screen that’s easier on the eyes – something tablets just can’t do.
One final place where we could see an E Ink screen is on the back of a mobile phone. Again, it’s E Ink’s mantra of making a ‘dumb’ space smart. According to Peruvemba, an additional screen on a mobile could be exactly what consumers need.
“Most of these mobile phones have nothing going on on the back.
“We can add another display at low cost on the back of the device and offer things like clocks, stock information.”
Peruvemba also hinted: “There are vendors looking into this technology – it is very new but typically we should see this type of concept come within the year.”
Looks like the world is going to be E Ink stained for some time to come.
E-Ink concept double-display smartphone hands-on [SlashGear, Aug 31, 2012]
… What could a twin-screen smartphone of this sort be used for? E Ink has a few ideas, though is leaving most of that to OEMs. An ereader app is the obvious choice, though you could also show a digital boarding pass for a plane (even if you had no battery life remaining on your phone to drive the regular screen), QR codes, or mapping directions. Alternatively, the panel could be used to show promotional information, such as vouchers for nearby stores, or even sponsored messages in return for free call, message and data credit. …
… In 2011 consolidated sales revenues totaled NT$ 38.43 billion [US$1.3 billion], a growth of 53% as compared to 2010. Profit after-tax totaled NT $6.53 billion [US$220.85 million] and EPS totaled NT$6.05, a growth of 59 percent as compared to 2010. … Scott Liu, the chairman of E Ink, said, “… in 2012 we expect to strengthen our competitiveness and continue development of both flexible and color ePaper technologies. Additionally, we expect a customer to launch a high-resolution product with touch technology within this year.”
As to market development, Liu said, “in addition to the eReader market, we are also actively expanding into the education and business markets. …”
Today E Ink also announced the launch of a new global website, www.einkgroup.com, which provides product, technology and operational information for all of the companies under the E Ink umbrella.
Sriram Peruvemba, chief marketing officer of E Ink Holdings, said: “As our businesses expand and products become diversified, we are keenly aware of the importance of integrating our internal resources globally. This is why we decided to launch einkgroup.com as the portal of E Ink Holdings around the globe. This website provides information of product and technology of E Ink Holdings, in which browsers can easily find the information they need.”
Visitors to the site will find a consolidated location to browse the technology, product offerings and company backgrounds for the organizations under the E Ink Holdings umbrella. The site will host the Investor Relations portal for E Ink Holdings, as well as sales and marketing information. In addition to their inclusion in the new website, product line websites, such as www.eink.com and www.hydis.com will continue to host information particular to their technologies and job offerings.
Shares of E Ink under pressure amid market uncertainty [Focus Taiwan of the CNA, Feb 23, 2012]
“Despite record high earnings for 2011, E Ink’s gross margin has been squeezed by price cuts by the Kindle series of e-paper devices of Amazon, which is the largest customer of the Taiwanese firm,” Mirae Asset Management analyst Arch Shih said.
“With market uncertainty expected to continue to impact product prices, I am afraid that E Ink’s profit margin will keep falling in the first quarter of this year,” he said.
In the fourth quarter of last year, E Ink’s gross margin fell 6.8 percentage points to 28.6 percent, while it posted NT$1.28 billion in net profit, or NT$1.19 per share, down from NT$2.08 recorded in the third quarter.
… “Amazon has tried its best to stage a price war in a bid to grasp a larger market share, and at the same time, it has cut contract production fees to its suppliers like E Ink,” Shih said. “This development has imposed a pressing threat to E Ink’s operations.”
“Share prices tend to reflect forward-looking prospects, so it was no surprise to see investors dumping the stock,” he went on.
E Ink said it has become very cautious about its earnings outlook for 2012 and that it is possible its sales and profit will see the largest challenge of the year in the first quarter due to slow-season effects.
Shih said the global EPD market is suffering a failure to expand content to attract buyers and that the problem is unlikely to be resolved any time soon.
“I doubt E Ink will have a quick turnaround after the first quarter. Its share price is expected to continue to be pressured,” Shih said.
E Ink reports 33.26% earnings decline [Taipei Times, Feb 23, 2012]
… The decline in profit was because of the higher shipments of fringe field switching (FFS) LCD panels, which offer lower margins than the company’s flagship product — e-paper — E Ink chairman Scott Liu (劉思誠) said at an investors’ conference yesterday. …
Liu said this year would be a “challenging year full of uncertainties,” mainly because of the possible fallout from the unresolved eurozone debt crisis.
“Clients are conservative and said the market visibility is low,” he said, adding that E Ink would no longer provide shipment targets or projections in a response to clients’ requests.
E Ink posts EPS of over NT$6 in 2011 [DIGITIMES, Feb 23, 2012]
EIH plans to launch its next-generation color e-paper products in the fourth quarter of 2012, but the company currently does not have plans to ramp up its capacity for color EPD products, Liu said.
The company is also developing flexible e-paper products, using plastic substrates instead of glass substrates used previously, with new products to be released in the third or fourth quarter, Liu revealed.
Amazon 6″ color Kindle will not be arriving this year [übergizmo, Feb 21, 2012]
Just yesterday we reported that according to Digitimes, Amazon is supposedly working on a 6” Kindle e-reader that will be utilizing colored e-ink. This rumor supposedly came about based on reports that E Ink Holdings had landed an order from Amazon for 6” color e-reader modules, but Nate Hoffelder over at The Digital Reader, who’s had a pretty decent track record when it comes to these rumors, doesn’t seem to think so.
According to Nate who contacted his source at E Ink, this is completely untrue. His source told him that if Amazon were indeed planning a color e-reader, they would only be able to start shipping them in a year’s time, because that would be how long it would take Amazon to set up a new production line for this rumored device.
He also revealed that while E Ink has been making the Triton screens for years, it has mainly been the 9.7” model and not one in the 6” variety like the rumors had suggested and can be found in the Ectaco Jetbook Color. For now it looks like if you had hopes for a 6” color Amazon Kindle e-reader this year, you could be out of luck but we’ll be keeping our eyes open either way.
End of updates
EPD maker E Ink Holdings (EIH) reportedly has landed orders for 6-inch color e-book reader modules from Amazon with shipments to begin in March, according to a Chinese-language Economic Daily News (EDN) report.
Shipments of the touch-enabled e-book reader modules are expected to top three million units a month, the paper said.
EIH is to reveal its financial results for 2011 at an investors conference on February 22, said the paper, which added that EIH is expected to report an EPS of over NT$6 (US$0.2) for the year.
At: E Ink lands 6-inch color e-book reader module orders from Amazon, says paper [Feb 20, 2012]
So, E Ink’s business seems to expand quite well along the traditional e-book reader direction. But what is the more general business direction? In this post I am giving the answer.
Before that, it is also worth going through the previous posts: E Ink Holdings EPD prospects are good [April 30, 2011 – Jan 9, 2012], Barnes & Noble NOOK offensive [May 25, 2011], E Ink and Epson achieve world-leading ePaper resolution [May 23, 2011] and Hanvon – E-Ink strategic e-reader alliance for price/volume leadership supplementing Hanvon’s premium strategy mostly based on an alliance with Microsoft and Intel [Dec 21, 2010].
The marketing idea of E Ink as a technology for all kinds of smart surfaces came up in 2008 at the E Ink Corporation, when it was an organization independent of any EPD panel manufacturers:
“Fashion is a key driver in today’s world,” said Sriram Peruvemba, Vice President of Marketing, E Ink Corp. “E Ink offers a smart surface that changes the design and brings mobile phones to the fashion forefront of technology.”
See: E INK ANNOUNCES MOBILE PHONE DESIGN WINS IN JAPAN [July 22, 2008]
When it was acquired in 2010 by the leading EPD panel manufacturer (then called Prime View International, immediately renamed E Ink Holdings), that idea was picked up by the new owner as well and even extended into a kind of general vision:
“The E Ink name is synonymous with the ePaper industry that we pioneered and in which we enjoy a leadership position,” said Dr. Scott Liu, Chairman and CEO of E Ink Holdings Incorporated. “We are now a globally recognized brand name and aim to have our displays on every smart surface.”
See: PRIME VIEW INTERNATIONAL (PVI) IS NOW E INK HOLDINGS INCORPORATED [June 18, 2010]
And now at CES 2012 we had a full manifestation of that marketing concept:
The above note, produced by the author of the video, Nicolas Charbonnier, aka Charbax, is not meant to cover all of the talk by E Ink Holdings’ Chief Marketing Officer Sriram Peruvemba (or Sri Peruvemba). Since this post is about the strategic value proposition of E Ink, I have composed a note of my own, which also follows the order of presentation by Sri Peruvemba in the Charbax video:
- Amazon Kindle lineup, most used in the area of leisure reading.
- The 11.5″ 300 DPI eDocument reader made in collaboration with Epson, going beyond publishing into what they are calling the e-document space. Their plan is to replace the printed paper, pads of paper and electronic forms that various people currently handle with their laptops. These devices will have WiFi support, pen input and the ability to edit. Some applications you may imagine are inventory logistics, the doctor’s office, and attorneys and other office workers carrying this. A number of images very suitable for this 300 DPI display can be put on them, e.g. circuits, graphs, charts, maps and that sort of thing. This is almost twice the resolution of most of the other displays they are shipping, which have 167 to 200 DPI resolution.
- With E-Ink technology they are at the point where it is better than reading on printed paper [for B/W]. Now they have pen input available on their devices, thus replacing both printed paper and pen with their products. The idea here is to allow people to highlight, annotate, write notes and fill out forms. The E-Ink display (not the processor) is the “limiting factor” here; in the B/W case the native speed can be used, which is a 250 msec response time for the E-Ink display.
- Color E-Ink display based on the Triton display material. The ECTACO JetBook Color, an actually shipping device, is shown. It is being deployed in Russian schools as a replacement for textbooks. They are still in the early stages of deployment with this device but see a lot of promise in the education sector. They expect education to become one of the largest markets for E-Ink, for both the monochrome and color Triton displays. A device like the ECTACO JetBook Color is not simply a replacement for textbooks; in fact it is a library. You can put a thousand books or more on any one of these devices, replacing the library: literally every student has a library of his/her own. It also increases the interaction between the student and the teacher, as tests are created and assessed almost instantaneously. Another point is that the color feature of the EPD display allows more information to be conveyed, so students have a much better learning tool than they had with printed books. Also, books will never be out of stock, there will be no late fees with the library, the content is available 24×7, etc. As for the price of the color display: the color version is still based on the monochrome display with a color filter on top. The color filter is an additional cost, but most of the additional cost on the device itself would probably be the software (from E-Ink Holdings’ customer engineering the device) that enables the extra features of a color device compared to the monochrome one.
- Triton color display for signage (like the large billboards put up on the streets), with color so saturated that it looked like an LCD, except that it was thinner than OLED, sunlight readable, uses no backlight and uses very little power. This is all the result of a significant increase in pixel size, so significantly more light reaches each pixel. They are looking at applications in the signage space, where you are looking at a device not from 6 inches away but from 6 feet or 60 feet away; the larger pixels are perfect for that.
- Brief showing of the SURF display (used in a hand drill shown later), just to demonstrate the display material for that case.
- The actual E-Ink display material is extremely thin and flexible, like a sheet of transparency foil. This is the direction they are going: making displays without glass that conform to non-flat surfaces, getting into non-publishing applications like signage.
- A concept power drill with the SURF display put on its surface, as a case showing the usefulness of an EPD display for a battery-powered device when otherwise you would have no idea whether there is enough battery power left. This could be quite an annoyance when you climb up a ladder and in the process discover that the drill’s battery drained down too soon. They can cut the display material into any needed shape, so the display can be non-rectangular; e.g. a wrist watch is shown where the display material is round. E-Ink is unique in this respect among display technologies.
- The Eton Ruckus music player with an E Ink display, launched that week, was demonstrated; it is meant for outdoor applications and considered virtually indestructible. It combines solar technology with the E-Ink display, so essentially all of the solar power goes to playing music rather than showing information. This is considered a perfect combination for applications like that, and they foresee many more deployments like it in the future.
- A couple of wristwatches: one with a segmented SURF display that is curved and uniquely shaped (a Phosphor device), and then a matrix display in a Seiko watch where you can have images changed on the display.
- For segmented displays they can go for very low volumes, because that kind of display doesn’t involve fabricating a backplane in a fab. The matrix displays (for which much larger order volumes are required), on the other hand, can be made in very large sizes, since they make their display material in rolls seven feet wide and a kilometer long. So a lot of new applications will come up in areas where display technology hasn’t been used before. Their unique selling point there is the ruggedness of the E-Ink display material as well.
After that, it is worth watching the following very recent branding/directional videos from E Ink Holdings:
- Imagine… a classroom with no paper… Build an eLearning environment
- Imagine… a schoolbag with no book… Build an eLearning environment
- Imagine… A ubiquitous home… Build an eLearning environment
- E Ink – The First Law of More – Innovation:
E Ink – More 的第一法則 [EInkSeeMore YouTube channel, Feb 17, 2012]
- states: the more you See, the more you Do
- … Evolution is a collaborative process …
- We’ve teamed up with some of the best names in the electronics business like: Epson, Freescale, Marvell Semiconductor and Texas Instruments to create an electronics ecosystem that will nurture the E Ink innovations of the future.
- We’ve joined forces with some of the most iconic brands in the world including Sony, Amazon, Barnes&Noble, Samsung, Lexar and Motorola to bring an exciting new generation of consumer products to life.
- We believe more innovation brings more good into the world. As an 840 million dollar [US] company we intend to do everything we can to make a big difference.
- The Second Law of More – Growth:
E Ink – More 的第二法則 [EInkSeeMore YouTube channel, Feb 17, 2012]
- states: the more you Do, the more you Grow
- 100,000 displays in 2006 … over 10 million in 2010 …
- today almost every e-book device on the market uses E Ink enabled reflective displays
- Tomorrow we expect to lead the way in e-textbooks, providing a library in every student’s backpack. And a few years down the road we see ourselves in signage of all shapes and sizes.
- The next generation of E Ink applications is being developed as we speak: the paperless office, electronic toll passes, sporting goods, musical score sheets, personal medical devices, and more.
- Look at the future from our vantage point. You’ll see why we are excited.
- The Third Law of More – Green:
E Ink – More 的第三法則 [EInkSeeMore YouTube channel, Feb 17, 2012]
- states: the more you Grow, the more you Care
- Care = Save In more ways than you can imagine
- E-Ink displays use 97% less energy than the LCD versions
- Under normal conditions an E Ink enabled e-reader runs three weeks on a single charge, compared to about a day and a half on an LCD display.
- A recent study from the University of California, Berkeley shows that an E Ink enabled electronic newspaper releases 32–140x less CO2 than its paper counterpart. What’s more, e-books save trees by drastically reducing the consumption of paper.
- This year sales of e-books are predicted to top 1 billion [US] dollars, more than a 10x increase over last year. …
– ECTACO jetBook Color introduced in Russian Schools – цветная эл.книга [ECTACO YouTube channel, Dec 2, 2011]
– William (Bill) Wong, staff technology editor at Electronic Design – focusing on embedded systems and software – who is an ardent follower of E Ink’s progress. For more E Ink related information you can watch his two Engineering TV videos and a few Electronic Design excerpts given below:
- Smart Surface Devices and More from E Ink – CES 2012 [Engineering TV, Jan 17, 2012]
- Color Active Matrix E Ink Triton Imaging Film [Engineering TV, Jan 11, 2011]
Behind the Scenes at CES 2012 – Display Technology [William Wong, Electronic Design, Jan 25, 2012]
E Ink’s electronic paper display (EPD) is popular with e-readers, and it has been used on other devices such as Lexar’s JumpDrive flash drive, which shows the amount of space used on the drive. The display uses no power when the drive is not plugged in and draws only a tiny amount when updating. E Ink was showing off color demos and EPD prototype applications. It is a technology worth investigating for embedded applications.
Cortex-A9 Incorporates Electronic Paper Display Controller [William Wong, Electronic Design, Jan 18, 2012]
E-readers with electronic paper displays (EPDs) provide an excellent reading experience. But most of these e-readers have been underpowered compared to smart phones and tablets. E-reader manufacturers try to keep costs low, which is why processor performance has been lower.
Freescale’s i.MX 6SoloLite and i.MX 6DualLite target these low-cost products with one or two 1-GHz ARM Cortex-A9 cores. Developers will have to decide whether the i.MX 6SoloLite’s 2D graphics are sufficient or if they require the 3D graphics support of the i.MX 6DualLite. Likewise, the 6SoloLite has a 32-bit DDR3 controller, while the 6DualLite has a 64-bit DDR3 controller for a higher-performance platform. Both support LP-DDR2 memory along with a range of flash memory.
The i.MX 6DualLite has a single shader, compared to the four 3D shaders found in the higher-end i.MX 6Dual and i.MX 6Quad chips. The family also addresses LCD screens, so these chips may find their way into low-end tablets and embedded display devices. The i.MX 6DualLite has HDMI, low-voltage differential signaling (LVDS), and MIPI display support along with a MIPI camera interface as well. And, this chip tops the Solo with a Gigabit Ethernet port and a PCI Express x1 link.
The i.MX 6DualLite is pin-compatible with other i.MX 6 chips like the 1-GHz i.MX 6Solo, 1.2-GHz i.MX 6Dual, and 1.2-GHz i.MX 6Quad. All are software compatible. Software support includes Google Android 4.0, Windows Embedded CE, QNX, Ubuntu, Linux, Linaro, and Skype.
SMILE = Stanford Mobile Inquiry-based Learning Environment
and this is currently a smartphone-based solution aimed at the digital classroom.
In this sense it is a newer approach (only a bit more than a year old) than the six-year-old OLPC.
(Read about Marvell’s OLPC involvement in:
– Marvell® ARMADA® PXA168 based XO laptops and tablets from OLPC with $185 and target $100 list prices respectively [Jan 8-11, 2012]
– Marvell ARMADA with sun readable and unbreakable Pixel Qi screen, and target [mass] manufacturing cost of $75 [Nov 4, 2010 – July 20, 2011])
Remark: It will be interesting to see OLPC-related educational initiatives merged in some way with SMILE during 2012 as Marvell’s “Classroom 3.0” initiative is rolled out. An Argentinian report on SMILE’s success (see further below) notes that: “… pilot projects and programs, driven by governments, in most cases applying the 1:1 model, also known as OLPC (One Laptop per Child) … Unfortunately, no educational or cognitive results have been demonstrated through testing. Moreover, the assessments by international experts of various OLPC experiences around the world are not encouraging. Most of these programs have focused primarily on an abundant supply of hardware, often with little support, and above all without proper teacher training keeping pace with the massive deployment of equipment. On the other hand, the vast majority of digital content used by several of these programs is not innovative and does not promote interactive, motivating learning for children.”
No wonder Marvell is contributing to SMILE as well:
SMILE Plug at CES 2012 [Jan 25, 2012]
ARMADA powered SMILE Plug and One Laptop per Child tablet transform traditional classroom activities with interactive, multimedia curricula for more engaging learning experience
Marvell (NASDAQ: MRVL), a worldwide leader in integrated silicon solutions, today announced new education solutions designed to enable “Classroom 3.0,” a connected, secure learning environment that simplifies and speeds the deployment of technology to students around the world. Marvell’s collaboration with Stanford University has resulted in the Marvell® SMILE Plug, the first plug development kit designed to turn a traditional classroom into a highly interactive learning environment. Designed to engage students in critical reasoning and problem solving, the SMILE Plug creates a “micro cloud” within a classroom that is completely controlled by the teacher. Marvell also announced that it has extended its relationship with the One Laptop per Child Association (OLPC) on a number of new products, including the upcoming OLPC XO 3.0, a low cost, low power tablet designed for education.
“Marvell is driving a revolution in the classroom with technology that improves the education experience for students and teachers around the world. We’re deeply committed to improving education worldwide, and through our work with organizations like OLPC and Stanford University, we are helping to transform learning from a static one-way activity to an interactive experience brought to life with compelling content, engaging interactive multimedia and numerous new ways to collaborate,” said Weili Dai, Co-founder of Marvell. “Marvell’s SMILE Plug, the first ultra small server designed specifically for multi-modal curriculum delivery, combined with our affordable, easy-to-use and durable OLPC XO 3.0 tablet, are important additions to the world’s classrooms. It’s a matter of time before we leverage the power of Google TV and other smart screens in our daily lives to bring knowledge experts from around the globe to any local classroom.”
The Marvell SMILE Plug, powered by Marvell’s high-performance, low power ARMADA® 300 series SoC and Marvell Avastar™ 88W8764 Wi-Fi, creates a micro-cloud, eliminating the problem of inconsistent Internet access within a classroom and creating safe and secure connectivity for up to 60 students. The SMILE Plug also securely delivers digital content to a range of devices, including personal computers and handheld devices. Teachers and students can now tap into an unprecedented amount of open or premium digital content. The SMILE Plug also allows teachers to control and run interactive classrooms with real-time feedback and analytics, deepening the learning experience.
In tandem with the Stanford Mobile Inquiry Based Learning Environment program, Marvell has developed an easy-to-manage access point for a wide array of SMILE learning applications and has created an administration API and user interface, Plugmin, which provides access to many additional SMILE programs. These tools provide teachers total control of the devices and content used within their classroom for better lesson planning and student evaluation.
Additionally, the SMILE Plug Computer features an open platform based on Arch Linux for ARM, the Plugmin administration app and the Stanford SMILE Junction Server. The SMILE Plug includes a 5V Lithium-Ion polymer battery for back-up power, making it ideal for learning environments where electrical power can be inconsistent.
Also at CES, Marvell and OLPC showcased the first prototype of the XO 3.0, a low cost, low power rugged tablet computer designed for education in emerging markets. Built on the Marvell ARMADA 618 processor and its Avastar 88W8686 wireless chip, the XO 3.0 tablet will feature unique capabilities that allow it to be charged by solar panels, hand cranks and other alternative power sources. Marvell and OLPC also announced that the XO 1.75 laptop will begin shipping in February, with initial orders benefiting education programs in Rwanda and Uruguay. For additional information on the XO 3.0 prototype or the XO 1.75 laptop, please see the related release, “Marvell and One Laptop per Child Unveil the Eagerly Anticipated XO 3.0 Tablet at CES.”
The SMILE Plug will be available in spring 2012; please visit http://www.marvell.com for more information. The One Laptop per Child XO 1.75 will be available in February; please visit http://one.laptop.org/action/donate if you’re interested in donating or for more information.
Marvell will also be demonstrating its education solutions at the 2012 Consumer Electronics Show (CES) in its booth, No. 30542, located in South Hall 3 on the upper level. CES will be held Tuesday, Jan. 10 – Friday, Jan. 13, 2012, at the Las Vegas Convention Center (150 Paradise Road, Las Vegas).
Lesson 2: How Do I Use Inquiry-Based Learning with Youth? [national4H, July 20, 2011]
Classroom 3.0: Why the promise of the Digital Classroom depends on technology addressing the human issues first. [Announced Departmental Seminar for Feb 3, 2012, UC Berkeley]
Director, Application Processor Business Unit, Marvell
Classroom 3.0: Why the promise of the Digital Classroom depends on technology addressing the human issues first.
In this talk, Mr. Kang will share his vision for what a digital, next generation “Classroom 3.0” looks like. Before that, however, Mr. Kang will focus on the people and process issues that have to be overcome in order to fully realize the value of technology, issues that technologists and engineers often underestimate. Covering use cases both in the United States and in developing countries, the session will end with a practical call to action, with an opportunity for students to immediately contribute to the Marvell SMILE Plug project, a revolutionary new product that will improve students’ lives today.
Mr. Kang joined Marvell in February 2006 and is currently director of Marvell’s Application Processor Business Unit. He has been in the semiconductor business for more than seven years, holding previous positions in design engineering at several leading technology vendors. At Marvell, Mr. Kang manages multiple product lines from design conception to mass market implementation and adoption. These include the industry-leading ARMADA PXA processors, which are fueling today’s premier consumer devices. Additionally, he oversees various market segments, including education, eReaders, gaming, tablets and other connected consumer and embedded devices. Most recently, Mr. Kang was responsible for the processor design powering Microsoft’s Kinect motion-sensing accessory for the Xbox 360 gaming console. Kinect shattered sales records and was named the fastest-selling tech gadget of all time by Guinness World Records, totaling more than 20 million units since its launch in November 2010.
Mr. Kang is currently driving the development of Marvell’s ‘Classroom 3.0’, a connected, secure learning environment that simplifies and speeds the deployment of technology to students around the world. A new device called the SMILE plug, central to Classroom 3.0, creates a ‘micro cloud’ within a classroom that is completely controlled by the teacher.
SMILE (Stanford Mobile Interactive Learning Environment) [Aug 4, 2011]
SMILE workshop (Stanford Mobile Interactive Learning Environment – open source mobile application and mobile interaction management system) engages participants to experience how the latest open source mobile learning environment helps teachers to engage students in generating mobile media-based inquiries and using the student-generated inquiries as tools to promote self-reflection among students and formative assessment for teachers. An Android-based mobile learning device will be provided for each participant for the hands-on workshop.
It is generally acknowledged that student-created questions play an important role in the learning process (Dale, 1937; Dillon, 1990; Hunkins, 1976) and they have been demonstrated to improve student learning outcomes (Barak & Rafaeli, 2004; Commeyras, 1995; Dori and Herscovitz, 1999; Rothkopf, 1966). In posing questions themselves, students must revisit previous learning materials and reshape their thoughts relating to prior learning, thereby deepening their understanding (Marbach-Ad & Sokolove, 2000). Moreover, if students are made aware that they will be asked to create questions at a later time, they will actively monitor and attend to what they are learning during class in anticipation (Mosteller, 1989; Wilson, 1986). Despite these findings, though, student-created questions have remained consistently absent from the majority of teachers’ repertoires (Gall, 1970). Studies have reliably shown that only a very small portion of questions asked in a classroom are created by the students (Corey, 1940; Dillon, 1988), implying that a powerful pedagogical tool is being underutilized.
The affordances of mobile phones present a unique opportunity to reintegrate student-generated questions back into the classroom. More specifically, considering that students are already actively trying to communicate with each other during class on their mobile phones (Educational Digest, 2005; Gilroy, 2004), there is an opportunity to reorient this communication toward class material through student-created questions. Indeed, it is slowly being recognized and demonstrated that mobile phones are highly engaging tools to be taken advantage of, not prohibited (Kolb, 2008). For example, data collected by Swan et al. (2005) from four elementary and two middle-school classes indicated that the use of mobile phones in the classroom increased student motivation, improving their quality of work. With mobile phone ownership among children having increased in the past five years (MRI report, 2010) and a current trend towards the consolidation of open-source mobile operating system platforms (Shuler, 2009), there could be no better time to take advantage of these affordances in order to increase the incidence of student-generated questions as an effective way to promote student learning and engagement in the classroom.
Therefore, a newly developed SMILE (i.e., Stanford Mobile Interactive Learning Environment – open source mobile application and mobile interaction management system) will be demonstrated in a hands-on workshop format. Each participant will be given an Android mobile device to participate in the workshop and two facilitators will coordinate the setup and lead the workshop. The mobile learning workshop basically engages all participants to quickly generate a variety of inquiries (Shown in Figure 1), reflect on the inquiries, and rate participants’ inquiries through a real-time mobile interaction network while the facilitator demonstrates how teachers might be able to monitor the progress of the inquiry generation process and types of inquiries participants generate.
There are several important features of SMILE that were deliberately designed to maximize its effectiveness. First, allowing students to include digital photos in their questions garners the learning benefits gained from the presentation of materials in multimedia (Mayer, 1997). Second, having students create multiple-choice question items helps the student thoroughly reflect on the learned principles while thinking critically in synthesizing learning concepts and generating inquiries that are logical and sound. Third, permitting students to rate each other’s questions provides feedback and incorporates an element of peer assessment, which has been demonstrated to be valuable to a majority of students (Williams, 1992). Fourth, allowing students to view who scored the highest may foster a “non-pressured,” yet ultimately competitive game playing-like learning environment, which has been demonstrated to maintain an optimally motivating learning atmosphere (Reeve & Deci, 1999). Finally, supplying the teacher with all of the students’ questions and responses through the graphic user interface provides invaluable formative assessment information, which has been demonstrated to greatly improve student learning (Black & William, 1998; Cross, 1998). For all of these reasons, SMILE provides a particularly effective means of promoting student-generated questions and in the end it can encourage the participants to engage in real-time learning and assessment with a multimedia-rich interactive learning environment.
Paul Kim
Stanford Mobile Inquiry-based Learning Environment (SMILE) is basically an assessment/inquiry maker which allows students to quickly create their own inquiries or homework items based on their own learning for the day. The SMILE workshop is designed to introduce SMILE to people around the world and help them take advantage of it.
AECT 2011 workshop ::
Date: November 9, 2011, 9 AM – 12 PM
Location: Jacksonville, Florida
This workshop engages participants to experience how the latest open source mobile learning environment helps teachers to engage students in generating mobile media-based inquiries and using the student-generated inquiries as tools to promote self-reflection among students and formative assessment for teachers. An Android-based mobile learning device will be provided for each participant for the hands-on workshop.
… [Included: Instruction Manual For learners by Sunmi Seol, Presentation document, Survey (paper / on-line versions)] …
Stanford Mobile Inquiry-based Learning Environment (SMILE) [first published Feb 23, 2011, excerpted Jan 29, 2012]
“Stanford Mobile Inquiry-based Learning Environment (SMILE)” is a subproject of POMI in Education, using student inquiries as learning objects and meta-evaluation vectors.
SMILE turns a traditional classroom into a highly interactive learning environment by engaging students in critical reasoning and problem solving while enabling them to generate, share, and evaluate multimedia-rich inquiries.
Stanford Mobile Inquiry-based Learning Environment (SMILE) is basically an assessment/inquiry maker which allows students to quickly create their own inquiries or homework items based on their own learning for the day. For example, students can freely take a photo (shown in Figure 1) of a diagram or any other object from their textbooks, or of any phenomenon discovered in their school garden or lab, and create a homework item from it.
Figure 1. Students taking a photo of their textbook
All student-created multimedia inquiry items can be tagged by their creator, but are rated by peers to indicate how relevant or useful each item is to their own learning. Obviously, teachers or facilitators could decide to review the student-generated homework items in the homework pool, weed out the ones that may not be relevant, and leave only the ones that are highly useful or that have the highest student ratings (i.e., rules could be made at the local level).
The SMILE application enables homework generation, completion, and a competition game during class. It offers students an opportunity to review what they learned in class, organize it, and create their own inquiries from it. Moreover, all student-generated questions are instantly collected and distributed to the whole class: every student takes the quiz created by the class and rates each question according to a standard rule set at the local level. After their answers are submitted, students can review their results immediately. By creating their own questions and sharing them with peers, students can check their understanding of what they learned for the day and compensate for gaps in their learning through their peers’ questions. This immediate activity keeps the day’s learning from fading away, and after the activity the teacher can give the class additional information and detailed explanations that further improve understanding. The quiz activity is controlled by the teacher’s application, so students cannot get distracted and perform other actions. The current prototype of the application supports the group/classroom level; the village/school level and community/school district level will be supported soon. It also enables a facilitator or teacher to map each inquiry or homework item to the appropriate learning standard classification. The quiz application covers in-classroom activity, while a companion application covers out-of-classroom activity: it lets students and teachers freely access the SMILE server regardless of time and place as long as they have mobile devices. Basically, all homework items created by students are saved on the SMILE server; students can create their own homework items, upload them to the server, and solve homework items while connected to the server.
Teachers can review all homework items and keep the pool high quality by weeding out items that are not relevant to the subject or that received low ratings from peers. This in-and-out-of-school network system offers continuous learning to all students, who can then focus on their own learning while saving time and effort, and are ultimately more likely to gain a better understanding of what they learned inside and outside the classroom.
Figure 2. Student-created inquiries incorporating own mobile photo
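The round trip described above (pool the student-generated questions, have every student answer them, grade instantly, and weed the pool by peer rating) can be sketched in a few lines of Python. All names here (`Question`, `run_round`, `curate`) are illustrative, not taken from the actual SMILE codebase:

```python
from dataclasses import dataclass, field

@dataclass
class Question:
    author: str
    text: str
    choices: list            # multiple-choice options
    correct: int             # index of the right choice
    ratings: list = field(default_factory=list)

    def avg_rating(self):
        return sum(self.ratings) / len(self.ratings) if self.ratings else 0.0

def run_round(pool, answer_sheets):
    """Grade every student against every question in the pool.

    `answer_sheets` maps student name -> list of chosen indices,
    one per question, in pool order.
    """
    return {
        student: sum(1 for q, pick in zip(pool, picks) if pick == q.correct)
        for student, picks in answer_sheets.items()
    }

def curate(pool, min_rating=3.0):
    """Teacher-side weeding: keep only well-rated questions."""
    return [q for q in pool if q.avg_rating() >= min_rating]

# Example: two students take a one-question quiz
q = Question("amina", "2 + 2 = ?", ["3", "4", "5"], correct=1, ratings=[5, 4, 3])
print(run_round([q], {"ben": [1], "carla": [0]}))  # {'ben': 1, 'carla': 0}
```

Thresholds such as the `min_rating` cutoff would, as the text notes, be decided at the local level.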
The immediate advantages of SMILE are that it…
- involves the learners themselves in reflecting on and generating their own learning stimuli and inquiries;
- enables anytime/anywhere homework/inquiry generation (wherever there is an opportunistic learning moment);
- empowers the learner to generate and incorporate mobile multimedia objects from their own environment;
- allows the learner to rate peer inquiries based on their own assessment of merit;
- enables collective management of homework quality;
- enables any group or organization to track the academic performance of the learner at a granular level (based on learning standards);
- makes it possible to conduct a variety of comparison analyses for benchmarking purposes;
- creates a competitive or collaborative game environment.
SMILE is composed of a teacher’s application and a student’s application. The teacher’s application was developed in Java and runs on any web-based system with Java installed. To support ad-hoc network environments, an XMPP server (such as Openfire) and Apache were installed on the server, and both applications include the Junction library developed by the Stanford School of Engineering. All users connected to the server share the same environment; in short, all students are directed to perform the same action. Only the teacher’s application can manage each step of the activity happening in SMILE; the student’s application, by contrast, can only perform the action sent in the teacher’s message.
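A rough model of this control scheme, assuming nothing about the real Java/Junction implementation beyond what the paragraph above states (the teacher alone advances the shared state, and every student application simply mirrors the last broadcast it received):

```python
STATES = ["connect", "make_questions", "solve_questions", "see_results"]

class Server:
    """Stand-in for the XMPP/Junction server: rebroadcasts the shared state."""
    def __init__(self):
        self.students = []
        self.state = STATES[0]

    def broadcast(self, state):
        self.state = state
        for s in self.students:
            s.on_message(state)

class Teacher:
    def __init__(self, server):
        self.server = server
        self.step = 0

    def next_state(self):
        # Only the teacher can advance the activity; a state is left
        # behind ("disabled") once it is over, and there is no way back.
        if self.step < len(STATES) - 1:
            self.step += 1
            self.server.broadcast(STATES[self.step])

class Student:
    def __init__(self, name, server):
        self.name = name
        self.enabled_action = server.state   # sync on join (or rejoin)
        server.students.append(self)

    def on_message(self, state):
        self.enabled_action = state          # act only on teacher's message
```

A student constructed after the activity has started (or rejoining after an accidental logoff) simply reads the server’s current state, which matches the periodic state broadcast the student application relies on.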
1) Teacher’s Application
- A GUI (graphical user interface) based application; it works on any system Java can be installed on, such as a desktop, laptop, or netbook.
- The server IP is fixed but changeable, and a teacher can change the Apache directory path if it differs from the default one.
- The application supports two modes of quiz activity: in the first, students create their own inquiries, share them, solve them, and get the quiz results during class; in the second, saved questions are used, and the teacher can freely select a quiz set from the server and distribute it to the whole class.
- A teacher can optionally run a time-limited quiz by entering a time limit in the application.
- The activity is composed of four states: connecting to the server, making questions, solving questions, and seeing the results. The teacher can see the current status in the activity flow, and each state’s button is disabled once that state is over.
- In the student status window, a teacher can check each student’s question submission, quiz answers, and final score. A teacher can also see the total number of students who joined the quiz, the number who submitted a question, and the number who submitted answers.
- The scoreboard window shows each student’s final score and, for each question, the right answer, the student’s answer, and the student’s rating.
Figure 3a. Teacher’s application of SMILE in India
Figure 3b. Teacher’s application of SMILE in USA
- The top score window announces the top-score winners. A teacher can see who got the highest score on the quiz and whose questions got the highest ratings.
- The question status window shows, for each question, who created it, how many students answered it correctly, and what rating it received. Moreover, clicking a question shows the actual question in the question window; after the results come in, more detailed information is added to the original question.
- Clicking “save questions” saves the student-created questions so they can be used in future classes or by other students.
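The student status, scoreboard, top score, and question status windows above all reduce to simple aggregations over the submitted answer sheets. A hypothetical sketch (the tuple layout and function name are assumptions, not SMILE’s actual data model):

```python
def class_status(questions, answer_sheets):
    """Summarize a quiz round the way the teacher's windows do.

    `questions`: list of (author, correct_index, ratings) tuples.
    `answer_sheets`: dict mapping student -> chosen indices, in pool order.
    """
    # Scoreboard window: one final score per student.
    scores = {
        student: sum(1 for (_, correct, _), pick in zip(questions, picks)
                     if pick == correct)
        for student, picks in answer_sheets.items()
    }
    # Top score window: best quiz taker and best-rated question author.
    top_scorer = max(scores, key=scores.get) if scores else None
    def avg(r):
        return sum(r) / len(r) if r else 0.0
    top_author = max(questions, key=lambda q: avg(q[2]))[0] if questions else None
    # Question status window: per-question count of correct answers.
    correct_counts = [
        sum(1 for picks in answer_sheets.values() if picks[i] == correct)
        for i, (_, correct, _) in enumerate(questions)
    ]
    return scores, top_scorer, top_author, correct_counts
```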
2) Student’s Application
Figure 4. Login Window Figure 5. Main Window Figure 6. Make your question
- To join the activity, students enter their name and the server IP, which is usually fixed; if the server IP differs from the default, they change it to the right address. Clicking the login button connects the application to the Junction server, and all students’ applications share the same environment according to the teacher’s application.
Figure 7a. Main Window in India Figure 7b. Main Window in USA
- In the main window, as shown in Figure 5, there are three buttons: Make your question, Solve questions, and See results. Each action button is enabled automatically, so students cannot perform actions other than those directed by the teacher.
- Each student’s application receives messages from the teacher’s application; when an action button is enabled, the student moves to that action state by clicking the button. The first action is making their own question.
- In the question-making state, students generate their own inquiries, adding images related to each inquiry. They can use pictures saved earlier or take a new picture of their materials or anything around them. Currently, students create multiple-choice questions to support the instant grading system. Using the preview function, students can preview their questions before submission.
Figure 8. Students are making questions
- Clicking the post button submits the newly created question to the server, and the application returns to the main window.
Figure 9. Main Window Figure 10. Solve Questions
- After the “Solve Questions” message arrives from the teacher’s application, the next button is enabled and all students enter the question-solving state.
- Students can freely move to the next or previous question and also check their answers and ratings. Students cannot leave this activity before submitting their answer sheet.
Figure 11a. Solving questions in India
Figure 11b. Solving questions in the USA
Figure 11. Students answering to questions made by their peers
- Accidental logoff can happen at any time, but a student’s application can rejoin the activity in progress because the server broadcasts the current state to all students’ applications at regular intervals.
Figure 12. Result Window Figure 13. Detailed Result Figure 14. Who’s the winner
- After all students submit their answer sheets, the “see result” button is enabled and students can see their own quiz results. As Figure 12 shows, the main result window includes the total score and correct/wrong information for each question.
- Clicking the detail button shows each question’s detailed information: the correct answer, the number of students who answered correctly, the average rating, and the student’s own answer.
- The winner page shows the winner with the highest quiz score and the question owner with the highest average rating for their own created questions.
- When the results review is over, all SMILE activities end.
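Since only well-formed multiple-choice items can support the instant grading described above, the posting step presumably validates a question before it goes to the server. A speculative sketch (the function and its checks are assumptions, not documented SMILE behavior):

```python
def validate_question(text, choices, correct_index):
    """Check a student-created multiple-choice item before posting.

    Returns a list of problems; an empty list means the item is
    ready to be submitted to the server.
    """
    problems = []
    if not text.strip():
        problems.append("question text is empty")
    if len(choices) < 2:
        problems.append("need at least two answer choices")
    if len(set(choices)) != len(choices):
        problems.append("duplicate answer choices")
    if not (0 <= correct_index < len(choices)):
        problems.append("correct answer index out of range")
    return problems

# A valid item passes; a one-choice item is rejected
assert validate_question("2 + 2 = ?", ["3", "4"], 1) == []
assert "need at least two answer choices" in validate_question("?", ["4"], 0)
```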
The current SMILE will be expanded with an application that provides access to quizzes outside the classroom. The Anytime Homework application lets students and teachers access the SMILE server regardless of time and place, with different access permissions for students and teachers. Students carry out individual quiz activity by solving question items saved on the server, and they can also generate their own questions and upload them to the server at any time. A teacher can likewise access the SMILE server and manage the quality of the homework items saved there by removing question items that have low ratings or are less relevant to the class curriculum. Figure 15 shows the next version of SMILE, including the Anytime Homework and Junction quiz applications.
Figure 15. Next version of SMILE application
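The role-dependent permissions of the Anytime Homework application (students solve and upload; teachers additionally curate) could be modeled as a small permission check. Everything below is a hypothetical sketch, not the actual SMILE server API:

```python
PERMISSIONS = {
    "student": {"solve", "upload"},
    "teacher": {"solve", "upload", "remove"},
}

class HomeworkServer:
    def __init__(self):
        self.items = {}        # item_id -> (question_text, avg_rating)
        self._next_id = 0

    def _check(self, role, action):
        if action not in PERMISSIONS.get(role, set()):
            raise PermissionError(f"{role} may not {action}")

    def upload(self, role, question, avg_rating=0.0):
        """Students and teachers may both add items to the pool."""
        self._check(role, "upload")
        item_id = self._next_id
        self.items[item_id] = (question, avg_rating)
        self._next_id += 1
        return item_id

    def remove_low_rated(self, role, threshold=2.0):
        """Teacher-only curation of the anytime homework pool."""
        self._check(role, "remove")
        self.items = {i: (q, r) for i, (q, r) in self.items.items()
                      if r >= threshold}
```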
- Stanford Mobile Inquiry-based Learning Environment (SMILE): using mobile phones to promote student inquiries in the elementary classroom
Sunmi Seol, Aaron Sharp, Paul Kim
Proceedings of the 2011 International Conference on Frontiers in Education: Computer Science & Computer Engineering, FECS 2011
- Proceedings of WORLDCOMP’11: The 2011 World Congress in Computer Science, Computer Engineering, and Applied Computing – Download Paper
Interview – Mr Paul Kim [Nov 18, 2009]
Stanford Mobile Inquiry-based Learning Environment (SMILE) [Google translation from Spanish, Sept 7, 2011]
Speaker: Claudia Muñoz-Reyes
(Stanford Inquiry-based Mobile Learning Environment – SMILE)
In this digital age, characterized by the rapid development of technologies, people with less access to education and opportunity will be at ever greater risk of being unable to leave the cycle of poverty. It will be increasingly difficult for them to participate in the growing information economies and knowledge societies, widening not only the digital divide but also, most importantly, the knowledge gap. The preferred commodities and value-added goods of the 21st century are information and knowledge. Without an innovative intervention that addresses these effects of globalization and rapid technological advancement, the gap will keep growing, excluding communities in extreme poverty that cannot ensure their own survival.
Initiatives and attempts to introduce technology into public schools are nothing new in Latin America and other developing countries, where they have been under way for the last two decades. In many of these countries, rural and suburban schools were supplied with old computers; today these “iron dinosaurs” are not only obsolete but unused, and only pollute the environment. In the last six years, greater emphasis has been placed on pilot projects and programs, driven by governments, in most cases applying the 1:1 model, also known as OLPC (One Laptop per Child). Unfortunately, no educational or cognitive results have been demonstrated through testing. Moreover, the assessments by international experts of various OLPC experiences around the world are not encouraging. Most of these programs have focused primarily on an abundant supply of hardware, often with little support, and above all without proper teacher training keeping pace with the massive deployment of equipment. On the other hand, the vast majority of digital content used by several of these programs is not innovative and does not promote interactive, motivating learning for children.
The purpose of SMILE is to bring a pedagogical change to classrooms through mobile technology. The pedagogical model used is the Inquiry-based Learning Model, together with models of learning based on problem solving, which encourage creativity, critical thinking, a scientific attitude, and collaborative work: the 21st-century skills our children need today. Consequently, the innovative teaching dynamic we implement through mobile devices focuses on the student as the protagonist of the learning process, with the teacher becoming more of a facilitator of that process. This tool, developed for the Android platform and iOS (iPhone, iPad), also gives teachers a great opportunity to quickly assess children’s learning and performance individually and in groups.
The first pilots of this innovative program took place in India, Malaysia, and the United States between January and March 2011; the most recent experience, in August of this year, took place in rural and suburban schools of Misiones, Argentina, with surprising results.
Short Video of the last experience with rural and suburban schools in Misiones and Buenos Aires, Argentina:
SeedsofEmpowerment [Aug 17, 2011]
Multiple photos and other videos from our pilot projects in Argentina, Malaysia and India using mobile devices (smartphones) and tablets:
SMILE (Stanford Mobile Inquiry-based Learning Environment) – Medical education [on slideshare, Nov 23, 2011]
SMILE-MedRIC Interview with Professors [Nov 21, 2011]
SMILE MedRIC, Part 1: As part of the SMILE project, medical students at Chungbuk University used SMILE in their Medical Informatics class and created questions on HIV/AIDS.
Part 2 – Part 3 – Part 4 – Part 5 – Part 6 – Part 7 – Part 8
Note: MedRIC (Medical Research Information Center), a Ministry of Science and Technology funded organization in S. Korea, focusing on research and development in medical informatics, medical data visualization, telematics, Virtual Reality-based medical training, and health communication and promotion policies and programs.
Plug Computers [Marvell site, Jan 9, 2012]
Whether the need is remote access to data on a home network or turning an entire classroom into a highly interactive learning environment, the solution is simple, convenient, and inexpensive. With a small form factor server called a plug computer, network connectivity is right at a wall socket.
Simply insert the plug computer into an electrical outlet and add an external hard drive or a USB flash drive through a USB port (depending on the deployment, a router may also need to be connected to the plug) — just like that, you have a network attached storage device.
Powered by Marvell embedded processors, a plug computer is packed with enough processing power and network connectivity for managing and serving up digital media files. It also draws less than one tenth of the power consumed by its PC counterparts, enabling always-on, always-connected, and environmentally friendly computing. With a gigahertz-class processor, memory, and storage, the plug computer has ample processing power and resources to run any embedded computing application.
Applications for a plug computer include:
- Media Server
- Home Automation
- Remote Access
- A micro cloud for the classroom
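[As an aside, none of these roles requires exotic software. The sketch below, purely illustrative and not Marvell code, shows in Python how the media-server role might begin: indexing the files on an attached USB drive so a small HTTP server can share them on the LAN. The file extensions and the mount point are assumptions for the example.]

```python
import os

# Illustrative set of media types a home plug server might share.
MEDIA_EXTENSIONS = {".mp3", ".mp4", ".jpg", ".png"}

def index_media(root):
    """Walk an attached drive (e.g. a USB disk mounted at `root`)
    and collect the media files to be served over the network."""
    found = []
    for dirpath, _dirs, filenames in os.walk(root):
        for name in filenames:
            if os.path.splitext(name)[1].lower() in MEDIA_EXTENSIONS:
                found.append(os.path.join(dirpath, name))
    return sorted(found)
```

On an actual plug the resulting list would be handed to a lightweight server; Python's built-in `http.server` module is already enough for a basic LAN file server on hardware of this class.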
Smile Plug for Education
Powered by Marvell’s high-performance, low-power ARMADA® 300 series SoC and Avastar™ 88W8764 Wi-Fi, the SMILE Plug creates a micro-cloud, eliminating the problem of inconsistent Internet access within a classroom and providing safe and secure connectivity for up to 60 students. The SMILE Plug also securely delivers digital content to a range of devices, including personal computers and handheld devices. Teachers and students can now tap into an unprecedented amount of open or premium digital content. The SMILE Plug also allows teachers to control and run interactive classrooms with real-time feedback and analytics, deepening the learning experience.
Plug Computer Developer Community
Pioneered by Marvell, the plug computer was originally based on the ultra-low-power ARM architecture and built on an Open Development Platform. To encourage manufacturers to create applications on the platform, Marvell founded PlugComputer.org, an online community where developers can discuss ideas and share code solutions.
Enabling Classroom 3.0: Marvell SMILE Plug [Marvell platform brief, Jan 5, 2012]
Enabling Classroom 3.0: Secure Content, Teacher Control
Marvell® is excited and proud to create Classroom 3.0 with SMILE Plug. The SMILE Plug is a revolutionary way to change how technology is used in the classroom, offering unprecedented access to secure digital content, a seamless delivery mechanism, and a simple teacher interface to fully control the classroom.
Marvell’s SMILE Plug enables education institutions to create a micro-cloud within a classroom, facilitating a simple, low-cost way to network classrooms. The SMILE Plug eliminates the problems of inconsistent Internet access within a classroom environment, safely and securely providing connectivity in the classroom. The SMILE Plug also securely delivers digital content to a range of devices, including personal computers and handheld devices. Teachers and students can now tap into an unprecedented amount of open or premium digital content. The SMILE Plug also allows teachers to control and run interactive classrooms with real-time feedback and analytics, deepening the learning experience.
The Marvell SMILE Plug is being developed in partnership between the Stanford® University School of Education and Marvell—both of whom share the vision of using technology to revolutionize and improve the way students learn and educators teach. The SMILE Plug, which is named after and built with Stanford’s Mobile Inquiry-based Learning Environment (SMILE), will provide the ability to establish a local Wi-Fi network for up to 60 students. SMILE turns a traditional classroom into a highly interactive learning environment by engaging students in critical reasoning and problem solving while enabling them to generate, share, and evaluate multimedia-rich inquiries. In addition, this creates access to many more SMILE learning applications. To simplify deployment and management of the SMILE Plug, Marvell has developed a plug administration API and user interface called Plugmin.
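[The 60-student limit implies some form of admission control on the plug. Since the Plugmin API itself is not documented in the materials quoted here, the following Python sketch is purely hypothetical, with invented names, of how a micro-cloud might cap the number of concurrently connected student devices.]

```python
class MicroCloud:
    """Toy model of a classroom micro-cloud that, like the SMILE Plug,
    serves a bounded number of student devices at once."""

    MAX_STUDENTS = 60  # limit quoted in the platform brief

    def __init__(self):
        self.connected = set()

    def join(self, device_id):
        """Admit a device unless the classroom is full; return success."""
        if device_id in self.connected:
            return True  # already admitted, treat re-join as success
        if len(self.connected) >= self.MAX_STUDENTS:
            return False  # classroom full, reject the 61st device
        self.connected.add(device_id)
        return True

    def leave(self, device_id):
        """Free a slot when a device disconnects."""
        self.connected.discard(device_id)
```

A real implementation would live closer to the Wi-Fi association layer, but the teacher-facing behavior, reject the 61st device and free slots as students disconnect, would be the same.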
Smile Plug Components
The SMILE Plug contains the Marvell Plug Computer, as well as all of the software tools needed to develop applications for the platform. I/O interfaces include 2x Gigabit Ethernet, 2x USB, Wi-Fi, and an SD card slot supporting cards up to 32GB. The Plug Computer is an embedded computer that plugs into the wall socket and can run network-based services that normally require a dedicated personal computer. Featuring a Marvell ARM-based CPU running at up to 2GHz with 512MB of Flash memory and 512MB of DDR3 memory, the Plug Computer provides ample processing power and resources to run any embedded computing application. Network connectivity is via Gigabit Ethernet; peripheral devices can be connected using USB 2.0 and Wi-Fi.
• Software Tools
The SMILE Plug will be based on Arch Linux™ for ARM and Node.js, as well as a plug administration API and Stanford’s SMILE environment and software development kit (SDK). All components adhere to the open-source model, making the SMILE Plug an ideal platform on which to develop or port any additional learning applications. The Plugmin administration client runs on Android-based devices and enables easy administration of the SMILE Plug. Used in conjunction with the SMILE Junction Server Administration Client, the teacher can easily control or run interactive classroom learning experiences.
System-on-Chip (SoC) Solutions
The SMILE Plug Computer incorporates two of Marvell’s industry-leading system-on-chip (SoC) solutions to drive unparalleled application performance and connectivity in an online classroom environment:
• Marvell ARMADA 300 CPU SoC
This is a high-performance integrated controller. It integrates a Marvell-developed CPU core that is fully ARMv5TE compliant, with a 256KB L2 cache. The Marvell ARMADA™ 300 (88F6282) builds upon Marvell’s innovative family of processors, improves performance, and adds new features to reduce bill of materials (BOM) costs. The 88F6282 is suitable for a wide range of applications such as routers, gateways, media servers, storage, thin clients, set-top boxes, networking, point-of-service and printer products. For product information, visit http://www.marvell.com/embeddedprocessors/armada-300/assets/armada_310.pdf
• Marvell Avastar 88W8764 Wi-Fi SoC
This is a highly integrated 4×4 wireless local area network (WLAN) system-on-chip (SoC), specifically designed to support high throughput data rates for next generation WLAN products. The device is designed to support IEEE 802.11n/a/g/b payload data rates. The Marvell Avastar® 88W8764 provides the combined functions of DSSS, OFDM, and MIMO baseband modulation, MAC, on-chip CPU, memory, host interfaces, and direct-conversion WLAN RF radio on a single integrated chip. The device supports 802.11n beamformer and beamformee functionality, enabling a simplified, integrated solution. For product information, visit http://www.marvell.com/wireless/assets/Marvell-Avastar-88W8764-SoC.pdf
Key Features and Benefits
Arch Linux, a lightweight and flexible Linux® distribution that tries to Keep It Simple.
Currently we have official packages optimized for the i686 and x86-64 architectures. We complement our official package sets with a community-operated package repository that grows in size and quality each and every day.
Our strong community is diverse and helpful, and we pride ourselves on the range of skillsets and uses for Arch that stem from it. Please check out our forums and mailing lists to get your feet wet. Also glance through our wiki if you want to learn more about Arch.
About Arch Linux [Dec 5, 2008]
Arch Linux is an independently developed, i686/x86-64 general purpose GNU/Linux distribution versatile enough to suit any role. Development focuses on simplicity, minimalism, and code elegance. Arch is installed as a minimal base system, configured by the user upon which their own ideal environment is assembled by installing only what is required or desired for their unique purposes. GUI configuration utilities are not officially provided, and most system configuration is performed from the shell by editing simple text files. Arch strives to stay bleeding edge, and typically offers the latest stable versions of most software.
Arch Linux uses its own Pacman package manager, which couples simple binary packages with an easy-to-use package build system. This allows users to easily manage and customize packages ranging from official Arch software to the user’s own personal packages to packages from 3rd party sources. The repository system also allows users to easily build and maintain their own custom build scripts, packages, and repositories, encouraging community growth and contribution.
The minimal Arch base package set resides in the streamlined [core] repository. In addition, the official [extra], [community], and [testing] repositories provide several thousand high-quality packages to meet your software demands. Arch also offers an [unsupported] section in the Arch Linux User Repository (AUR), which contains over 9,000 build scripts for compiling installable packages from source using the Arch Linux makepkg application.
Arch Linux uses a “rolling release” system which allows one-time installation and perpetual software upgrades. It is not generally necessary to reinstall or upgrade your Arch Linux system from one “version” to the next. By issuing one command, an Arch system is kept up-to-date and on the bleeding edge.
Arch strives to keep its packages as close to the original upstream software as possible. Patches are applied only when necessary to ensure an application compiles and runs correctly with the other packages installed on an up-to-date Arch system.
To summarize: Arch Linux is a versatile and simple distribution designed to fit the needs of the competent Linux® user. It is both powerful and easy to manage, making it an ideal distro for servers and workstations. Take it in any direction you like. If you share this vision of what a GNU/Linux distribution should be, then you are welcomed and encouraged to use it freely, get involved, and contribute to the community. Welcome to Arch!
New Arch Linux ARM website! [June 22, 2011]
Welcome to the new Arch Linux ARM site! We hope you like the new layout, organization, and the brand new, unified effort from the PlugApps and ArchMobile teams.
For our existing PlugApps/Plugbox users, you have probably already received the new “rebranding” packages that rename much of PlugApps to Arch Linux ARM (ALARM for short), but beyond that, we’re still the same team members with the same goal in mind – to create an advanced but simple Linux distribution for ARM devices such as plug computers and newer ARM devices.
We’d love to hear your feedback on the change – post in the forums or get in touch with us in the Support menu.
Thanks for using Arch Linux ARM and you’ll be hearing a lot more from us as we go!
Welcome ArchMobile.org Visitors! [July 23, 2011]
You may not have noticed unless you came here looking for ArchMobile.org, but the domain now redirects to Arch Linux ARM.
ArchMobile was the first effort aimed at making Arch Linux run on ARM, with an emphasis on mobile phones such as the OpenMoko. PlugApps was the other effort, aimed at making Arch Linux for plug computers. They decided to join forces and create a new, unified effort, Arch Linux ARM, for all ARM devices. This redirect completes the move to Arch Linux ARM as the base for everyone’s work.
So, welcome! Post in the forums and join us on IRC!
Arch Linux ARM [June 26, 2011]
Arch Linux ARM is a distribution of Linux for ARM computers. We are aimed at ARMv5 platforms like plug computers, OXNAS-based ARMv6 PogoPlugs, Cortex-A8 platforms such as the BeagleBoard, and Cortex-A9 and Tegra platforms like the PandaBoard and TrimSlice. However, it can run on any device that supports the ARMv5te or Cortex-A instruction sets. Our collaboration with Arch Linux brings users the best platform, newest packages, and installation support.
Arch Linux ARM is a full Linux distribution with all of the console, server, and desktop applications you’d find anywhere else. You can run many popular services, such as CUPS to print from networked computers; Apache, Lighttpd, Cherokee, and Nginx for web servers with full PHP and CGI support; FTP, NFS, AFP, Rendezvous, Windows and Time Machine-compatible Samba servers; or install a desktop environment (with a web browser, text editors, and more) accessible through VNC, DisplayLink, or HDMI displays.
The entire distribution is on a rolling-release cycle that can be updated daily through small packages instead of huge updates every few months. Most packages are unmodified from what the upstream developer originally released.
Platforms (excerpted as of Feb 1, 2012):
PlugApps maintains two software repositories specifically designed for the features available in each platform. The devices listed for each platform are those we officially support with precompiled kernels and root file systems tailored to their unique configurations.
However, just because a device isn’t listed doesn’t mean the software won’t run on it. Any ARM system with any of the architectures we compile for will be able to run the software, and any newer systems that are backwards compatible will be able to use the software as well.
Choose a platform from the menu above or in the list below to get started.
Marvell Kirkwood 800MHz
Marvell Kirkwood 1.2GHz (several supported devices)
PLX 7820 700MHz dual-core
TI OMAP 3530 720MHz
TI DM3730 1GHz
TI OMAP 35xx 600/720MHz (Wi-Fi B/G, Bluetooth v2.0 + EDR)
TI OMAP 4430 1GHz dual-core (Wi-Fi B/G/N, Bluetooth v2.1 + EDR)
NVIDIA Tegra 2 1GHz dual-core (full and micro SD card slots)
Marvell SoCs to win both Microsoft and Nokia for Windows Phone and Windows 8 platforms (after the Kinect success)
Update: – Marvell licenses VeriSilicon DSP cores [Feb 13, 2012]
SAN FRANCISCO—Marvell Technology Group Ltd. has signed a licensing agreement for VeriSilicon Holdings Co. Ltd.’s ZSP G3 intellectual property cores, including the dual-MAC ZSP800M and ZSP880M synthesizable DSP cores, VeriSilicon said Monday (Feb. 13). Financial terms of the deal were not disclosed.
Marvell is also using VeriSilicon’s quad-MAC ZSP800 core and suite of HD-audio software solutions in the ARMADA 1000 HD media processor SoC and the recently introduced Marvell ARMADA 1500 media processor SoC, VeriSilicon (Santa Clara, Calif.) said. These chips are designed for applications such as Blu-ray players, digital media adapters, HD-STB and HDTVs.
According to VeriSilicon, the dual-MAC ZSP architecture offers a balance of high performance, power efficiency and lower cost to support the increasing feature convergence in mobile and digital entertainment products and enable prolonged battery life. The company claims its products offer ease of use and strong customer support.
“We are quite impressed with the area and power efficiency of the dual-MAC ZSP800M core, combined with the ease of programming on the ZSP architecture,” said Ivan Lee, vice president of mobile products at Marvell, in a statement. “VeriSilicon’s ZSP-based HD-audio and voice software solutions will provide us with faster time-to-market advantages necessary to meet the growing demands of the mobile platform solutions for use in tablets and smartphones.”
Marvell’s Cutting-Edge Application Processors [Jan 10, 2012]
From [2:45] the so-called hybrid multiprocessing technology is mentioned, with the above architecture shown. It was introduced back in September 2010 with the ARMADA 628 (see: Marvell ARMADA beats Qualcomm Snapdragon, NVIDIA Tegra and Samsung/Apple Hummingbird in the SoC market [again] [Sept 23, 2010 – Jan 17, 2011]) at the time when Marvell was working on the earlier ARMADA 610 (see also the indicated post) for the RIM BlackBerry PlayBook. Six months into the project RIM dumped the 610 for a TI SoC, but even so it was able to deliver a stable version of its new QNX software only with version 2, missing the crucial 2010 holiday season. While rumors at the time blamed Marvell for that, according to a current view: “It appears that the failures are largely RIM’s, and often software related. The Marvell processors, when used, seem to work well.“
The first larger-scale win for the ARMADA 610 was the VIZIO VTAB1008 8″ tablet running Android, made available in August 2011 (see: Innovative entertainment class [Android] tablet from VIZIO plus a unified UX for all cloud based CE devices, from TVs to smartphones [Aug 21, 2011 – Jan 7, 2012]). This tablet is shown earlier in the above video (from [0:19] to [1:24]). The ARMADA 628 has yet to arrive in a tablet, which will probably happen only late in 2012 on Android (as “The company looks at the tablets market as ‘saturated’ and is avoiding it for the next couple quarters“, see below) and might happen in Q4 at the earliest on Windows 8, as hinted explicitly below by Marvell. This is just a possibility (but a very big opportunity for OEMs considering the obvious maturity of the 628), nothing more, as any OEM engagement currently under way might or might not end up in a market-released product (as in the case of the PlayBook with the ARMADA 610).
Note: in the above video instead of ARMADA the earlier PXA branding is used by Marvell’s Allen Leibovitch. Jack Kang in charge of the Application Processors business is also using the PXA branding, as you could read below.
After the First real chances for Marvell on the tablet and smartphone fronts [Aug 21, 2011 – Jan 19, 2012], so far in the Android, Google TV, educational (more edu) and OPhone spaces, here is the next large-scale opportunity for the company. With the young and entrepreneurial Jack Kang in charge since H2 CY2010, who has an excellent earlier track record with Microsoft via the hugely successful Kinect application SoC effort, there is a real chance for the company to turn the new engagements with both Microsoft and Nokia reported below into platform wins in 2012:
Exclusive: Marvell Says it Will Find a Home in Chinese Windows Phones [DailyTech, Jan 31, 2012]
Marvell also hints at possible Windows 8 tablets/laptops
We had an interesting chat with the Marvell Technology Group, Ltd. (MRVL).
Marvell is perhaps best known as the company that took the Xscale ARM division off of Intel Corp.’s (INTC) hands in 2006. During the modern smartphone era, Marvell has been a quiet competitor, overshadowed by companies like Qualcomm Inc. (QCOM) and Samsung Electronics Comp., Ltd. (KS:005930) which have pushed the smartphone processing power envelope more aggressively.
By contrast Marvell has focused on budget smartphones. It is in most of Research in Motion, Ltd.’s (TSE:RIM) BlackBerry smartphones. These budget smartphones have led it to strong sales in Indonesia and China.
Marvell has done well in China, thanks to close ties with RIM and Nokia.
[Image Source: BlackBerry Rocks]
Interestingly, the American company sees China as perhaps its most valuable market. Jack Kang, director of Marvell’s applications processor business unit states, “China was a very strategic investment.”
With Windows Phones set to land in China later this year in budget smartphones, Mr. Kang is making a bold prediction — “If there’s Windows Phones in China, there will probably be Windows Phones with Marvell in China.”
That would be a major market event as thus far Qualcomm has been the exclusive ARM chipmaker partner of Windows Phone. While Windows Phone has struggled in the U.S. where key Windows Phone partner Nokia Oyj. (HEL:NOK1V) has virtually no market share, in China Nokia is the top smartphone maker, so a switch to Marvell ARM cores would be quite a coup.
Nokia is the top phonemaker in China, thus it’s crucial that Marvell gets in Nokia’s new Chinese Windows Phones when it makes the shift later this year. [Image Source: M.I.C. Gadget]
Mr. Kang feels his firm’s biggest strength is providing “quality low-cost devices”. While it doesn’t bake discrete Wi-Fi circuitry into some of its system-on-a-chip devices, it says this approach works in markets like Indonesia or rural China where there’s plentiful 3G but sparse Wi-Fi coverage.
Marvell currently produces single- and dual-core chips with the smartphone-aimed ARMADA family. Despite competitors like Qualcomm and NVIDIA Corp. (NVDA) jumping to quad core, Marvell says that approach doesn’t make sense. Mr. Kang comments, “We don’t think quad core makes sense at 40 [nm] from a power perspective, from a price perspective.”
Marvell’s ARMADA series ARM CPUs power smartphones and mobile devices like the ARM OLPC variant. [Image Source: OLPC.tv]
He says that Marvell is tentatively slotted to release quad-core designs when it hits 28 nm in mid-2013. The chipmaker uses Taiwan Semiconductor Manufacturing Comp., Ltd.’s (TPE:2330) third-party fabrication services. TSMC has struggled at the 28 nm node, delivering low yields and in turn higher costs — a combination that doesn’t work with Marvell’s business model — hence the delay.
Marvell feels that the fact that it takes its ARM license and builds a unique core from the ground up using the ARM instruction set gives it an advantage over competitors like NVIDIA that simply take the core licensed from ARM Holdings plc (LON:ARM) but don’t do a complete redesign.
The company looks at the tablet market as “saturated” and is avoiding it for the next couple of quarters, although it did seem distraught at losing RIM’s PlayBook to Texas Instruments Inc. (TXN), another U.S. chipmaker.
Mr. Kang hinted Marvell may jump on the tablet bandwagon or even release budget ARM laptops in Q4 2012 when Windows 8 arrives — and with it the first-ever ARM CPU support for a Windows main line operating system. He comments, “Microsoft already said Windows 8 will run on ARM. And we build ARM devices, so….”
Marvell hints it may be cooking up ARM Windows 8 tablets/laptops, too.
This move would make sense because Marvell has been involved with the One Laptop Per Child (OLPC) project in producing an ARM (Marvell) powered design. It has also played with low cost Linux laptops for years.
The company also showed off a (Android 3.2) “Honeycomb” television set, which it plans to target as an introduction to Internet TV in budget markets like China. This was a reference design, whereas Marvell would partner with a traditional TV maker for production designs.
The Honeycomb set uses Marvell’s latest dual-core chip, which contains an extra low-power core to conserve energy during simpler tasks. The power savings approach mirrors that found in Tegra 3. In that sense Marvell’s dual-core is technically a tri-core, much as NVIDIA’s quad-core is technically a penta-core.
There could indeed be a real 2012 opportunity for Marvell as Nokia CEO Stephen Elop highlighted in an answer to questions about the Quarter 4 results last week (Nokia Quarter 4 results 2011 webcast [Nokia, Jan 26, 2012]):
on China dynamics:
… The Chinese operators are increasingly, on an accelerated basis, entering into structures where there’s effectively retail rate plan bundling going on at the store. The operators are driving very hard for the volume of 3G data subscribers. And this is not necessarily an economic measure as it is driving volume on certain networks for certain technologies. I think those targets are probably set more broadly for all of the operators [he could mean: by the state, as all three operators are majority owned by the state]. And the impact of that is that they are discovering that with very low priced devices on certain radio technologies they can drive a lot of volume at those levels. And so we are seeing, for example, a very significant uptake in a number of low-priced devices that are on CDMA, there’s also a very significant focus on the Chinese technology TD-SCDMA, again all at the low levels that ought to drive those volumes. My comment in the prepared remarks is that Symbian is not well positioned today against that. We do not have Symbian CDMA products at all, so we are not participating in that part of the market. So as that part of the market grows our addressable market has gone down because of that. In TD-SCDMA we do have some products in that space but not at the price points and configurations that are the real focus of this market. …
… We have not yet announced our specific products for the Chinese market but I will say that when we first announced our launch plans, I think all the way back in October, we did highlight that we would have CDMA based Windows Phone products and TD-SCDMA Windows Phone products. That being said, it is the case that we have work to do to successively drive the prices down further and further and further. That will take a bit of time but this is clearly the pattern you are going to see us on in the months ahead. …
[I have a couple of deep and current analysis on that:
– The new, high-volume market in China is ready to define the 2012 smartphone war [Jan 6, 2012]
– China TD-SCDMA and W-CDMA 3G subscribers by the end of 2011: China Mobile lost its original growth momentum [Jan 21, 2012]
– China becoming the lead market for mobile Internet in 2012/13 [Dec 1, 2011]]
High performance SOC handles HD media [Jan 6, 2012]
The ARMADA 1500 HD media SoC decodes high-definition, advanced multi-format video and audio using its dual ARMv7-compatible PJ4B 1.2 GHz processors with symmetric multiprocessing and DSP accelerators. The chip targets IP/cable/satellite set-top boxes, advanced Blu-ray players, digital media adapters, Google TV, and DTV applications.
The SoC’s processors yield 6,000 DMIPS. It includes a secure boot ROM and USB, Fast Ethernet, HDMI, SATA, and SDIO interfaces, plus a 32-bit DDR3 interface running at 800 MHz. The chip’s security engine handles OTP, RNG, AES/(3)DES, RSA, SHA-1, and MD5, and a comprehensive software development kit is available. (No price given – available now.)
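[A quick sanity check on the quoted figure: 6,000 DMIPS across two 1.2 GHz PJ4B cores works out to 2.5 DMIPS/MHz per core, in the range of other ARMv7-class designs of the era. The arithmetic:]

```python
# Figures quoted for the ARMADA 1500 in the article above.
cores = 2
clock_mhz = 1200        # 1.2 GHz per PJ4B core
total_dmips = 6000

# Per-core efficiency in DMIPS per MHz.
dmips_per_mhz = total_dmips / (cores * clock_mhz)
print(dmips_per_mhz)  # 2.5
```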
See also my other posts regarding the other high volume opportunities for Marvell:
– Marvell® ARMADA® PXA168 based XO laptops and tablets from OLPC with $185 and target $100 list prices respectively [Jan 8, 2012]
– Google’s revitalization of its Android-based TV effort via Marvell SoC and reference design [Jan 5, 2012]
(the VIZIO VAP430 Stream Player, introduced below, is likely based on that)
– VIZIO’s two pronged strategy: Android based V.I.A. Plus device ecosystem + Windows based premium PC entertainment [Jan 11, 2012]
Background on Marvell’s relationship with Microsoft
A Cal ‘Kinect-ion’ [Innovations by UC Berkeley College of Engineering, Nov 9, 2011]
Some engineers wait a lifetime for a project like the one that Jack Kang (B.S.’04 EECS) landed when he was barely 26.
In the fall of 2008, Kang was settling into a new marketing position [Technical Marketing Manager] at Marvell, a Santa Clara-based semiconductor company, when Microsoft came knocking with a mysterious assignment for the company. Working on an undisclosed product, the computing giant needed a team to design a complex chip for manufacture on a massive scale.
“This project was very secretive,” recalls Kang, who had shifted from hands-on chip design to marketing management at Marvell. Marvell got the Microsoft contract, but “we didn’t really know what it was for,” says Kang. Many months into the development of a specialized microprocessor—often touted as a system’s “brains”—he got his answer. The mystery chip was destined for Kinect, Microsoft’s controller-free and immensely popular electronic game sensor device.
Introduced last November, Kinect uses sophisticated visual and voice recognition to run electronic games, movies and other entertainment. A companion to Microsoft’s Xbox 360 video gaming system, it became the fastest-selling consumer electronics gadget in history, selling 8 million devices in 60 days.
Kinect’s appeal came as no surprise to Kang. “It was a giant leap,” he says of the technology that lets users interact with media through body motions and voice commands. In fact, when Kang first learned about Kinect, he was so dazzled by the concept that he wondered if it could actually be pulled off.
His work on the Kinect chip spanned two years. Acting as the project champion in a “do-whatever-it-takes” capacity, Kang managed the effort from the earliest negotiations through a series of designs to manufacturing. In all, more than 100 Marvell chip designers, marketing representatives, software engineers and others participated in a process that witnessed its share of evolutionary curveballs.
For the first six months, the Marvell team focused on what Kang believes would have been one of the most powerful mobile or consumer chips on the market. Shortly after the chip was completed, Microsoft asked for an even higher performing version. But the company soon switched course, deciding to put more of the computing functions into the Xbox instead of Kinect, Kang says.
Ultimately, Marvell engineers were asked to build a general purpose chip capable of controlling voice recognition and sending data to the Xbox. The team wound up modifying a chip already in development. That chip, as it turned out, was one that Kang had helped design in his earlier capacity as a Marvell engineer.
PHOTO BY ABBY COHN
Excited by his role in unleashing Kinect, Kang sees many possibilities for human-machine interaction. “We’re just at the tip of the iceberg of what this device can do,” he says, envisioning future Kinect systems that help the disabled and the elderly, and play a role in medical treatment and procedures.
Beyond Kinect’s intended use for home entertainment, the $150 system has already triggered a flood of creative applications for its cameras, 3-D sensing and other features. At UC Berkeley, graduate student Patrick Bouffard installed a Kinect on a small four-rotor robotic helicopter to enable it to sense its height above the floor and detect objects in its way. Other concepts have included video-conferencing, surveillance and a navigational aid for the blind.
With his boyish smile and animated personality, Kang, now 29, is at least a decade younger than most of his professional peers. He has developed 11 patents, mostly in the field of CPU (central processing unit) technology. “Everything I needed to know I learned in CS152!” he quips. Kang took that computer architecture and engineering class at Berkeley Engineering and became a teaching assistant his senior year.
Born in Taiwan and raised in the South Bay, Kang was drawn to a career at the intersection of engineering and business. “I felt you could have more of an impact,” he says. At Berkeley, he minored in business administration and was powerfully influenced by his experience as a TA. Hired as a Marvell engineer in February 2006, he was increasingly tapped to showcase company products in technical presentations for clients. “I had the mindset of marketing,” says Kang, who also enjoyed the social interaction that came with it.
Twice promoted since 2008, Kang now serves as director of Marvell’s application processor business unit. Today, with a 12-member staff, Kang manages Marvell product lines for e-readers, gaming, education, tablets and other devices. Long gone is a work schedule with room for lunchtime volleyball and soccer games. “There’s always someone up in some time zone,” Kang observes.
Kang is eager for the next project of Kinect-like proportions to come his way. “Technology is always evolving,” he says. “I certainly hope I have something that beats it.”
Marvell: Lazard Says Buy On Kinect, TD-SCDMA Opportunities [Tech Trader Daily, June 20, 2011]
… Lazard Capital Markets analyst Daniel Amir raised the stock to Buy from Neutral …
Marvell’s sales of chips into China’s home-brewed TD-SCDMA cellular network standard, which is being developed by China Mobile (CHL) and backed by the government, are perhaps underestimated by the Street.
Marvell could produce $90 million in revenue this year from those chip sales, and $151 million next year, but it could actually go as high as $202 million next year, he thinks. The Street has just $80 million modeled for this year, on average.
Moreover, the company’s sales into Microsoft’s (MSFT) “Kinect” gaming accessory are “opening new doors” for Marvell in the mobile and wireless business, he thinks, which may help Marvell catch up after missing earlier tablet and smartphone bids. Kinect will probably produce $104 million in revenue for Marvell this year, up from $64 million last year, on Kinect units of 16 million, Amir thinks.
[Microsoft Reports Record Revenue of $20.9 Billion in Second Quarter [Microsoft press release, Jan 19, 2012]: “The Xbox 360 installed base now totals approximately 66 million consoles and 18 million Kinect sensors”]
Teardown: Kinect has processor after all [EE Times, Nov 15, 2010]
Despite Microsoft Corp.’s claims to the contrary, its new Kinect motion-gaming add-on for the Xbox 360 uses a standalone applications processor marketed by Marvell Technology Group Ltd., according to a teardown analysis of the Kinect performed by UBM TechInsights.
TechInsights’ teardown uncovered within Kinect a Marvell PXA 168 applications processor, a part usually found in notebook computers. In September, Microsoft reportedly said it decided not to use a dedicated processor in Kinect. Instead, the company reportedly said the peripheral would harness the power of the processor within the Xbox.
Microsoft (Redmond, Wash.) did not immediately respond to a request for comment about the discrepancy.
TechInsights analysts concluded that Microsoft’s head fake means the company has bigger plans to make Kinect more of a platform for applications beyond gaming, or that the company was simply trying to prevent the device from being hacked. The Kinect has reportedly already been hacked multiple times.
The analysts also believe that Microsoft may have underestimated the resource demand on the 360 console processor and was forced into using a laptop-equivalent processor to integrate the imaging, sensing, motor-drive and control functions and orchestrate I/O and communications between the Kinect and Xbox 360. It’s also possible that the processor was required to support the spatial aspects of Kinect’s multiple microphones, they said.
“It’s difficult to identify exactly what the Marvell processor accomplishes on the Kinect as investigation on how the firmware and software manage all control and processing functions and how they could be localized/virtualized to the Xbox haven’t been investigated yet,” said Allan Yogasingam, a technical marketing manager at TechInsights. “Regardless, Microsoft has created a product that takes full advantage of all its components to provide an innovative gaming experience. The existence of this Marvell processor just opens the door for further innovation down the line and an extension of the Kinect from more than just a sensor-based gaming accessory.”
TechInsights also conducted further study on the sensor unit that works with Kinect’s image processor, made by PrimeSense Ltd. The firm discovered that the CMOS image sensors used were provided by Aptina Imaging (the die markings on the sensors still refer to Micron Imaging, which was spun off into Aptina in 2008). The infrared camera uses the MT9M001 sensor and RGB input from the color camera features the MT9M112 sensor, TechInsights said.
Close up of the Marvell PXA 168 applications processor found inside Kinect.
Source: UBM TechInsights.
TechInsights’ recent teardown of Kinect found chips made by PrimeSense, Marvell, Texas Instruments Inc., STMicroelectronics NV and others. The firm estimates that Kinect carries a bill-of-materials of roughly $56 for the components, not including the price of design, R&D and the $500 million Microsoft plans to spend to market the device.
Teardown of the Microsoft Kinect – Focused on Motion Capture [Chipworks, Dec 23, 2010]
Application processor: An Armada series 800 MHz application processor by Marvell was also inside the Microsoft Kinect. Interestingly, this device is typically aimed at the e-reader market.
Why did MS dump Kinect processor? There was ‘no need’ for it [ComputerAndVideoGames.com, Sept 29, 2010]
Camera tracks fewer points than it did last year
It emerged in January that MS had ditched a standalone processor in the camera – which some have claimed has subsequently affected performance.
Kinect now relies on the processing power of the Xbox itself – although the platform holder has claimed that it uses “less than one per cent” of the 360’s motherboard.
“We didn’t know how much processing Kinect was going to take at the start of development,” Kinect creative boss Kudo Tsunoda told the new Xbox World 360.
“Obviously you don’t want to lose any of the things that are important to Xbox customers. Graphic fidelity is something that Xbox has always been known for, and you want to make sure that you still hit that level.
“Forza is a graphical showpiece, and we had Forza with Kinect at E3… the graphic fidelity has actually improved in some areas from what they shipped with Forza 3. It’s still running at 60 FPS and it’s supporting Kinect, so there’s just no need to have that extra processor.”
When asked why Kinect detected fewer points on the player’s body than it did last year, Tsunoda added:
“As you start building the stuff, you’re like: ‘Wow, to track everything in the human body we can do less points.’ That’s just normal game development. Anything you do with games, you want the processing power to be used as efficiently as possible to get the experience that you want.”
Kinect launches in the UK on November 10 and the US on November 4.
Microsoft drops internal Natal chip [Jan 7, 2010]
GamesIndustry.biz has learned that Microsoft has dropped a chip from its forthcoming Natal motion control system as the platform holder eyes accessible price points in the build-up to release later this year.
Kinect Downgraded To Save Money, Can’t Read Sign Language [Kotaku, Aug 11, 2010]
The patent for Microsoft’s motion-sensing camera Kinect suggested that the device could understand American Sign Language. Well, it can’t. At least, the version going on sale in November can’t.
Responding to the claims made in the patent, Microsoft has told Kotaku “We are excited about the potential of Kinect and its potential to impact gaming and entertainment. Microsoft files lots of patent applications to protect our intellectual property, not all of which are brought to market right away. Kinect that is shipping this holiday will not support sign language.”
So why did the patent suggest it could? Well, sources close to the evolution of Kinect’s development tell us it’s because the version of the hardware that’ll be available later this year isn’t as capable as was originally intended.
The original Kinect had a much higher resolution (over twice that of the final model’s 320×240), and as such was able to recognise not only the limbs of a player, as the current version can, but their fingers as well (which the current version can’t). And when the hardware could recognise fingers, it would have been able to read sign language.
But that capability came at a cost, and while Microsoft had always intended Kinect to sell for $150, “dumbing down” the camera would have meant that Microsoft wouldn’t be losing as much money on each unit sold, an important point should Kinect prove to be a failure. So dumb it down they did, reducing the camera’s resolution (which in turn reduced the number of appendages it’d have to track) and placing the burden for some of the device’s processing on the console and not Kinect’s own hardware.
This probably isn’t the first time you’ve heard such a rumour, but this latest time at least explains why Kinect can’t read sign language!
We’ve reached out to Microsoft for comment on the matter, and will update if we hear back.
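The resolution downgrade described above can be sanity-checked with quick arithmetic. Below is a minimal Python sketch assuming the original depth camera ran at 640×480 (twice the shipped 320×240 in each axis; the article only says “over twice” and does not give the exact original figure, so 640×480 is a hypothetical value for illustration):

```python
# Hypothetical figures: the article cites only the shipped 320x240
# resolution and says the original was "over twice" that; 640x480
# (double in each axis) is an assumed value for illustration.
original = (640, 480)  # assumed pre-downgrade resolution
shipped = (320, 240)   # resolution cited in the article

def pixel_count(res):
    """Total number of pixels for a (width, height) resolution."""
    width, height = res
    return width * height

ratio = pixel_count(original) / pixel_count(shipped)
print(f"Original: {pixel_count(original):,} pixels")  # 307,200 pixels
print(f"Shipped:  {pixel_count(shipped):,} pixels")   # 76,800 pixels
print(f"Pixel-count ratio: {ratio:.0f}x")             # 4x
```

Doubling the linear resolution quadruples the pixel count, and with it the per-frame data to be processed, which helps explain both the loss of finger tracking and the cost savings from the downgrade.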
Background on Jack Kang
Jack Kang, Director, Application Processors at Marvell [LinkedIn profile, excerpted, Feb 1, 2012]
- Director, Application Processors at Marvell Semiconductor, Inc.
- Technical Marketing Manager at Marvell Semiconductor, Inc.
- Logic Design Engineer at Marvell Semiconductor, Inc.
- Design Engineer at Eureka Technology
- University of California, Berkeley
- University of California, Berkeley – Walter A. Haas School of Business
Jack is currently director of Marvell’s Application Processor Business Unit. He has been in the semiconductor business for more than seven years, holding previous positions in design engineering at several leading technology vendors. At Marvell, Mr. Kang manages multiple product lines from design conception to mass market implementation and adoption. These include the industry-leading PXA168, PXA618 and PXA510 processors, which are fueling today’s premier consumer devices.
Additionally, he oversees various market segments, including education, eReaders, gaming, tablets and other connected consumer and embedded devices. Most recently, Mr. Kang was responsible for the processor design powering Microsoft’s Kinect gaming accessory. Kinect shattered sales records and was named the fastest-selling tech gadget of all time by Guinness World Records – totaling more than 10 million units since its launch in November 2010.
[Steve Ballmer, Houston Technology Forum, March 10, 2011: “We shipped those in November. We just announced that we’re over 10 million sold, in what amounts to about two-and-a-half months.”]
Outside of his work at Marvell, Mr. Kang also serves as a technical expert on CPU technology and has more than 11 patents pending in the field of CPU technology. He holds a degree in Electrical Engineering and Computer Science from the University of California, Berkeley, with an emphasis in Computer Architecture.
Jack Kang, Patents and Publications [LinkedIn page, excerpted, Feb 1, 2012]
Jack Kang’s Patents
- United States Patent 7,870,372
- Issued January 11, 2011
- United States Patent 7,904,703
- Issued March 8, 2011
- United States Patent 7,941,643
- Issued May 10, 2011
- United States Patent 7,757,070
- Issued July 13, 2010
- United States Patent 7,886,131
- Issued February 8, 2011
Inventors: Jack Kang
- United States Patent 7,904,704
- Issued March 8, 2011
- United States Patent 8,032,737
- Issued October 4, 2011
- United States Patent 8,046,775
- Issued October 25, 2011
Jack Kang’s Publications
- Berkeley Innovations
- November 28, 2011
Authors: Jack Kang, Abby Cohn
Marvell’s processors for embedded systems – Discussion of the PXA510 processor and the D2Plug developer kit
Mr. Jack Kang of Marvell discusses the PXA510 ARMv7-based 800 MHz application processor with 512 Kbytes of level 2 cache and its associated developer kit.
From Dewey to Digital [HigherEdTECH, Jan 6, 2011]
No more pencils?! No more books? No more teachers? On-demand digital content, do-it-yourself learning, new generation learning platforms, and new modes of assessment are disrupting traditional textbooks, grading, courses, and degrees. Is technology really a catalyst for change? Let us count the ways.
Kenneth C. Green, Founding Director, The Campus Computing Project
- Sean Devine, Chief Executive Officer, CourseSmart
- Felice Nudelman, Executive Director, Education, The New York Times Company
- William D. Rieders, Executive Vice President of Global New Media, Cengage Learning
- Jack Kang, Director, Application Processor Business Unit, Marvell
Video recordings (~10 min each) of the From Dewey to Digital (Jan 6, 2011) panel discussion:
“In addition we have got many design wins in what is the next crop of tablets and other mobile devices coming out this year. We will see how those will do against Apple and so forth.” Then: “A small fab can produce one million panels a day. … A couple of million dollars are needed to adjust the process for Pixel Qi. … A committed order of at least half a million is needed to start. … We have 1st tier design wins now. We will see what will come out of that.”
Mary Lou Jepsen in the very last video from Charbax (see embedded in the end)
Pixel Qi sunlight readable displays at CES 2012 [Jan 11, 2012]
from the accompanying Liliputing article:
The company has been showing off its display technology for the past few years, but few consumer products have shipped with Pixel Qi screens. The Notion Ink Adam tablet was available with an optional 10 inch, 1024 x 600 Pixel Qi screen, and the OLPC XO 3.0 tablet will also be available with a Pixel Qi display. But the display company has also had success with more specific markets where outdoor-readable displays are a necessity rather than an option.
For instance, military tablets with GPS have been used by paratroopers who need to land on the ground and situate themselves immediately without first looking for shade. Pixel Qi has also been talking to companies interested in using sunlight readable displays in cars, trucks, tractors, and other motor vehicles.
At CES, Pixel Qi is showing off the same three screen sizes and resolutions as last year:
- 7 inch, 1024 x 600 pixel
- 10 inch, 1024 x 600 pixel
- 10 inch, 1280 x 800 pixel
But the company has improved viewing angles and reflection. The screens still don’t have the best viewing angles around: if you look at the display from too sharp an angle, colors will wash out, but that’s not a problem unique to Pixel Qi. While some high-quality devices with IPS displays can be viewed from nearly 180 degrees without any loss of clarity, many other cheaper displays offer poorer viewing angles.
Coming 2012! SOL’s 7″ Android-Windows Tablet [SOL Computer, Jan 13, 2012]
Sol Computer introduced a 10 inch Windows netbook and 10 inch Windows tablet with Pixel Qi sunlight-viewable displays last year. Now the company is adding two new 7 inch models to its lineup, one with Google Android and another with Windows.
Pixel Qi screens are dual-mode LCD displays which work as full-color screens when the backlight is on, or as high-contrast, nearly black-and-white displays when it is off. What makes them special is that you can still see the screens even when the backlight is off, using nothing but ambient light.
Sol founder Chris Swanner says the tablets and netbooks have been popular with pilots and other professionals that work outdoors and in bright, sunlit environments where you really don’t want to have to deal with glare — and where a Windows device that can run highly specialized applications is a must.
This is a niche product though, and it costs a lot to add the Pixel Qi screens to a small number of devices. The Windows tablet has an Intel Atom processor, a capacitive touchscreen, and a $1099 price tag — and Swanner says he’s not making a lot of profit at that level. But he’s selling around 20 to 30 devices a month. If volume were to go up, pricing could conceivably go down.
The two new tablets will have 7 inch, 1024 x 600 Pixel Qi displays. A prototype of the Android tablet was on-hand at the Consumer Electronics Show, but I was told that the hardware hasn’t been finalized — the plastic case may be sturdier on the final unit.
Sol Computer 7 inch Android tablet with Pixel Qi display [Jan 10, 2012]
from the accompanying Liliputing article:
Sol doesn’t have a working prototype of the new 7 inch Windows tablet yet, but Cynovo, the Chinese company Sol works with to build its tablets, had a similar model with a standard 7 inch LCD display to show.
Pricing hasn’t yet been set for the new 7 inch tablets, but they’re expected to cost less than the 10 inch, $1099 model.
The New Sol Tablet PC Featuring A 10″ Sunlight Readable Display [SOL Computer, Aug 12, 2011]
Here is the latest sunlight-readable Tablet PC offered by Sol Computer. We named it the Sol Tablet PC because it will add “some SOL to your life”. Take this Tablet PC anywhere and you will always be able to see the no-glare screen in brilliant high resolution. We have incorporated the latest Pixel Qi transflective backlight technology into our Tablet PC, which provides a unique anti-glare LED display. The Sol Tablet PC can be viewed perfectly in direct sunlight; no other tablet PC or iPad can make such a claim. Also, because the Sol Tablet PC has this anti-glare technology built into its LED no-glare screen, battery consumption is reduced significantly. In fact, when this Win 7 Tablet PC is viewed in full sunlight (reflective mode), LED power consumption is cut by up to 80%. This increases battery life to more than 10 hours!
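The battery-life claim above can be illustrated with a rough power-budget model. This is a minimal sketch with entirely hypothetical numbers (the battery capacity and component power draws below are assumptions, not Sol Computer specifications); it only shows how an 80% cut in backlight power could push runtime past 10 hours:

```python
# All figures are hypothetical assumptions for illustration only.
battery_wh = 30.0   # assumed battery capacity in watt-hours
backlight_w = 2.5   # assumed LED backlight draw at full brightness, watts
other_w = 2.0       # assumed draw of CPU, radios, storage, etc., watts

def runtime_hours(backlight_scale):
    """Estimated runtime when backlight power is scaled by the given factor."""
    return battery_wh / (backlight_w * backlight_scale + other_w)

normal = runtime_hours(1.0)      # backlight fully on
reflective = runtime_hours(0.2)  # 80% backlight power reduction
print(f"Backlit mode:    {normal:.1f} h")      # 6.7 h
print(f"Reflective mode: {reflective:.1f} h")  # 12.0 h
```

With these assumed figures the backlight dominates the power budget, so cutting it by 80% nearly doubles the runtime; the real-world gain depends on how large a share of total draw the backlight actually is.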
Check out our newest product – the DryCASE Tablet™, a flexible, crystal-clear waterproof case that allows complete use of your tablet or e-reader while keeping it dry and clean. The vacuum seal takes all the air out of the case so there is no way water can enter: there can be no exchange of gas (air) for liquid (water). The vacuum seal also allows full use of your touch screen because the case seals flush against the face of the tablet.
“Only one tablet has been successful in the last year” [in the next video from Charbax Mary Lou Jepsen names as “the tablet from Cupertino everyone is familiar with”, i.e. Apple iPad, saying that “unfortunately we are not in that tablet”] – CFO John Ryan – from the video embedded into the article below:
Pixel Qi is well known for developing a new breed of screens that deliver an unparalleled experience in direct sunlight and draw very low power. The company has seen their technology showcased in the early One Laptop per Child program in Africa, which initially drew industry wide attention to the company. In the last year their screens were featured in various ZTE Tablets in China and recently in the Notion Ink Adam. In the last six months 3M invested heavily in the future of Pixel Qi and has influenced the direction of the company away from consumer electronics to more specialized industries, such as the military.
We have spoken with both the CFO and CEO over the years at various industry events, and their decision to move away from the fickle e-reader and tablet markets was a wise one. The company instead plans to focus its attention on specialized market segments that would benefit more from its technology and lead to more long-term contracts.
One of the first ways they will deploy their Pixel Qi technology is within the military, giving soldiers a new way to receive mission data. Your average paratrooper or ranger is constantly receiving revised mission parameters, often in harsh conditions like a desert. Very bright or very dark environments make no difference with Pixel Qi, whose very essence is low-power, no-glare technology that would make soldiers’ lives easier. Most military operations worldwide still rely on paper maps and written communications; receiving mission updates takes many steps, and circumstances change many times. The plan is for soldiers to carry highly versatile tablets that last for weeks and are wired into mission control to receive new updates on the fly. Another area where the technology will be employed is the hydroelectric community, where operators are frequently at high elevations in direct sunlight.
3M’s investment in Pixel Qi is allowing the company to work with multiple fabs in Taiwan, where the company is based, and to diversify its portfolio. Obviously, when you receive a huge investment from a mega-corporation whose reach is all-encompassing, you gain a ton of connections within very specialized niches: 3M is found everywhere, from cars and phones to hospitals and tape. This will turn the company around, and we were told that in the near future their technology will be everywhere, but in products we will never see. Pixel Qi is not stepping totally away from the end-user experience, and it is currently working with a number of existing clients on future product launches. Check out our whole interview, recorded at CES 2012, where CFO John Ryan talks in detail about the new direction of the company, the influence of its new investor (3M) and where the company is going for the rest of the year, and demonstrates the two new screens Pixel Qi brought to the show. This is a great interview and gives you a unique perspective you can’t find anywhere but Goodereader.com.