20 years of Samsung “New Management” as manifested by the latest, June 20th GALAXY & ATIV innovations
… innovations in the broadest sense of the word: technology, hardware and software engineering and design, marketing in general and branding in particular, etc.
Updates: Q2 record-high operating profit + smartphone worries deepen + overall business situation + nonproportionally high capex of the semiconductor business + the #2 capex beneficiary, the Display Panel Segment
Samsung Electronics posts record-high operating profit in Q2 [arirangnews YouTube channel, July 5, 2013]
Samsung Electronics’ Pre-Earnings Guidance [Samsung public disclosure, July 5, 2013]
On July 5, 2013, Samsung Electronics disclosed its ’13. 2Q consolidated earnings estimate as follows.
– Sales: Approximately 57 trillion Won [$49.86B]
– Operating Profit: Approximately 9.5 trillion Won [$8.31B]

The above figures are consolidated earnings estimates based on K-IFRS. Korean disclosure regulations do not allow earnings estimates to be given in a range. Therefore, the above figures are the median of the earnings estimate range given below.
– Sales: 56 ~ 58 trillion Won
– Operating Profit: 9.3 ~ 9.7 trillion Won

* The above information is provided for the convenience of our investors, before the external audit on the financial results of our headquarters, subsidiaries and affiliates is completed.
Samsung Electronics’ second quarter misses forecast as smartphone worries deepen [Reuters, July 5, 2013]
… Now investors fear Samsung may also follow in the footsteps of Apple and other once-mighty players that are struggling with shrinking margins, in an industry where companies live and die by their ability to stay ahead of the innovation curve. … “One of the biggest risks for Samsung Electronics going forward is that 70 percent of total operating profit comes from mobile business. Diversification is key. Samsung needs to engage in active business transition until end-2014,” said Jeff Kim, an analyst at Hyundai Securities. … Samsung spent more on marketing than R&D in 2012 for the first time in at least three years, and the S4 was launched in March with a Broadway-style show in New York. The company also invested heavily in distribution channels including opening brand shops in 1,400 Best Buy stores in the United States. But the glitz and glamour has failed to arrest a slide in handset sales growth, and shipments are seen rising only 4 percent to 8 percent in the second quarter from the previous quarter. …
The overall business situation of Samsung Electronics as of the end of Q2 2013
Samsung Electronics Announces Earnings for Q2 in 2013 [press release, July 26, 2013]
Samsung Electronics Co., Ltd. today announced revenues of 57.46 trillion won [$51.6B] on a consolidated basis for the second quarter ended June 30, 2013, a 9-percent increase from the previous quarter. Consolidated operating profit for the quarter reached 9.53 trillion won [$8.56B], representing a 9-percent increase on quarter, while consolidated net profit for the same quarter was 7.77 trillion won [$6.98B].
In its earnings guidance disclosed on July 5, Samsung estimated second quarter consolidated revenues would reach approximately 57 trillion won [$51.2B] with consolidated operating profit of approximately 9.5 trillion won [$8.53B].
…
Samsung announces second quarter profits 삼성전자, 2013년 2분기 실적 발표 [arirangnews YouTube channel, July 25, 2013]
Segment by segment, and from an outlook point of view, from the Earnings Release Q2 2013, Samsung Electronics, July 2013 presentation [July 26, 2013]:
But: while handset revenue was up by 9%, operating profit for handsets and network products together was down by 3%. Considering that 97.3% of IM (IT & Mobile Communications) revenue comes from handsets, that implies a similar operating profit drop of ~3% for handsets alone. Note that the margin was 17.7% a year ago (in 2Q ’12) and the same 17.7% now (in 2Q ‘13), so despite that 3% drop there was no fundamental problem (yet). Note as well that 66% of the operating profit came from IM, i.e. roughly 66% from handsets, which constitute 97.3% of total IM revenue.
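A quick back-of-envelope check of the shares quoted above. This is a sketch using figures from the earnings release; the assumption that handset margins track the IM division average is ours, not Samsung's:

```python
# Rough check of the profit-share arithmetic quoted above.
# Figures are from the Q2 2013 earnings release; assuming handset margins
# track the IM division average is our simplification, not Samsung data.

total_op = 9.53                  # trillion won, consolidated operating profit
im_op_share = 0.66               # IM division's share of total operating profit
handset_rev_share_of_im = 0.973  # handsets' share of IM revenue

im_op = total_op * im_op_share
handset_op = im_op * handset_rev_share_of_im  # if margins track the IM average
print(f"IM operating profit ~ {im_op:.2f}T won, of which handsets ~ {handset_op:.2f}T won")
```

With these inputs, handsets alone would account for roughly 6.1 trillion won of the quarter's 9.53 trillion won operating profit.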
Samsung explains the 3% IM operating profit drop by “marginal profit decline due to increased costs of new product launches, R&D and retail channels investments, etc.,” as can be seen below:
The fundamental problem could well be the handset (IM) market share outlook, as market share was not mentioned at all, either for 2Q ‘13 or in the outlook.
In the continuation of Samsung Electronics Announces Earnings for Q2 in 2013 [press release, July 26, 2013] there are certain remarks regarding all that:
…
Highlighting the quarterly performance, growth remained steady in high-end smartphone and premium television businesses. Most noticeably, a growth spurt in shipments for OLED panels for smartphones and high consumer demand for air conditioners spurred growth.
Led by the much-awaited launch of GALAXY S4, smartphone shipments and revenue increased from the March quarter. The strong growth streak for the smartphone market is expected to continue in the third quarter albeit at a slower pace.
The components business [Semiconductor] improved both in terms of revenue and operating profits from a quarter earlier due to a higher demand for mobile device-related parts. However, overall sales of logic chips declined due to lower mobile application processor shipments.
Escalated investments in R&D and in distribution channels, as well as expenses on new product launches have dampened wider gains for IT & Mobile Communications (IM) Division, which encompasses the Mobile Communications, Networks, and Digital Imaging businesses.
…
The Display Panel [DP] segment’s operating profit jumped 46 percent on quarter to 1.12 trillion won thanks to strong demand for high value-added panels for IT and TV panels sized 60-inch and over. A mid- to low-end TV lineup targeting emerging markets and a range of premium TV offerings were credited for the Visual Display business’ earnings. As for the next quarter, uncertainties over Europe’s economy and Chinese subsidies for electronics goods could possibly hinder growth.
then in terms of business outlook:
[for IM] Looking ahead, Samsung smartphone sales are expected to pick up in the third quarter and outperform global market forecasts. Smartphones will grow to account for over 70 percent of the company’s mobile phone shipments in the third quarter due to the strengthening mid- to low-end mobile market. Growth momentum in the July-September quarter will remain on course, although at a slightly reduced pace.
In the case of tablet PCs, Samsung will post growth in the mid-10 percent range with the introduction of new tablets. Shipments of tablets will jump to a little over 30 percent on-quarter, outpacing the market.
Average Selling Price (ASP) of smartphones will likely be impacted due to a wider range of low- to mid-priced smartphones hitting the market. Sales of tablet PCs are expected to remain solid and Samsung is looking to expand global sales with a broad portfolio of models including GALAXY Tab 3.
Samsung is also looking to improve profitability in IM through its lineup of mid- to high-end hybrid tablet-laptop devices such as ATIV Q and wider adoption of LTE mobile telecommunication technology.
…
[for Semiconductor] In the July-September quarter, demand for DRAM used in data centers is expected to remain high. Orders from the electronic gaming industry will add to profit margins as video gamers seek more powerful graphics DRAMs. Peak seasonality will help PC sales by a slight margin.
Samsung will try to ramp up sales of application processors (AP) with 28-nanometer process technology and high-resolution image sensors. Demand for the components is expected to grow as mobile devices needing more processing power roll out into market in the remaining quarters.
By diversifying its product portfolio and consumer base, and by gearing up development of 20 nm-class and 14 nm-class process technology, Samsung aims to achieve a stable level of growth.
[for DP]
Looking ahead to the third quarter, Samsung anticipates market growth as higher seasonal demand for panels takes effect. For TV panels, demand is expected to be dampened by economic uncertainties although the large-size premium panel market is expected to sustain growth. Samsung aims to strengthen its leadership in the high-end TV panel segment with expanded sales in UHD panels and in the 40- to 50-inch class.
Concerning the market outlook for IT panels, although uncertainty remains in the PC and monitor sector, robust demand for tablet displays is expected to continue as new products are launched by manufacturers in the latter half of 2013. Samsung plans to reinforce its market leadership in tablet panels by expanding its lineup of high-resolution and mass market displays.
For OLED panels, positive growth for high-end smartphone displays is expected to be maintained in the second half. To ensure continued momentum, Samsung will concentrate on offering differentiated smartphone displays through technological competitiveness, including flexible display technology, and focus on enhancing cost competitiveness.
The business situation (described both in the Q2 results and in the outlook) required a significant change in the investment strategy, which is described in Samsung Electronics Announces Earnings for Q2 in 2013 [press release, July 26, 2013] as:
As for this year’s capital expenditure, Samsung Electronics plans to spend a record total of 24 trillion won [$21.5B], an increase of over 1 trillion won [$0.9B] from the previous year. This amount may increase depending on market conditions in the second half and the outlook for next year.
The Semiconductor business will invest 13 trillion won [$11.7B], while the Display Panel segment will inject 6.5 trillion [$5.8B] in capex. The increase in spending is aimed at enhancing Samsung’s competitive edge in growth-generating, high value-added DRAM, NAND and System LSI products.
In the second quarter, capex amounted to 5.2 trillion won [$4.7B], in which the Semiconductor business was responsible for 2.2 trillion [$1.97B] and 1.3 trillion won [$1.17B] in spending was accredited to the Display Panel segment. All told, a total of 9 trillion [$8.1B] won or 38 percent of the planned capex investment was made in the first half of the year.
which led to the third-party headline Samsung to spend KRW19.5 trillion [$17.5B] on component business in 2013 [DIGITIMES, July 30, 2013] including the following explanations:
Samsung Electronics expects to spend a total of KRW19.5 trillion (US$17.5 billion), equivalent to 81.25% of its total capex, on the company’s component business in 2013. Of the planned capex, KRW13 trillion [$11.7B] will be invested in its semiconductor business while KRW6.5 trillion [$5.8B] will be spent on its display panel business.
Samsung will allocate KRW24 trillion [$21.5B] for its 2013 capex, up from the record KRW22.8 trillion [$20.5B] reported for 2012.
The capex plan was disclosed when Samsung announced record operating profits for the second quarter of 2013. Despite the record earnings, its mobile division that accounts for the majority of company revenues and profits posted disappointing results in the quarter. In contrast, sales and profits at its component division performed relatively brisk.
Considering Samsung’s Q1 earnings release data:
As for this year’s capital expenditure, Samsung Electronics executed a combined total of 3.9 trillion won [$3.5B] for the quarter. The Semiconductor and Display Panel segments were each accountable for 1.5 trillion won [$1.35B] in capex spending. Samsung is poised to increase investment beginning from the second half of the fiscal year to preempt rising demand for differentiated products and to harness its competitiveness in the high-tech industry.
I came to the following overall capex situation:
| Samsung Electronics Capex | Q1 | Q2 | H1 | H2 | H2/H1 | 2013 total |
|---|---|---|---|---|---|---|
| Semiconductor business | $1.35B | $1.97B | $3.32B | $8.38B | +152% | $11.7B |
| Display Panel segment | $1.35B | $1.17B | $2.52B | $3.36B | +33% | $5.8B |
| The other 2 segments (IM, CE) | $0.8B | $1.56B | $2.36B | $1.72B | -27% | $4.0B |
| TOTAL | $3.5B | $4.7B | $8.28B | $13.22B | +60% | $21.5B |
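The implied second-half figures follow from the disclosed full-year plans minus the H1 actuals. A minimal sketch (small differences from the figures above come from rounding in the won-to-dollar conversions):

```python
# Derive implied H2 capex and H2/H1 growth from the disclosed figures.
# Full-year plans and H1 actuals are taken from the press releases quoted
# above; H2 is simply plan minus H1, so these are estimates, not disclosures.

plan_2013 = {"Semiconductor": 11.7, "Display Panel": 5.8, "Other (IM, CE)": 4.0}  # $B
h1_actual = {"Semiconductor": 1.35 + 1.97,
             "Display Panel": 1.35 + 1.17,
             "Other (IM, CE)": 0.8 + 1.56}                                        # $B

for segment, plan in plan_2013.items():
    h1 = h1_actual[segment]
    h2 = plan - h1                   # implied second-half spending
    growth = (h2 / h1 - 1) * 100     # H2 vs H1, in percent
    print(f"{segment}: H1 ${h1:.2f}B, implied H2 ${h2:.2f}B ({growth:+.0f}% vs H1)")
```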
Note that this is in sharp contrast to Intel’s capex changes, as per UPDATE 3-Intel cuts 2013 revenue forecast, capex as PC industry sags [Reuters, July 17, 2013]:
Intel Corp cut its full-year revenue forecast and said it is scaling back capital spending as it adjusts to a painful contraction of personal computer sales and economic weakness in China, one of its biggest markets.
The forecast and cut in capital spending were announced on Wednesday in the company’s quarterly earnings report, the first under new Chief Executive Brian Krzanich. … “Intel was slow to respond to the ultra-mobile PC trends,” Krzanich said. “We will move Atom even faster to our leading-edge silicon technology.”
…
Faced with slow demand, Intel said it was cutting 2013 capital spending to $11 billion, plus or minus $500 million. The cut follows a similar reduction from $13 billion to $12 billion in April. Intel said it expects 2013 revenue to be flat from the year before. Last quarter Intel forecast a low single digit percentage increase in 2013.
… Global shipments of personal computers dropped 11 percent in the second quarter, the fifth straight quarterly decline in a market that has been devastated by the popularity of tablets.
… Intel posted second-quarter revenue of $12.8 billion and said revenue in the current quarter would be $13.5 billion, plus or minus $500 million.
Analysts expected $12.896 billion in revenue for the second quarter and $13.732 billion for the current quarter, according to Thomson Reuters I/B/E/S.
For the second quarter, Intel reported net earnings of $2.0 billion, or 39 cents a share, in line with expectations. That compared with $2.827 billion, or 54 cents, in the same quarter last year.
The non-proportionally high capex of the semiconductor business deserves attention. From latest press releases of that segment we know the following:
From: Samsung Foundry 14nm FinFET [brochure, March 7, 2013]
Strong 14nm FinFET Logic Process and Design Infrastructure for Advanced Mobile SOC Applications
Samsung Foundry’s advanced 14-nanometer (nm) FinFET process technology offers a robust design infrastructure to drive future mobile application markets. As mobile applications continue to demand a more PC-like user experience, Samsung’s FinFET process technology enables system-on-chip (SOC) designers to reap all of the advantages of the latest energy-efficient processors: die-size reductions, faster frequencies, and lower power consumption.
Characteristics of FinFET transistor performance are closely correlated to the high aspect ratio (AR) of fin height/fin width. The challenges of the FinFET structure include: control of the fin width and height dimensions, the ability to scale the fin width down to sub-20nm nodes and gate length dimension control over a high AR while precisely controlling all of these parameters during manufacturing.
Advantages of 3 dimensional design
Samsung’s FinFET technology, unlike planar transistors with flat, multi-layer designs, uses a tall wall-like gate, 3D-structured design to minimize leakage, and in turn, increase a chip’s reliability and power at a small node process. Additionally, as less heat is generated and the power supply lasts longer, clock frequencies can be tuned for system critical components without overstepping system power requirements.
Solid design ecosystem
Samsung’s 14nm FinFET process node is supported by an ecosystem of partners including ARM®, Cadence® Design Systems, Mentor Graphics® and Synopsys®. With their collaboration, Samsung’s 14nm FinFET technology process taped out multiple test chips ranging from a full ARM Cortex™-A7 processor implementation to an SRAM-based chip capable of operating near threshold voltage levels, as well as an array of analog IP.
Silicon-based Process Development Kits
Samsung Foundry’s 14nm FinFET process design kits (PDKs) provide customers with models, design rule manuals and technology files that have been developed based on silicon results from previous 14nm FinFET test chip runs. Samsung’s 14nm FinFET PDK includes design flows, routers and other design enablement features to support new device structures, local interconnects, and advanced routing rules. Samsung Foundry continues to lead the industry in providing its customers with early access to all elements of the design infrastructure to enable accelerated chip development.
Samsung talks about their 14nm FinFET process [SemiAccurate, May 28, 2013]
…
Ana Hunter of Samsung cleared up a lot of the issues that were floating around the Samsung version of the process. Please bear in mind that although the bulk of the technology is the same between the three partners, all can and likely will deliver different flavors of 14nm to their customers. What Samsung is doing may or may not be mirrored by IBM and Global Foundries, and vice versa.
The first thing is that Samsung is on track to deliver the process on the promised schedule, that is, 14nm customer tapeouts in Q1/2014. This is a pretty good validation that the time-to-market advantage of changing the transistors and not the interconnects is happening as promised. At the moment Samsung has just completed the third rev of their Process Design Kit (PDK) and is using it internally for logic development. Customers have 14nm test wafers running through the fabs right now too; the main goal is to run test chips to see what types of structures will best suit their planned chips and how aggressively they will implement some of the offered technologies. Samsung described the yields on current test chips as good, and logic libraries are well into development now.
For the 14nm process the Front End of Line (FEOL) is completely new, the Back End of Line (BEOL) is mostly carried over from 20nm. Metal 1 and higher are the same as the older 20nm process so any characteristics determined by that technology will be constant for all three partners. Samsung is modifying the playbook from there a bit by focusing on tighter poly and contact pitches. There are no new design rules but Samsung is being fairly aggressive in pushing these two areas and likely a few more not discussed too.
The refrain in January was that 14nm would bring no die shrinks, but that isn’t quite the case at Samsung. While it is true that the 50% shrinks of processes past are not going to happen this time, there will be between a 7% and 15% shrink thanks to the poly and contact pitch work. This has been validated by SRAM test parts, they are showing those gains and they are fairly representative of what you can get out of a device.
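If one assumes, purely for illustration, that the pitch tightening is the same linear factor in both directions, die area scales as the square of that factor; the quoted 7% to 15% area shrinks then correspond to only a few percent of linear pitch tightening. The equal-scaling assumption is ours, not Samsung's:

```python
import math

# Illustration of the quoted 7-15% die shrink from tighter poly and contact
# pitches. Assuming equal linear scaling in both directions (our assumption,
# not Samsung's), area ~ s^2, so the pitch tightening needed is modest.
for area_shrink in (0.07, 0.15):
    s = math.sqrt(1 - area_shrink)  # linear scale factor
    print(f"{area_shrink:.0%} area shrink ~ {1 - s:.1%} tighter pitch per direction")
```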
Much of what the customers are doing with the test chips being run at the moment centers on how aggressively they want to push these boundaries for their devices. If they want to take full advantage of what Samsung is doing, the maximum 15% should be achievable, but work is still ongoing. How much each partner chooses to push shrinks will likely be the main differentiator between Samsung, IBM, and Global Foundries.
In the end you will get a chip that looks like it was built on a 20nm process, is sized like it was built on a 20nm process, but has the dynamic range and power consumption of a 14nm chip. You can also order wafers built on the process much sooner than you could a full 14nm process, and reuse much of what you did to build the 20nm variants of your chips. Cost will obviously go up, it always does, but to what degree is still an open question, one unlikely to be publicly answered by any of the players in the near future.
The one question that remains open is what to call this process. All of those offering it stick to the 14nm script but their competition insists that it is 20nm, the rest is spin. SemiAccurate sees both of their points and both are quite valid. The performance is 14nm, the size is 20nm, and there is nothing like it in the past to compare to. So what do you call it? Because there won’t be a full 14nm FEOL + 14nm BEOL process coming from any of the three partners. We will call it 14nm to avoid confusion but won’t argue that it could also correctly be called an enhanced 20nm process.
14nm FinFET implementation of ARM Cortex A-7 [SamsungUSATech YouTube channel, Feb 5, 2013]
Implementing ARM Cortex-A7 in a 14nm Samsung FinFET Process [SoC Design blog of ARM, Feb 5, 2013]
Recently, ARM, Samsung and Cadence announced a joint tapeout of an ARM Cortex™-A7 based test chip on Samsung’s 14nm FinFET process.
This collaboration is significant due to a couple of reasons as detailed in this blog and video below:
14nm/FinFET technology
The importance of FinFETs as the next evolution in process technology was resoundingly validated at ARM TechCon last year, where many of the papers touted improvements in the power/performance curve with the usage of FinFETs. Essentially, designers can get better performance with the same power profile, or lower power with the same performance. A 14nm FinFET process can potentially offer a 40-50% performance increase or a 50% power reduction compared to a 28nm process. With power density threatening to become a roadblock to future system on chip (SoC) innovation, FinFET technology is very welcome news indeed.
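The first-order CMOS dynamic-power relation P ≈ a·C·V²·f shows why a lower supply voltage, one of FinFET's headline benefits, cuts power so sharply. The operating points below are hypothetical illustrations, not Samsung process data:

```python
# Illustrative only: first-order CMOS dynamic power, P = a * C * V^2 * f.
# The 28nm and 14nm operating points below are hypothetical numbers chosen
# to show the shape of the trade-off, not published process figures.

def dynamic_power(activity, cap_farads, volts, freq_hz):
    """First-order dynamic power of a switching CMOS node."""
    return activity * cap_farads * volts ** 2 * freq_hz

p_28nm = dynamic_power(0.1, 1.0e-9, 1.0, 1e9)   # hypothetical 28nm point
p_14nm = dynamic_power(0.1, 0.7e-9, 0.8, 1e9)   # lower C and V, same frequency
print(f"hypothetical power ratio 14nm/28nm: {p_14nm / p_28nm:.2f}")
```

With these assumed factors the ratio comes out near 0.45, in the ballpark of the ~50% power reduction quoted in the blog; the real gain depends on the actual capacitance and voltage scaling of the process.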
However, applying a new process technology such as 14nm/FinFETs still requires much effort. The process technology has to mature, EDA methodologies have to be established, and libraries and IP have to be developed. The test chip tapeout referred to above includes a 14nm/FinFET Cortex-A7 along with ARM 14nm/FinFET libraries and a Samsung 14nm SRAM. It was implemented with Cadence’s 14nm methodology during an 8-week period. The tapeout is an important milestone that shows progress in the industry’s move towards being able to mass produce 14nm/FinFET SoCs.
The ARM Cortex-A7
The ARM Cortex-A7 processor is starting to go mainstream in new high-end smartphone applications. Today, the ARM Cortex-A7 is used as the “LITTLE” part in ARM’s big.LITTLE configuration along with the ARM Cortex-A15, offloading computing tasks and improving energy efficiency and battery life.
In addition, it’s also being targeted as the main processor core in new entry level and mid-range smartphones, significantly improving power, performance and area of those devices that used a previous generation core. For applications that don’t need the peak computing power of a Cortex-A15 or Cortex-A9 processor, the Cortex-A7 alternative offers reasonably good performance at significantly lower power and area.
14nm/FinFET technology can potentially amplify the advantage of ARM Cortex-A7 processors by further improving the Cortex-A7’s power/performance benefits.
The entire tapeout project was completed within a tightly-packed schedule of 8 weeks. During that time, engineers from Samsung, ARM and Cadence located in multiple locations around the globe (Korea, Taiwan, U.K., Germany and the U.S.) worked together diligently to make this tapeout successful. As a side note, the tapeout also demonstrates the realities of today’s large design teams, where designers must interact with others in different locations and time zones.
The Methodology Used
As mentioned earlier, the Cadence 14nm methodology was used for this tapeout. This included the following:
– Virtuoso 6.1.5 and the Advanced Node environment were used for the standard cell design
– RTL Compiler, Encounter Digital Implementation System (EDI System) and NanoRoute were used for synthesis and place-and-route respectively
– QRC, Encounter Timing System (ETS), and Encounter Power System (EPS) were used for extraction, as well as timing and power signoff.
From a user perspective, a 14nm/FinFET design methodology is similar to 20nm. Double-patterning is required at 14nm, just like 20nm. Under the hood, EDI System and NanoRoute handle 14nm/FinFET design rules automatically, including same-mask metal rules to prevent double patterning conflicts. The use models for QRC, ETS and EPS signoff at 14nm are also similar to that of 20nm. However, to ensure correct handling of the 14nm/FinFET design rules, as well as making sure the new libraries could be used efficiently, R&D teams from all three companies had to work closely together.
In short, the tapeout of ARM’s Cortex-A7 on Samsung’s 14nm/FinFET process is a significant milestone towards 14nm readiness. We’re all expecting to see more partnership work come to fruition too, bridging the gap between early testing and production. For more information, see the feature story at Cadence.com. In addition, visit the ARM, Cadence and Samsung booths at the Common Platform Forum in Santa Clara, CA. Dipesh Patel, EVP and General Manager – PIPD, PIEX, ARM, also will discuss this collaboration further in his keynote.
Samsung Library and IP Offerings, DAC 2013 IP Talks! – James Bong, Samsung [chipestimate YouTube channel, July 19, 2013]
From: Mentor, Freescale, Samsung DAC talks: EDA, IoT & Mobile growth & challenges [SoC Design blog of ARM, June 7, 2013]
…
Stephen Woo, Samsung Electronics: mobile growth, problems with solutions, and challenges [watch the video record of the KEYNOTE New Challenges for Smarter Mobile Devices]
Tuesday morning’s main keynote was presented by Dr. Stephen Woo of Samsung Electronics to a completely full auditorium with 200 additional standing attendees. Dr. Woo has a history with DAC, having won the best paper award there in 1994. His talk looked at the impact of smart mobile devices, 3 technical problems with solutions, and some challenges for the future. Dr. Woo stated that smart mobile devices, no longer PCs, are driving today’s semiconductors. Today’s mobile phones are multimedia application tools that can also make calls. It’s hard to believe, but Apple’s iPhone launched 5 years ago (James Bruce’s blog reminds us how simple it was), while Samsung’s Galaxy phone powered by Exynos launched only 3 years ago. Smartphones are still a young field. According to Dr. Woo, typical smartphones have over a dozen chips (95% of which are ARM-based). With the increased use of mobile applications, phones are driving higher computing and bandwidth requirements.
Overcoming space, applications and battery obstacles in mobile devices
Dr. Woo followed by presenting 3 technical problems (with solutions) for mobile phones: space (integration), applications (compute power), and the battery and heat problem (low power/thermal). Space is being addressed with integration. The speed of development has increased 2x in recent years: now a new node is introduced every year. Applications are being addressed by increasing computing power with new generations of ARM processors. For the latest-generation Exynos Octa processor, Samsung selected ARM’s big.LITTLE™ processing technology (video).
Smart mobile devices will still be drivers for semiconductors with more and better applications, such as biometrics. Finally, Samsung is addressing the battery and heat problems with advancements with low power (including ARM’s physical IP) and thermal technology.
Challenges to be solved
Dr. Woo continued with some challenges that have yet to be solved. Are we achieving true SOC? No, we have a system of dozens of chips. Can the industry address this with flexible ICs and PCBs? What about chip on plastic? What about system on display? How can the advantages of flexible display be realized? Dr. Woo concluded by saying that EDA and semiconductors have been doing a great job together to create smart mobile devices. Dr. Woo looks forward to solving these new challenges together. Richard Goering also covered this keynote.
…
Samsung Now Mass Producing Industry’s Fastest Embedded Memory [July 26, 2013]
… Featuring an interface speed of 400 megabytes per second (MB/s), the lightning-fast eMMC PRO memory provides exceptionally fast application booting and loading. The chips will enable much faster multi-tasking, web-browsing, application downloading and file transfers, as well as high-definition video capture and playback, and are highly responsive to running large-file gaming and productivity applications. … Samsung’s eMMC PRO memory chips, being produced in 16, 32 and 64GB versions, are based on Samsung 64Gb 10nm class* NAND flash technology. The new Samsung chips support the eMMC version 5.0 standard now nearing completion at JEDEC – the largest standards-setting body in the microelectronics industry. … As the fastest eMMC devices at more than 10 times the speed of a class 10 external memory card (which reads at 24MB/s and writes at 12MB/s), the new mobile memory greatly enhances the movement from one application to another in multitasking activities. …
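The "more than 10 times" claim is easy to verify from the figures given in the release:

```python
# Check the speed multiple claimed in the release: the 400 MB/s eMMC PRO
# interface vs. the quoted class 10 card figures (24 MB/s read, 12 MB/s write).
emmc_mb_s = 400
class10_read_mb_s, class10_write_mb_s = 24, 12

print(f"vs class 10 read:  {emmc_mb_s / class10_read_mb_s:.1f}x")
print(f"vs class 10 write: {emmc_mb_s / class10_write_mb_s:.1f}x")
```

That works out to roughly 17x the card's read speed and 33x its write speed, comfortably over the quoted 10x.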
Samsung Now Producing Industry’s Highest Density (3GB) LPDDR3 Mobile Memory for Smartphones [July 24, 2013]
… today announced the industry’s first mass production of three gigabyte (GB) low power double data rate 3 (LPDDR3) mobile DRAM, the highest density mobile memory solution for next-generation smartphones, which will bring a generation shift to the market from the 2GB packages that are widely used in current mobile devices.
The Samsung 3GB LPDDR3 mobile DRAM uses six of the industry’s smallest 20-nanometer (nm) class* four gigabit (Gb) LPDDR3 chips, in a symmetrical structure of two sets of three chips stacked in a single package only 0.8 millimeters high. With a full line-up of package dimensions, Samsung’s new ultra-slim memory solutions will enable thinner smartphone designs and allow for additional battery space, while offering a data transfer speed of up to 2,133 megabits per second (Mbps) per pin.
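The package arithmetic in the release checks out: six 4-gigabit dies give 24 Gb, i.e. 3 GB, split evenly across the two channels:

```python
# Sanity check of the 3GB LPDDR3 package description quoted above: six 4 Gb
# (gigabit) dies in two symmetrical three-die stacks, one per data channel.
dies, gbit_per_die = 6, 4
total_gbit = dies * gbit_per_die
total_gbyte = total_gbit / 8          # 8 bits per byte
per_channel_gbyte = total_gbyte / 2   # two symmetrical channels

print(f"{total_gbit} Gb = {total_gbyte:.0f} GB total, {per_channel_gbyte} GB per channel")
```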
…
With the increased mobile DRAM capacity, users can enjoy seamless high-quality, Full HD video playback and faster multitasking on their smartphones. Also, the new LPDDR3 speeds up data downloading and is able to offer full support for LTE-A (LTE Advanced) service, a next-generation mobile telecommunication standard.
Samsung’s 3GB LPDDR3 DRAM connects with a mobile application processor using two symmetrical data transfer channels, each connected to a 1.5GB storage part. Though asymmetric data flow can cause sharp performance dips at certain settings, the symmetrical structure avoids such issues, while maximizing system level performance.
Considering that typical PC memory capacity is currently about 4GB, offering 3GB of DRAM on mobile devices should help most users enjoy PC-like performance, narrowing the performance gap between PC and smartphone computing. With the new 3GB LPDDR3 DRAM, Samsung is now offering the widest range of mobile DRAM densities (1GB, 2GB and 3GB), while providing the industry’s first mobile DRAM based on 20-nm class process node technology. Samsung plans to continue to lead the growth of the mobile memory market, as it seeks to maintain unrivaled competitiveness in the premium memory sector.
Samsung Samples Industry’s First 16-Gigabyte Server Modules Based on DDR4 Memory technology [July 3, 2012]
Samsung Electronics Co., Ltd., the world leader in advanced memory technology, today announced that it has begun sampling the industry’s first 16-gigabyte (GB) double data rate-4 (DDR4), registered dual inline memory modules (RDIMMs), designed for use in enterprise server systems.
“By launching these new high-density DDR4 modules, Samsung is embracing closer technical cooperation with key CPU and server companies for development of next-generation green IT systems,” said Wanhoon Hong, executive vice president, memory sales & marketing, Samsung Electronics. “Samsung will also aggressively move to establish the premium memory market for advanced applications including enterprise server systems and maintain the competitive edge for Samsung Green Memory products, while working on providing 20 nanometer (nm) class* based DDR4 DRAM in the future.”
Using 30nm-class* process technology, Samsung sampled new 8GB and 16GB DDR4 modules in June, in addition to providing them to major CPU and controller makers. The modules will bring the highest density and performance levels to premium enterprise server systems. Samsung previously introduced the industry’s first 30nm-class 2GB DDR4 module in December, 2010.
Employing new circuit architecture for computing systems, DDR4 technology boasts the highest performance among memory products available for today’s computing systems, which by next year will reach twice the current 1,600 megabits per second (Mbps) of DDR3 based modules. Also, by processing data far more efficiently at a mere 1.2 volts, Samsung’s DDR4 modules will reduce power consumption by approximately 40 percent compared to its predecessor DDR3 modules operating at 1.35V.
Samsung will keep working on completion of the JEDEC (Joint Electron Device Engineering Council) standardization of DDR4 technologies and product specifications, which is expected to be accomplished by August.
The company said it will work closely with its customers including server OEMs, as well as CPU and controller makers, to expand the market base for high-density DDR4 modules, of which it plans to begin volume production next year. It also is set to expand the overall premium memory market with its most advanced 20nm-class based DDR4 DRAM products, which will be available sometime next year at densities up to 32GB.
Samsung has been leading the advancement of DRAM technology ever since it developed the industry’s first DDR DRAM in 1997. In 2001, it introduced the first DDR2 DRAM, and in 2005, announced the first DDR3 using 80nm-class* technology. For more information about Samsung Green memory, visit www.samsung.com/GreenMemory
* Editors’ Note : 20nm-class means a process technology node somewhere between 20 and 29 nanometers, and 30nm-class means a process technology node somewhere between 30 and 39 nanometers, while 80nm-class means a process technology node somewhere between 80 and 89 nanometers.
From: Samsung DDR4 SDRAM: The new generation of high-performance, power-efficient memory that delivers greater reliability [brochure, July 17, 2013]
…
Samsung DDR4 is an optimized solution for highly virtualized environments, high-performance computing and networking. Semiconductor modules of Samsung DDR4 are designed with new system circuit architecture to deliver higher performance with low power requirements than previously available memory products.The Samsung portfolio of DDR4-based modules using 20nm-class process technology includes registered dual inline memory modules (RDIMMs) and load-reduced DIMMs
(LRDIMMs). These memory modules are available with initial speeds up to 2400 Mbps, increasing to the Joint Electron Devices Engineering Council (JEDEC)-defined 3200 Mbps.
The portfolio includes the following modules:
8 GB DDR4 RDIMMs
16 GB DDR4 RDIMMs
32 GB DDR4 RDIMMs and LRDIMMs
64 GB DDR4 LRDIMMs
128 GB DDR4 LRDIMMs
…
From: DRAM Market Grows Up; Industry’s Newfound Maturity Yields Growth Amid Adversity [IHS press release, June 19, 2013]
… After DRAM wafer output peaked in 2008 at 16.4 million 300-millimeter-equivalent wafers, production is expected to decline by 24 percent to 13.0 million this year, according to an IHS DRAM Dynamics Market Brief from information and analytics provider IHS (NYSE: IHS).
The projected cut will be the second straight year of deliberate downsizing following an 8 percent drop-off last year. This year’s output is expected to be slashed by 5 percent compared to 2012, as shown in the attached figure.
… Nearly 65 percent of all DRAM bit shipments went to a desktop or laptop 10 years ago, but that figure is less than 50 percent today and will fall further to south of 40 percent by the end of next year.
Meanwhile, servers and mobile gadgets like smartphones and tablets command an increasing share of DRAM bit shipments.
… The Taiwanese are no longer the powerhouse suppliers they used to be, while notable DRAM makers Qimonda of Germany and Elpida Memory of Japan have gone bankrupt and have been bought out by other players. By the end of this year, only three DRAM manufacturers will remain—Samsung and SK Hynix of South Korea, and U.S.-based Micron Technology. With fewer entities to influence the market, a more conservative approach toward capacity expansion is expected, and more stable growth can follow.
A final factor helping the global DRAM business is the slower pace of advancement in DRAM manufacturing processes. Each new generation of DRAM manufacturing technology is now taking longer to arrive.
The engineering challenges associated with shrinking DRAM size smaller than 30 nanometer [the 20nm class]— and eventually below 20 nanometer [the 10nm class]—are considerable.
The slowing cadence in manufacturing process evolution is resulting in slower bit growth, which is keeping supply in better balance with demand.
…
Samsung Brings Enhanced Mobile Graphics Performance Capabilities to New Exynos 5 Octa Processor [July 23, 2013]
Samsung Electronics Co., Ltd., a world leader in advanced semiconductor solutions, today introduced the latest addition to the Exynos product family with top level of graphic performance driven by a six-core ARM® Mali™-T628 GPU processor for the first time in the industry. With mobile use case scenarios becoming increasingly complex, Samsung’s newest eight-core ARM Cortex™ application processor gives designers a powerful, energy efficient tool to build multifaceted user interface capabilities directly into the system architecture. Samsung will demonstrate the new Exynos 5 family at SIGGRAPH 2013 in the ARM booth, #357; Exhibit Hall C at the Anaheim Convention Center.
Samsung’s new Exynos 5 Octa (product code: Exynos 5420), based on ARM Mali™-T628 MP6 cores, boosts 3D graphic processing capabilities that are over two times greater than the Exynos 5 Octa predecessor. The newest member of the Exynos family is able to perform General-Purpose computing on Graphics Processing Units (GPGPU) accelerating complex and computationally intensive algorithms or operations, traditionally processed by the CPU. This product also supports OpenGL® ES 3.0 and Full Profile Open CL 1.1, which enables the horsepower needed in multi-layer rendering of high-end, complex gaming scenarios, post-processing and sharing of photos and video, as well as general high-function multi-tasking operations.
“ARM welcomes the latest addition to the successful Exynos Octa 5 series, which uses ARM’s Mali GPU solution to dramatically improve graphics performance,” said Pete Hutton, executive vice president & general manager, Media Processing Division, ARM. “ARM big.LITTLE™ and ARM Artisan® Physical IP technologies continue to be at the heart of the Octa series and now complement the new functionality brought by ARM GPU Compute. This combination enables unprecedented capabilities in areas such as facial detection and gesture control, and brings desktop-quality editing of images and video to mobile devices.”
“Demand for richer graphic experiences is growing rapidly nowadays,” said Taehoon Kim, vice president of System LSI marketing, Samsung Electronics. “In order to meet that demand from both OEMs and end users, we developed this processor which enables superb graphical performance without compromising power consumption.”
The newest Exynos processor is powered by four ARM Cortex®-A15™ processors at 1.8GHz with four additional Cortex-A7™cores at 1.3 GHz in a big.LITTLE processing implementation. This improves the CPU processing capability by 20 percent over the predecessor by optimizing the power-saving design.
In addition, the mobile image compression (MIC) IP block inside this System-on-Chip successfully lowers the total system power when bringing pictures or multimedia from memory to display panel. This feature results in maximizing the usage hours of mobile devices with a high-resolution display such as WQXGA (2500×1600), in particular when browsing the web or doing multimedia application requiring the frequent screen refresh.
The new Exynos 5 Octa processor also features a memory bandwidth of 14.9 gigabytes per second paired with a dual-channel LPDDR3 at 933MHz, enabling an industry-leading fast data processing and support for full HD Wifi display. This new processor also incorporates a variety of full HD 60 frames per second video hardware codec engines for 1080p video recording and playback.
The new family of Exynos 5 Octa is currently sampling to customers and is scheduled for mass-production in August.
Samsung SSD 840 Evo – @2013 Samsung SSD Global Summit in Seoul [Notebookitalia YouTube channel, July 18, 2013]
Samsung Unveils New Solid State Drives at its Annual SSD Global Summit [July 18, 2013]
New High-speed 1TB SSD to expedite transition to SSDs
… Samsung unveiled new high-performance, high-density SSDs that offer over 1TB memory storage. Among the highlights were the 840 EVO, a consumer-oriented entry-level, high-performance SATA based SSD offering up to 1TB, and the XS1715, an ultra-fast NVMe* SSD for enterprise storage use offering up to 1.6TB.
As part of its strategy to expand into the consumer market and further popularize SSDs, Samsung plans to initially introduce the Samsung SSD 840 EVO to major global markets in early August. Samsung will expand into additional markets at a later date.
The new Samsung SSD 840 EVO line-up makes use of the industry’s most compact 10-nanometer class** 128Gb high-performance NAND flash memory, which Samsung began mass producing in April. With these chips and Samsung’s proprietary multi-core controller, the Samsung SSD 840 EVO achieves unrivaled value for performance with improved sequential read and write speeds.
In addition, Samsung has developed the XS1715, the industry’s first 2.5-inch NVMe SSD line-up. This device will expand Samsung’s market base for enterprise SSDs, and the company will make them available in the second half of this year.
The new NVMe SSD XS1715 delivers random read performance that is over 10 times faster than Samsung’s former high-end enterprise storage SSD. The new NVMe SSD utilizes both the PCIe 3.0 interface, which is approximately two times faster than the PCIe 2.0 interface, and NVM express technology which accelerates the SSD’s overall speed. …
Samsung Now Mass Producing Industry’s First PCI-Express SSD for Ultra-slim Notebook PCs [July 17, 2013]
Samsung Electronics Co., Ltd. … has begun mass producing the industry’s first PCI-Express (PCIe) solid state drive (SSD) for next-generation ultra-slim notebook PCs. … Samsung started providing the new SSD to major notebook PC makers earlier this quarter. The XP941 lineup consists of 512, 256 and 128GB SSDs. … Samsung intends to continuously expand its production volumes of high-performance 10-nanometer class* NAND flash memory, in helping the company to maintain its lead in PCIe SSDs for ultra-slim PCs and notebook PCs. Furthermore, Samsung plans to introduce next-generation enterprise NVMe SSDs in a timely manner to also take the lead in that high-density SSD market, adding to its competitive edge.
Samsung Now Producing High-Performance SSD for Enterprise Servers and Data Centers [May 21, 2013]
… SM843T, for use in high-performance servers and storage in next-generation data centers, including Big Data systems. … strengthens Samsung’s SATA interface enterprise SSD product lineup. Offering up to 960 gigabytes (GB) of memory storage for faster and more efficient Big Data systems and cloud computing environments, the SM843T offers the industry’s most advanced performance level for SATA 6.0 SSDs. … IT managers will see a performance gain of 6 times and energy savings of 30 percent over the widely-used hard disk drives, enabling sharply improved system-level performance and greater energy efficiency at the same time.
… Through mass production of the SM843T, Samsung’s competitive edge in the advanced PC market has expanded into the high value-added enterprise SSD market with the company providing highest quality solutions to its customers. … According to IHS iSuppli, the global SSD market is expected to reach approximately USD 10 billion in revenues by the end of 2013, a 43 percent increase over the previous year, led by sales of enterprise SSDs which it expects will account for approximately 47 percent of the market in 2013.
From: SM843T Data Center Series: Data Center MLC-class SSD High-Performance, Consistently Low Latency and Extreme Write Endurance [Brochure, May 21, 2013]
Samsung has released the SM843T SSD, utilizing consumer 20nm-class MLC NAND Flash, which features consistently low latency, high write endurance, power-loss protection, coupled with a high-level of sustained writes (IOPS)—all at capacitates up to 960GB at an extremely affordable solution. Here are more details about his new drive’s outstanding features:
Exceptional Low Latency and High Write Endurance
…
Enterprise Power-Loss Protection
…
High-Capacity SSDs Available
…
A Workhorse of a Drive
The Samsung SM843T is optimized for sustained random read and write workloads (98,000 IOPS/15,000 IOPS). This represents a 6x increase in write IOPS, in the same price category, as our last generation, award-winning PM830 SSDs.
Samsung Now Producing Four Gigabit LPDDR3 Mobile DRAM, Using 20nm-class Process Technology [April 30, 2013]
Samsung Electronics Co., Ltd., the world leader in advanced memory technology, today announced the industry’s first production of ultra-high-speed four gigabit (Gb) low power double data rate 3 (LPDDR3) mobile DRAM, which is being produced at a 20 nanometer (nm) class* process node.
The new 4Gb LPDDR3 mobile DRAM enables performance levels comparable to the standard DRAM utilized in personal computers, making it an attractive solution for demanding multimedia-intensive features on next-generation mobile devices such as high-performance smartphones and tablets.
… According to market research firm, Gartner, the DRAM market is forecast to grow by 13 percent year-over-year to reach $29.6 billion (US) in 2013, with mobile DRAM to exceed $10 billion in sales, for 35 percent of the total DRAM market.
Samsung Mass Producing High-Performance 128-gigabit 3-bit Multi-level-cell NAND Flash Memory [April 11, 2013]
Samsung Electronics Co., Ltd. … announced today that it has begun mass producing a 128-gigabit (Gb), 3-bit multi-level-cell (MLC) NAND memory chip using 10 nanometer (nm)-class* process technology this month. The highly advanced chip will enable high-density memory solutions such as embedded NAND storage and solid state drives (SSDs). … Samsung started production of 10nm-class 64Gb MLC NAND flash memory in November last year, and in less than five months, has added the new 128Gb NAND flash to its wide range of high-density memory storage offerings. The new 128Gb chip also extends Samsung’s 3-bit NAND memory line-up along with the 20nm-class* 64Gb 3-bit NAND flash chip that Samsung introduced in 2010. Further, the new 128Gb 3-bit MLC NAND chip offers more than twice the productivity of a 20nm-class 64Gb MLC NAND chip. …
President Park calls on Korean companies to participate in China’s western development project [arirangnews YouTube channel, June 30, 2013]
Regarding the #2 capex beneficiary, the Display Panel Segment (or as traditionally called: Samsung Display), we know the following:
From: Samsung bolsters OLED display biz [The Korea Times, July 22, 2013]
… The company currently dominates the global demand for small- and medium-sized OLED screens because it supplies such screens to Samsung Electronics, the world’s biggest smartphone manufacturer.
But the situation is different in the large-sized TV OLED screen segment. Its biggest rival, LG Display, is catching up by boosting its price-competitive white-based OLED technology.
Major TV manufacturers are also shifting their focus to OLED TVs, which have better profit margins than the currently popular LCD TVs.
In a statement to The Korea Times, Kim Ho-jung, senior manager of Samsung Display’s communication team, said that the firm’s so-called zero-pixel defect OLED (ZPD OLED) screens are better than those of rivals in terms of picture quality and customer value.
“As the world’s most-trusted TV manufacturer, Samsung was consistent in developing technologies. We are confident we will get tangible results by pushing for ZPD OLED screens. The displays have received a warm response both from individual and business clients,” said Kim.
The manager said that the firm has been adjusting panel designs and applying an advanced processing technology to address technological problems such as contamination control. Addressing such problems is key to OLED screens because of their complexity. …
… The company will order equipment from local manufacturers for its third OLED plant, A3. The plant will use sixth-generation glass-cutting technology to produce displays.
Samsung plans to produce OLED screens in this new plant for both tablets and TVs. The plant will go online during the first half of next year, market analysts expect.
They said Samsung Display will thrive because OLED applications will continue to increase, especially as Samsung Electronics releases new product variants using OLED screens. …
Samsung Display to commercialize ‘unbreakable display’ this year [MK News, May 15, 2013]
As early as the fourth quarter of this year, consumers may see a new smartphone whose display is unbreakable when it falls onto the floor. It marks the beginning of an age of flexible displays.
According to industry sources Wednesday, Samsung Display and Samsung Corning Precision Materials successfully developed an unbreakable display early this year and enter into commercial production in the second half of this year. This new display is likely to be used for Samsung’s new strategic smartphone Galaxy Note 3 to be released late this year.
To date, the smartphone display is made from glass, which can be easily broken. To resolve this, Samsung Display developed plastic AMOLED and Samsung Corning released a film-coated display with increased durability.
Consumers tend to be less attentive to the glass part of their smartphone and this is easily broken when it falls, but there will be no worry of this new display is available, an industry source noted.
Plastic materials will also contribute to making lighter smartphones. Plastic AMOLED weighs just one third that of LCD and one half that of conventional AMOLED. The display part accounts for about half of the weight of a smartphone device, causing companies to focus on lighter design.
“Galaxy Note 3” to feature Qulacomm chip, 5.7” screen [MK News, July 25, 2013]
The “Galaxy Note 3,” Samsung’s ambitious work slated for debut in the second half (H2) of this year, will feature a 5.7-inch display. The device will also come with Qualcomm’s mobile application processor “Snapdragon 800,” which runs on LTE-A service, offering twice the speed of LTE.
A knowledgeable source for Samsung Electronics noted, “Samsung had fine-tuning procedures over the specifications of the Galaxy Note 3, which will come to the market in September,” adding “the Galaxy Note 3 will arrive with a 5.7-inch screen, contradicting the earlier rumored 5.99-inch screen”
Samsung Electronics initially designed the Galaxy Note 3 with a 5.99-inch display, but decided to roll out the device with a 5.7-inch screen taking into account the market response to its phablet Galaxy Mega (6.3 inch/5.8 inch). The Galaxy Note series are evolving with wider screens of 5.3, 5.5 and 5.7-inch. The Galaxy Note 3 will offer a screen that seems to be as wide as six-inch, by utilizing the bezel technology like the five-inch “Galaxy S4.”
With the beginning of the LTE-A era, there will be a change in the application processor (AP) that the Galaxy Note 3 will be equipped with. The Galaxy Note 3 to be released in the LTE-available nations will carry Qualcomm Snapdragon 800 instead of Exynos 5 Octa.
Samsung reportedly will reveal the Galaxy Note3 at Berlin’s IFA 2013 conference in September.
But on the same day 갤노트 3, 휘는 폰으로 낼까 말까? article Asian Economy:
The Samsung Galaxy Note 3 to be released in September with plastic organic light-emitting diode (OLED) is struggling whether to mount with that or not. Originally equipped with plastic OLED Galaxy Note 3 is developing problems such as yield, but unto the end of August, it is expected to be finalized for deployment. There are 25 days, according to the industry, for the Samsung Galaxy Note 3 to decide whether the plastic OLED could be mounted or not. …
The major attraction of this business segment was last touted in AMOLED Displays to Have Major Influence on Innovation in the Cloud Computing Era [BusinessWire, May 21, 2013]
AMOLED Displays to Have Major Influence on Innovation in the Cloud Computing Era
“In the cloud computing era, AMOLED displays are most likely to have the greatest amount of influence on innovation in smart devices.” Kinam Kim, CEO of Samsung Display, delivered this statement as part of a keynote speech on “Display and Innovation” to attendees at the Society for Information Display’s Display Week 2013 in the Vancouver Convention Centre today.
During the keynote speech, Mr. Kim said that the future of displays will change considerably, with special attention to be given for the virtually infinite number of imaging possibilities in AMOLED (Active Matrix Organic Light Emitting Diode) display technology.
Mr. Kim emphasized that three evolving “environments” are likely to make displays the central focus of the increasingly pervasive use of electronic devices.
The first environment is the spread of cloud computing. In the cloud environment, the capability of electronic networked devices for data processing and storage will be extended infinitely, allowing users everywhere to easily enjoy content that only highly advanced devices can fully process today, including ultra HD (3840 x 2160) images and 3D games. Higher levels of display technology will be required to support our increasing reliance on the cloud.
The second environment is the accelerating evolution of high-speed networks. By 2015, the velocity of 4G LTE will rise to 3 gigabits per second (Gbps), so the transmission time for a two-hour UHD-resolution movie will be under 35 seconds. Mr. Kim said, “As image quality of video content improves, larger and even more vibrant displays will emerge as a key differentiating point in mobile devices.”
The third environment is the spread of connectivity among electronic devices. As the use of Wi-Fi networks explodes, the N-Screen era is on its way. A massive network environment will be established by connecting not only smartphones and tablet PCs but also automobiles, home appliances and wearable computing devices. Due to this explosion in “data flow,” there will be a huge surge of interest in touch-enabled displays.
Mr. Kim said that the innovative advantages of AMOLED technology will allow consumers to realize more possibilities in electronic convenience than we might have ever imagined.
The first innovative advantage of AMOLED, according to Mr. Kim, is the superiority of its color. AMOLED displays can embody true colors closest to natural colors with their color space 1.4 times broader than that of LCD displays. By offering the world’s broadest color gamut – supporting nearly 100% of the Adobe RGB color space, AMOLED will expand the range of displays well suited to printed media, where specialized color is frequently required.
The second innovative advantage of AMOLED is its flexibility and transparency. AMOLED displays can maximize portability by making devices foldable and rollable, and they can also lead innovation in product designs with advantages in curved forms, transparent panels, and lighter weight than other display technologies.
The third innovative advantage of AMOLED displays will be their responsiveness to touch and sensors for detecting all five human senses. Using Samsung’s new Diamond Pixel™ technology, which has been optimized for the human retina, AMOLED displays can now depict natural colors and images with super high resolution.
Mr. Kim went on to say that display applications, with advantages of AMOLED technology, will rapidly spread throughout other business sectors like the automotive, publishing, bio-genetic and building industries.
In the automotive business, AMOLED displays will replace conventional glass and mirrors that have been used for digital mirrors and head-up displays. Capitalizing on their advantages with flexibility, durability and high resistance to temperature changes, AMOLED display panels also will be used for watch displays and for products in the fashion and health care market sectors. Further, in publication and building, AMOLED displays will set the trend for the building market sector with AMOLED architectural displays in and outside buildings being used as highly desirable decorative and information-delivering products.
Mr. Kim expressed confidence that “The display market is unlimited in the amount of growth that it can achieve, as technical innovation continues to accelerate.” And he added that “Samsung Display will play a leading role in the global display industry, as the display company possessing the most advanced AMOLED technology.”
Samsung Curved OLED TV commercial 삼성전자 곡면 OLED TV [n35a2 YouTube channel, July 3, 2013]
Then there are the latest technology advances with Samsung Display Showcasing State-of-the-Art Mobile to Extra-Large-Sized Displays at Display Week 2013 [BusinessWire, May 20, 2013]
Display Week 2013
VANCOUVER, British Columbia–(BUSINESS WIRE)–Samsung Display announced today that it is showcasing several industry-leading technologies and mobile to extra-large-sized display prototypes at the Society for Information Display’s Display Week 2013, May 21-23, 2013, in the Vancouver Convention Centre (Booth 700). These include a Full HD (1920×1080) mobile AMOLED display with the world’s broadest color gamut, and an 85-inch Ultra HD (3840×2160) LCD TV panel with extremely vivid color and low power consumption.
In addition, Samsung Display shows a unique new Diamond Pixel™ technology being highlighted at the show, and a featured LCD technology that enables local-dimming control in direct LED-based LCD panels.
The world’s first mass-produced 4.99-inch Full HD mobile AMOLED display offers the world’s broadest color gamut with a 94 percent average rate of reproduction for the Adobe RGB color space. The Adobe RGB standard is about 30 percent broader than general sRGB standards.
Samsung Display fulfills the most advanced mobile AMOLED display demands with its Diamond Pixel™ technology. This technology, based on the idea that the human retina reacts more to green than other colors, places more green than red and blue pixels in the pixel structure of AMOLED display panels.
With the new technology, Samsung’s Full HD AMOLED display can provide text messages 2.2 times clearer than HD (1280×720) displays. So, when curvilinear letters on the panel are magnified two or three times, Samsung’s Diamond Pixel™ technology enables text to be reproduced more smoothly (fewer “jaggies”) and accurately than those produced with conventional LCD technology.
Samsung Display is also providing Display Week participants with firsthand experience comparing the color gamut, color accuracy and letter quality of Full HD AMOLED displays in a special “experience zone” within its booth. The booth will provide a clear comparison between AMOLED and LCD displays. Attendees can see not only true crisp colors in the intricate wing pattern of morpho butterfly images, but can also view an image of a strand of knitting wool so detailed that it can only be appreciated using a Full HD AMOLED display.
Furthermore, Samsung Display’s exhibit of an 85-inch ultra HD TV panel showcases a LCD technology that enables local-dimming control in a direct LED-based LCD panel. The panel can save 30 percent of typical LED BLU power consumption. Its local-dimming control enables vivid color rendering including incredible black images, 80 percent brightness uniformity, and a remarkably-enhanced contrast ratio.
For the latest in green technology, Samsung Display is highlighting advanced power-saving solutions for smart mobile devices including smartphones and tablets. Here, Samsung Display has innovatively reduced power consumption of AMOLED display by enhancing the luminous efficacy of AMOLED pixels. Samsung Full HD AMOLED displays provide a 25 percent power-savings over that of existing HD AMOLED displays.
Samsung Display is also exhibiting a 10.1-inch WQXGA (2560 x 1600) LCD for tablets and a 13.3-inch WQXGA+ (3200 x 1800) LCD for notebooks, which each can deliver 30 percent greater power-savings than that of existing LCD tablet displays, by decreasing the number of driver circuits and increasing the efficiency of the LED BLU.
Also, Samsung is spotlighting a 23-inch multi-touch LCD display that can detect 10 touch points simultaneously. The prototype enables playing of the piano with exceptional finesse, or drawing a highly detailed picture on a monitor or a tablet.
About Samsung Display Co., Ltd.
Samsung Display Co., Ltd. is a global leader in display panel technology and products. Employing approximately 39,000 people at seven production facilities and nine sales offices worldwide, Samsung Display specializes in high-quality displays for consumer, mobile, IT and industrial usage, including those featuring OLED (organic light emitting diode) and LCD technologies. As a total solution provider, Samsung Display strives to advance the future with next-generation technologies featuring ultra-thin, energy-efficient, flexible, and transparent displays. For more information, please visit www.samsungdisplay.com.
End of Updates
Praise from competing Taiwan attributing 30 years of Samsung’s well continued success in the “classic” high-tech component space of DRAMs to nothing else than the exceptional “talent management” practice, the cornerstone of the “New Management” introduced in 1993: The lesson to be learned from Samsung: Q&A with Inotera chairman Charles Kau [DIGITIMES, June 27, 2013], you can read the full interview in the end of this post
Q: The failure of Taiwan’s DRAM industry has somehow deepened local makers’ hostility against Samsung Electronics. What is your comment on Samsung?
My own insert here: “[1:10] The efforts and determination with ongoing enourmous investment have made Samsung the world’s leader in memory chip production since early days of 64K DRAM. [1:22]” from Samsung Electronics – Semiconductor Promotional Video 1997 [DatasheetArchiveLtd YouTube channel] below in order to show that the company’s high-tech lead was achieved before “New Management” (DRAM history info is from Samsung), although it was possible to continue only because of that, should be added here as well:
Sidenote #1: With this 256Mb DRAM chip Samsung was able to surge ahead of the Japanese in the same way as those were able to beat Intel, actually forcing Intel out of the DRAM business. See: It’s a Strategic Inflection Point [‘USD 99 Allwinner’, Dec 1, 2012]. If Samsung “New Management” had not been introduced in 1993 Samsung quite probably had been overtaken by the eager Taiwaneese DRAM manufacturers, in the same way as it happenned to Intel, and then to Japanese manufacturers. If you read the full interview in the end of this post you will understand the kind of “failure” of the whole Taiwanese DRAM industry in its entirety.
A: Many think that Samsung’s achievements rely on support from Korea’s government. But that is only half right. Indeed, Samsung did receive a large amount of government aid prior to 2000, but it has continued to strive after 2000 optimizing its management efforts under company chairman Lee Kun-hee.
Lee has stressed cultivating the company’s own pool of talent, considering it the most valuable asset of the company. But in Taiwan, most businesses have focused on how to reduce production costs and have ignored the value of talent.
Instead of devising measures to fight or compete against Korean companies, Taiwanese companies should figure out how Samsung became as powerful as it is today. After all, we should respect Samsung for its long-term efforts to cultivate its talent, and the way it treats talent – the people who have created the value of Samsung.
The typical Western view is not as mature as the Taiwanese one, with statements like “Samsung is a fast follower” in Samsung’s Secret Sauce: A Bloomberg Exclusive [Bloomberg YouTube channel, March 28, 2013], although it does recognize Lee Kun-hee’s role
In fact there were 20 years (see later) of relentless talent management and design innovation, the two core elements of Samsung’s global success: watch the Samsung PREMIERE 2013 GALAXY & ATIV Highlights June 20 event video as the latest “demonstration” of the “results” [SamsungTomorrow YouTube channel, June 24, 2013]
GALAXY NX, The First Interchangeable-Lens Camera with 3G/4G LTE & Wi-Fi Connectivity:
– With 3G/4G LTE technology, the GALAXY NX allows photographers to stay constantly connected with their world and to express their experiences immediately.
– Superior image quality is available whenever and wherever with the GALAXY NX and its array of interchangeable lenses. The 20.3MP APS-C sensor produces images which are bright and detailed, even in low-light conditions, while the DRIMe IV Image Signal Processor delivers the speed and accuracy which today’s photographers demand.
– With Android 4.2 Jelly Bean, the functionality of a smartphone is utilized to improve the photographic experience.
Versatile and easy to use, the GALAXY NX combines cutting-edge optical performance with connectivity capabilities and a galaxy of applications based on the Android ecosystem, all in one stylish package. The result is a new type of connected device which allows users to turn their experiences into a story that can be instantly shared with anyone they choose, from wherever they might be, in amazing color and outstanding detail.
ATIV Q, A truly convertible Intel Haswell tablet device with the ability to change modes and the power to enjoy both Windows and Android:
– Sports an innovative hinge design that allows the user to transform the tablet into 4 functional modes. Float and adjust the display to a comfortable viewing angle, or flip the display into stand mode to watch movies with ease.
– Allows users to experience both Windows 8 and Android 4.2.2 on the same 13.3-inch QHD+ (3200 x 1800) device. Users will not only get access to Android apps via Google Play but also be able to transfer files and share folders and files from Windows 8 to Android, truly marrying the mobile and PC experiences.
ATIV Tab 3, The world’s thinnest Windows 8 tablet with the ability to run all Windows apps and programs:
– The frame is incredibly thin and light at only 8.2 millimeters thick (as thin as popular smartphones) and 550g in weight. It features up to 10 hours of battery life.
– Shares the premium design of the GALAXY series. Has improved S-Pen functionality, including a high-level pen display and S-Pen compatibility with MS Office.
GALAXY S4 zoom, A revolutionary new device that can fulfill the role of both an industry-leading smartphone and a high-end compact camera:
– Combines a 10x optical zoom, 16-megapixel CMOS sensor, OIS and xenon flash with the very latest Samsung GALAXY S4 technology.
GALAXY S4 mini, A powerful yet compact version of Samsung’s bestselling smartphone, GALAXY S4:
– With a 4.3” qHD Super AMOLED display, light 107g weight and compact design, the GALAXY S4 mini is easy to carry and operate with one hand.
– Also boasts powerful performance, equipped with a 1.7GHz dual-core processor that allows users to quickly and easily perform data-intensive tasks.
GALAXY S4 Active, The perfect companion designed to enhance the life experiences of the active user who wants to stay connected while exploring environments from the most rugged mountain trails to the roughest rivers:
– Has qualified protection from dust and water (IP67), so you never have to leave the device at home during a long day at the beach or a dusty hike. It is also equipped with a water-resistant earphone jack.
SideSync, A technology which enables users to switch between working on their PC and an Android-based Samsung smartphone with simplicity and ease:
– Simply use your PC keyboard to respond to a text on a mobile phone; view larger maps, photos and multimedia displayed on both devices to make editing files even easier.
– Or use your PC to back up and charge your mobile device so you can get back to the task at hand with fewer interruptions to work and everyday life.
ATIV One 5 Style, The first all-in-one launched since the expansion of the ATIV brand, representing ultimate convenience in at-home computing:
– Features Samsung SideSync technology, which enables users to effortlessly share content between the PC (here the Windows 8 all-in-one) and their mobile phone.
– Debuts a new feature called HomeSync Lite, which transforms the PC’s hard drive into a personal cloud server. HomeSync Lite allows users to back up their personal files, photos and videos from portable devices to PCs and access them remotely via a mobile device anytime, anywhere. Multiple family members can privately manage individual accounts via the One 5 Style, truly making it a hub for the entire family.
Sidenote #2: Samsung Unifies Best-in-Class Windows PCs Under Newly Expanded ATIV Brand [Samsung Mobile Press release, April 25, 2013] i.e. “… expanding the ATIV brand to include its leading Windows® -based PCs. ATIV, previously the brand for the company’s Smart PC Windows-based hybrid devices, now represents the convergence of PC and mobile technologies and unites all of Samsung’s Windows PCs and devices under one cohesive brand. In tune with the needs and wants of today’s evolving consumers, the Samsung ATIV line offers a variety of market leading Windows PC devices designed to extend the mobile experience from your handset to laptop and vice versa, making work more seamless and life more convenient.” ATIV actually comes from “CreATIV –> OriginATIV –> InnovATIV” (with the last letter “e” omitted as it is not pronounced either).
Then watch these technology videos in order to understand Samsung’s stance in the areas which will be most critical for its ATIV effort (as its GALAXY effort is already a huge success):
– Samsung ATIV Q Hands On – Windows 8/Android Convertible Ultrabook [minipcpro YouTube channel, June 20, 2013]
– [Samsung ATIV] SideSync Introduction [SamsungNotebook YouTube channel, May 9, 2013]
– Introducing Samsung HomeSync Lite [SamsungNotebook YouTube channel, July 1, 2013]
– [MWC2013] Samsung HomeSync Presentation [SamsungTomorrow YouTube channel, Feb 26, 2013]
Next watch the June 20 event in full in order to discover every aspect of Samsung innovations there: Samsung PREMIERE 2013 London (Full Version)
[SamsungTomorrow YouTube channel, recorded June 20, published June 24, 2013]
And here is a recent background article about those 20 years mentioned earlier:
Talent, design lead Samsung’s success [The Korea Times, June 20, 2013]
Kevin Lane Keller from Tuck School of Business, Dartmouth College, the United States, delivers a keynote speech during a forum organized to highlight the success of Samsung Group over the past two decades since chairman Lee Kun-hee declared his “New Management” philosophy in 1993, at The K-Hotel in southern Seoul, Thursday. / Courtesy of Samsung Group
Experts advise technology giant to focus more on marketing
Talent management and design innovation are two core elements that have spurred Samsung’s successful transformation into a global player over the past two decades, according to global business experts, Thursday.
They pointed out that Samsung’s future depends upon how it will improve marketing strategies and combine a new breed of software and hardware.
Such analysis came at an International Forum billed as “Twenty years of Samsung’s New Management” organized by The Korean Academic Society of Business Administration at The K-Hotel, southern Seoul.
Under the slogan “New Management,” Samsung Electronics Chairman Lee Kun-hee declared in Frankfurt, Germany, in 1993, his goal of shifting toward quality-focused growth rather than quantity-driven expansion. Lee then ordered employees to change everything but their wives and children.
“Samsung was a true transformer over the last 20 years in a very positive way. Its business transformation is a model for all modern multinationals and the transformation well illustrates the competitive advantage that forms a strong link between business strategy and people strategy,” said Patrick M. Wright, bicentennial chair of the Darla Moore School of Business at the University of South Carolina, during the forum.
He cited talent assessment and review programs as one crucial part behind Samsung’s success.
“Samsung’s transformation has had people at the center. The human resources function at Samsung has played a critical role in enabling this transformation. The human resources system has developed to enable the transformation of the New Management that has constantly evolved to meet new challenges and achieve new objectives,” Wright said.
The scholar said that New Management was supported by strategy execution by top Samsung management.
“New human capital pools require new and different ways of attracting, developing, motivating and retaining those people. This requires human resources functions to design, develop and deliver human resources systems and processes.”
“Samsung lets the core talent set the business goal rather than simply implementing the given goal. This creates more buy-in, and makes the objectives more directly relevant to the situation,” the global human resources expert said.
Amid the industry’s massive shift toward software, Samsung’s human resources head Won Ki-chan told The Korea Times that the company has 36,000 software personnel globally, adding that the firm is going to hire more, although the executive declined to elaborate further.
New Management also awakened the corporation to the importance of fine-tuned marketing strategies, said a marketing expert from the United States.
Kevin Lane Keller from the Tuck School of Business, Dartmouth College, the United States, gave six marketing imperatives for a better understanding of Samsung’s success story.
“Samsung puts a lot into design. Actually, it has a strong design philosophy. Samsung has developed creative ad campaigns, strong in-store programs and high-profile sponsorships,” said Keller, who is an international leader in the study of brands, branding and strategic brand management.
Emphasizing its consistency in launching new products on time to market, Keller said Samsung Electronics has been consistent in maximizing long-term growth by entering new markets. “This is the importance of innovation and relevance,” he said.
“Samsung has taken a big-picture view of marketing effects and knew what’s working. It’s been achieving greater accountability for marketing investments in brands. Samsung was launching very clever marketing campaigns. Advertising was another factor that lifted Samsung over the two decades.”
In 1993, Samsung was just a small supplier that sold cheap home appliances and handsets. Now, it is the world’s biggest technology firm by revenue, and the leading brand consultancy Interbrand ranked it ninth among global brands in 2012.
Challenges
Keller advised Samsung to improve its marketing further in the highly competitive consumer electronics market.
“Be a leader, tap even more into emotions and manage brand architecture carefully,” he said.
“Yes, this is a challenge. But Samsung overcame Sony and Apple and now has achieved firm leadership. Leadership isn’t something that can be earned in a single day. But Samsung must keep being innovative and relevant,” he stressed.
According to the professor, Samsung must be confident in communications and bold in action, while the company needs to cultivate yearning to purchase and pride of ownership.
“My final advice is that Samsung needs to recognize the pros and cons of flagship products. Keep it simple and clear.”
Hiroshi Katayama, professor at Waseda University in Japan, has pointed out that the future of Samsung’s next decades will be dependent upon how it advances its supply chain management system and how the company will develop and implement effective transfer methods in between sites, business functions, business divisions and industries.
Kenn Allen, president at the Civil Society Consulting Group, urged Samsung to show more willingness toward corporate citizenship-related programs internationally if it is to be recognized as a true global leader.
“Primary investment for corporate citizenship programs is in Korea, thus limiting global impact internally and externally. Corporate volunteering needs to be valued more,” Allen said.
Then another article on the same subject: Talent, High-Speed Decision-Making Lead Samsung’s Success [Korea IT Times, June 24, 2013]
SEOUL, KOREA – Talent-oriented management and high-speed decision-making have led Samsung Group’s remarkable success, a global business expert said.
At an international forum titled the “Twenty years of Samsung’s New Management” held at the K-Hotel in Yangjae-dong, Seoul on June 20, Hiroshi Katayama, a professor of Waseda University in Japan, said, “The characteristics of Samsung’s quality management are speed management, timing management, pursuing perfection, talent-oriented management, seeking synergy effects and exact insight of business nature.
“Samsung succeeded in removing unnecessary business processes and equipping itself with a globally standardized development system, producing a rapid decision-making system.”
He also pointed out at the forum organized by the Korean Academic Society of Business Administration that the future of Samsung’s next decades will be dependent upon how it advances its supply chain management system and how the company will develop and implement effective transfer methods in between sites, business functions, business divisions and industries.
Under the slogan “New Management,” Samsung Electronics Chairman Lee Kun-hee declared in Frankfurt, Germany, in 1993, his goal of shifting toward quality-focused growth rather than quantity-driven expansion. Lee then ordered employees to change everything but their wives and children.
Meanwhile, Prof. Kevin Lane Keller of the Tuck School of Business, Dartmouth College, the United States, gave six marketing imperatives for a better understanding of Samsung’s success story.
Keller, who is an international leader in the study of brands, branding and strategic brand management, said, “Samsung puts an emphasis on design. Actually, it has a strong design philosophy. Samsung has developed creative advertising campaigns, strong in-store programs and high-profile sponsorships.”
Stressing its consistency in launching new products on time to market, Keller said Samsung Electronics has been consistent in maximizing long-term growth by entering new markets.
He said, “Samsung has taken a big-picture view of marketing effects and knew what’s working. It has been achieving greater accountability for marketing investments in brands. Samsung was launching very clever marketing campaigns. Advertising was another factor that lifted Samsung over the two decades.”
Keller advised Samsung to improve marketing further in a highly-competitive consumer electronics market, noting “Become a leader, tap even more into emotions and manage brand architecture carefully.”
The professor also said, “Samsung must be confident in communications and bold in action, while the company needs to cultivate yearning to purchase and pride of ownership. Samsung needs to recognize the pros and cons of flagship products. Keep it simple and clear.”
Prof. Song Jae-yong of Seoul National University said, “Samsung is giant, but it is a rapid organization. It is well diversified by field and boasts top-class global competitiveness in each sector. Its most powerful strong point is its managerial system, which has optimized the merits of the Japanese and American-style managerial systems.”
Commenting that Samsung is equipped with the fastest response system in the world, having secured global ERP and SCM management systems through massive investment, Song said, “Samsung is able to develop new products at a faster pace than its global competitors, as it has secured both finished products, such as smartphones and TVs, and the relevant parts, including semiconductors and displays.”
At the forum, Patrick M. Wright, bicentennial chair of the Darla Moore School of Business at the University of South Carolina, also said, “Samsung was a true transformer over the last 20 years in a very positive way. Its business transformation is a model for all modern multinationals and the transformation well illustrates the competitive advantage that forms a strong link between business strategy and people strategy.”
Noting that talent assessment and review programs are important reasons behind Samsung’s success, Wright said, “Samsung’s transformation has had people at the center. The human resources function at Samsung has played a critical role in enabling this transformation. The human resources system has developed to enable the transformation of the New Management that has constantly evolved to meet new challenges and achieve new objectives.”
Saying that New Management was supported by strategy execution by top Samsung management, the scholar said, “New human capital pools require new and different ways of attracting, developing, motivating and retaining those people. This requires human resources functions to design, develop and deliver human resources systems and processes.”
In the meantime, Prof. Kim Seong-soo of Seoul National University, said, “Samsung’s personnel management system has become a foundation to lead new management philosophy and basic strategies. To effectively cope with rapidly changing managerial environment, Samsung has secured talents to lead the future management preemptively and changed market strategies frequently by using the accumulated human resources.”
Lee Kun-hee, Jae-yong make business trip to Japan, China
Samsung Electronics Chairman Lee Kun-hee and his son Jae-yong, vice chairman of the electronics maker, flew to Japan and China, respectively, on June 20.
The junior Lee left the Gimpo International Airport around 9:00 a.m. for Beijing, along with Kim Suk, CEO of Samsung Securities, and Lee Sang-hoon, head of the managerial support office of Samsung Electronics.
His business trip is designed to check Samsung’s local corporations and business offshoots in China ahead of President Park Geun-hye’s official visit to China from June 27-30.
President Park Geun-hye will meet Chinese President Xi Jinping on the first day of her visit to China. The summit is expected to play a crucial role in inter-Korean relations, which are showing signs of improving as the two are set to hold a minister-level meeting for the first time in six years. Beijing has a big say in Pyongyang as its main benefactor.
The vice chairman is also scheduled to visit the U.S. before returning home.
Meanwhile, Chairman Lee left the Gimpo International Airport around 10:00 a.m. for Japan. His overseas trip this time seems to seek a new business strategy beyond the New Management.
Finally, here is what the article “The Paradox of Samsung’s Rise” by Khanna, T., Song, J. and Lee, K. in the Harvard Business Review [July-August 2011, pp. 142-147] found two years ago, after seven years of tracing Samsung’s progress:
Samsung’s unlikely success in mixing Western best practices with an essentially Japanese business system holds powerful lessons for today’s emerging giants.
As today’s emerging giants face the challenge of moving beyond their home markets, they have much to learn from the pathbreaking experience of South Korea’s Samsung Group, arguably the most successful globalizer of the previous generation.
Twenty years ago, few people would have predicted that Samsung could transform itself from a low-cost original equipment manufacturer to a world leader in R&D, marketing, and design, with a brand more valuable than Pepsi, Nike, or American Express. Fewer still would have predicted the success of the path it has taken. For two decades now, Samsung has been grafting Western business practices onto its essentially Japanese system, combining its traditional low-cost manufacturing prowess with an ability to bring high-quality, high-margin branded products swiftly to market.
The two sets of business practices could not have seemed more incompatible. Into an organization focused on continuous process improvement, Samsung introduced a focus on innovation. Into a homogeneous workforce, Samsung introduced outsiders who could not speak the language and were unfamiliar with the company’s culture. Into a Confucian tradition of reverence for elders, Samsung introduced merit pay and promotion, putting some young people in positions of authority over their elders. It has been a path marked by both disorienting disequilibrium and intense exhilaration.
Like Samsung, today’s emerging giants (Haier in China, Infosys in India, and Koç in Turkey, for instance) face a paradox: Their continued success requires turning away from what made them successful. The tightly integrated business systems that have worked in their home markets are unlikely to secure their future in global markets. To move to the next level, they, too, must reinvent themselves in ways that may seem contradictory. And when they reach new plateaus, they will need to do so again.
For seven years, we have traced Samsung’s progress as it has steadily navigated this paradox to transcend initial success in its home markets and move onto the world stage. It is a story we believe holds many important lessons for the current generation of emerging giants seeking to do the same.
The Rise of a World Leader
My own insert here: History of Samsung [cnetuk YouTube channel, Feb 20, 2012]
Founded in 1938, the Samsung Group is the largest corporate entity in South Korea, with $227.3 billion in revenue in 2010 and 315,000 employees worldwide. Best known for its flagship, Samsung Electronics (SEC), a producer of semiconductors, cell phones, TVs, and LCD panels, the group’s highly diversified businesses span a wide range of industries, including financial services, machinery, shipbuilding, and chemicals.
By 1987, when Lee Kun-Hee succeeded his father as only the second chairman in the company’s history, Samsung was the leader in Korea in most of its markets. But its overseas position as a low-cost producer was becoming untenable in the face of intensifying competition from Japanese electronics makers, which were setting up manufacturing plants in Southeast Asia, and rising domestic wages in South Korea’s newly liberalizing economy.
In the early 1990s, Lee spotted an opportunity in the reluctance of Japanese companies, the leaders in the analog markets, to adopt digital technology, which consumers were flocking to in cameras, audio equipment, and other electronic products. This opened the door for Samsung to surpass its rivals if it developed the agility, innovativeness, and creativity to succeed in the new digital market.
For those qualities Lee looked to the West. In 1993, he launched the New Management initiative to import Western best practices related to strategy formulation, talent management, and compensation into Samsung’s existing business model. The aim was to markedly improve marketing, R&D, and design while retaining core strengths in manufacturing, continuous improvement, and plant operations. Execution of this mix-and-match strategy took three broad forms:
- A formal process to identify, adapt, and implement the most appropriate Western best practices.
- Steady efforts to make Samsung’s culture more open to change by bringing outsiders in and sending insiders abroad.
- Intervention by Lee to protect longterm investments from short-term financial pressures.
In this way, slowly and steadily but not always smoothly, Samsung has built its hybrid management system as a series of experiments, first in SEC and eventually throughout the Samsung Group.
The results have been impressive: By 2004, SEC was delivering startling profitability numbers ($10.3 billion, almost 19%, on $55.2 billion in revenue), making it the world’s second most profitable manufacturer, behind Toyota. Since then, even in the wake of the recent global economic crisis, SEC’s profits have been higher than those of the five largest Japanese electronics firms (Sony, Panasonic, Toshiba, Hitachi, and Sharp) combined. The company achieved record profits of about $14.4 billion on $138 billion in revenue in 2010, compared with $11.7 billion for Intel, $0.86 billion for Panasonic, and a net loss of $3.2 billion for Sony. From obscurity in the 1990s, the Samsung brand rose to number 19 in the 2010 Interbrand global ranking, with a value of $19.4 billion. But it wasn’t easy.
A Tightly Fitting System
Samsung’s Japanese roots are strong: when the company was founded, South Korea was a Japanese colony. Samsung’s first chairman, Lee’s father, was educated in Japan, and the company built its corporate muscle in industries-consumer electronics, memory chips, and LCD panels-that Japan once dominated. Accordingly, Samsung rose to prominence in its home market under the Japanese model of unrelated diversification and vertical integration in pursuit of synergies. Diversification suited South Korea’s weak external capital markets because it allowed the company to rely on internally generated cash from one operation to fund the others.
The Japanese hierarchical labor model also suited the Korean context. The institutions underpinning South Korea’s managerial labor markets were underdeveloped, making mobility across corporations rare. The absence of a well-developed stock market and of sufficient competition for talent, combined with a strong Confucian tradition of respect for elders, led to a seniority-based compensation and promotion system. To compete outside its home markets, Lee knew, Samsung would need to move beyond its well-integrated system to engage with non-Koreans in non-Korean contexts. That meant introducing practices inconsistent with the status quo.
Lee did not underestimate how unsettling that would be. Accordingly, he took great care to change only what needed to be changed and to ensure that Samsung adopted the most appropriate practices in a way people could understand and embrace. The company established new organizations to seek out and adapt best practices from abroad. Lee advocated directly for the practices he deemed most critical and solicited employees’ input in the development of each. Results were carefully measured. If resistance was too strong, the company delayed adoption, modified the practices, or, as was the case with stock options, abandoned them.
In this way, Samsung injected some highly incompatible business practices into its business model. Beginning in 1997, for instance, the company slowly introduced into its seniority-based pay structure a merit-based compensation system modeled after the best practices of General Electric, Hewlett-Packard, and Texas Instruments. The amount an excellent performer could be given relative to a poor performer in the same job increased each year, up to a differential of 50%. Similarly, Samsung took steps to allow high performers to advance more rapidly through its seniority-based promotion system by steadily shortening the minimum number of years they were required to stay at a particular level.
Other processes could be adapted and adopted more globally. GE’s Six Sigma, for example, fit well with Samsung’s tradition of continuous improvement, although at GE mainly specialists were involved in the program, whereas at Samsung the entire rank and file participated. Samsung similarly adopted a company-wide profit-sharing program, modeled after HP’s, in which all employees, not just top and general managers, are eligible for a bonus based on a percentage of their salary.
This careful approach to importing Western business practices reduced disruption but also slowed progress. So, in a company where the chairman’s authority coexisted with a need for consensus in the managerial ranks, Lee sought to increase receptivity to ideas from elsewhere. This he did by bringing outsiders in and by sending insiders abroad.
Bringing outsiders in
It is perhaps an indication of the insularity of Samsung’s culture that for decades, the only outsiders the company recruited were ethnic Koreans. As far back as 1983, when it entered the memory chip business, the company had hired ethnic Korean engineers and executives away from Intel, IBM, and Bell Labs. These people played crucial roles in Samsung’s ascent, in less than a decade, to global leadership in the chip industry. But when Lee tried to extend the approach to Samsung’s senior executive ranks (what the company refers to as S-level talent), the newcomers met with a formidable wall of resistance.
And little wonder. Because promotions at Samsung had always come from within, the newcomers were perceived to be taking advancement away from incumbents. Not surprisingly, incumbent managers closed ranks, setting the newcomers up to fail by withholding important information, exaggerating their mistakes, and excluding them socially.
To be fair, this reaction was in part justified: At first, some of Samsung’s recruits had a poor grasp of what was expected of them, and sometimes they were actually more junior than the company had intended. What’s more, success is contextual: to some degree, S-level hires had performed well in their previous jobs because of their familiarity with the system there. The tightly knit nature of Samsung’s culture was a separate issue that needed special attention.
Take the case of Eric Kim, who in 1999 was recruited to be SEC’s chief marketing officer. Nowadays, most senior SEC executives recognize Kim as the person who initiated the “Samsung DigitAll: Everyone’s Invited” marketing campaign and established the strategy that turned Samsung into a truly global brand. SEC CEO Yun Jong-Yong threw his weight behind Kim from the start, declaring to his other senior executives, “Some of you may want to put him on top of a tree and then shake him down. If anybody tries that, they will be severely reprimanded.”
Nevertheless, through it all, Kim had a hard time getting support from other senior people. “Though Yun fully supported me and asked other senior executives to help me, they were reluctant to do so in my first two years at SEC,” he told us in a 2004 interview. “Now they help me on my task-related issues, but I still feel that I am emotionally isolated from them.” In conversations we had in 2004 with senior executives at SEC, several were still downplaying Kim’s contribution to the dramatic improvement of SEC’s brand. Three months after those conversations, when Kim’s contract ended, he left SEC to become the chief marketing officer at Intel.
Improving the quality of the S-level recruits, and their reception inside the company, was no small task, and Lee thought expansively about how to address it. Beginning in the early 1990s, Samsung sent international recruiting officers (IROs) abroad to familiarize themselves with foreign talent.
And in 2002, Lee made 30% of the annual performance appraisal of Samsung affiliates’ CEOs dependent on hiring and retaining S-level talent. Thus motivated, Yun, for instance, took steps to ease newcomers into the organization by having them serve in an advisory capacity in their first months, to get to know something of their colleagues, the culture, and the business before taking up their posts. He also instituted a formal mentoring program in which he met at least quarterly with each S-level recruit to give and receive feedback.
My own insert here: Samsung Global Strategy Group [hamho92 YouTube channel, Sept 12, 2012]
Samsung’s efforts to recruit and retain non-Korean MBAs and PhDs were hindered by cultural, social, and political tensions, all of which were magnified by the language barrier. To help assimilate these recruits, Lee in 1997 ordered group headquarters to set up a unique internal management consulting unit, the Global Strategy Group (GSG), which reports directly to the CEO. Its members-non-Korean graduates of top Western business and economics programs who have worked for such leading global companies as McKinsey, Goldman Sachs, and Intel-spend fully two years in GSG and are required to learn rudimentary Korean before taking up their posts. Even so, many of them have eventually been assigned to overseas subsidiaries, usually in their home countries.
Culture fit is a hard nut to crack. Of the 208 non-Koreans hired into GSG since it was created, 135 were still working for Samsung as of December 2010. The most successful are those who have taken the greatest pains to fit into the Korean culture.
Still, the rate of acceptance has been steadily rising. Before GSG, no non-Korean MBAs worked at SEC for more than three years, but fully 32% of the non-Korean MBAs recruited to SEC the year GSG was established were still with the company three years later. Over the next 10 years, that figure rose to 67%. The effect of these employees on the organization has been something like that of a steady trickle of water on stone. As more people from GSG are assigned to SEC, their Korean colleagues have had to change their work styles and mind-sets to accommodate Westernized practices, slowly and steadily making the environment friendlier to ideas from abroad. Today, SEC goes out of its way to ask GSG for more newly hired employees.
Sending insiders out
In the late 19th century, the Japanese imperial government sent its elite officers overseas to study successful Western practices and institutions. They brought back, among other innovations, the British postal system, the French judicial system, the American system of primary education, and the German military organization, adding innovative features of their own. Samsung acts similarly, sending high potentials to Japan for advanced degrees in engineering; to the United States for further education in marketing and management; and to Singapore, Hong Kong, and New York for training in high finance. On returning home, these employees fill key positions and, in implementing what they have learned, become important change agents.
Squarely in this tradition is Samsung’s regional specialist program, arguably the company’s most important globalization effort. Each year for more than two decades, Samsung has sent some 200 talented young employees through an intensive 12-week language-training course followed by one full year abroad. For the first six months, their only job is to become fluent in the language and culture and to build networks by making friends and exploring the country. In the second six months, they carry out one independent project of their choice. Initially sent mainly to developed countries, in the past 10 years they have gone more often to emerging regions, especially China and, most recently, Africa.
Like their colleagues who have trained abroad, the specialists come back to major posts at headquarters or in the business units at home and abroad. In those roles they disseminate information about how successful foreign companies operate, and they advocate for and experiment with best practices.
It would be hard to overestimate the value of the connections regional specialists forge. One of the first specialists, for example, went to Thailand in 1990, where he became fluent in the language and established relationships with prominent local figures. He stayed on to earn an MBA at the Sasin Graduate Institute of Business Administration at Chulalongkorn University, the same school that many of Thailand’s prime ministers, high-ranking government officials, and corporate CEOs have attended. From his immersion he gained a comprehensive understanding of the country’s regulation and tax systems. His close ties enabled him to introduce SEC’s TV, audio, and video products to Thailand’s elite and to recruit a vice president of Hitachi to Samsung at a time when Hitachi was a market leader and Samsung was virtually unknown.
He is hardly alone. Another regional specialist, who went to Indonesia in 1991, used his language fluency and personal networks to establish a sales subsidiary whose sales doubled annually for three consecutive years. A third, sent to Bangalore in 2009, devoted his project to aiding a rural community there and then applied the intimate knowledge he had gained to the development of home electronics that Samsung could sell in the region.
Regional studies are markedly out of fashion these days in business schools, as discipline-based research in economics, political science, sociology, and the like has taken precedence. This has had the inadvertent effect of diminishing geographic intelligence-a global institutional void, we argue, that Samsung is a leader in filling. In fact, Samsung’s experience suggests that it may be time for Western companies and business schools to place more emphasis on developing strong regional connections and expertise.
What only the Chairman can do
Samsung’s globalization efforts have taken substantial investments of time, money, and executive will. Some S-level hires took the IROs 10 years to recruit. SEC spends about $100,000 over and above annual compensation to train and support each regional specialist, not counting the opportunity costs and turnover risks the company incurs by taking elite employees away from key positions for 15 months. These investments-which require fundamental trade-offs between the short and the long term and between cultural fit and domain expertise-have been made in good times and in bad, often over the objections of Samsung’s top managers. That would not have been possible without Lee’s unambiguous and consistent involvement.
Five years after the launch of the S-level recruitment program, support for it from Samsung Group affiliates’ CEOs was distinctly lukewarm and would probably have remained so had Lee not tied so much of their compensation to its success. The Global Strategy Group, known within the company as the “chairman’s project”, would probably not have survived the Asian financial crisis-so deep it helped usher the Daewoo Group into bankruptcy-had Lee not funded it even in the face of Samsung’s own record-breaking losses.
David Steel, executive vice president of SEC and the highest-ranking person to come out of the GSG, noted that the commitment of top management and the support of the managerial ranks are both necessary for the success of a program like this. Much of the chairman’s influence is transmitted symbolically. But the substance and symbolism of that support are no small things.
My own insert: How Samsung Design Evolved [SamsungTomorrow YouTube channel, Aug 29, 2012]
Lee’s long-term focus has been essential to his most recent initiative: the development of Samsung’s design expertise, a capacity the chairman believes will be critical for the company’s continued growth. Just as many never imagined that Samsung could become a dominant global player, many question its design aspirations, but Lee set the agenda back in 1996. That year Samsung established and funded the Samsung Art & Design Institute in collaboration with Parsons The New School for Design in New York.
My own insert here: Professional Assessment on Samsung’s Design [SamsungTomorrow YouTube channel, Aug 29, 2012]
A substantial number of graduates of the intensive three-year training course have joined Samsung as designers. Following that lead, SEC has established design research institutes in the United States, the United Kingdom, Italy, Japan, China and India. Each year SEC sends 15 designers abroad to prominent design schools for one to three years to learn cutting-edge trends. Combining this design excellence with its traditional technological competence has allowed the once low-cost imitator to sustain a high-price strategy for its TVs and cell phones.
As long and hard as the company’s transition has been, the hybrid model has brought Samsung not to a pinnacle but to another plateau, which it will once again need to transcend. To keep steadily moving upward, it will have to reach a higher level of diversity and decentralization-to become a Brazilian company in Brazil, for instance, not a Korean company that does business in Brazil. It will need to find new models for growth beyond its current strengths and deal with further paradoxes that may arise. That is an effort that bears watching not only by the new generation of emerging market companies but also by Western competitors, which someday may reach the limits of their ability to impose Western culture on the rest of the world.

Advice from a recruiting executive
Choi Chi-Hun, a graduate of Tufts with an MBA from George Washington University, spent 19 years working at GE, six at its headquarters in the United States, before he was recruited to Samsung in September 2007. Although he was a native Korean who’d served in the country’s air force and even worked at Samsung for some months in 1985, he went through the external senior-level initiation process, spending seven months as an adviser to Yun Jong-Yong, the CEO of Samsung Electronics (SEC), and a year and seven months as president of SEC’s printer business before serving as CEO of Samsung SDI and now as CEO of Samsung’s credit card business.
As an outsider with deep inside knowledge, Choi took pains to fit into the culture and as a result saw none of the assimilation problems that dogged many of his senior-level colleagues. He did not speak English with his Korean colleagues. He showed full respect to subordinates older than he was. He generally behaved as other Korean employees of Samsung did.
His advice to his fellow senior-level recruits is to do the same. Choi points to one of his successful proteges, whom he helped Samsung recruit in part because he knew the man would steep himself in Korean culture and be game, for instance, to eat kimchi and drink Korean wine at the dinner party given in his honor on his first day.
Still, Choi is clear about the critical benefits outsiders bring to the organization. As someone intimately familiar with GE’s talent management system, for instance, he was in the ideal position to share the challenges that companies like GE face, which generally do not come across in a benchmarking exercise, offer potential solutions, and suggest which parts of the system Samsung could successfully adopt. Senior recruits from other companies bring similar knowledge, along with a fresh eye for ineffective and inefficient practices that insiders may take for granted. Assimilated as he is, Choi has advocated for a more market-oriented, performance-driven, and meritocratic culture like the one he knew at GE.
Tarun Khanna is the Jorge Paulo Lemann Professor at Harvard Business School and a coauthor of Winning in Emerging Markets: A Road Map for Strategy and Execution (Harvard Business Review Press, 2010).
Jaeyong Song and Kyungmook Lee are professors at Seoul National University in South Korea.
Finally, the full interview The lesson to be learned from Samsung: Q&A with Inotera chairman Charles Kau [DIGITIMES, June 27, 2013], in order to understand the “failure” of the whole Taiwanese DRAM industry in its entirety
Inotera Memories and Nanya Technology, two DRAM subsidiaries under the Formosa Plastics Group (FPG), have survived the latest industry consolidation. Nanya has transformed itself into a niche memory device provider, while Inotera has strengthened its ties with Micron Technology, making it a primary DRAM production base for the US-based memory chipmaker.
Charles Kau, chairman of Inotera and concurrently president of Nanya Technology, plays an important role in Nanya’s repositioning process and Inotera’s integration with Micron. Kau shared his insights into the supply-demand situation and technology development of the current DRAM industry, as well as the success of Samsung Electronics, in a recent interview with Digitimes.
Q: How is the recent upsurge in the mobile communications business impacting the memory industry?
A: The arrival of the mobile communications era actually is the beginning of the second-phase development of today’s Internet networks. The number of handset users is estimated at two billion at present and is likely to jump to five billion, or over 70% of the global population, by 2018. So the impact of the ongoing mobile device revolution will be greater than expected, and mobile communications combined with cloud storage and computing will be the mainstream of future industrial development. There will also be tremendous business opportunities emerging from the related cloud computing and mobile communications sectors.
The DRAM industry will also benefit from the second phase of the mobile device revolution. Previously, most smartphones came with built-in high-capacity NAND flash chips, but with the growing popularity of cloud storage, more and more digital information will be stored in the cloud in the future, while the memory capacity of handsets will no longer post strong growth. However, handset functions have become more and more complex, requiring strong computing capacity and therefore ramping up demand for mobile RAM chips.
Q: The supply of PC DRAM is currently falling short of demand, in contrast to the freefall in prices experienced previously. What is your opinion?
A: The increasing popularity of mobile devices, including smartphones and tablets, in the past two years has resulted in a sharp decline in demand for PCs and consequently for PC DRAM chips.
However, the rise of smartphones has since opened a new outlet for DRAM. Given that global shipments of smartphones and tablets are expected to top 700 million and 200 million units, respectively, in 2013, the consumption of DRAM chips by the mobile device sector will be enough to replace 60% of the memory chips previously consumed by PC products.
Meanwhile, since PC DRAM has a price advantage over mobile RAM, white-box tablet vendors in China have been using PC DRAM, instead of mobile RAM, for production of mobile devices for cost reduction, while reducing power consumption. This alternative has also contributed to the recent shortfall of PC DRAM.
Q: How would the recent change in the supply side of the DRAM industry and the evolution of some key technologies affect the future development of the memory industry?
A: The withdrawal of Germany- and Japan-based players from the DRAM industry contributed to the recent capacity consolidation of the memory industry. Meanwhile, the industry has reached a critical point where the processing node of DRAM chips could not be further shrunk below 20nm.
The processing limit has prevented DRAM makers from committing to continual investment in the industry, since spending up to US$500-600 million to build a 12-inch fab for the manufacture of only 20nm chips would not be a worthwhile investment.
So chipmakers are waiting for the arrival of 18-inch fabs, rather than ramping up new capacity at 12-inch ones. From the point of view of Inotera and Nanya Technology, we certainly will not commit new investments to build 12-inch fabs, and instead will seek opportunities to step into the 18-inch segment.
Q: Taiwan’s DRAM industry seems to have retreated to the previous OEM business model instead of developing technologies in-house. What is your comment on this reverse transition?
A: The Taiwan government’s policy pertaining to the development of the DRAM industry has been wrong since the beginning; it should not have allowed the establishment of so many DRAM makers at the same time. The policy diluted the resources for DRAM makers and undermined Taiwan’s efforts to develop home-grown technologies.
DRAM companies set up during 1995-1996, including Powerchip Semiconductor Corporation (now Powerchip Technology), Winbond Electronics and Vanguard International Semiconductor (VIS), were basically small- to medium-size businesses, but all of them have since developed related technologies of their own, which is unfavorable to implementing a possible industry consolidation.
Q: What is the current strategy for handling the DRAM business at the Formosa Plastics Group (FPG), since Taiwan was excluded from the latest industry consolidation?
A: FPG’s investments in the DRAM industry have resulted in heavy losses, but it is still committing new investments to the industry and has continued to survive, while managing to retain two valuable resources for Taiwan and for the industry. Firstly, FPG [via Inotera] and Micron Technology have jointly retained a DRAM production base outside Korea. It would be unfavorable to the supply chain of the global IT industry, as well as to system providers, if only Korean makers were left to manufacture DRAM chips.
Secondly, Nanya Technology under the FPG has shifted its role from being a commodity DRAM chipmaker to a provider of niche memory devices, which are strategically important components for a wide variety of consumer electronics products.
The latest industry integration has also made the global DRAM industry an oligopoly market where Korean makers together hold a majority of market share. Additionally, the industry’s relocation of its capacity for production of mobile RAM chips for smartphones and tablets, as well as the existing demand for standard DRAM parts, has resulted in a tight supply of niche memory devices in 2013.
Since Nanya boasts its own technologies, products and fabs, and is focusing on production of specialty DRAM chips, it has been approached by large-scale China-based system operators for possible cooperation.
Q: The failure of Taiwan’s DRAM industry has somehow deepened local makers’ hostility against Samsung Electronics. What is your comment on Samsung?
A: Many think that Samsung’s achievements rely on support from Korea’s government. But that is only half right. Indeed, Samsung did receive a large amount of government aid prior to 2000, but it has continued to thrive after 2000 by optimizing its management under company chairman Lee Kun-hee.
Lee has stressed cultivating Samsung’s own pool of talent, considering it the most valuable asset of the company. But in Taiwan, most businesses have been focusing on how to reduce production costs and have ignored the value of talent.
Instead of devising measures to fight or compete against Korean companies, Taiwan companies should figure out how Samsung became as powerful as it is today. After all, we should respect Samsung for its long-term efforts to cultivate its talent, and the way it treats talent – the people who have created the value of Samsung.
Windows Azure becoming an unbeatable offering on the cloud computing market
Almost a year ago, when –among others– the Windows Azure Mobile Services Preview came out, it became evident that Microsoft has quite a long heritage in cloud computing: see The cloud experience vision of .NET by Microsoft 12 years ago and its delivery now with Windows Azure, Windows 8/RT, Windows Phone, iOS and Android among others [‘Experiencing the Cloud’, Sept 16-20, 2012]. Next, with Windows Azure Media Services, an interesting question came up: Windows Azure Media Services OR Intel & Microsoft going together in the consumer space (again)? [‘Experiencing the Cloud’, Feb 13, 2013]. Then just at the beginning of this month it was possible to conclude that “Cloud first” from Microsoft is ready to change enterprise computing in all of its facets [‘Experiencing the Cloud’, June 4, 2013]. The understanding of the importance of the cloud for the company was further enhanced a few days later by finding that Windows Embedded is an enterprise business now, like the whole Windows business, with Handheld and Compact versions to lead in the overall Internet of Things market as well [‘Experiencing the Cloud’, June 8, 2013]. Finally we had a quite vivid example of the fact that Windows Azure is a huge ecosystem effort as well: Proper Oracle Java, Database and WebLogic support in Windows Azure including pay-per-use licensing via Microsoft + the same Oracle software supported on Microsoft Hyper-V as well [‘Experiencing the Cloud’, June 20, 2013].
Now we have general availability of Windows Azure Mobile Services, Windows Azure Web Sites, as well as previews of improved auto-scaling, alerting and notifications, and tooling support for Windows Azure through Visual Studio. This made me conclude that Windows Azure is becoming an unbeatable offering on the cloud computing market.
Let’s now see the details, which I will base not only on the Microsoft materials but also on the first media reactions (in order to be consistent with my post of yesterday, Windows 8.1: Mind boggling opportunities, finally some appreciation by the media [‘Experiencing the Cloud’, June 27, 2013]):
Media reactions in the first 15 hours:
Specific reactions:
Windows Azure Mobile Services, Windows Azure Web Sites – general availability:
- Microsoft makes Windows Azure services generally available [by Mary Jo Foley on CNET, June 27, 2013 at 1:13 PM PDT, also on ZDNet] “Microsoft is moving more of its Windows Azure products from preview to general availability. The latest: Azure Mobile Services and Azure Web Sites.”
- Windows Azure Web Sites, Mobile Services Now Generally Available [TechCrunch, June 27, 2013]
- Windows Azure Mobile Services and Web Sites now generally available [Neowin.net, June 27, 2013]
- Microsoft’s Azure Mobile Services & Azure Web Sites hit general availability [VentureBeat, June 27, 2013 9:45 AM]
- Microsoft Build 2013: Azure Mobile Services and Azure Web Sites become generally available [Computing News, June 27, 2013]
- Microsoft Launches Azure Mobile Services and Azure Web Sites [Virtualization Review, June 27, 2013]
Using Azure Mobile Services and Web Sites for a Mobile Contest pt. 1 [windowsazure YouTube channel, June 27, 2013]
Using Azure Mobile Services and Web Sites for a Mobile Contest pt. 2 [windowsazure YouTube channel, June 27, 2013]
Partner support:
- Microsoft Adds Engine Yard to its Azure Cloud [SiliconANGLE, June 27, 2013]
- Windows Azure: Microsoft Receives Support From RightScale, EngineYard [Talkin’ Cloud, June 27, 2013]
- Box releasing new SDK that enables developers to integrate Box into their Windows Phone apps with ease [WPSuperfanboy, June 27, 2013 at 20:56]
Xamarin with Craig Dunn [windowsazure YouTube channel, June 27, 2013]
Building a Comprehensive Enterprise Cloud Ecosystem [Windows Azure blog, June 20, 2013]
Over the past two decades, Microsoft has worked with OEMs, Systems Integrators, ISVs, CSVs, Distributors and VARs to build one of the largest enterprise partner ecosystems in the world. We’ve done this because customers – and the industry – need solutions that just work together. With our partners we built the most comprehensive enterprise technology ecosystem – and, now, we’re focused on the enterprise cloud.
That’s why you’ve seen us work with Amazon, to bring Windows Server, SQL Server and the entire Microsoft stack to Amazon Web Services, and with EMC who owns VMware and Pivotal – key competitors in their respective areas. We also work with innovative companies like Emotive, with Systems Integrators like Accenture and Capgemini and a host of other partners – large, small and non-commercial – around the world and across the industry.
The need for diverse technologies and companies to work together is clear – and that means competitors are often partners. To many in the industry that is a given – and it really should be. The need for technologies to work together is particularly clear in cloud computing – where platforms and services are so incredibly connected they must work together to deliver cloud computing benefits when and how customers want it.
So, it should not be a surprise when we partner with technology leaders who are also competitors. We partner with these companies (and plan to partner with more) to bring our products & services to as many customers as possible. We will continue to work across the industry to ensure our products & services work with the many platforms, business apps, services and clouds our customers use.
As you may have heard me say, it’s been an exciting year for Windows Azure – and we are just 6 months in. Stay tuned – there’s more to come!
Steven Martin
General Manager
Windows Azure
All other:
- Microsoft Adds Auto Scaling To Windows Azure [TechCrunch, June 27, 2013]
- Microsoft Tweaks Windows Azure With Autoscaling, More [eWeek, June 27, 2013]
- Microsoft adds mobile services, auto-scaling to Azure [iTnews.com.au, June 28, 2013 at 6:31 AM]
- Microsoft Gives Virtual Machines in Windows Azure a Security Boost [Virtualization Review, June 27, 2013]
- Windows Azure To Gain Auto-Scaling, Single Sign-On Improvements [Virtualization Review, June 27, 2013]
Overall reactions:
Windows Azure Now Stores 8.5 Trillion Data Objects, Manages 900K Transactions Per Second [TechCrunch, June 27, 2013]
Microsoft announced at the Build conference today that Windows Azure now has 8.5 trillion objects stored on its infrastructure.
The company also announced the following:
- Customers do 900,000 storage transactions per second.
- The service is doubling its compute and storage every six months.
- 3.2 million organizations have Active Directory accounts with 68 million users.
- More than 50 percent of the world’s Fortune 500 companies are using Windows Azure.
In comparison, Amazon Web Services said at its AWS Summit in New York earlier this year that its S3 storage service now holds more than 2 trillion objects. According to a post by Frederic Lardinois, that’s up from 1 trillion last June and 1.3 trillion in November, when the company last updated these numbers at its re:Invent conference.
So what accounts for the difference between Azure and AWS? It all has to do with how each company counts the objects it stores. With that in mind, Azure’s numbers would likely be quite different if the same metrics as AWS’s were used.
Nevertheless, the news highlights the importance of Windows Azure for Microsoft, especially as enterprises move their infrastructure to the cloud, shedding data centers to consolidate and reduce costs.
- Microsoft Beefs Up Azure Cloud Platform at Build [PCMag.com, June 27, 2013 02:09pm EST]
- Microsoft exec on the Valley’s bias against Azure: It’s ‘running out of excuses’ [VentureBeat, June 27, 2013 6:13 PM]
- Microsoft boosts mobile app development and brings Unity3D to Xbox One [Ars Technica, June 27 2013, 11:41pm CEDT] “Build iOS, Android, and Windows Phone apps (and websites) on Windows Azure.”
- Microsoft tunes Windows Azure cloud for developers [InfoWorld, June 28, 2013] “At Build conference, company debuts Azure Mobile Services for mobile back-end app capabilities, Azure Web Sites for ‘business-grade’ Web apps”
- Microsoft server unit shows off full plate of results [The Seattle Times, June 28, 2013 at 03:30 a.m.]
- Microsoft adds 1,000 businesses to its Azure cloud daily – expands focus on mobile apps [Siliconrepublic.com]
Build 2013 Keynote Day 2 Highlights [InfoQ, June 27, 2013]
Server & Tools Business President Satya Nadella opened the keynote this morning with some statistics about Windows Azure and the major Microsoft cloud services.
Windows Azure
– 50% of Fortune 500 companies are using Windows Azure
– 3.2 Million organizations with active directory accounts
– 2 X compute + storage every 6 months
– 100+ major service releases since Build 2012 to Windows Azure
Major Microsoft Cloud Services
– XBox Live 48 million subscribers
– Skype 299 Million connected users
– Outlook.com 1 million users gained in 24 hours
– Office 365 Nearly 50 million Office web apps users
– SkyDrive 250 million accounts
– Bing 1 billion mobile notifications a month
– XBox Live 1.5 Billion games of Halo
Nadella noted the wide variety of first-party cloud services that Microsoft supports, and said it is important that they are supported well, as this provides a good learning experience. In his words, “We build for the first party and make available for the third party.”
Scott Hanselman arrived on stage to discuss the latest for ASP.NET on VS2013. A big change is the simplification of starting an ASP.NET application in VS2013. The project types have been reduced to one, “ASP.NET”, and from there the new project wizard lets developers customize their project based on what they would like to create: web forms, MVC, etc.
VS2013 will ship with Twitter’s open source project Bootstrap, and it will be Microsoft supported just like jQuery is now.
An important debugging achievement was demonstrated where browsers can be associated with Visual Studio, allowing for real-time debugging and developing. Edit code in VS2013, and the browser(s) will reflect the updates. In this case the demo showed Hanselman editing cshtml, and via SignalR the updates were shown in his selected web browsers, IE and Chrome.
In another example, Hanselman went to www.bootswatch.com to obtain a new CSS template which he used to overwrite his current file. Pressing CTRL-ENTER, the browsers reflected this update.
Then Hanselman opened a CSS file to show some new editor tricks. Hovering over CSS statements, VS shows a hover window that indicates which browsers a particular statement applies to. Another ability allows VS to trace and view live streaming trace logs from Azure.
Then Hanselman demonstrated his sample website producing a QR Code of a deep link. He then scanned this on his phone which allowed him to jump into his existing authenticated session, moving from his desktop session to the same screen on his phone.
Satya returned to the stage to announce the general availability of Windows Azure Web Sites, which had been in preview since Build 2012. Now it is available with a full SLA and enterprise support.
Josh Twist from Microsoft’s Mobile Services came on stage to demonstrate using a Mac to add Azure support to an iOS app. Twist noted that developers looking to explore Azure can now create a free 20MB SQL database, in addition to the 10 free mobile services already allowed.
In Twist’s demo, Azure was used to create a custom XCode project that was preloaded with the appropriate Azure URLs for the project being worked on. This simplifies getting up to speed with Azure development on Mac. Related to this convenience, Windows Azure Mobile Services now enables Git source control so that you do not need to edit code in the web portal. So if you would rather develop locally (VS, Sublime, etc.), you can do so by pulling the files down from Azure and then pushing them back when edits are complete. Twist demonstrated this functionality using Sublime to edit a JavaScript file, and then using a Git push back into Azure.
VS2013 has a new Server Explorer, which is used to browse all of the Mobile Services on Windows Azure for your site/installation. A new wizard has been added which simplifies adding Push Notifications for Windows Store based applications.
Satya returned to introduce Scott Guthrie.
The big news is the new auto-scaling on Windows Azure and the accompanying billing changes. Developers can manage the instance count and target CPU for VMs, and there is no billing when a machine is stopped (you only pay while the machine is running).

Per-minute billing has been added for greater granularity. A preview of Windows Azure AutoScale is now live.
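The billing change above is easy to quantify. Below is a minimal sketch, with a hypothetical hourly rate (actual Azure prices vary by instance size and region), of how per-minute billing compares with classic round-up-to-the-hour billing for a bursty workload:

```python
# Illustrative only: HOURLY_RATE is a hypothetical price, not an Azure quote.
HOURLY_RATE = 0.12  # $/hour for a small VM instance (assumed)

def cost_per_hour_billing(minutes_used: int) -> float:
    """Round usage up to whole hours, as classic hourly billing does."""
    hours = -(-minutes_used // 60)  # ceiling division
    return hours * HOURLY_RATE

def cost_per_minute_billing(minutes_used: int) -> float:
    """Bill only the minutes actually used."""
    return minutes_used * HOURLY_RATE / 60

# A burst workload: three 20-minute runs, each falling in a different hour.
runs = [20, 20, 20]
hourly = sum(cost_per_hour_billing(m) for m in runs)
minutely = sum(cost_per_minute_billing(m) for m in runs)
print(f"hourly-rounded: ${hourly:.2f}, per-minute: ${minutely:.2f}")
# → hourly-rounded: $0.36, per-minute: $0.12
```

Combined with no billing for stopped machines, this makes short-lived and auto-scaled instances considerably cheaper than under hourly rounding.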
Windows Azure
– Active Directory for the Cloud
– Integrate with on-premises Active Directory
– Enable single sign-on within your cloud Apps
– Supports SAML, WS-Fed, and OAuth 2.0
The Applications tab shows all apps registered with the current Active Directory; Manage Application integrates an (external) app with Active Directory. For example, developers can use Windows Azure AD to enable user access to Amazon Web Services.
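As a concrete illustration of the OAuth 2.0 support mentioned above, here is a minimal sketch of the client-credentials token request an application would POST to Windows Azure AD. The tenant name, client id, and secret are hypothetical placeholders, and the request is only constructed here, not actually sent:

```python
from urllib.parse import urlencode

TENANT = "contoso.onmicrosoft.com"  # hypothetical tenant
TOKEN_ENDPOINT = f"https://login.windows.net/{TENANT}/oauth2/token"

def build_token_request(client_id, client_secret, resource):
    """Return (url, form-encoded body) for an OAuth 2.0 token request."""
    body = urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "resource": resource,  # the API the token is being requested for
    })
    return TOKEN_ENDPOINT, body

url, body = build_token_request("my-app-id", "my-secret",
                                "https://graph.windows.net")
print(url)  # → https://login.windows.net/contoso.onmicrosoft.com/oauth2/token
```

The response to such a POST would carry a bearer token that the app then presents to the target service, which is what makes single sign-on across cloud apps possible.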
Satya describes Office 365 as “…a programmable surface area”
Jay Schmelzer came on to demonstrate the changes being made to allow and promote Office 365 as a platform.
– Rich Office Model
– Use Web APIs to access
– Extend with Azure
– First class tools support in VS2013
– Office 365 Apps + Windows Azure
To promote Windows Azure, MSDN subscribers now receive greater discounts and incentives to use the Azure platform:
1. Use your MSDN Dev/Test licenses on Windows Azure
2. Reduced rates for Dev/Test licenses, with discounts of up to 97%
3. No credit card required for MSDN members
Microsoft showcases developer opportunity on Windows Azure, Windows devices [press release, June 27, 2013]
…
Increasing importance of cloud services
Developers today are building multidevice, multiscreen, cloud-connected experiences. Windows Azure spans infrastructure and platform capabilities to provide them with a comprehensive set of services to easily and quickly build modern applications, using the tools and languages familiar to them.
“Developers are increasingly demanding a flexible, comprehensive platform that helps them build and manage apps in a cloud- and mobile-driven world,” [Satya] Nadella [, president, Server and Tools Business] said. “To meet these demands, Microsoft has been doubling down on Windows Azure. Nearly 1,000 new businesses are betting on Windows Azure daily, and as momentum for Azure grows, so too does the developer opportunity to build applications that power modern businesses.”
Delivering on its commitment to provide developers with the most comprehensive cloud platform, Microsoft announced the general availability of Windows Azure Mobile Services. Mobile Services enables developers building Windows, Windows Phone, iOS and Android apps to store data in the cloud, authenticate users and send push notifications. TalkTalk Business, a leading business telecommunications provider in the United Kingdom, chose Windows Azure Mobile Services to create new ways to engage with its customers and serve demand for mobile access.
Microsoft also announced the general availability of Windows Azure Web Sites, which allows developers to create websites on a flexible, secure and scalable platform to reach new customers. With the investments Microsoft has made in ASP.NET and Web tools, Web developers can now create scalable experiences easier than ever. Dutch brewer Heineken is using Windows Azure to power a social pinball game for the UEFA Champions League Road to the Final campaign, with the expectations of millions of interactions scaled on Windows Azure. Heineken exceeded its usage metrics by a wide margin yet experienced no scalability issues with Windows Azure.
[Scott] Guthrie[, Corporate Vice President, Windows Azure] also highlighted Microsoft’s continued enterprise cloud momentum by demonstrating several platform advancements, including previews of improved auto-scaling, alerting and notifications, and tooling support for Windows Azure through Visual Studio. In addition, he previewed how Windows Azure Active Directory provides organizations and ISVs, such as Box, with a single sign-on experience to access cloud-based applications.
Developers can go to the Windows Azure site today for a free trial: http://www.windowsazure.com/en-us/pricing/free-trial/?WT.mc_id=AE37323DE.
…
Windows Azure: General Availability of Web Sites + Mobile Services, New AutoScale + Alerts Support, No Credit Card Needed for MSDN [ScottGu’s Blog, June 27, 2013 at 10:41 AM]
This morning we released a major set of updates to Windows Azure. These updates included:
- Web Sites: General Availability Release of Windows Azure Web Sites with SLA
- Mobile Services: General Availability Release of Windows Azure Mobile Services with SLA
- Auto-Scale: New automatic scaling support for Web Sites, Cloud Services and Virtual Machines
- Alerts/Notifications: New email alerting support for all Compute Services (Web Sites, Mobile Services, Cloud Services, and Virtual Machines)
- MSDN: No more credit card requirement for sign-up
All of these improvements are now available to use immediately (note: some are still in preview). Below are more details about them.
…
Windows Azure: Major Updates for Mobile Backend Development [ScottGu’s Blog, June 14, 2013]
This week we released some great updates to Windows Azure that make it significantly easier to develop mobile applications that use the cloud. These new capabilities include:
– Mobile Services: Custom API support
– Mobile Services: Git Source Control support
– Mobile Services: Node.js NPM Module support
– Mobile Services: A .NET API via NuGet
– Mobile Services and Web Sites: Free 20MB SQL Database Option for Mobile Services and Web Sites
– Mobile Notification Hubs: Android Broadcast Push Notification Support
All of these improvements are now available to use immediately (note: some are still in preview). Below are more details about them.
Windows Azure: Announcing New Dev/Test Offering, BizTalk Services, SSL Support with Web Sites, AD Improvements, Per Minute Billing [ScottGu’s Blog, June 3, 2013]
This morning we released some fantastic enhancements to Windows Azure:
- Dev/Test in the Cloud: MSDN Use Rights, Unbeatable MSDN Discount Rates, MSDN Monetary Credits
- BizTalk Services: Great new service for Windows Azure that enables EDI and EAI integration in the cloud
- Per-Minute Billing and No Charge for Stopped VMs: Now only get charged for the exact minutes of compute you use, no compute charges for stopped VMs
- SSL Support with Web Sites: Support for both IP Address and SNI based SSL bindings on custom web-site domains
- Active Directory: Updated directory sync utility, ability to manage Office 365 directory tenants from Windows Azure Management Portal
- Free Trial: More flexible Free Trial offer
There are so many improvements that I’m going to have to write multiple blog posts to cover all of them! Below is a quick summary of today’s updates at a high-level:
…
From Announcing LightSwitch in Visual Studio 2013 Preview [Visual Studio LightSwitch Team Blog, June 27, 2013]
…
Sneak Peek into the Future
At this point, I’d like to shift focus and provide a glimpse of a key part of our future roadmap. During this morning’s Build 2013 Day 2 keynote in San Francisco, an early preview was provided into how Visual Studio will enable the next generation of line-of-business applications in the cloud (you can check out the recording via Channel 9). A sample app was built during the keynote that highlighted some of the capabilities of what it means to be a modern business application; applications that run in the cloud, that are available to a myriad of devices, that aggregate data and services from in and out of an enterprise, that integrate user identities and social graphs, that are powered by a breadth of collaboration capabilities, and that continuously integrate with operations.
Folks familiar with LightSwitch will quickly notice that the demo was deeply anchored in LightSwitch’s unique RAD experience and took advantage of the rich platform capabilities exposed by Windows Azure and Office 365. We believe this platform+tools combination will take productivity to a whole new level and will best help developers meet the rising challenges and expectations for building and managing modern business applications. If you’re using LightSwitch today, you will be well positioned to take advantage of these future enhancements and leverage your existing skills to quickly create the next generation of business applications across Office 365 and Windows Azure. You can read more about this on Soma’s blog.
…
Additional information:
– Announcing the General Availability of Windows Azure Mobile Services, Web Sites and continued Service innovation [Windows Azure blog, June 27, 2013]
– 50 Percent of Fortune 500 Using Windows Azure [Windows Azure blog, June 14, 2013]
– Azure WebSites is now Generally Available [Enabling Digital Society blog of Microsoft, June 27, 2013]
– New features for Windows Azure Mobile Services [Enabling Digital Society blog of Microsoft, June 14, 2013]
– Lots of Azure Goodness Revealed [Enabling Digital Society blog of Microsoft, June 3, 2013]
– BizTalk Services is LIVE! [To BizTalk and Beyond! blog of Microsoft, June 3, 2013]
– Hello Windows Azure BizTalk Services! [BizTalk Server Team Blog, June 4, 2013]
– Windows Azure BizTalk Services – Preview [The Enterprise Integration Space blog of Microsoft, June 4, 2013]
– Business Apps, Cloud Apps, and More at Build 2013 [Somasegar’s blog, June 27, 2013]
Day 2 Keynote [Channel 9 video, June 27, 2013] Windows Azure related part up to [01:31:12], click on the link or the image to watch the video
Speech transcript: Satya Nadella and Scott Guthrie: Build 2013 Keynote
Remarks by Satya Nadella, President, Server & Tools Business; and Scott Guthrie, Corporate Vice President, Windows Azure; San Francisco, Calif., June 27, 2013
ANNOUNCER: Ladies and gentlemen, please welcome President, Server and Tools Business, Satya Nadella. (Applause.)
SATYA NADELLA: Good morning. Good morning, and welcome back to day two of Build. Hope all of you had a fantastic time yesterday. From what I gather, there were half a trillion megabytes of downloads as far as the show goes in terms of show net, so we really saturated the show net with all the downloads of Windows 8.1. So that’s just tremendous to see that all of you took Steve’s guidance and said, “Let’s just download it now and play with it.” Hopefully you had fun with it, also had a chance to get Visual Studio and maybe hack some of those Bing controls last night after the party.
But welcome back today, and we have some fantastic stuff to show. There’s going to be a lot more code onscreen as part of this keynote.
Yesterday, we talked about our devices, and we’re going to switch gears this morning to talk about the backend.
The context for the backend is the apps, the technology, as well as the devices, experiences that all of us collectively are building. We’re for sure well and truly into the world of devices and services. There is not an embedded system, not a sensor, not a device experience that’s not connected back to our cloud service. And that’s what we’re going to talk about.
And we see this momentum today in how we are seeing the backend evolve. If you look at Windows Azure, we have over 50 percent of the Fortune 500 companies already using Windows Azure. We have over 250,000 customers. We’re adding 1,000 customers a day.
We have 3.2 million distinct organizations inside of Azure AD representing something like 65 million users active. That’s a fantastic opportunity, and we’ll come back to that a couple of different times during this keynote.
Our storage and compute resources are doubling every six months. Our storage, in fact, is 8.5 trillion storage objects today, doing around 900K transactions per second. Something like 2 trillion transactions a month.
The last point, which is around the hypervisor growth, where we’re seeing tremendous hypervisor share growth is interesting. Because we are unique in that we not only are building an at-scale public cloud service, but we’re also taking all of the software technology that is underneath our public cloud service and making it available as part of our server products for service providers and enterprises to stand up their own cloud. That’s something pretty unique to us.
Given that, we’re seeing tremendous growth for the high-end servers that people are buying and the high-end server software people are buying from us to deploy their own cloud infrastructure in support of the applications that you all are building.
Now, of course at the end of the day, all that momentum has to be backed up by some product. And in that case, Steve talked a lot about our cadence and increased cadence across our devices. But when it comes to Windows Azure and our public cloud service, that cadence takes on a different hyper drive, if you will, because we are every day, every week, every month doing major updates. We’ve done over 100-plus major updates to our services from the last Build to now.
In fact, this is even translating into a much faster cadence for our server. We now have the R2 updates to our 2012 that were made available yesterday. So all around, when it comes to server technology and cloud technology, we have some of the fastest cadences, but very targeted on the new scenarios and applications and technologies that you’re building to run these cloud services.
Now, one of the other things that drives us and is at play for us on a daily basis is the feedback cycle of our first-party workloads. We have perhaps the most diverse set of first-party workloads at Microsoft. You know, these are SaaS applications that we run ourselves.
Now, these applications keep us honest, especially if you’re in the infrastructure business, you’ve got to live this live site availability day in and day out. And the diversity also keeps us honest because you build out your storage compute network, the application containers, to meet the needs of the diversity these applications represent.
Take Xbox. When they started Xbox Live in 2002, they had around 500 servers. Now, they use something like 300,000 servers, which are all part of our public cloud to be able to really drive their experiences. Halo itself has had over a billion games played, and something like 270 million hours of gameplay. And Halo uses the cloud in very interesting ways for pre-production, rendering support, gameplay, post-production analytics, the amount of real-time analytics that’s driving the continuous programming of Halo is pretty stunning.
Take SkyDrive. We have over 250 million accounts. You combine SkyDrive with the usage of Office Web Apps, where we have more than 50 million users of Office Web Apps, you can see a very different set of things that are happening with storage, collaboration, productivity.
Skype is re-architecting their core architecture to take advantage of the cloud for their 190-plus million users.
Bing apps that you saw many of them yesterday as part of Windows 8.1 are using the Azure backend to do a lot of things like notifications, which is one of the core scenarios for any mobile apps. And it’s going to send something like a billion notifications a month.
So all of these diverse needs that we have been building infrastructure for, we have this one simple mantra where “first party equals third party.” That means we build for our first party and make all of that available for our third party. And that feedback cycle is a fantastic cycle for us.
Now, when you put it all together, you put what we’re building, what you’re building, we see the activity on Azure, we listen to our customers, and you sort of distill it and say, “What are the key patterns of the modern business for cloud? What are the applications people are building?”
Three things emerge: People are building Web-centric applications. People are building mobile-centric applications. And what we call cloud-scale and enterprise-grade applications. So the rest of the presentation is all about getting into the depth of each of these patterns.
Now, in support of these applications, we’re building a very robust Windows Azure app model. Now, of course, at the bottom of the app model is our infrastructure. We run 18-plus datacenters on our own, 100-plus co-locations. We have an edge network. And so that is the physical plant. But the key thing is it’s the fabric, the operating system that we build to manage all of those resources.
At the compute-storage-network level, at the datacenter scale and multi-datacenter scale. And that really is the operating system that is Windows at the backend, at this point, which in fact shipped even in Windows Server for a different scale unit.
But that infrastructure management or resource management is one part of the operating system.
Then above that, you have all the application containers. And we’re unique in providing a complete IaaS plus PaaS, which is infrastructure as a service and platform as a service capability when it comes to application containers. Everything from virtual machines with full persistence to websites to mobile to media services to cloud services. So that capability is what allows you to build these rich applications and very capable applications.
Now, beyond that, we also believe that we can completely change the economics of what complex applications have needed in the past. We can take both productivity around development and continuous deployment and cycling through your code of any complex application and reduce it by orders of magnitude.
Take identity. We are going to change the nature of how people set up your applications to be able to accept multiple identities, have strong authentication and authorization, how to have a directory with rich people schema underneath it that you can use for authorization.
Integration, take all of the complex business-to-business or EI type of project that you have to write a lot of setup before you even write the core logic; we want to change the very nature of how you go about that with our integration services.
And when it comes to data, there is not a single application now that doesn’t have a diverse set of needs when it comes to the data from everything from SQL to NoSQL, all types of processing from transactional to streaming to interactive BI to MapReduce. And we have a full portfolio of storage technologies all provided as platform services so that your application development can be that much richer and that much easier.
Now, obviously, the story will not be complete without great tooling and great programming model. What we are doing with Visual Studio, we will see a lot of it throughout the demos. .NET, as well as our support for some of the cloud services around continuous development — everything from source code control, project management, build, monitoring — all of that technology pulled together, really take everything underneath it to a next level from an application development perspective.
But also supporting all the other frameworks. In fact, just this week we announced with Oracle that we will have even more first-class support for Java on Windows Azure. And so we have support for node, we have support for PHP and so on. So we have a fantastic set of language bindings to all of our platform support and a first-class support for Visual Studio .NET, as well as TFS with Git when it comes to application development.
So that’s really the app model. And the rest of the presentation is really for us to see a lot of this in action.
Let me just start with our IaaS and PaaS and virtual machines. We launched our IaaS service just in April. In fact, we have tremendous momentum. Something like 20 percent of all of Azure compute already is IaaS capacity. So that’s tremendous growth.
The gallery of images is constantly improving and increasing in size, in depth, breadth, and variety. In fact, if you want to spin up Windows Server 2012 R2, I would encourage you to go off to the Azure gallery and spin it up because it’s available as of yesterday there, and so that will be a fantastic use of the Azure IaaS, and test that out.
So what I want to talk about is websites. We’ve made a lot of investments in websites. And when we say “websites” we mean enterprise-grade Web infrastructure for your most mission-critical applications. Because if you think about it, your website is your front door to your business. It could be a SaaS business, it could be an enterprise business, but it’s the front door to your business. And you want the most robust enterprise-scale infrastructure for it. And we’ve invested to build the best Web stack with the best performance, load balancing built in, elasticity built in, and from a development perspective, integrated all the way into Visual Studio.
So we think that what we have in our website technology is the best-in-class Web for the enterprise-grade applications you want to build.
Now, you can also start up for free, and you can scale up. So maybe even the starting process with our Web, very, very easy.
Now, of course having Web technology is one, but it’s also very important for us to have a lot of framework support. And we have a lot of frameworks. But the one framework that we hold close and dear to our heart is ASP.NET. This is something that we have continued to innovate in significant ways. One of the things that we’ve done with the new version of ASP.NET, which is in preview as part of .NET 4.5.1, is the one ASP.NET, which means that you can have one project where you can bring all of the technologies from Web Forms to MVC to Web APIs to SignalR all together.
We also improved our tooling from a scaffolding perspective across all of these frameworks.
You’re all building even these rich Web applications. So these single-page Web applications. And for that, you need new frameworks. We have Bootstrap. You also want to be able to call into the server side, we made that easy with OData support, we made it easy with Web APIs. So this makes it much easier for you now to be able to build these rich Web apps.
And Entity Framework. We’ve now plumbed async all the way back into the server. So now, you can imagine if you’re building one of those social media applications with lots of operations on the client, as well as needing the same async capabilities on the backend, you now have async end to end.
So a lot of this innovation is, I think, in combination with our Web is going to completely change how you could go about building your Web applications and your Web technologies.
To show you some of this in action, I wanted to invite up onstage Scott Hanselman from our Web team. Scott? (Applause.)
SCOTT HANSELMAN: Hello, friends. I’m going to show you some of the great, new stuff that we’ve got in ASP.NET and Visual Studio 2013.
I’m going to go here and hit file, new, project. And you’ll notice right off the bat that we’ve got just one ASP.NET Web application choice. This is delivering on that promise of one ASP.NET. (Applause.)
Awesome, I’m glad you dig that. And this is not the final dialog, but there is no MVC project or Web forms project anymore. I can go and say I want MVC with Web API or I want Web forms plus MVC. But there is, at its core, just one ASP.NET.
We’ve got an all-new authentication system. I can go in here and pick organizational accounts, use Active Directory or Azure Active Directory, do Windows auth.
For this application, I’m going to use an individual user account. I’m going to make a geek trivia app. So I’ll hit create project.
Now, of course when you’re targeting for the Web, it’s not realistic to target just one browser. We’re not going to use just Internet Explorer; we’re going to use every browser and try to make this have as much reach as possible.
So up here, I’m going to click “browse with” and then pick both Internet Explorer and Google Chrome and set them both as the default browser. (Applause.)
Now, we’ll go ahead and run our application. And I’ll snap Visual Studio off to the side here. You notice Visual Studio just launched IE and Chrome.
You can see that we’re using Twitter Bootstrap. We’re shipping Bootstrap with ASP.NET; you get a nice, responsive template. We’ve got the great icons, grid system, works on mobile. And that’s going to ship just like we shipped jQuery, as a fully supported item within ASP.NET, even though it’s open source.
I’m going to open up my index.cshtml over here. You can see we’ve got ASP.NET as my H1. Notice next to multiple browsers, we’ve got a new present for you. You see this button right here? We’re running SignalR in process inside of Visual Studio, and there’s now a real-time connection between Visual Studio and any number of browsers that are running.
So now I can type in the new geek quiz application and hit this button. And using Web standards and Web sockets, we’ve just talked to any number of browsers. (Applause.)
Now, this is just scratching the surface of what we’re going to be able to do. What’s important isn’t the live reload example I’ve just shown you, but rather the idea that there’s a fundamental two-directional link now between any browser, including mobile browsers or browser simulators and Visual Studio.
Now, this is using the Bootstrap default template, which is kind of default. So I’m going to go up to Bootswatch, which is a great website that saves us from the tyranny of the default template.
And I’m going to pick — this looks appropriately garish. I’m going to pick this one here. And I’m going to just right click and say “save target as” and then download a different CSS, and I’m going to save that right over the top of the one that came with ASP.NET.
And then I’ll come back over here and use the hotkey control/alt/enter and update the linked browsers. And you’ll see that right there, the hotdog theme is back today, and this is the kind of high-quality design and attention to — I can’t do that with a straight face — attention to detail and design that you’ve come to expect from us at Microsoft. That’s beautiful, isn’t it? You’ve got to feel good about that, everybody.
I’m going to head over into Azure. And I’m going to say “new website.” You know, creating websites is really, really easy from within the portal. I’ll say geek quiz. Blah, blah, blah, and I’m going to make a new website.
And this is going to fire up in the cloud right now. You can see it’s going and creating that. And that’s going to be ready and waiting to go when it’s time for me to publish from Visual Studio.
Now, I’m going to fast forward in time here and close down this application and then do a little Julia Child action and switch into an application that’s a little bit farther along.
So we’re going to write a geek quiz or a geek trivia app. And it’s going to have Model View Controller and Web API on the server. And it’s going to send JSON across the wire over to the client side. This trivia controller, which is ASP.NET, Web API is going to be feeding that.
This is code that I’m not really familiar with. I can spend a lot of time scrolling around, or I could right click on the scroll bar, hit scroll bar options, and some of you fans may remember this guy. It’s back. And now you’ve got map mode inside of the scroll bar. I can move around, find my code really, really easily. Here is the GET method. Notice that this GET method is going to return the trivia questions into my application here. And it’s marked as async. We’ve got async and await all the way through. So this asynchronous Web API method is then going to call this service call, next question async.
Now, I could right click and say “go to definition.” But I could also say “peek definition.” And without actually opening the source code, see what’s going on in that file. (Applause.)
I could promote that if I wanted to. You notice, of course, I’m using Entity Framework 6, I’ve got async and await from clients to servers to services all the way down into the database non-blocking I/O, async and await all the way down. I just hit escape to drop out of there. So it makes it really, really easy to move around my code.
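The "async all the way down" layering Hanselman describes can be sketched in JavaScript as well: the Web API method awaits the service, which awaits the (simulated) data store, so no layer blocks while I/O is pending. The names below are hypothetical stand-ins for the GeekQuiz code, not the actual demo source.

```javascript
// Hypothetical sketch of the demo's layering: API -> service -> store,
// asynchronous at every step so threads are never blocked on I/O.

function queryDatabase() {
  // Stand-in for non-blocking database I/O (Entity Framework's async
  // query methods play this role in the real demo).
  return new Promise(function (resolve) {
    setTimeout(function () {
      resolve({ title: 'How many Scotts work on the Azure team?' });
    }, 10);
  });
}

async function nextQuestionAsync() {
  // Service layer: awaits the store, returns the next question.
  return await queryDatabase();
}

async function getTrivia() {
  // Web API GET method: awaits the service and shapes the response.
  var question = await nextQuestionAsync();
  return { status: 200, body: question };
}

getTrivia().then(function (result) {
  console.log(result.body.title);
});
```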
So this is going to serve the trivia questions. I’m just going to hit control comma, go get my index.cshtml.
Now, in this HTML editor that’s been completely rewritten in Visual Studio 2013, you notice that I’ve got a couple of things you may not have seen before in an ASP.NET app. I’ve got Handlebars, which is a templating engine, and I’ve got Ember. So we’ve got model view controller on the server and model view controller on the client. So we can start making those rich, single-page applications.
Now, this Ember application here has some JavaScript. And on the client, we’ve got a next question method. This is going to go and get that next question, and I’ve got that Web API call. So this is how the trivia app is going to get its information. And then when I answer the question, I’m going to go and send that and post that same RESTful service. So you’ve got really nice experience for front-end Web developers. That’s the Ember stuff.
Here, I’ve got the Handlebars. This is a client-side template. You can see right off the bat that I’ve got syntax highlighting for my Handlebars or my Moustache templating. And I’m going to go ahead and fire this up, and I’ll put IE off to the side there, and I’ll put VS over here.
And I’m going to log into my geek quiz app. See if I can type my own name a few times here, friends. There we go. And this is going to go and fetch a trivia question. See, it said, “loading question.” And then it says, “How many Scotts work on the Azure team?” Which is a lot, believe me.
You’ll see that that’s coming from this bound question tile. So we’ve got client-side data binding right there.
Now, I need to figure out what the buttons are going to look like. I’ve got the question, but I don’t have the buttons. I could start typing the HTML; that’s kind of boring. But I could use Visual Studio Web Essentials, which takes the extensibility points in Visual Studio and extends them even further.
And I could say something like hash fu dot bar and hit tab. And now I’ve got Zen Coding, also known as Emmet, built in with Web Essentials.
So that means I could go and say, you know, I need a button. And button has a button trivia class, but I need four of those buttons.
And then, again, I hit — you like that, kids? (Applause.) Then I hit refresh, and you’ll notice that my browser is updating as I’m going.
But that’s not really good. I need more information. I really want the text there that says “answer,” and I want to have answer one, answer two, answer three. So I’ll go like that. And then hit refresh, and then we’re seeing it automatically update.
So that looks like what I want it to look like. But I want to do that client-side data binding. So I’m going to take this here, and I’m going to spin through that JSON that came across the wire. So I’m going to go open Moustache, and I’m going to say for each, and again, syntax highlighting, great experience for the client-side developer.
I’m going to say for each option, and then we’ll close up each here. And answer one, just like question title is going to be bound. So I’m going to open that up, and I’m going to say option.title. And then when a user clicks on that button, we’re going to have an Ember action. I’m going to say the action is call that send answer passing in the question and then passing in the option that the user chose.
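The effect of that `{{#each}}` binding can be approximated without Ember or Handlebars at all; the simplified stand-in below just expands the JSON options returned by the Web API into one button per answer. The class name and markup are hypothetical approximations of the demo template, not its actual source.

```javascript
// Simplified stand-in for the Handlebars {{#each}} template in the
// demo: expand an array of answer options into one <button> apiece.
// "btn-trivia" and the option shape are hypothetical.

function renderOptions(options) {
  return options.map(function (option) {
    return '<button class="btn-trivia">' + option.title + '</button>';
  }).join('\n');
}

// Options as they might arrive as JSON from the trivia Web API:
var html = renderOptions([
  { title: '42' },
  { title: '0xFF' },
  { title: 'Goldenrod' }
]);
console.log(html);
```

A real templating engine adds escaping, nested scopes, and action wiring on top of this idea, which is why the demo reaches for Handlebars and Ember rather than string concatenation.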
I just did an update with the hotkey, how many Scotts work on Azure? 42. How old is Guthrie? He is zero XFF because he’s quite old. What color is his favorite polo? Goldenrod, in fact, is my — no? I’m sorry, Goldenrod is the next version of Windows, Windows Goldenrod. So my mistake there.
That’s a pretty nice flip animation. Let’s take a look at that. I’m going to go ahead and hit control comma again and type in “flip.” Go right into the flip CSS. You’ll see that that animation actually used no JavaScript at all. That, in fact, was done entirely in CSS, which can sometimes be hard to figure out, but with Web Essentials, I can actually hover over a rule, and it’ll tell me which version of which browser which vendor prefix supports. (Applause.)
So that’s pretty hot. I’m going to go ahead and right click and hit publish. And because I’ve got the Azure SDK installed, I can do my publish directly from Visual Studio. We’re going to go and load our Azure website. Hit OK. It brings the publish settings right down into Visual Studio. And I can go and publish directly from here.
So now I’m doing a live publish out to Azure directly from Visual Studio. It goes and launches the browser for me.
And I can click over here on the Server Explorer, and Windows Azure actually appears on the side now. I can start and stop virtual machines, start and stop websites; they’re all integrated inside of the Server Explorer.
That’s my website. I can double click on it, and again, while I can go to the management portal, I can change my settings, my .NET version and my application logging without having to enter the portal.
So back over into my app, when I sign in, I know that people are going to be pushing buttons and answering questions backstage. I want to see that. I put in some tracing. So what I’m going to do is right click and say view streaming logs in the output window.
This is the Visual Studio output window. And I’m just going to pin that off to the side. And then as I’m answering questions, and it looks like someone backstage is answering questions as well. I’m getting live streaming trace logs from Azure fed directly into Visual Studio. (Applause.)
Now, you know that we’ve also rewritten the entire authentication infrastructure and made it based on OWIN, which is the Open Web Interface for .NET. It’s an open source framework that lets you have pluggable middleware. So identity and authorization has been rewritten in a really, really clean way. And it allows us to do stuff that we really couldn’t do before and extend it in a pretty fun way.
And I think that every good sample involves a QR code, right? Don’t you think? This will bring the number of times that you’ve seen a QR code scanned in public to three. (Laughter.)
So what I want to do is I want to install this QR sample because I know people are going and checking out these trivia stats. And I’ve got SVG and SignalR giving me real-time updates as people are answering trivia questions.
I’m logged in right now as CHanselman. I want to take this session and I want to deep link into an authenticated session on a phone and then view these samples and take them with me.
So I’ve gone and used NuGet to bring in the QR sample. And now I’m going to go and publish that again to the same site. This is an incremental publish now. So this is going to go and send that new stuff up to Azure.
And then I’ll bring up my phone here. I’ve got my phone. And my camera guy, he follows me around. And I’m going to click on trivia stats. And here are the real-time trivia stats.
And then I’m going to click on transfer to mobile up here in the auth area. And what we’re going to do is generate a QR code. I’m going to then scan that code, and we get a deep link that pops up, generated by ASP.NET, that’s then going to bring me into IE, and now I’ve got SignalR, SVG, and Flot all running inside of my browser, and I’ve jumped into my authenticated session using OWIN, ASP.NET, and HTML5. It’s pretty fabulous stuff. (Applause.)
So we’ve got the promise of one ASP.NET; we’ve got browser link, bringing all of those browsers together with Web standards using SignalR. You saw Web Essentials as our playground that we’re adding new features to Visual Studio 2013. We can make Azure websites easily in the portal, publish directly from VS, logging, SignalR everywhere. Thanks very much, I hope you guys have fun. (Applause.)
SATYA NADELLA: So I hope you got a great feel for how we’re going to completely change or revolutionize Web development by innovation in tools, in the framework, and in the Web server in Windows Azure. And round-tripping across all three such that you can really do unimaginable things in a much more productive way.
We have over 130,000 active websites or Web applications today using Azure websites. Some big-name brands — Heineken, 3M, Toyota, Trek Bicycle — doing some very, very cool stuff using some of this technology.
I’m very, very pleased that we’re using all of that feedback to announce the general availability of Windows Azure Websites. This has been in preview now since last Build, and we’ve had some tremendous amount of feedback from all of the customers who have been using it. Many of them, obviously, in production. But now you can start using it for full SLA and enterprise support from us. So we’re really, really pleased to reach this milestone. Hope you get a chance to start using it as well. (Applause.)
I’m also pleased to announce the preview of Visual Studio 2013. You got to see it yesterday, today, and you’ll see a lot more of it. It’s just pretty stunning improvements in the tool itself. And combined with the .NET 4.5.1 framework update, you now have the previews of both the framework and the tools, and we really encourage you to give us feedback like you did the last time in your app development, and we’ll be watching for that.
So now I want to switch to mobile. Now, when you think about mobile-centric application development, the key consideration perhaps more than anything else is how do you build these mobile apps fast? And since there’s not a single mobile experience or application you’re building which doesn’t have a cloud backend, then the natural question is: What can we do to really speed up the building of these cloud backends?
And that’s exactly what Azure Mobile Services does, which is we provide a very easy way for you to build out a backend for your mobile experiences and applications. We provide a rich set of services from identity to data to push notification, as well as background scripting.
And then, of course, we support all of the platforms: Windows, Windows Phone, Android, iOS, as well as HTML5.
To show you this in action, I wanted to invite up onstage Josh Twist from our Windows Azure Mobile Services team. Josh? (Applause, music.)
JOSH TWIST: Thanks. We launched Windows Azure Mobile Services into preview in August last year. And in case you weren’t familiar, Mobile Services makes it incredibly easy to add the power of Windows Azure to your Windows Store, Windows Phone, iOS, Android, and even Web and HTML applications.
To prove this to you, I’m going to give you a demo now of how easy it is to add the cloud services you need to an iOS application using this Mac.
Here we are in the gorgeous Azure portal, and creating a new mobile service couldn’t be easier. I click, new, compute, mobile service, create. I enter the name of my mobile service, and then I choose a database option.
And I want to point out, look at this new option we have here. You can now create a free 20-megabyte SQL database. Which means it’s now completely free for developers to work against Mobile Services with the 10 free services and that free 20-megabyte SQL database.
Now, I’ve already created a service we have here today that we’re going to use called My Lists. If I click on the name, I’m greeted by our quick start, which is a short tutorial that shows me how to build a to-do list application.
Now, I selected iOS, but this same mobile service could simultaneously power all of these platforms.
We’re going to create a new iOS application. And since it’s a to-do list app, I need a table to hold my to-do list items.
And then I’m going to download a personalized starter project. So here it comes. That’s a little zip file. And inside that zip file I’m downloading from the portal is an Xcode project. So if I double click this, it’ll open up in Xcode, and then we’re going to take a look at the source. Because what we’ve done is we’ve pre-bootstrapped the application to be ready to talk to Mobile Services. You’ll see it already contains the URL for my new mobile service.
So what I’m going to do is launch this in the simulator. And what we’ll see here is a little to-do list application that inserts, updates, and reads data from Windows Azure with each operation being a single line of code, even in Objective-C.
So I’m going to create a little to-do list item here to add to my tasks. Let’s just save that. So now that’s saved in Windows Azure. To prove that to you, I’m going to switch over to the portal. We take a look at the data tabs, and you’ll see I can drill into the table, view all of my data right here, and there’s the item I just added saved safely into a SQL database in Windows Azure.
Now, we have so many cool features in Mobile Services. Here’s another one. I can actually add a script that executes securely on the server and intercepts those CRUD operations.
So what I’m going to do here, just to give you a quick example, is I’m going to add a time stamp to items that are being inserted. So I simply say item dot created equals new date. I’m going to save that. And right here from the portal, that’s going to go live into Windows Azure and be updated in just a few seconds. So it’s done.
Switch back to the app. Let’s insert a new item. That’s now saved. So if I switch back to browse, we’ll see that data again, but notice how we’ve automatically created a new column, and we’ve got that extra piece of data in there that executed on the server.
Now, we have this amazing script editing experience here in the browser, but not everybody wants to edit code in the portal. And so we’ve added a new feature to Windows Azure Mobile Services that allows you to manage all of your source assets using Git Source Control.
So I’m going to show you how to enable that. We go to the dashboard. Just down here under quick glance, we’ll get an option to set up source control. So I’m going to click on that and kick it off.
Now, this can take a minute or two. So while that’s running, I’m going to give you a tour of some of the other new features we’ve added to Mobile Services recently.
One of our most-requested features was the ability to have service scripts that execute on the server but not in conjunction with the HTTP CRUD operations, so that I can create an arbitrary REST API.
We’ve added that feature, and it’s called Custom API. So I can now create a completely arbitrary REST API in a matter of minutes with Mobile Services.
We also have a scheduler that allows me to execute scripts on a scheduled basis. So I can execute these every 15 minutes, every night at 2 a.m., whatever I prefer. And we also make it incredibly easy for you to authenticate your users with Microsoft Accounts, Facebook, Twitter, and Google. It’s just a single line of code in your applications.
Now, our source control’s still running here. So what I’m going to do actually is switch to another service, not make you guys wait.
So we have one here where I pre-configured Git. So if we go to the configure tab, you’ll see what we have here is a Git URL. So I’m going to copy this to the clipboard and then switch the terminal. And we’re now going to pull all of the source files down from the server repo onto my local machine.
That’s going to take just a few seconds. It’s going to pull those files down so I can now work on them locally with my favorite tools.
So I’m just going to dive into this directory here and show you what the tree looks like. So you can see we can see all of the API files, the scheduler files, and my table files including that insert script that we just edited in the portal.
Let’s take a look at that in Sublime. And you can see there’s that change. Now, we can make more changes here. I’m just going to comment this out and save it. And then I’m going to do a Git push to push that back up. So let’s commit it to the tree. And then Git push, and in a matter of seconds, that change will go live into Windows Azure.
So enough with the Mac. Let’s talk about what’s happened since preview. We’re now supporting tens of thousands of services in production on Mobile Services for all kinds of scenarios, from games to business applications and consumer engagement applications.
I want to talk to you today about one of my favorite applications that we have in the store. And it’s from a company called TalkTalk Business. TalkTalk Business are one of the U.K.’s leading telephony providers for businesses. And these guys have a serious focus on customer service. So they’ve created a Windows Phone app and a Windows Store app.
Let me show you the phone application now. So here’s the app on my Start screen. If we launch it, you’ll see we get an instant at-a-glance view of my billing activity, my account balance. I can see all of the services I can use with TalkTalk Business, and I get real-time delivery of up-to-the-minute service alerts.
Now, it should come as no surprise that best-in-class applications like this need best-in-class services. And this is actually built using Mobile Services and is live in the U.K. stores today.
Now, they also have a Windows Store application. And I actually have a replica of that project here on my Windows machine.
And you can see the project’s open in the next version of Visual Studio 2013. One of the capabilities this app has is it lets me manage my user profile.
Now, let me show you some of the code that does that. So over here in this file, you can see where we upload the user profile when we make a save. Notice how that’s just a single line of code to write that data all the way through to my database.
And here we load a user profile into the UI, again, with a single line of code.
Now, these guys also have tables and scripts. And I want to show you those, but instead of switching out to the portal, let’s do it using the new Server Explorer in Visual Studio 2013.
So I can open up the Server Explorer here, dive into Windows Azure, notice the new Mobile Services tab, expand that, and we’ll see enumerated all of our Mobile Services.
There’s my TalkTalk service. And if we open this, we’ll see all of the tables that are backing that service, including my user profiles table down here.
If we look in that, we’ll be able to see all of my scripts. The best thing is I can now edit them here in Visual Studio.
So I launched the script editor. I can make a change. And then when I hit save, this is going to deploy live to Windows Azure directly from Visual Studio in a matter of seconds. It’s done. (Applause.)
So the next thing I want to do is add push notifications for this application.
Now, setting up push traditionally is quite a few steps. I have to register my application with the Windows Store. I have to configure Mobile Services with my credentials to call Windows Notification Services. I have to acquire a channel URI on my client and upload that to Mobile Services so it’s ready to send the push.
Let me show you just how easy we’ve made this in the next version of Visual Studio.
I simply right click, add push notification, and this wizard is going to guide me through all of the steps necessary. So I’m just entering my credentials there for the Windows Store. And then it’s going to ask me to choose which application I want to associate. So I’m going to choose this one.
The next step, I’ll be asked to choose which mobile service I want to configure. I’m going to choose TalkTalk, and we’re done.
What’s going to happen now is this is going to make some changes to my mobile service and to my client application. In fact, it’s going to prewire a test notification so I can be superbly confident that everything is wired correctly and going to work. And to try that out, all I have to do is launch the application.
Let’s try that now. It’s going to take a second to deploy. And then what we should see is a push notification arrive in the top-right corner. And there we go. So that’s how easy we’ve made it now to add a push notification to your application with Mobile Services and Visual Studio 2013. (Applause.)
The next thing I want to do is create an ability for the administrators at TalkTalk Business to actually send these service alerts. And these guys use a Web portal. So let’s switch over to their Web project.
So here it is in Visual Studio. And you’ll see we have an index HTML file. Let’s open that up.
Now, notice how we pre-configured this with the Mobile Services JavaScript SDK that we added recently. It now means it’s super easy to add Mobile Services to your Web and HTML hybrid applications.
We’ve already added the client. So all I need to do now is add the code to invoke the service API that sends those messages. So let’s try that. So I start client dot invoke API. I need the name of the API I’m calling, which is send alert, in this case. And then since I’m doing a post, I need to specify the body. Body is service alert. And we’re done.
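The call Josh dictates ("client dot invoke API") might be sketched like this. The real client comes from the Mobile Services JavaScript SDK; here a stub stands in for it, and the API name and alert payload are assumptions reconstructed from the narration.

```javascript
// Stub standing in for the Mobile Services JavaScript SDK client,
// just to show the shape of the invokeApi call described above.
var client = {
  invokeApi: function (apiName, options) {
    // The SDK would issue an HTTP request (POST by default) to /api/<apiName>
    // with options.body as the payload.
    return { api: apiName, method: options.method || "POST", body: options.body };
  }
};

var serviceAlert = { region: "Midlands & West", service: "Email", text: "SMTP upgrade complete" };
var result = client.invokeApi("sendalert", { body: serviceAlert });
console.log(result.api, result.method, result.body.text);
```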
So I’m going to save that and launch it in the local browser. Now, since we’ve already pre-configured the client to receive push notifications, we can actually test this whole scenario end to end right here on this machine.
So what I’m going to do is send out a service alert for email in the midlands and western region that says SMTP upgrade complete. And when I hit send notification I should get a push notification in the top-right corner that was initiated from a website. And there we go. (Applause.) Thank you.
You can see just how easy it is to add some incredible capabilities to your apps using Windows Azure Mobile Services. I really can’t wait to see what you guys do with this. I’ll see you at 2:00. (Applause.)
SATYA NADELLA: Thanks, Josh.
As Josh was saying, we’ve been in preview, and we’ve got some tremendous feedback. We’ve had over 20,000 active apps on Azure Mobile Services to date, and TalkTalk Business is something that Josh showed. There’s a cool app written by Aviva, which is an application that collects telematic data from a mobile app and gives you a real-time quote based on your driving habits for your car insurance, which is a fascinating application, and there are many, many applications like that, which are getting written on top of Azure Mobile Services.
So I’m really, really pleased to announce the general availability of Azure Mobile Services today. We think that this is going to really help in your mobile development efforts across all devices, and we look forward to seeing what kind of applications you go build.
So now to take you to the next section, which is all around cloud scale and enterprise grade, let me invite up onstage Scott Guthrie. Scott? (Applause.)
SCOTT GUTHRIE: Well, this morning we looked at how you can use Windows Azure to build Web and mobile applications and host them in the cloud.
I’m now going to walk through how we’re making it even easier to scale these apps, as well as integrate them within enterprise environments.
Let’s start by talking about scale. Specifically, I’m going to use a real-world example, which is Skype.
Now, Skype is one of the largest Internet services in the world. And over the last year, they’ve been working to migrate that service to run on top of Windows Azure.
One of the benefits they get from moving to Windows Azure is that they can avoid having to buy and provision their own servers, and instead leverage a dynamic cloud environment.
Like most apps, Skype sees fluctuations in terms of load throughout the day, the week, even different parts of the year. And in a traditional datacenter environment, they need to deploy a fixed set of servers in order to handle their peak load.
The downside with this, though, is that you end up having a lot of expensive, unused compute capacity during non-peak times.
Moving to a cloud environment like Windows Azure allows them to, instead, dynamically scale their compute capacity based on just what their service needs at any given point in time. And this can yield enormous cost savings to both small and especially to very large services.
Now, with Windows Azure, you’ve always been able to dynamically scale up and scale down your apps, but you had to typically write custom scripts or use other tools in order to enable that. What we’re excited to announce today is that we’re going to make this a lot easier by baking in auto-scale capability directly into Windows Azure. And this is going to make it easy for anyone to start taking advantage of these kind of dynamic scale environments and yield the same cost savings.
I’d like to invite Charles Lamanna onstage to show it off in action. (Applause.)
CHARLES LAMANNA: I’ll be giving a quick demo of the brand-new autoscale feature that supports Windows Azure Compute Services.
First, I’ll cover the website autoscale, then the cloud services, and then the virtual machine.
So if I navigate to the website you saw earlier from Scott Hanselman’s demo, the geek quiz website, we see all the normal metric information that Windows Azure is collecting for his deployment. In this case, CPU time, response time, and network traffic.
But now there’s a new prompt to configure autoscale for this particular website. In the past, when the website would get lots of traffic, people would come in and take the quiz. Scott would have to go in and manually drag the slider to increase his capacity so his response time is not impacted.
However, with autoscale, I’m now able to configure a basic set of rules that will manage the capacity for my website automatically.
I can configure an instance count range with a minimum value that we’ll always honor, as well as a maximum value. In this case, we’ll never go above six instances, so you can be sure you won’t get a giant bill.
Next, you can also configure a target CPU range. In this case, say I choose 40 to 54 percent. What that means is that, in the background, the autoscale engine for Azure will be turning website instances off and on so your CPU always stays in that range. In other words, if you go below 40 percent, we’ll turn off a machine to save you money, and if you go above 54 percent, we’ll turn on a new machine so none of your users are impacted.
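The rule Charles describes can be sketched as a tiny decision function. This is an illustrative sketch only, not Azure's actual autoscale engine: below the CPU floor an instance is removed, above the ceiling one is added, always clamped to the configured instance range.

```javascript
// Illustrative sketch of the CPU-range autoscale rule described above.
function autoscaleStep(config, currentInstances, avgCpuPercent) {
  if (avgCpuPercent > config.cpuMax && currentInstances < config.instanceMax) {
    return currentInstances + 1;  // scale out to protect response time
  }
  if (avgCpuPercent < config.cpuMin && currentInstances > config.instanceMin) {
    return currentInstances - 1;  // scale in to save money
  }
  return currentInstances;        // already inside the target band
}

var config = { cpuMin: 40, cpuMax: 54, instanceMin: 1, instanceMax: 6 };
console.log(autoscaleStep(config, 3, 80)); // 4: over 54% CPU, add a machine
console.log(autoscaleStep(config, 3, 20)); // 2: under 40% CPU, remove one
console.log(autoscaleStep(config, 6, 90)); // 6: capped at the configured maximum
```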
And just like that, I click save, and Windows Azure will manage my website, scale, and capacity entirely on its own. (Applause.)
Next, I’ll hop over to the cloud service autoscale. I just have a simple deployment here with a Web front end where my customers can come and, say, place T-shirt orders or other memorabilia. And this front end puts items into a queue, which I have a background worker role, which will go and pull items from this queue and process them for billing or shipping.
For the Web role, I’ve already configured autoscale based on CPU, just like you saw for websites with an instance range and a CPU range. But I also can configure a scale up button, which impacts the velocity by which I increase my capacity. I’ve chosen to scale up by two instances with only a five-minute cool down because I want to respond immediately and quickly to spikes in customer demand.
For my background worker role, it’s a little bit different. I don’t care as much about CPU; I care about how many items are waiting in the queue to be processed, how many orders I have to go through.
In this case, I’ve already configured autoscale based on queue depth by selecting a storage account and queue name, as well as the target number of items in that queue per machine.
In this case, as the queue gets bigger, we’ll add more machines. Imagine it’s the holidays and a bunch of new orders come in; we’ll make sure you have enough capacity to process it in real time.
And imagine it’s a Sunday night and not as many people are coming to your website and placing orders. We’ll go down to your minimum to save you even more money on your monthly Azure bill.
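The queue-depth rule described above can likewise be sketched as a small function. Again this is an illustration, not Azure's implementation: the target items-per-machine value drives the desired instance count, clamped to the configured minimum and maximum.

```javascript
// Illustrative sketch of queue-depth-based autoscale: size the worker pool
// to the backlog, never going outside the configured instance range.
function desiredInstances(queueLength, targetPerMachine, min, max) {
  var wanted = Math.ceil(queueLength / targetPerMachine);
  return Math.min(max, Math.max(min, wanted));
}

console.log(desiredInstances(500, 100, 1, 6));  // 5: holiday rush, scale out
console.log(desiredInstances(20, 100, 1, 6));   // 1: quiet Sunday night, scale to the minimum
console.log(desiredInstances(2000, 100, 1, 6)); // 6: capped at the maximum
```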
Lastly, I’ll hop over to virtual machines. Virtual machines are just like cloud services in that you configure autoscale for a set of virtual machines based on either CPU or queue.
For the virtual machines, you can choose minimum-maximum instances, and we’ll move you up and down within that range by turning on and turning off those machines. And with the recent announcement of no billing while the machine’s stopped, you don’t have to worry about being charged in this case.
As you can see, it just took a few minutes to configure autoscale across all these different compute resources. And that’s what the power of autoscale brings to Windows Azure. In just a few minutes, you can make sure your cloud application runs, stays up and running for the lowest possible cost. Thank you. (Applause.)
SCOTT GUTHRIE: So as Charles showed you, it’s super easy to configure autoscale and set it up so you can really take advantage of some great savings. He also mentioned two of the improvements that we made earlier this month: the ability now to stop VMs without incurring any compute billing charge, as well as the ability to now bill per minute. This means that if you run your site or you run your VM for only 20 minutes, we’re only going to bill you for the 20 minutes that you actually run it instead of the full hour.
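The 20-minute example works out like this. The hourly rate below is a made-up number for illustration, not an Azure price.

```javascript
// Per-minute vs. per-hour billing for a 20-minute run (rate is hypothetical).
var ratePerHour = 0.06;
var minutesUsed = 20;

var perMinuteCharge = ratePerHour * (minutesUsed / 60);      // billed for 20 minutes
var perHourCharge = ratePerHour * Math.ceil(minutesUsed / 60); // billed for the full hour

console.log(perMinuteCharge.toFixed(2), perHourCharge.toFixed(2)); // 0.02 0.06
```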
And when you combine all these features together, it really yields a massive cost savings over what you can do today in the cloud, but in particular, also over what you can do in an on-premises environment.
We’re really excited to announce that the preview of Windows Azure Autoscale is now live. And you can actually all try it out for free and start taking advantage of it today. (Applause.)
So let’s switch gears now and talk a little bit about enterprise integration and some of the things that we’re doing to make it even easier for you to build cloud apps and integrate them within your corporate or enterprise environment. Whether you’re an enterprise building your own apps or an ISV, you’ll also hear a little bit about how we’re enabling ISVs that are building SaaS-based solutions to sell into an enterprise environment and monetize even more effectively.
There are a whole bunch of services that we have built into Windows Azure in the identity space that make it really easy to do this kind of enterprise identity integration, so that you can define an Active Directory in the cloud using a service we call Windows Azure Active Directory.
You can basically have a cloud-only directory, meaning you only have one directory, and it’s in the cloud, and you put all your users in it.
What’s nice about Windows Azure Active Directory though is it also supports the ability where you can synchronize it with an on-premises Active Directory that you’re running on Windows Server. And this is great for enterprises or corporates that already have Active Directory installed. And it allows them to very easily synchronize all their users into the cloud and allow cloud-based applications to start using that directory very easily to authenticate and enable single sign-on for all their customers.
And what’s nice about Windows Azure Active Directory is it’s built using open standards. So we support SAML, OAuth, as well as WS Federation, which makes it really easy for you as developers to start authenticating and enabling single sign-on within all your apps using existing libraries and protocols that you already use.
So what I thought I’d do is actually walk through a simple example of how this week we’re making it even easier in order to take advantage of that.
So what I’m going to show here is just a simple example where we have a company called Contoso that has an Active Directory on premises. And they’re going to basically spin up an Azure Active Directory running inside Windows Azure. And they can synchronize their directory up into the cloud. That means all their users are now available there.
And what they can then do is they can start to build apps, whether they’re mobile apps, Web apps, or any other type of app, deploy them in the cloud, and now any of their employees when they go ahead and access that application can enable single sign-on using their existing enterprise credentials and be able to securely login and start using that app. Let’s go ahead and walk through some code on how we do that.
So what I’m standing in front of here is the Windows Azure Management Portal, which you’ve already seen Scott and Josh and Charles walk through earlier today.
What I’m going to do is click on this Active Directory tab that’s within the portal, which allows me to control and configure my Windows Azure Active Directory.
And what you can see here is that the Contoso directory has already been created. Creating directories inside Windows Azure is actually free; it doesn’t cost anything. So any developer who wants to can create their own directory, and companies can very easily go ahead and populate their directory with their information.
You can see this directory already has a number of users stored within it. If I wanted to, I could create new users directly inside the admin tool and manage them through the admin console.
I could also click that directory integration tab and then set up a sync relationship with my on-premises Active Directory. That means every time a user is added or updated inside my on-premises Active Directory, it’ll be automatically reflected inside Windows Azure as well.
So once I have this, I basically have a directory that I can use within my applications to authenticate users.
So let’s build a simple app using the new Visual Studio 2013 and the new ASP.NET release coming out this week and show how I could basically integrate that within a Web app.
So I’m going to use the same Web application template that Scott showed earlier. Call this Simple App.
I can choose whatever frameworks I want within it. I can also click this change authentication dialog box that Scott touched on briefly in his talk.
And what I’m going to do is click this organizational accounts tab. And I can go ahead now and enter the name of my company’s domain. You’ll notice inside this dropdown we’ve added support both for internal apps within an enterprise that want to target a single company, and, if you want to develop a SaaS application and target multiple enterprise customers, you can go ahead and select that as well. (Applause.)
I can then go ahead and just enter the password here. What I’m doing here is just registering this application with Windows Azure. And I just hit create project, and what this is literally going to do now is create for me an ASP.NET project using whatever frameworks I wanted to specify, as well as register that application with Windows Azure. So it’s basically set up to do secure sign-on.
And now if I go ahead and run this application in the browser, it’s going to launch, and one of the first things you’ll see it do, because I’ve enabled Active Directory single sign-on, is automatically show me a single sign-on screen. Right now, I’m on the Internet, so that’s why it’s prompting me with this HTML sign-in page. I could also set it up, if I was in an intranet environment, so that I wouldn’t have to explicitly sign in.
But right now, I can sign in. And I’m just going to say Contoso Build.com. If I do this now, I’m logged into this ASP.NET app using the Active Directory account that the employee has. And literally in a matter of moments I’ve set this thing up so that I’m now using the cloud as a single sign-on provider.
What this means is not only can I run this thing locally, but I can now just right click and hit publish, and I can publish this as a website, I can publish this as a virtual machine or in a cloud service. And now any of the employees within my organization that access it are integrated with their existing enterprise security credentials and can do single sign-on within the application. (Applause.)
So this makes it really, really easy for you now to build your own custom applications, host them in the cloud, and enable enterprise security throughout.
What we’re also doing with Windows Azure Active Directory is making sure that not only can you host your own applications, but we also want to make it really easy for enterprises to be able to consume and integrate existing SaaS-based solutions and have the same type of single sign-on support with Active Directory as well.
This is great for enterprises because it suddenly means that they can go ahead and take advantage of all the great SaaS solutions that are out there, and they can start to integrate more and more apps with less friction into their enterprise environment. And it’s really great from an ISV and developer perspective because it now means that you can go ahead and build SaaS solutions and sell them to enterprises at a fraction of the friction that was required today. That makes it much easier to go ahead and show the value quickly, makes it much easier to onboard your enterprise customers, and at the end of the day, enables you to make a lot more money.
So what I’m going to do is walk through an example of how this works. So we’re going back to the Windows Azure portal. And we’ve got our users, like we had before here. I’m now going to click this applications tab as well. And what the applications tab does is it’s going to show me all of the apps that have been registered with this directory. So any of the custom apps that I would build would show up here.
You’ll notice also inside this list, we have a bunch of popular SaaS-based solutions that have already been registered with Contoso as well. So we’ve got Box, Basecamp, and many others.
What I can do now inside the Windows Azure portal, if I'm an administrator of the directory, is go ahead and just click add, and click this manage access to an application link. And what we're integrating is a directory of existing SaaS-based solutions that this organization can now seamlessly integrate as part of their Windows Azure Active Directory system.
So, for example, I could do popular ones like DocuSign or Dropbox or Evernote.
We’ve got ones you might not expect at a Microsoft conference. We’ve got Google Apps. We’ve got Salesforce.com. We even just for giggles enabled Amazon Web Services. (Laughter.) Some of these we’d like you to use more than others. (Laughter.) But regardless, you can add any of these, and basically once you just click add, they’ll show up in this list. And then all you need to do in order to integrate your single sign-on with one of these apps is drill into it.
So in this case here, I'm going to drill into Box. Basically, I can just hit configure. I can say I want to enable my users to authenticate to Box using my Windows Azure Active Directory. I just paste in my Box tenant URL, which is the URL I get from Box, and I download and upload a cert in order to make sure that we have a secure connection.
And once I do that, I then basically have integrated my Active Directory with Box. I can then go ahead and hit configure user access. This will bring up my list of all the users within my Windows Azure Active Directory. I can then go ahead and click on any of them, click enable access.
You'll notice we've even integrated role support: if the SaaS provider has roles defined within their application, I can not only give this user access to Box, I can actually map which roles within the Box application they should have access to. I then hit OK, and literally in a matter of seconds, that user is provisioned on Box and can use their Active Directory credentials in order to do single sign-on to that SaaS application. (Applause.)
So I'm going to switch gears now and go to another machine. I was showing you kind of the administrator experience for how an administrator would enable all of that. I'm now going to show you the end-user experience that this translates into. Once we set up that relationship with that particular employee, that employee can go ahead and just go to Box directly and use their Active Directory credentials to sign in.
Or — one of the other things that we've done, which we think is kind of cool — the company can expose a single dashboard of all the SaaS applications that they've configured, which employees can just go ahead and bookmark.
So in this case here, I'm going ahead and logging into it. This is kind of the end-user experience. All of the apps — SaaS solutions or custom apps — that the administrator of Active Directory has said you have access to will show up in this list. So you can see the Box app that we've just provisioned shows up here now. And as more get added, they'll just dynamically show up.
And then what the user can do is just go ahead and click on any of them in order to initiate a single sign-on relationship. And just like that, our Contoso employee is now logged into Box. They can do all the standard Box operations using their Active Directory credentials. (Applause.)
The beauty of this model is that not only is it super easy to set up — you saw both the administrator side as well as the developer side — it's really, really easy to integrate. But it also means enterprises can feel a lot more secure. If the employee ever leaves the organization, or their account is ever suspended, they basically lose all access to the SaaS applications that they've been using on the company's behalf. So the company doesn't have to worry about the data leaving, or about the employee still being able to log in and make changes to their data. So it enables a very nice model there.
And I think from a developer perspective, you know, one of the things to think about in terms of what we're enabling here is that not only is it easy, but it's going to enable you to reach a lot of customers. We have more than 3.2 million businesses that have already synced their on-premises Active Directory to the cloud, and more than 68 million active users that log in regularly using that system.
That basically means, as a developer, as a company that wants to sell to enterprises, you've got an awesome market that you're now able to go ahead and sell to, and it makes it really easy for you to monetize.
And what I thought I’d do is actually invite Aaron Levie, who is the co-founder and CEO of Box to actually come onstage and talk a little bit about what this means to Box and some of the kind of possibilities this opens up for them.
AARON LEVIE: Hey, how you doing? (Applause.) How's it going? So I'm really excited to be here. At Box, we help businesses store, share, manage, and access information from anywhere. And we're big supporters of Microsoft. We build for the Windows desktop, we build on Windows 8, we build on Windows Phone 8. We'd love to integrate our work with SharePoint — unfortunately, they haven't returned our email yet; maybe it's a spam filter, we don't know what's going on there.
But it’s really exciting to see sort of an all-new Microsoft. I think the amount of support for openness and heterogeneity is incredibly amazing. I think you normally wouldn’t have seen a development preview on top of a Mac or whatever. I was actually afraid that Bill Gates was going to drop down from the ceiling and rip it off. So that was really exciting to see.
So we’re really excited to be supporting Windows Azure Active Directory. It helps reduce the friction for customers to be able to deploy cloud solutions, and we think it’s going to be great for developers. We think that’s going to be great for startups and the ecosystem broadly.
SCOTT GUTHRIE: Yeah, we were talking a little bit earlier about some of the friction that it reduces. I don’t know maybe you could talk as an enterprise SaaS solution what that friction is like, and how does something like this help?
AARON LEVIE: Yeah, I mean, think about how the enterprise software industry worked for decades: basically, if you wanted to deploy software or technology in your enterprise, you had to build this massive competency in managing infrastructure, managing services, managing the new software you wanted to deploy. There was so much friction in implementing new solutions in your business — for any new problem you wanted to solve, you had to stand up the same amount of technology all over again, per solution.
Even harder was getting things like the identity to integrate and getting the technology to actually talk to each other. The power of the cloud is that any business anywhere in the world — and we’re talking millions of businesses that now have access to these solutions — can instantly on-demand light up new tools.
And so what that means is when you have lower friction, when you have more openness, we're going to see way more innovation. And that creates an environment where startups can be much more competitive, where we can build much better solutions, and I think the ecosystem broadly can actually expand. And the $290 billion that is spent every year on on-premises enterprise software today can massively move to the cloud, and we can actually expand the amount of market potential there is across the ecosystem.
SCOTT GUTHRIE: That's awesome. You know, we're excited on our side about the opportunity to enable that kind of shift — how we can use Windows Azure, how we can use the cloud, to provide this great opportunity for developers to build solutions that really can reach everyone.
You know, I think one of the other things that’s just nice is sort of how we can actually interoperate and integrate with systems all over the place. And that’s across protocols, that’s across operating systems, that’s devices, that’s even across languages. And I think as Aaron mentioned, it’s going to open up a ton of possibilities. And at the end of the day, I think really provide a lot of economic opportunity out there, hopefully for everyone in the audience.
AARON LEVIE: Cool.
SCOTT GUTHRIE: So thanks so much, Aaron.
AARON LEVIE: Thanks a lot, appreciate it. See you. (Applause.)
SCOTT GUTHRIE: I’m really excited to say that everything that we just showed here from a developer API perspective, you can start plugging into and taking advantage of this week. We’ve got a lot of great sessions on Windows Azure Active Directory where you can learn more, and you can start taking advantage of all the tools that we are providing in ASP.NET and with the new version of .NET and VS to get started and make it really easy to do it.
We're then going to go ahead and soon have a preview of the SaaS app management gallery that you can start loading your applications into, and that enterprises will start taking advantage of. So we're pretty excited about that, and we think, again, it's going to offer a ton of opportunity.
So let’s switch gears now. We’ve talked a little bit about identity and how we’re trying to make it really easy for you to integrate that within an enterprise environment. I’m going to talk a little bit about the integration space more broadly, and in particular talk about how we’re also making it really easy to integrate data, as well as operations in a secure way into your enterprise environment as well.
And we’ve got a number of great services with Windows Azure that make it really easy to do so.
One of them is something that we first launched this month, called Windows Azure BizTalk Services. And I'm pretty excited about this one in that it really allows you to dramatically simplify the integration process. For anyone who has ever tried to integrate, say, an SAP system with one of their existing apps, or with an existing SaaS-based solution, there's an awful lot of work involved — both in terms of code, and in terms of monitoring and making sure everything is secure. And these types of integration efforts can often go on for months or years as you integrate complex line-of-business systems across your enterprise.
What we’re trying to do with Windows Azure BizTalk Services is just dramatically lower that cost in a really quantum way. And basically with Windows Azure BizTalk services, you can stand up an integration hub in a matter of minutes inside the cloud. You can do full B2B EDI processing in the cloud so you can process orders and manage supply chains across your organization.
We’re also enabling enterprise application integration support so that you can very easily integrate lots of different disparate apps within your environment, as well as integrate them with cloud-based apps, both your own custom solutions, as well as SaaS-based apps that your enterprise wants to go ahead and take advantage of.
You know, we think the end result really is going to be a game-changer in the integration space and opens up a bunch of possibilities.
So what I thought I’d like to do is walk through just sort of a simple example of how you can use it. So I’m going to go back to our little Contoso company.
And they want to be able to consume and use a SaaS-based app that does travel management. We'll call it Tailspin Travel. And they want to be able to do single sign-on so that their employees can log in using their Active Directory credentials.
But to really make it useful, they also want to be able to tie in their travel information and policies with their existing ERP system on premises, and that poses a challenge, which is how do you securely open up your ERP system and enable a third party to have access to it? How do you monitor it? How do you make sure it’s really secure?
And so that’s where BizTalk services comes into play. So with BizTalk services, you can go to Windows Azure, you can very easily and very quickly stand up a Windows Azure BizTalk service. And then we have a number of adapters that you can go ahead and download and run on-premises to connect it up.
In particular, we have an SAP adapter. We also have Oracle adapters, Siebel adapters, JD Edwards adapters, and a whole bunch more. So, basically, without you having to write any code, you can actually just define what we call bridges, which make it really easy and secure for you to go ahead and expose just the functionality you want.
That SaaS app or your own custom app can then go ahead and call endpoints within Windows Azure BizTalk Services using just standard JSON or REST APIs, and then basically securely go through that bridge and execute and retrieve the appropriate data.
Again, it’s really simple to set this up. What I’d like to do is just walk through a simple example of how to do it in action.
So what I have here is kind of the end-user app that our Contoso employees will use. It's a Web-based application — again, our Tailspin Travel. You'll notice that the user is already logged in using Windows Azure Active Directory within the app. So this app could be hosted anywhere on the Internet.
I could then create new trips as an employee, or I could go ahead and look at existing ones that I’ve already booked. So here’s one, this is the return trip from Build. Right now, I’m flying in economy. I don’t know, maybe it would be nice to get upgraded. So I can go ahead and try to enter that.
But you'll notice here at the top, when I do it, a few seconds later I've got a policy violation that was surfaced directly inside the Tailspin Travel app. Basically, it was just saying that I can't do this myself; my manager actually has to go ahead and approve it. And that's coming directly out of Contoso's SAP system.
So how did this happen? Well, on the Tailspin Travel side, this is the SaaS app, they’re building it in .NET. This is basically a simple piece of code that they have, which allows them on the SaaS side to actually check whether or not this trip is in policy.
Basically, the way they’ve implemented it is they’re just making a standard REST call to some endpoint that’s configured for the Contoso tenant. And this doesn’t have to be implemented with Azure, doesn’t have to be implemented with .NET, it can be implemented anywhere. And it’s just making a standard REST call. And depending on that action, the SaaS app then goes ahead and does something.
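As a rough sketch of what that tenant-side call might look like — the field names here are hypothetical, since the real schema is whatever the Contoso bridge defines, and the demo's actual code was .NET — the request and response handling could be factored like this:

```python
import json

# Hypothetical payload/field names for illustration; the real bridge
# schema is whatever you define in the BizTalk mapping designer.
def build_policy_request(employee_id: str, trip_class: str, cost: float) -> bytes:
    """Serialize the trip details the bridge endpoint expects."""
    return json.dumps({
        "employeeId": employee_id,
        "travelClass": trip_class,
        "cost": cost,
    }).encode("utf-8")

def parse_policy_response(body: bytes) -> tuple:
    """Pull the in-policy flag and any violation message out of the reply."""
    doc = json.loads(body)
    return doc["inPolicy"], doc.get("violation", "")

# In the real app, the payload would be POSTed (e.g. with urllib.request)
# to the per-tenant bridge URL; here we just simulate the round trip.
request_body = build_policy_request("emp-42", "Business", 1850.0)
simulated_reply = b'{"inPolicy": false, "violation": "Manager approval required"}'
ok, message = parse_policy_response(simulated_reply)
print(ok, message)
```

The SaaS app stays entirely decoupled from SAP: it only ever sees standard JSON over a REST endpoint, and the bridge does the rest.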
So how do we implement this REST call? Well, we could implement it in a variety of different ways on Windows Azure. We could write our own custom REST endpoint and process the code and handle it that way. We have lots of great ways to do that. The tricky part, though, is not going to be so much implementing the REST API; it's actually implementing all the logic to flow that call to an on-premises SAP system, get the information validated, and return it.
Again, that would typically require an awful lot of code if you needed to do that from scratch.
What I’m going to do here is switch here to the other machine. And walk through how we can use BizTalk services to dramatically simplify it.
So you can create a new BizTalk service. Go ahead and just say new app service, BizTalk service custom create. I could say Contoso endpoint. And literally just by walking through a couple wizards here and hitting OK, I can basically stand up my own BizTalk service inside the cloud hosted in a high-availability environment literally in a matter of minutes.
And for anyone who’s ever installed BizTalk Server or an integration hub themselves, they’ll know that typically that does not take a couple minutes. And the nice thing about the cloud is we can really kind of make this almost instantaneous.
Once the service is created, you get the same kind of nice dashboard view and quick start view that you saw Josh show with Mobile Services. There are ways to download the SDK, and you can also monitor the service and scale it up and down dynamically.
And then as a developer, I can just launch Visual Studio. I can say new project. I can say I want to create a new BizTalk service, which will define all the mapping rules and the bridge logic that I want to use.
This is one I’ve created earlier. You’ll notice here on the left in the Server Explorer we have a number of LOB adapters that are automatically loaded inside the Server Explorer, so I can connect through my SAP system directly and do that.
I add it to the design surface, and then I can create these bridges, which I can either define declaratively or customize by writing custom .NET code. Basically, I can just double-click it, and this little WYSIWYG designer lets me actually map the REST calls that I'm getting from that Tailspin Travel SaaS app, transform them, and then map them to my SAP system.
And you can see here in our schema designer, we basically allow you to do fairly complex mapping rules between any two formats. So here on the right-hand side, I have my SAP schema that's stored in my on-premises environment; on the left-hand side, there's that REST endpoint. This is a very simple example; with a lot of these integration workflows, you might have literally thousands of fields that you're mapping back and forth.
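To make the idea of those mapping rules concrete, here is a tiny illustrative sketch — the field names are invented, not taken from any real SAP schema — of the kind of flat source-to-target transform the designer generates visually:

```python
# Hypothetical field names: a tiny declarative mapping table like the
# one the BizTalk schema designer builds for you visually.
FIELD_MAP = {
    "employeeId": "PERNR",    # REST field -> SAP-style field
    "travelClass": "CLASS",
    "cost": "AMOUNT",
}

def transform(rest_doc: dict, field_map: dict) -> dict:
    """Apply a flat source->target field mapping, dropping unmapped keys."""
    return {target: rest_doc[source]
            for source, target in field_map.items()
            if source in rest_doc}

sap_doc = transform({"employeeId": "emp-42", "travelClass": "Business",
                     "cost": 1850.0, "note": "unmapped"}, FIELD_MAP)
print(sap_doc)
```

Real bridges handle nested structures, type conversions, and thousands of fields, but the core idea is the same declarative source-to-target mapping.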
Once I do the mapping, though, all I need to do is just go ahead and hit deploy, and this will immediately upload it into my Windows Azure BizTalk service, and at that point, it's live on the Web. I can then choose who I want to give access to this bridge, and I can now securely start transferring just the information I want into and out of my enterprise.
For an IT professional, they can then go ahead and open up our admin tool. They can see all the bridges that have been defined. And one of the things that we also built directly into Windows Azure BizTalk Services is automatic tracking support. What this means is the IT professional can actually see all of the calls that are going in and out of the enterprise. It's all logged; it's all audited, so it's fully compliant, and they can keep track of exactly all the communication that's going on to make sure that it's in policy.
You saw sort of a simple example here, but this really starts to open up tons of possibilities: you can integrate with other SaaS apps out there that your organization wants to use, or, as you start building your own custom business applications and hosting them within Windows Azure, you can now securely get access to your on-premises line-of-business capabilities and manage them. (Applause.)
And I’m excited to announce that everything we just showed here, as well as everything I showed when I created that Active Directory app, is now available for you to start using. You can go to WindowsAzure.com, and you can start taking advantage of Windows Azure BizTalk Services today. (Applause.)
So I talked a little bit about how we're making it easy to integrate enterprise systems with the cloud, both on the identity side as well as the integration side. The other enterprise-grade services that we're delivering fall into the data space. And here we're really trying to make it easy for you to store any data you want in the cloud, any amount of data you want in the cloud, and be able to perform really rich analysis on top of it. With Windows Azure storage, we have a really powerful storage system that lets you store hundreds of terabytes, or even petabytes, of data in any format that you want. We have NoSQL capabilities that are provided as part of that, as well as raw blob capabilities. With our SQL database support, we now have a relational engine in the cloud that you can use. You can very easily spin up relational databases literally in a matter of seconds and start using the same ADO.NET and SQL syntax features that you are familiar with today.
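To illustrate the "same SQL syntax you're familiar with" point, here is a small self-contained sketch using Python's built-in sqlite3 module purely as a stand-in relational engine — the table and values are invented for illustration; the point is just that queries carry over unchanged to a cloud-hosted relational database:

```python
import sqlite3

# Stand-in relational engine (sqlite3), purely to illustrate that the
# cloud database speaks the same SQL you already write on-premises.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trips (employee TEXT, destination TEXT, cost REAL)")
conn.executemany("INSERT INTO trips VALUES (?, ?, ?)",
                 [("emp-42", "San Francisco", 1850.0),
                  ("emp-42", "Seattle", 420.0)])
(total,) = conn.execute(
    "SELECT SUM(cost) FROM trips WHERE employee = ?", ("emp-42",)).fetchone()
print(total)
```

Swap the connection string for a cloud database endpoint, and the schema and queries stay the same — that is the portability being described.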
A few months ago, we also launched a new service that we call HDInsight. This makes it really easy for you to spin up your own Hadoop cluster in the cloud; you can then go ahead and access any of the data that's being stored and perform MapReduce jobs on it. And what's nice about how we're doing HDInsight — like a lot of the openness things that we've talked about throughout the day — is that it's built using the same Hadoop open source framework that you can download and use elsewhere. We're actually contributors to the project now.
And with Windows Azure, it's now trivially easy for you to spin up your own Hadoop cluster, point at the data, immediately start getting insights from it, and start integrating it with your environment. In the next keynote later today, you're actually going to see a demo of that in action, so I'll save some of that for them.
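The MapReduce jobs mentioned here follow the classic map-then-reduce shape. As a hedged, self-contained illustration — using the canonical word-count example rather than anything from the demo — the two phases look like this:

```python
from itertools import groupby
from operator import itemgetter

def mapper(line: str):
    """Map phase: emit a (word, 1) pair for every word in the line."""
    for word in line.lower().split():
        yield word, 1

def reducer(pairs):
    """Reduce phase: sum the counts per word.

    In a real Hadoop job the framework sorts and groups pairs by key
    across the cluster; here we sort locally to simulate that shuffle.
    """
    for word, group in groupby(sorted(pairs, key=itemgetter(0)),
                               key=itemgetter(0)):
        yield word, sum(count for _, count in group)

lines = ["azure azure cloud", "cloud data"]
pairs = [pair for line in lines for pair in mapper(line)]
counts = dict(reducer(pairs))
print(counts)
```

On a cluster, the same mapper and reducer run in parallel over partitions of the data; the programming model is what stays this simple.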
But the key takeaway here is that the combination of all these capabilities across identity, integration, and data really is, we think, a game-changer for the enterprise, and really enables you to build modern business applications in the cloud. I think they're going to be a lot of fun to use. So we look forward to seeing what you build.
Thank you very much.
(Applause.)
SATYA NADELLA: Thanks, Scott.
So one last thing I want to talk about is Office and Office 365 as a programmable surface area. We talked a lot about building SaaS applications using services, Scott talked about it. But what if you were a large developer, line-of-business application developer, or a SaaS application developer and could use all of the power of Office as part of your application? And that’s what we’re enabling with the programming surface area of Office.
What that means is the rich object model of Office — everything from the social graph, the identity, presence information, document workflows, document libraries — all of that is available for you to use via modern Web APIs within your application. You can, in fact, have the chrome either in the Office client or in SharePoint, and you can have the full power of the backend in Azure. And, of course, the idea here is to be able to do all of that with first-class tool support.
To show you some of this in action, I wanted to invite up onstage Jay Schmelzer from our Visual Studio team to show you some of the rapid application development in Office.
Jay, come on in.
JAY SCHMELZER: Thank you. The requirements, expectations, and importance of business applications have never been greater than they are today. Modern business applications need to access data available inside and outside the organization. They need to enable individuals across the organization to connect and easily collaborate with each other in rich and interesting ways. And the applications themselves need to be available on multiple different types of devices and form factors.
As developers, we need a platform that provides a set of services that meet the core requirements of these applications. And we need a toolset that allows us to productively build those applications while also integrating in with our existing dev ops processes across the organization.
What I want to show you this morning is a quick look at some things we’re still working on inside of Visual Studio to enable developers to build these modern business applications that extend the Office 365 experience leveraging those services available both from Office 365 and the Windows Azure platform.
And, of course, doing it inside of a Visual Studio experience that allows the developer to focus on unique aspects of their business, and their application, not spending as much time in boilerplate code.
To do that, we’re going to focus on the human resources department at Contoso, who has been using Office 365 to manage the active job positions across the organization. And we want to create a new application that allows individuals in the company to submit potential candidates for open positions from within their Office 365 site using whichever device they happen to have available at the time.
To do that, we'll switch over to Visual Studio, and we'll see that we have a new Office 365 Cloud Business App project template available to us. This project template builds on the existing apps for Office and apps for SharePoint capabilities that are surfaced as part of that new cloud app model Satya was talking about. And it provides us a prescriptive solution structure for building a modern business application.
I mentioned data is a core part of this, and you see we've already started creating the definition for a new table that we'll use to store our potential candidates. What Office 365 Cloud Business Apps does for us is surface additional data types that provide access to these core capabilities of the Office 365 and Windows Azure platform.
Some examples of that: you see here that the "referred by" field is typed as a person, giving us access to all the capabilities in Office 365 associated with that Office 365 or Azure Active Directory user. The document — the resume — is stored as a typed document, so we can store it in a document library, and it leverages the rich content management and workflow capabilities associated with Office documents.
We also need to be able to go and pull in data from elsewhere. In our case, we want to go and grab data from that existing SharePoint list the human resources team is using to manage active positions, so that our users can choose a potential position they think those candidates are appropriate for. You see, I’ve already added that, so it’s in my project.
We’ll just go and connect it up between the candidate and our job postings, specify the relationship, and say OK. And now we have this virtual relationship between our Office 365 list and our SQL Azure Database.
OK, the next thing we want to do, though, is really enable that people interaction. If you notice, when I look over here at the candidate, if I select this, you’ll see right from here I have the ability to have the application interact with my corporate social network on my behalf as I’m doing interesting things in the application.
So we have the data model defined. The next thing we need to do is create the UI model. Users of business applications today expect a modern look and feel, a modern experience, but they also want it to be consistent. Visual Studio gives you a great way of doing this by providing a set of patterns that will be consistent across your applications. We'll select a browse pattern — we'll just choose the default pattern — choose the table we care about, and now let Visual Studio create for us a set of experiences for browsing, viewing, editing, and updating that candidate information.
So we have our data model. We have our UI model. The last thing we want to do is go in and actually write some business logic. In this case, back on the entity designer, we'll go in and leverage the data pipeline, where we can interact with data moving in and out of the application. Here, we'll use our validate method. What we'll do is just go in and make sure that the only folks who can actually set or modify the interview date are members of the HR department. And here's another example where we see the power of surfacing those underlying platform capabilities: I'm able to reach into the current user, into their Azure Active Directory settings, grab the current department, and validate it against the checks we want to make.
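That department check can be sketched roughly as follows — a hypothetical Python rendering for illustration (the demo's actual logic was .NET inside the entity designer's validate pipeline, and the field and department names here are invented):

```python
# Hypothetical sketch of the server-side validation Jay describes:
# only members of the HR department may set or change the interview date.
def validate_candidate(old: dict, new: dict, current_user_department: str) -> list:
    """Return a list of validation errors; an empty list means the save is allowed."""
    errors = []
    if new.get("interview_date") != old.get("interview_date"):
        if current_user_department != "Human Resources":
            errors.append("Only HR can set or modify the interview date.")
    return errors

# An engineer tries to set an interview date: rejected.
errs = validate_candidate({}, {"interview_date": "2013-09-10"}, "Engineering")
print(errs)
# An HR member making the same change passes validation.
print(validate_candidate({}, {"interview_date": "2013-09-10"}, "Human Resources"))
```

The key point is that the check runs in the data pipeline on every save, with the current user's directory attributes available to the rule.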
Let’s go ahead and set a breakpoint here. I think we’re probably in good shape. Anyway, so we’re going to launch the application, and Visual Studio is going to go package this up, send the manifest off to our remote Office 365 developer site, and then launch our application. We have no candidates yet, so we’ll create a new one. Last night when we were talking about this stuff, Scott seemed pretty excited about what we’re doing. So maybe he would be an interesting person for us to work with.
When I go in and actually start specifying who it is that’s going to refer this person, you see I’m by default getting the list of the users available on this Office 365 site because I typed that it’s a person. So we’ll select Jim there, one of our team members, go ahead and upload a document that is Scott’s resume. And we’ll specify an interview date, maybe we’ll go out here into September.
The last thing we want to do is go choose which of the positions we think is appropriate for Scott. He's going to be new to the team, so maybe we'll choose a little more junior role for him so that he can be successful. We hit save. If we'd actually set that breakpoint, we would see our business logic being executed, and we would get that rich debugging experience you've come to know and expect from Visual Studio.
We now see we have our candidate. When I drill in and look at it, you see that we’re getting that consistency of experience. I’m getting presence information for the person. When I hover over it, we see the contact card. A little misplaced, but if I want to have a conversation with Jim right now, I can go ahead and do that right from within the application just because we’ve leveraged those underlying capabilities. Of course, in the document we can see the properties of the document. We can view it in the Web application right from the site, or we can follow it if we want to do that as well.
I noticed one thing here; I’ve got this extra ID showing up. So let me go flip over to Visual Studio, and we’ll look at the View Candidate page. And just like we can with any other Web development, we can just go in here and while the application is running we’ll just remove that. We’ll save those changes, flip back over here, just kind of do a little quick refresh, and now when I go in you’ll see that, hey, that extraneous value is no longer there.
The other thing you’ll notice is that in addition to the values we specified for our SQL data, we also have built in the ability to do the basic tracking of, hey, who was the last person who created or modified this record, just core requirements of a business application.
The last thing we’ll look at is on the newsfeed we’re going to click over to that, and you’ll see that the application has gone and interacted on my behalf, right, and entered things into our internal social network, letting people know that, hey, I just submitted somebody as a potential new candidate. So if you folks want to follow them, and so forth.
OK. Our application is looking good. It’s time to go get it integrated with our existing dev ops processes. To do that, we’ll just go over here to the solution explorer, we’ll right click on the solution, and we’ll start by adding this to source code control. In this case, we’ll add it to our Team Foundation Service instance. We’ll go right click; we’ll go check in all these changes that we just made, and while that’s happening I’m going to switch over and take a look at some of the build environments we have established in our Team Foundation Service.
In this case, we'll see that we have an existing build definition for HR jobs. If I look at that definition, one of the things I can do is switch it to now be continuous, so that builds kick off as we check in code. The other interesting thing is that here we've got a custom process template that understands how to take the output of the build and deploy it into our Office 365 test site. So this is all built on the underlying technologies and capabilities inside of Visual Studio. That also means we can extend this beyond the SharePoint experience into the Office client experiences, as well.
So here I’ve also built a mail app that allows me to go and prepopulate information in the application from the content of the mail and shove it right into creating a new user, without having to go directly into the application. Hopefully with that, you got a really quick look at some things we’re still working on in Visual Studio, to enable developers to build modern business applications, extending the Office 365 experience, building on the capabilities of Office 365 and the Windows Azure platform.
Thank you very much.
SATYA NADELLA: Thanks, Jay. Thank you.
So hopefully, you got a feel for how you can rapidly build these Office applications, but more importantly, how you could compose the applications you build with, in fact, your full line-of-business application on Azure and enrich your SaaS app, or your line-of-business enterprise app. I’m very, very pleased to announce that a subscription to Office 365 Home Premium for 12 months is going to come to you via email later this afternoon. We hope you enjoy that subscription. (Applause.)
And I know everyone in the room is also perhaps an MSDN subscriber. So we are continuing to improve MSDN benefits. One of the things that we are doing with Windows Azure is to make it very, very easy for you to be able to do dev/test. So now you can use your dev/test licenses on Windows Azure. In fact, the cost and the pricing for that is such that you can probably save something like 97 percent of your dev/test expenses. We’re also going to give you credits based on your various levels of MSDN. So if you’re a premium subscriber, you get $100, which you can use across your VMs, databases, as well as doing things like load testing. So fantastic benefits; I would encourage everyone to go take advantage of them. And also, to reduce the friction even further, we have now made it possible for any MSDN subscriber to be able to sign up to Azure without any credit card. I know this is something that many of you have asked for. We’re really pleased to do that. (Applause.)
We had a whirlwind tour of the backend technologies. Really, with Windows Azure, we think we now have a robust platform for you to be able to do your modern application development for a modern business, be it Web or mobile, at cloud scale and enterprise grade. So I hope you get a chance to play with it. We welcome all the feedback, and have a great rest of the Build.
Thank you very, very much.
END
Proper Oracle Java, Database and WebLogic support in Windows Azure including pay-per-use licensing via Microsoft + the same Oracle software supported on Microsoft Hyper-V as well
While Hyper-V is gaining a significant market advantage against VMware vSphere with the latter, it is even more important that Windows Azure is becoming a truly open cloud computing platform, especially by fully supporting Java and Oracle developers (in addition to the existing .NET and various web developers). Oracle’s cloud offerings are also vastly extended, especially in the crucially important “pay-per-use” space, as the cloud offerings of Oracle software so far have been only:
– Oracle [Public] Cloud (Larry Ellison’s Oracle Cloud Announcement Highlights [Oracle YouTube channel, July 6, 2012] for when it was finally delivered and TechCast Live Introducing Oracle Public Cloud [Oracle YouTube channel, Dec 9, 2011] when it was pre-announced) which has application solutions in the cloud as well
– Amazon Relational Database Service (Amazon RDS) for Oracle, available with “pay-per-use” (officially named “license included” by AWS, earlier named “on-demand hourly”) licensing since Q2 2011 (Amazon RDS for Microsoft SQL Server came a year later), as well as Oracle Fusion Middleware (which includes the GlassFish and WebLogic Java application servers) and Oracle Enterprise Manager licensed in the AWS Cloud
The essence according to Java and other Oracle software heads to the Microsoft cloud [Ars Technica, June 24, 2013]
Microsoft and Oracle may compete head to head in many ways within the database realm, but today the two companies performed the most sweeping cross-join ever as executives from the two companies announced a broad partnership around cloud computing. In a conference call this afternoon, Microsoft CEO Steve Ballmer and Oracle President Mark Hurd discussed a partnership between the companies that will bring Oracle platforms—including Java middleware—into Microsoft’s Azure cloud.
Oracle has moved to certify and support its products, including Oracle WebLogic, the Oracle database, and Oracle Linux, for Azure and Microsoft’s Hyper-V hypervisor. “At the highest level, this partnership extends Oracle’s support of Windows Server to also include Windows Hyper-V and Windows Azure as supported platforms,” Ballmer said.
Oracle will provide full license mobility, Ballmer added, so that customers can move existing Oracle software licenses from on-premises physical or virtual servers to virtual servers on Hyper-V and in the Azure cloud. “There’s an immediate benefit for our customers,” he said. Support of Oracle’s database and application server products, and of Oracle Linux, is available immediately starting today.
Microsoft also agreed to license Oracle’s enterprise Java run-time and APIs and make Java “a first class runtime in Windows Azure, fully licensed and fully supported by Oracle,” according to Satya Nadella, president of Microsoft’s Server and Tools Business. Previously, Microsoft offered open Java SDKs, he said. “Now we have the licensed [Oracle] Java stack, plus the [Oracle] middleware stack, available. We think it makes Java more first class within Azure.”
Hurd said that in addition to allowing existing licenses to be moved into the Azure cloud, Microsoft would provide a mechanism to obtain licenses on demand “for those who don’t have licenses for Oracle or Java.” Nadella emphasized that Microsoft would “make it easier to spin up Oracle software in Azure with pay-as-you-go licenses,” including pre-built Oracle Linux images that can be deployed in Azure as server instances.
Oracle has been pursuing its own cloud strategy, but Hurd said he saw “nothing but good” coming from a partnership with Microsoft. “I think it just makes sense for us to continue to improve our capabilities but also form partnerships like this,” he said. “Java is the most popular development platform in the world. The fact that more people will get access to our IP is favorable.”
A general business media opinion:
Rivals Microsoft, Oracle bonding in the cloud [The Seattle Times, June 24, 2013]
The partnership looks to be a good move for both companies, while being bad for mutual competitor VMware, said veteran Microsoft and Oracle analyst Rick Sherlund, of investment bank Nomura.
Back in the day, Microsoft and Oracle were bitter rivals, competing over providing database and server products and trading barbs during the U.S. government’s antitrust suit against Microsoft in the 1990s.
Now they’re holding hands and looking at a future together.
Microsoft and Oracle announced Monday a cloud partnership in which customers will be able to run Oracle software (including Java, Oracle Database and Oracle WebLogic Server) on Microsoft’s Windows Server Hyper-V or in Windows Azure. Oracle will provide certification and full support.
Oracle Linux will also be made available to Windows Azure customers.
…
“I think they need each other,” Sherlund said. “They’re cooperating in areas that are mutually beneficial.”
Microsoft is getting Oracle’s support for Hyper-V, Microsoft’s hypervisor technology, which allows companies to run virtual servers. That’s important because Hyper-V competes against VMware, which is dominant in the server virtualization market. And many of the businesses that would be interested in such technology already use some Oracle software.
“It’s an advantage for Microsoft to be able to say: ‘All this Oracle stuff runs on Hyper-V,’ ” said Sherlund, who added that Oracle does not support VMware’s vSphere.
The move likely also allows Microsoft to say it’s being open with its Azure platform.
“That’s the rap you have against Microsoft: That it’s all the Microsoft platform,” Sherlund said. “If you’re in the cloud, it’s good that you’re supporting other platforms.”
Oracle, meanwhile, has traditionally delivered its software to its customers’ own premises. Now that it’s focusing more on delivering its software as services, it’s “motivated to make sure that [the services are] available on a lot of different cloud platforms,” Sherlund said. “So that’s good for Oracle.”
…
… these days, both companies are battling newer competition from the likes of VMware and Seattle-based Amazon.com.
Ballmer and Oracle President Mark Hurd said during the conference call after Monday’s announcement that their two companies would continue to compete.
But, Ballmer said, “the relationship between the two companies has evolved … in a very positive and constructive manner on a number of fronts.”
Hurd said, “The cloud is the tipping point that made this all happen.”
Hurd said Oracle would continue to offer its own public, private and hybrid platforms. But the fact that Java will be accessible to programmers who work in Windows Azure “is a good thing for us. … The fact that more people get access to our IP is favorable,” he said. “It’s good for our customers and therefore good for Oracle.”
Oracle CEO Larry Ellison had also said last week that the company would be announcing partnerships with Salesforce.com and NetSuite.
And an ICT analyst opinion: ORACLE EMBRACING THE BROADER CLOUD LANDSCAPE [James Staten on Forrester blogs, June 24, 2013]
It’s easy to accuse Oracle of trying to lock up its customers, as nearly all its marketing focuses on how Oracle on Oracle (on Oracle) delivers the best everything, but today Ellison’s company and Microsoft signed a joint partnership that empowers customer choice and ultimately will improve Oracle’s relevance in the cloud world.
The Redwood Shores, California software giant signed a key partnership with Microsoft that endorses Oracle on Hyper-V and Windows Azure, which included not just bring-your-own licenses but pay-per-use pricing options. The deal came as part of a Java licensing agreement by Microsoft for Windows Azure, which should help Redmond increase the appeal of its public cloud to a broader developer audience. Forrester’s Forrsights Developer Survey Q1 2013 shows that Java and .Net are the #2 and #3 languages used by cloud developers (HTML/Javascript is #1). The Java license does not extend to Microsoft’s other products, BTW.
This deal gives Microsoft clear competitive advantages against two of its top rivals as well. It strengthens Hyper-V against VMware vSphere, as Oracle software is only supported on OracleVM and Hyper-V today. It gives Windows Azure near equal position against Amazon Web Services (AWS) in the cloud platform wars, as the fully licensed support covers all Oracle software (customers bring their own licenses), and pay-per-use licenses will be resold by Microsoft for WebLogic Server, Oracle Linux, and the Oracle database. AWS has a similar support relationship with Oracle and resells the middleware, database, and Oracle Enterprise Manager, plus offers RDS for Oracle, a managed database service.
Bring your own license terms aren’t ideal in the per-hour world of cloud platforms, so the pay-per-use licensing arrangements are key to Oracle’s cloud relevance. While this licensing model is limited today, it opens the door to a more holistic move by Oracle down the line. Certainly Oracle would prefer that customers build and deploy their own Fusion applications on the Oracle Public Cloud, but the company is wisely acknowledging the market momentum behind AWS and Windows Azure and ensuring Oracle presence where its customers are going. These moves are also necessary to combat the widespread use of open source alternatives to Oracle’s middleware and database products on these new deployment platforms.
While we can all argue about Oracle’s statements made in last week’s quarterly earnings call about being the biggest cloud company or having $1B in cloud revenue, it is clearly no longer up for debate as to whether Oracle is embracing the move to cloud. The company is clearly making key moves to cloud-enable its portfolio. Combine today’s moves with its SaaS acquisitions, investments in cloud companies and its own platform as a service, and the picture clearly emerges of a company moving aggressively into cloud.
I guess CEO Ellison no longer feels cloud is yesterday’s business as usual.
Microsoft and Oracle announce enterprise partnership [joint press release, June 24, 2013]
Microsoft Corp. and Oracle Corp. today announced a partnership that will enable customers to run Oracle software on Windows Server Hyper-V and in Windows Azure. Customers will be able to deploy Oracle software — including Java, Oracle Database and Oracle WebLogic Server — on Windows Server Hyper-V or in Windows Azure and receive full support from Oracle. Terms of the deal were not disclosed.
As part of this partnership, Oracle will certify and support Oracle software — including Java, Oracle Database and Oracle WebLogic Server — on Windows Server Hyper-V and in Windows Azure. Microsoft will also offer Java, Oracle Database and Oracle WebLogic Server to Windows Azure customers, and Oracle will make Oracle Linux available to Windows Azure customers.
Java developers, IT professionals and businesses will benefit from the flexibility to deploy fully supported Oracle software to Windows Server Hyper-V and Windows Azure.
“Microsoft is deeply committed to giving businesses what they need, and clearly that is the ability to run enterprise workloads in private clouds, public clouds and, increasingly, across both,” said Steve Ballmer, chief executive officer of Microsoft. “Now our customers will be able to take advantage of the flexibility our unique hybrid cloud solutions offer for their Oracle applications, middleware and databases, just like they have been able to do on Windows Server for years.”
“Our customers’ IT environments are changing rapidly to meet the dynamic nature of the world today,” said Oracle President Mark Hurd. “At Oracle, we are committed to providing greater choice and flexibility to customers by providing multiple deployment options for our software, including on-premises, as well as public, private, and hybrid clouds. This collaboration with Microsoft extends our partnership and is important for the benefit of our customers.”
Additional information about support and the licensing mobility changes that went into effect today is available on Oracle’s blog at https://blogs.oracle.com/cloud/entry/oracle_and_microsoft_join_forces.
Oracle and Microsoft Expand Choice and Flexibility in Deploying Oracle Software in the Cloud [Oracle Cloud Solutions blog, June 24, 2013]
Oracle and Microsoft have entered into a new partnership that will help customers embrace cloud computing by providing greater choice and flexibility in how to deploy Oracle software.
Here are the key elements of the partnership:
- Effective today, our customers can run supported Oracle software on Windows Server Hyper-V and in Windows Azure
- Effective today, Oracle provides license mobility for customers who want to run Oracle software on Windows Azure
- Microsoft will add Infrastructure Services instances with popular configurations of Oracle software including Java, Oracle Database and Oracle WebLogic Server to the Windows Azure image gallery
- Microsoft will offer fully licensed and supported Java in Windows Azure
- Oracle will offer Oracle Linux, with a variety of Oracle software, as preconfigured instances on Windows Azure
Oracle’s strategy and commitment is to support multiple platforms, and Microsoft Windows has long been an important supported platform. Oracle is now extending that support to Windows Server Hyper-V and Windows Azure by providing certification and support for Oracle applications, middleware, database, Java and Oracle Linux on Windows Server Hyper-V and Windows Azure. As of today, customers can deploy Oracle software on Microsoft private clouds and Windows Azure, as well as Oracle private and public clouds and other supported cloud environments.
For information related to software licensing in Windows Azure, see Licensing Oracle Software in the Cloud Computing Environment.
Also, Oracle Support policies as they apply to Oracle software running in Windows Azure or on Windows Server Hyper-V are covered in two My Oracle Support (MOS) notes which are shown below:
MOS Note 1563794.1 Certified Software on Microsoft Windows Server 2012 Hyper-V – NEW
…
MOS Note 417770.1 Oracle Linux Support Policies for Virtualization and Emulation – UPDATED
…
Explanation for that is in Partners in the enterprise cloud [Satya Nadella on the The Official Microsoft Blog, June 24, 2013]
As longtime competitors, partners and industry leaders, Microsoft and Oracle have worked with enterprise customers to address business and technology needs for over 20 years. Many customers rely on Microsoft infrastructure to run mission-critical Oracle software and have for over a decade. Today, we are together extending our work to cover private cloud and public cloud through a new strategic partnership between Microsoft and Oracle. This partnership will help customers embrace cloud computing by improving flexibility and choice while also preserving the first-class support that these workloads demand.
As part of this partnership Oracle will certify and support Oracle software on Windows Server Hyper-V and Windows Azure. That means customers who have long enjoyed the ability to run Oracle software on Windows Server can run that same software on Windows Server Hyper-V or in Windows Azure and take advantage of our enterprise grade virtualization platform and public cloud. Oracle customers also benefit from the ability to run their Oracle software licenses in Windows Azure with new license mobility. Customers can enjoy the support and license mobility benefits, starting today.
In the near future, we will add Infrastructure Services instances with preconfigured versions of Oracle Database and Oracle WebLogic Server for customers who do not have Oracle licenses. Also, Oracle will enable customers to obtain and launch Oracle Linux images on Windows Azure.
We’ll also work together to add properly licensed, and fully supported Java into Windows Azure – improving flexibility and choice for millions of Java developers and their applications. Windows Azure is, and will continue to be, committed to supporting open source development languages and frameworks, and after today’s news, I hope the strength of our commitment in this area is clear.
The cloud computing era – or, as I like to call it, the enterprise cloud era – calls for bold, new thinking. It requires companies to rethink what they build, to rethink how they operate and to rethink whom they partner with. We are doing that by being “cloud first” in everything we do. From our vision of a Cloud OS – a consistent platform spanning our customer’s private clouds, service provider clouds and Windows Azure – to the way we partner to ensure that the applications our customers use run, fully supported, in those clouds.
We look forward to working with Oracle to help our customers realize this partnership’s immediate, and future, benefits. And we look forward to providing our customers with the increased flexibility and choice that comes from providing thousands of Oracle customers, and millions of Oracle developers, access to Microsoft’s enterprise grade public and private clouds. It’s a bold partnership for a bold new enterprise era.
IMPORTANT: for Java developers this strategic partnership will be really important when the latest versions will be covered on Windows Azure, see:
– Java EE 7 / GlassFish 4.0 Launch Coverage [Oracle’s The Aquarium blog, Jan 12, 2013]
Java EE 7, the standard in community-driven enterprise software, is now available. Back in April, Java EE 7 completed the JCP final approval ballot. Today, developers can learn all about Java EE 7 during the Java EE 7 Live Web Event, and get some hands-on experience with the arrival of the Java EE 7 SDK and GlassFish Server Open Source Edition 4.0. Of course, others have quite a bit to say about Java EE 7 as well, and this is just for starters:
- Oracle Announces Availability of Java Platform Enterprise Edition 7 (Press release)
- Oracle Officially Launching Java EE 7 and GlassFish 4 Today (InfoQ)
- Talking Java EE 7 with Anil Gaur, Vice President of software development at Oracle (JaxEnter)
- GlassFish 4 brings Java EE 7 (DZone / Markus Eisele)
- Java EE 7 Recipes and Introducing Java EE 7 (Josh Juneau)
- Java EE 7 and GlassFish Day at CloudBees (CloudBees)
- NetBeans 7.3.1 with Java EE 7 Support (NetBeans)
- What’s new in GlassFish 4 (C2B2)
- Java Magazine – Java EE 7 (Oracle)
- Oracle releases Java EE 7 with eye on HTML5 development (InfoWorld/Computerworld)
- Fifteen Java EE APIs Featured in the Java Spotlight Podcast (Oracle)
- Oracle releases Java Platform Enterprise Edition 7 (ZDNet)
- Oracle Announces Availability of Java Platform Enterprise Edition 7 (MarketWatch)
- Oracle Announces Availability of Java Platform Enterprise Edition 7 (MCPro)
- Oracle Announces Availability of Java Platform Enterprise Edition 7 (Data Manager Online)
- Java EE 7 officially launches, bringing HTML5 and WebSocket support (jaxenter)
- New Java EE 7 and GlassFish Support in OEPE 12.1.1.2.2 (Oracle)
- Working with Eclipse and GlassFish (Gerry Tan)
- Java EE 7 melds HTML5 with enterprise apps (The Register)
- Oracle Delivers Java EE 7 with HTML5 Support (eWeek)
- Java grows up in the enterprise (Holger)
- Java EE 7 tutorial released (Java Tutorials)
- No Clouds, Only Sunshine (Markus Eisele)
- Reference implementation for Java EE: GlassFish 4.0 Released (Markus Eisele)
- Newly released NetBeans IDE 7.3.1 Introduces Java EE 7 Support (Geertjan Wielenga)
– Java EE 7 SDK and GlassFish Server Open Source Edition 4.0 Now Available [Arun Gupta, Miles to go … weblog among Oracle technical blogs, June 12, 2013]
Java EE 7 (JSR 342) is now final!
I’ve delivered numerous talks on Java EE 7 and related technologies all around the world for the past several months. I’m loaded with excitement to share that the Java EE 7 platform specification and implementation are now final.
The platform has three major themes:
- Deliver HTML5 Dynamic Scalable Applications
  - Reduce response time with low-latency data exchange using WebSocket
  - Simplify data parsing for portable applications with standard JSON support
  - Deliver asynchronous, scalable, high-performance RESTful services
- Increase Developer Productivity
  - Simplify application architecture with a cohesive integrated platform
  - Increase efficiency with reduced boilerplate code and broader use of annotations
  - Enhance application portability with standard RESTful web service client support
- Meet the most demanding enterprise requirements
  - Break down batch jobs into manageable chunks for uninterrupted OLTP performance
  - Easily define multithreaded concurrent tasks for improved scalability
  - Deliver transactional applications with choice and flexibility
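The WebSocket theme above maps to JSR 356. As a minimal sketch (not from the original post; the `/echo` path and class name are illustrative), this is what the annotated server-endpoint style looks like; it assumes a Java EE 7 container such as GlassFish 4, which discovers the class at deployment and manages the socket lifecycle:

```java
import javax.websocket.OnMessage;
import javax.websocket.server.ServerEndpoint;

// Minimal JSR 356 annotated server endpoint: the container invokes
// onMessage for every incoming text frame on the connection.
@ServerEndpoint("/echo")  // reachable at ws://host:port/<context-root>/echo
public class EchoEndpoint {

    @OnMessage
    public String onMessage(String message) {
        // Returning a String sends it back to the same client as the reply.
        return "echo: " + message;
    }
}
```

The low-latency, bidirectional exchange promised in the first theme comes from the container, not from application plumbing, which is the point of the new API.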
This “pancake” diagram of the major components helps in understanding how they work with each other to provide a complete, comprehensive, and integrated stack for building your enterprise and web applications. The newly added components are highlighted in orange:
In this highly transparent and participatory effort, there were 14 active JSRs:
- 342: Java EE 7 Platform
- 338: Java API for RESTful Web Services 2.0
- 339: Java Persistence API 2.1
- 340: Servlet 3.1
- 341: Expression Language 3.0
- 343: Java Message Service 2.0
- 344: JavaServer Faces 2.2
- 345: Enterprise JavaBeans 3.2
- 346: Contexts and Dependency Injection 1.1
- 349: Bean Validation 1.1
- **352: Batch Applications for the Java Platform 1.0**
- **353: Java API for JSON Processing 1.0**
- **356: Java API for WebSocket 1.0**
- **236: Concurrency Utilities for Java EE 1.0**
The newly added components are highlighted in bold.
And 9 Maintenance Release JSRs:
- 250: Common Annotations 1.2
- 322: Connector Architecture 1.7
- 907: Java Transaction API 1.2
- 196: Java Authentication Service Provider Interface for Containers
- 115: Java Authorization Contract for Containers
- 919: JavaMail 1.5
- 318: Interceptors 1.2
- 109: Web Services 1.4
- 245: JavaServer Pages 2.3
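Among the newly added JSRs, 353 gives the platform its first standard JSON API. As a hedged illustration (not from the original post; the sample document is made up), the object-model flavor of the API reads like this, assuming the `javax.json` API and an implementation such as the one bundled in GlassFish 4 are on the classpath:

```java
import java.io.StringReader;
import javax.json.Json;
import javax.json.JsonObject;

public class JsonpDemo {
    public static void main(String[] args) {
        // JSR 353 object-model API: parse a JSON document into an
        // immutable JsonObject without any third-party library.
        JsonObject doc = Json.createReader(
                new StringReader("{\"jsr\":353,\"name\":\"JSON Processing\"}"))
                .readObject();

        System.out.println(doc.getInt("jsr"));      // prints 353
        System.out.println(doc.getString("name"));  // prints JSON Processing
    }
}
```

There is also a streaming flavor (`JsonParser`/`JsonGenerator`) in the same JSR for large documents.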
Ready to get rolling?
Binaries
Tools
- NetBeans 7.3.1
- GlassFish Tools for Kepler (Technology Preview)
- Maven Coordinates
Docs
- Java EE 7 Whitepaper
- Java EE 7 Tutorial (html pdf)
- First Cup Sample Application
- Java EE 7 Hands-on Lab
- Javadocs (online download)
- Specifications
- All-in-one GlassFish Documentation Bundle
A few articles have already been published on OTN:
- What’s new in JMS 2.0: Part 2 (Jun 2013)
- What’s new in JMS 2.0: Part 1 (May 2013)
- Java EE 7 and JAX-RS 2.0 (Apr 2013)
- JSR 356, Java API for WebSocket (Apr 2013)
- Ten ways in which JMS 2.0 means writing less code (Apr 2013)
- Higher Productivity and Embracing HTML5 with Java EE 7 (Feb 2013)
And more are coming!
This blog has also published several TOTD on Java EE 7:
- TOTD #212: WebSocket Client and Server Endpoint
- TOTD# 211: Chunked Step using Batch Applications
- TOTD #210: Consuming and Producing JSON using JAX-RS Entity Providers
- TOTD #203: Concurrency Managed Objects
- TOTD #202: Resource Library Contracts in JSF 2.2
- TOTD #199: Java EE 7 and NetBeans IDE
- TOTD #198: JSF 2.2 Faces Flow
- TOTD #196: Default DataSource in Java EE 7
- TOTD #194: JAX-RS Client API and GlassFish 4
- TOTD #192: Batch Applications in Java EE 7
- TOTD #191: Simple JMS 2.0 Sample
- TOTD #189: Collaborative Whiteboard using WebSocket in GlassFish 4
- TOTD #188: Non-blocking I/O using Servlet 3.1
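Several of the tips above (TOTD #191, plus the “Ten ways in which JMS 2.0 means writing less code” article earlier) center on the JMS 2.0 simplified API. As a rough sketch, with an illustrative JNDI name and assuming a Java EE 7 container that injects the `JMSContext`:

```java
import javax.annotation.Resource;
import javax.ejb.Stateless;
import javax.inject.Inject;
import javax.jms.JMSContext;
import javax.jms.Queue;

// JMS 2.0 "simplified API": the container injects a JMSContext, so the
// Connection/Session/MessageProducer boilerplate of JMS 1.1 disappears.
@Stateless
public class OrderNotifier {

    @Inject
    private JMSContext context;            // container-managed, auto-closed

    @Resource(lookup = "jms/orderQueue")   // illustrative JNDI name
    private Queue orderQueue;

    public void notifyNewOrder(String payload) {
        // One line replaces roughly ten lines of JMS 1.1 setup and
        // teardown; a String body is wrapped in a TextMessage for you.
        context.createProducer().send(orderQueue, payload);
    }
}
```

This is the kind of boilerplate reduction the “Increase Developer Productivity” theme refers to.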
All the JSRs have been covered in the Java Spotlight podcast:
- #136: Paul Parkinson on JSR 907/JTA 1.2
- #135: Marina Vatkina on JSR 318/Interceptors 1.2
- #134: Kin-man Chung on JSR 341/Expression Language 3.0
- #133: Sivakumar Thyagarajan on JSR 322/Connectors 1.7
- #132: Shing-Wai Chan on JSR 340/Servlet 3.1
- #131: Nigel Deakin on JSR 343/JMS 2.0
- #130: Santiago Pericas-Geertsen on JSR 339/JAX-RS 2.0
- #129: Anthony Lai on JSR 236/Concurrency Utilities for Java EE 1.0
- #126: Jitendra Kotamraju on JSR 353/JSON 1.0
- #124: Chris Vignola from JSR 352/Batch 1.0
- #119: Emmanuel Bernard on JSR 349/Bean Validation 1.1
- #117: Danny Coward on JSR 356/WebSocket 1.0
- #115: Ed Burns on JSR 344/JSF 2.2
- #109: Pete Muir on JSR 346/CDI 1.1
- #90: Marina Vatkina on JSR 345/EJB 3.2
- #84: Anil Gaur on JSR 342/Java EE 7
The latest issue of Java Magazine is also loaded with tons of Java EE 7 content:
Media coverage has started showing up as well …
And you can track lot more here.
You can hear the latest and greatest on Java EE 7 by watching replays from the launch webinar. The webinar consists of:
- Strategy Keynote
- Technical Keynote
- 16 Technical Breakouts with JSR Specification Leads
- Customer, partner, and community testimonials
- And much more
Do you feel enabled and empowered to start building Java EE 7 applications?
Just download Java EE 7 SDK that contains GlassFish Server Open Source Edition 4.0, tutorial, samples, documentation and much more.
Enjoy!
Previous situation:
From Oracle Database Cloud Service [Oracle presentation, Feb 15, 2013]
as well as: New Java Resources for Windows Azure! [Windows Azure blog, July 31, 2012]
… Make the Windows Azure Java Developer Center your first stop for details about developing and deploying Java applications on Windows Azure. We continue to add content to that site, and we’ll describe some of the recent additions in this blog.
Using Virtual Machines for your Java Solutions
We rolled out Windows Azure Virtual Machines as a preview service last month; if you’d like to see how to use Virtual Machines for your Java solutions, check out these new Java tutorials. …
New in Access Control
Included in the June 2012 Windows Azure release is an update to the Windows Azure Plugin for Eclipse with Java (by Microsoft Open Technologies). …
The Java part of this partnership is dating back to GlassFish and Java EE 6 everywhere, even in the Azure cloud! [Oracle’s The Aquarium blog, Jan 18, 2011]
Microsoft’s technical architect David Chou has a detailed blog entry on how to run a recent GlassFish 3.1 build on the Microsoft Azure Platform (wikipedia). The article builds on this other recent blog entry on running Java applications in Azure and adds GlassFish-specific instructions.
In Azure terminology, the article discusses setting up a Worker Role using Visual Studio, reserving Ports, setting up a Startup Task (for the JVM), and configuring the Service, GlassFish in this case. This uses Windows Server 2008 (a GlassFish supported platform) and a zip install of GlassFish.
It’s early days (need best practices on working around some of the cloud-inherent limitations) but with this support of GlassFish, the Azure platform now has full support for Java EE 6!
which then was followed with a Java wishlist for Windows Azure [Arun Gupta, Miles to go … weblog among Oracle technical blogs, Feb 11, 2011]
TOTD #155 explains how to run GlassFish in Windows Azure. It works but, as evident from the blog, it’s not easy and intuitive. It uses a Worker Role to install the JDK and GlassFish, but the concepts used are nothing specific to Java. Microsoft has released the Azure SDK for Java and the AppFabric SDK for Java, which is a good start, but there are a few key elements missing IMHO. These may be known issues but I thought of listing them here while my memory is fresh 🙂
Here is my wish list to make Java better on Windows Azure:
- Windows Azure Tools for Eclipse has “PHP Development Toolkit” and “Azure SDK for Java” but no tooling from the Java perspective. I cannot build a Java/Java EE project and say “Go Deploy it to Azure” and then Eclipse + Azure do the magic and provide me with a URL of the deployed project.
- Why do I need to configure IIS on my local Visual Studio development for deploying a Java project?
- Why do I have to explicitly upload my JDK to Azure Storage? I’d love to specify an element in the “ServiceConfiguration” or wherever appropriate which should take care of installing the JDK for me in the provisioned instance. And also set JAVA_HOME for me.
- Allow to leverage clustering capabilities of application servers such as GlassFish. This will also provide session-failover capabilities on Azure 🙂
- Sticky session load balancing.
- If Windows VM crashes for some reason then App Fabric restarts it which is good. But I’d like my Java processes to be monitored and restarted if they go kaput. And accordingly Load Balancer switches to the next available process in the cluster.
- Visual Studio tooling is nice but allow me to automate/script the deployment of project to Azure.
- Just like the Web, Worker, and VM roles, how about a Java role?
- And since this is a wishlist: NetBeans is the best IDE for Java EE 6 development. Why not have a NetBeans plugin for Azure?
- A better integration with Java EE APIs and there are several of them – JPA, Servlets, EJB, JAX-RS, JMS, etc.
- The “happy scenario” where everything works as expected is good, but that rarely happens in software development. The availability of debugging information is pretty minimal during the “not so happy scenario”. Visual Studio should show more information if the processes started during “Launch.ps1” cannot start correctly for some reason.
And I’m not even talking about management, monitoring, administration, logging etc.
Thank you Microsoft for a good start with Java on Azure, but it’s pretty basic right now and needs work. I’ll continue my exploration!
Christmas is coming later this year … and I’ll be waiting 🙂
See also:
– Running your Java EE 6 Applications in the Cloud (presentation by Arun Gupta, Java EE & GlassFish Guy, May 11, 2011) with agenda as seen on the right
– Using Java™ Technology in the Windows Azure Cloud via the Metro Web Services Stack [joint Sun Microsystems and Microsoft presentation, June 6, 2009]
Deep technical evangelism and development team inside the DPE (Developer and Platform Evangelism) unit of Microsoft
It is a fantastic gig – we’re working with developers, designers, and IT pros from across the industry – from the consumer to enterprise to startups to hobbyists – helping them create amazing next generation apps, build the frameworks that make all this easier, and share our experiences with the community.
[John Shewchuk, Technical Fellow at Microsoft, Chief Technology Officer for the Microsoft Developer Platform]
Source: My New Gig [JohnShew‘s MSDN Blog, May 12, 2013], from which the following excerpts add more information to the above mission statement:
To do this work I have an incredible team with people like Eric Schmidt, who leads our consumer applications efforts and has done ground-breaking work on projects like [NBC’s] Sunday Night Football (which is up for a Sports Emmy for Outstanding Live Sports Series).
[In fact, on May 7 the Sports Emmy was awarded for the fifth time, the last four of those won by the program using technology that started with Silverlight 3.0 and IIS Smooth Streaming in 2009 for Sunday Night Football live streaming, with a highly advanced and customized viewing experience. This led to a continuously evolving and expanding cooperation, which culminated on April 9th, 2013 in the announcement that Microsoft Corp. and NBC Sports Group are partnering to use Windows Azure Media Services across NBC Sports’ digital platforms, including NBCSports.com, NBCOlympics.com and GolfChannel.com. The new alliance aims to deliver live and on-demand programming of more than 5,000 hours of sporting events plus the Sochi 2014 Olympic Games for NBC Sports’ digital platforms. More details about that later on.]
Patrick Chanezon just joined us from VMware where he was driving their cloud and tools developer relations – he has a ton of expertise in the open source space which will be increasingly important given our new Azure IaaS support for Linux.
… we also get to play with all the newest and coolest technologies we’re delivering to developers these days – everything from Windows to Xbox to Windows Phone – and we connect it to the latest cloud services from Azure, Office, and Bing.
James Whittaker [now Partner Technical Evangelist at Microsoft] – a known industry disruptor and incredible speaker – joins us from Bing, where he has been leading the development team making Bing knowledge available programmatically; many people may know him from his viral blog post on why he left Google for Microsoft.
As for John Shewchuk himself, he describes his latest achievement in the same post as follows:
As many of you know, for the last few years I’ve been plugging away deep in the plumbing of enterprise identity and Reimagining Active Directory for the cloud. It’s been a great experience and I couldn’t be more proud of all the cool stuff that has gone on across the industry to enable the world of claims-based identity and identity as a service. Over the years I’ve gotten to know many identity leaders including Kim Cameron, Craig Burton, and Andre Durand and have worked with many other great people at companies like Shell, Sun, IBM, Google, and Facebook.
Building on all this collaboration, just a few weeks ago here at Microsoft we reached a major milestone with the official release of Windows Azure Active Directory (AAD). Today all of Microsoft’s major organizational cloud services build on AAD – this includes Azure, Office 365, and Dynamics. AAD supports almost 3 million organizations through 14 global data centers with 99.97% availability. This level of scale and availability is unprecedented for a turnkey identity management service – it’s a huge accomplishment. Although I love the SaaS and scale aspects of AAD, I’ve spent my career working with developers – so I’m stoked that we have made all this available to developers through new technologies like the AAD Graph API.
It is always sad to move on from a great project, but with the release of AAD it is an ideal time to transition and start a new role. So I’m happy to announce that I’m headed to Microsoft’s Developer & Platform Evangelism (DPE) team, working for Steve Guggenheimer. My role is to lead the team doing the deep technical evangelism and development here in DPE.
If one adds to that all of John Shewchuk’s contributions from the Experience section of his LinkedIn profile:
Technical Fellow
Microsoft
March 2008 – Present (5 years 2 months)
Current responsibilities include delivering Windows Azure Identity, Access, and Directory Services and defining platform strategy for Microsoft’s Business Productivity Online Services (BPOS).
Recent deliverables include Windows Azure Access Control and Application Messaging / Service Bus Services, SQL Azure, and Active Directory.
Member of the Server and Tools Business (STB) Technical Leadership Team. Key participant in the definition of overall technical and business strategy for several divisions across STB.
Distinguished Engineer
Microsoft
2005 – 2008 (3 years)
Delivered Windows Communication Foundation (WCF).
Responsible for Active Directory technical strategy. Worked to unify Active Directory product suite. Released Active Directory Federation Services (ADFS).
Software Engineer
Microsoft
1996 – 2005 (9 years)
Member of architecture team that drove the first and subsequent releases of .NET.
Drove transformation of Visual Studio to enable web development.
Authored and drove technical strategy for Web standards. Responsible for key cross-industry collaborations with IBM, Sun, and many others. Key participant in defining strategy for enterprise development.
Group Program Manager
Microsoft
1993 – 1996 (3 years)
Drove the first release of Visual Studio.
Delivered web development tools including Visual InterDev. Later these became the basis for Visual Studio web tools and web execution platform.
Delivered advanced browser features including 2D layout and progressive rendering. Broad range of patents covering many core web technologies.
Vice President and Founder
Daily Planet Software
1990 – 1993 (3 years)
Microsoft acquired Daily Planet Software in Q4CY93 [and morphed it into “Blackbird,” the online-content authoring system for MSN].
so after adding all those contributions, not only to Microsoft but to software engineering in general, one can really understand what a true larger-than-life figure John Shewchuk is. Also note that Microsoft’s DPE unit never had such an outstanding contributor on its staff, not even in the units organizationally preceding it (DRG (Developer Relations Group) formed in 1984, ADCU (Application Developer Customer Unit) introduced in 1997, evolved into DPE in October 2011). It is also the first time that Microsoft DPE has a developer-related CTO organization properly staffed with excellent contributors. According to unofficial information, this team, central to DPE, could number over 100 people and growing. At the moment we know only the leadership figures of the CTO organization:
- James Whittaker for the partner activities (as coming from his new LinkedIn title given above)
– Patrick Chanezon “initially focused on the enterprise market” (as described by Chanezon in the below details)
– Eric Schmidt leading the consumer applications efforts (as explicitly stated by Shewchuk above)
So at this point we can understand this extremely important, we might say strategic, addition to the DPE unit only via the professional stance of its leadership figures, including the leader of the team, Shewchuk himself. This is why, instead of the usual details sections, I am providing the following one:
More light on the leaders of the new deep technical evangelism and development team:
– James Whittaker’s Quality Software Crusade from Academia to Microsoft, then Google and now back to Microsoft [this same ‘Experiencing the Cloud’ blog, March 14 – April 12, 2012]
– James Whittaker @docjamesw 8:19 AM – 8 Apr 13
I gave a blunt, incendiary talk at MS. My punishment: they made it my day job. Watch out world, Microsoft just gave me a speaking role.
– James Whittaker @docjamesw 3:54 PM – 8 May 13
I finally “met” the famous @maryjofoley … nice talking to you today.
from which Mary Jo Foley published the following in her Microsoft builds a deep-tech team to attract next-gen developers [ZDNet, May 13, 2013]
Whittaker’s most recent gig at Microsoft was development manager for the Microsoft knowledge platform as part of the Bing team.
“When Microsoft talks about devices and services, that’s a two-legged stool,” said Whittaker. “The third leg is knowledge. We’re embedding knowledge into everything from Xbox, to Office, to third-party products.”
Whittaker said “dev platform” is no longer simply the operating system and related application programming interfaces (APIs). It’s the whole ecosystem, he said, including information that Bing extracts from the Web, like catalogs, weather, and maps. The goal is to make this available inside applications built by both Microsoft and third-party developers.
“Actions can be performed on these entities. We have hundreds of millions of things we can provide that go beyond the blue links (in search engines),” Whittaker said.
– A New Era of Computing [Channel 9 video of the ALM Summit 3 plenary session by James Whittaker, Jan 30, 2013], click on the image to watch (highly recommended)
History will look back and identify September 2012 as the dawn of a new computing paradigm and the official end of the “Search-and-Browse” era [of the 2000s] that Google dominated. James Whittaker talks about this momentous event, shares some history about prior eras, and looks ahead to what this new era brings.
Explanation from the video:
[19:58] September 2012 is “when total search volume went down for the first time. We don’t need to search anymore. It turns out that if you search long enough you find a bunch of stuff, and you don’t have to search for it anymore.”
[21:00] “Apps are ingesting the web too. Apps are better at searching than browsers and search engines.”
[22:08] “Apps are fundamentally a better way to search because they’re only looking at the part of the web you’ve been interested in. How do we know what you are interested in? Because you are using the app.
So our habits are changing and this era has ended.”
Past the middle [38:26 – 40:00] he emphasizes the 3 “Experiences” among Google’s current Top 10 revenue earners, rather than “Apps”, in the era “when the web goes away”, as leading to “Data is currency” for the new era:
…
At the very end of his presentation (from [46:09] to [52:20]), as the forward-looking “Know & Do” experience, he describes, with a kind of screenshot demonstration, the “I need a vacation” experience, which should naturally start in one’s calendar and end there as well.
– Hello Microsoft! [Patrick Chanezon’s blog, May 13, 2013]
On April 29th, 2013, I joined Microsoft’s legendary Developer and Platform Evangelism team, where I will initially focus on the Enterprise market. I will report to Technical Fellow John Shewchuk, joining his new team of top-notch technical evangelists, like Xoogler James Whittaker and Microsoft veteran Eric Schmidt. Mary Jo Foley wrote a nice piece about our team on ZDNet today. I will be based in the Microsoft San Francisco office.
How did it happen?
I spent most of my career competing with Microsoft, at Netscape, Sun, Google and VMware. Competition builds respect, competitors force you to question your assumptions and to constantly evolve. For many of my friends, this move came as a total shock. What made me open to the idea of joining Microsoft was a presentation by Scott Guthrie about Windows Azure at NodeConf 2012 last summer. He presented from a Mac laptop, launched Google Chrome, went to the Cloud9 IDE, edited a Node app pulled from GitHub, and pushed it to Azure from the cloud IDE: to me this indicated a real change of mentality at Microsoft, and a new openness. Clearly they had listened to what developers ask from a cloud platform. Later on, when my friend Srikanth Satyanarayana pinged me to start conversations with Microsoft, I was open to it. I met with Satya Nadella, and realized that our visions for where the cloud was going were very aligned. Further conversations with Scott Guthrie about Azure, and with John Shewchuk and Steve Guggenheimer about developer evangelism, convinced me this was an adventure I had to take!
Why Microsoft?
Joining Microsoft boils down to 4 reasons: People, Learning, Technology, Impact.
People: in my late 30s I realized that the people you work with, for and around are as important as what you’re working on. Microsoft has many people I have admired from the outside, like Dare Obasanjo, Erik Meijer, Scott Guthrie, Jon Udell, Scott Hanselman, Jeff Sandquist, Andrew Shuman or Anders Hejlsberg. The team I join has a fantastic roster of A-players with whom I’ll have fun and from whom I will learn.
Learning: I’m a learner at heart. I am curious, I read a lot, and I like to learn from people I work with. I also love to share what I learned with others. My kids loved this book called My Friends, by Taro Gomi, which goes like this: “I learned to walk from my friend the cat, I learned to jump from my friend the dog…”.
In my career it worked the same way: I learned algorithmics from my teacher Christian Vial, I learned internet protocols from my friend Nicolas Pioch, I learned open source from my friend Alejandro Abdelnur, I learned social media from my friend Loic Lemeur, I learned developer relations from my friend Vic Gundotra, I learned platform strategy and storytelling from my friend Charles Fitzgerald… I love doing developer relations, and my two mentors in this area over the past 8 years, Vic and Charles, both came from the Microsoft DPE team. I’m coming to the source for more learning. This team is more than 1,000 people worldwide, and over the past 10 years they defined what tech evangelism is about: they operate at a larger scale and cover a wider scope than any of the teams I worked with. I am very excited to join them.
Technology: Windows Azure is Enterprise ready, more open than people think, and is a complete platform, from infrastructure to services, mobile and Big Data. Azure has matured a lot in the past few years: it covers IaaS, PaaS and SaaS; its PaaS service is multi-framework and multi-service, with a marketplace of add-ons; it has a mobile backend as a service for Windows Phone, iOS, Android and HTML5; and it includes Hadoop and Big Data services. It is in production today, has been battle tested for years as the base for many Microsoft first-party apps and services, and is ready for the Enterprise, with a true public/private/hybrid solution: with Windows Server 2012, System Center and Azure you can start building your hybrid cloud today. The team ships important new features regularly, my favorite being the point-to-site and software VPN features announced a few weeks ago, which will drastically lower the barrier to creating hybrid clouds. Azure is not a Windows/.NET-only platform; it is more open than people give it credit for: you can provision Linux VMs, and the PaaS supports .NET, Java, PHP, Node, Python and Ruby, with open source (Apache 2 license) SDKs on GitHub and an Eclipse plugin, built by the Microsoft Open Technologies team. Scott Guthrie gives a very good overview of Windows Azure in this video from the Windows Azure Conf 3 weeks ago.
Impact: as a kid, I was reading a lot of science fiction, and got my first computer (a TRS-80) when I was 10 years old. As I explain in many of my presentations (like Portrait of the developer as The Artist), my childhood dreams were to change the world through technology, and more specifically computers. My dreams are far from being fulfilled today: it is true that we have more powerful machines and software tools, and technology changed the world in many aspects, but machines are still hard to program, and software engineering needs to evolve to let us work at a higher level of abstraction.
The move to a devices and services world is an important architecture change like we see every 20 years in the software industry. Cloud platforms have the potential to help developers build smarter applications faster, and change entire areas of the human experience. It has started to happen in the consumer applications space, but the next big wave of change is the consumerization of Enterprise IT, where developers and IT professionals can completely transform the way enterprises work, driving business value faster, enabling new capabilities and business models. My goal is to help them in this transformation, and Microsoft is the place where I can have the most impact.
Here’s a quick video to summarize it all: developers, developers, developers, think big and look up at the sky, its color is Azure!
Developers, Developers, Developers A homage to you, developers I interacted with around the world, in the past 8 years doing developer relations at Google and VMware. http://wordpress.chanezon.com/2013/05/10/goodbye-vmware/
If you have never tried Azure, or have tried it a year ago, sign up for a free trial and give it a go! I hope to see many of you at the Build conference in June in San Francisco.
– Mary Jo Foley published the following about Chanezon in her Microsoft builds a deep-tech team to attract next-gen developers [ZDNet, May 13, 2013]:
“We’re at a deep architectural inflection point right now in the enterprise,” said Chanezon. “Devs need new ways of working, new apps and new frameworks. There’s the whole dev-ops movement, plus the move to become more agile.”
Chanezon said he joined Microsoft because he felt the company’s new devices plus services strategy really embraces these changes. He said while Google had devices and services, too, it didn’t have the private/hybrid cloud component which Microsoft also brings to the enterprise-dev table. As a big believer in the power and potential contribution of open source, he said he was encouraged to see that Azure has become a very open-source-friendly platform.
– Mary Jo Foley published the following about Schmidt in her Microsoft builds a deep-tech team to attract next-gen developers [ZDNet, May 13, 2013]:
Schmidt joined DPE six years ago [as director of DPE’s Media and Advertising Initiatives team], bringing his media specialization to the media and entertainment, social and gaming verticals. These are “where people are thinking about attaching devices to a lifestyle,” he said.
A big target for Schmidt is mobile developers, specifically those writing for iOS and Android who may not know how their skills can be transferred to Windows 8 and Windows Phone 8. “We’re showing them how what they already know is correlated,” he said, while playing up the message that the iOS and Android gold mines are drying up.
– Silverlight delivers online viewing experience for Sunday Night Football [Silverlight and Windows Phone SDK blog, Sept 10, 2009]
The NFL and NBC will be delivering the entire Sunday Night Football season by using Silverlight 3.0 and IIS Smooth Streaming. The first game of the season will be broadcast tonight, with the Tennessee Titans vs. the Pittsburgh Steelers. Game starts at 5:00pm PST and you can watch online for free: http://snfextra.nbcsports.com/.
Here are a few of the benefits Silverlight delivers:
- A full screen video player that is capable of delivering 720p HD video. TV quality on the web.
- A main HD video feed, plus 4 user-selectable alternate synchronized camera feeds that allows users to switch camera angles themselves. Your TV can’t do that.
- Adaptive smooth streaming of live HD video, which enables the video player to automatically switch bitrates on the fly depending on networking/CPU conditions. No buffering/stuttering experience.
- DVR support of the live video, including Pause, Instant Replay, Slow Motion, Skip Forward/Back. You can pause and rewind on live video.
- Play-by-play data (touchdowns, fumbles, etc) inserted as tooltip chapter markers on the scrubber at the bottom allowing you to quickly seek to key moments. A smarter, contextual DVR.
- Highlights of major plays created within minutes of the play. NBC is cutting on-demand highlights and publishing them on-the-fly with Smooth Streaming.
- Sideline interviews with the players. No more channel surfing, you are one click away from additional content.
- Game statistics. These are live stats coming directly in real-time from the NFL.
- Game commentary and Q&A with the SNF hosts. Chat with the live TV broadcasters.
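The adaptive bitrate switching described above can be sketched as a simple selection heuristic: pick the highest encoded bitrate that fits within a safety margin of the measured throughput. This is an illustrative toy, not the actual IIS Smooth Streaming algorithm, and the bitrate ladder below is made up:

```java
import java.util.Arrays;

public class BitratePicker {
    // Pick the highest available bitrate not exceeding a safety margin
    // of the measured throughput; fall back to the lowest one otherwise.
    static int pickBitrate(int[] availableKbps, double measuredKbps, double safety) {
        int[] ladder = availableKbps.clone();
        Arrays.sort(ladder);
        double budget = measuredKbps * safety;
        int choice = ladder[0]; // worst case: lowest quality, but keep playing
        for (int b : ladder) {
            if (b <= budget) choice = b;
        }
        return choice;
    }

    public static void main(String[] args) {
        int[] ladder = {300, 600, 1200, 2400, 3450}; // kbps, illustrative
        // With ~2 Mbps measured and a 0.8 safety factor, 1200 kbps fits.
        System.out.println(pickBitrate(ladder, 2000, 0.8)); // prints 1200
    }
}
```

A real player re-evaluates this choice per video chunk, which is what makes the “no buffering/stuttering experience” possible as network and CPU conditions change.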
– Microsoft Silverlight and NBC Bring Winter Games to the Web in High Definition [Microsoft feature story, Feb 12, 2010]
Microsoft Silverlight is the player of choice for NBC’s online viewing experience of the 2010 Winter Games in Vancouver.
REDMOND, Wash. —Feb. 12, 2010 — NBC and Silverlight have once again teamed up to bring Winter Games coverage to the Web – this time in high definition.
For the next 16 days, people all over the world will watch the Winter Games on television. Increasingly, they’ll be tuning in online as the world’s top athletes compete for gold and glory.
NBC will once again use Silverlight, Microsoft’s fast-growing, smooth-streaming video and animation plug-in for browsers, to bring full coverage and highlights to NBCOlympics.com. In 2008 for Beijing, the NBC-Silverlight partnership yielded not only revolutionary Web coverage of a sporting event, but a record number of viewers: 52.1 million people logged on to watch 9.9 million hours of video.
At that time the Silverlight platform was so new that NBC also offered Windows Media Player alongside it. After the success of Beijing and with nearly 50 percent of Internet-connected devices running Silverlight, NBC decided to consolidate on Silverlight for the Vancouver Games.
[Photo caption: Microsoft employees Jason Suess (left) and Eric Schmidt take a break in an NBC production studio.]
In addition, NBC and Silverlight teams are working together on other major sporting events such as Wimbledon and NFL Sunday Night Football.
“It’s really been amazing to see that partnership and friendship with NBC grow over the last year and a half,” says Jason Suess, principal technical evangelist for Silverlight. “I expect many more events as our partnership gets tighter and tighter.”
With Silverlight, viewers can rewind and fast forward the action, or use pause and slow-motion. The player also scales the quality of the video to whatever a user’s machine can handle, delivering up to 720p – the highest resolution possible under current digital television standards.
“After Beijing, what we heard loud and clear was if you can provide a higher quality experience, users will definitely spend more time in that experience,” Suess says.
The Silverlight team also worked with NBC to provide special behind-the-scenes tools for the network, including the ability to insert mid-stream advertising, and a rough cut editor that allows NBC personnel to quickly edit and post highlights on the Web.
“With Michael Phelps going for eight gold medals in Beijing, every time he’d win there would be a massive rush to the site to see him winning the latest gold,” Suess says. “The challenge there was for NBC to have the content on the site in time to meet the demand. Now editors can go in literally while a (video) stream is happening and cut a highlight.”
Suess said the Winter Games are at a different scale from the massive Summer Games, with far fewer events and more niche sports. Still, Microsoft has worked hard to provide the most engaging photo and video experience possible, he says.
– Silverlight Powered Emmy Nominated Sunday Night Football [Silverlight Team on Silverlight Blog, April 19, 2010]
This NFL season, NBC thrilled football fans by broadcasting Sunday Night Football on 2 screens – television and online. And now, as a result of this great work, Sunday Night Football Extra and NBC Sports have been nominated for a 2010 Sports Emmy® Nomination! NBC Sports teamed with Microsoft Silverlight and Vertigo to design and develop a visually stunning, interactive online video experience. The Sunday Night Football Extra Player featured Microsoft Smooth Streaming technology providing a customized viewing experience that smoothly and automatically adjusted to individual users’ bandwidth and computer’s performance in real time. The SNF Extra Player also touted an interactive user experience featuring an unprecedented five synchronized camera angles all in true 720p HD, slow-motion replay, full DVR controls, real time key plays integration, real-time statistics, and live interaction with commentators.
The Sports Emmy® Awards will be held in New York City on Monday, April 26, 2010, and will recognize outstanding achievement in sports television coverage. This nomination is really the culmination of the innovative thinking, hard work and dedication demonstrated by the team that NBC Sports, Vertigo and a select team of key partners brought together for Sunday Night Football Extra — and Silverlight is the engine that made it possible. If you want to learn more about the nomination, you can also visit Vertigo’s site at http://bit.ly/vertigo-snf.
The Result?
- Number of Games: 17 football games streamed via Silverlight
- Average time tuned in: 29 minutes (about 24 minutes longer than average time spent tuning in on broadcast TV)
- Number of Viewers: Over 2.2 million football fans tuned in on NBCSports.com to watch the Season live and in full HD
- Hours of Video: Approximately 1 million hours of video streamed
- Peak users: 38,500 total peak concurrent users
- What technology made this possible? IIS 7, IIS Media Services and the Silverlight Rough Cut Editor
Tons of great information about how SNF came together online can be found in the case study and whitepaper live on Microsoft.com.
– Interactive Media Player to Bring PDC to Developers Worldwide [Microsoft feature story, Oct 27, 2010]
A new interactive media player will enable developers worldwide to virtually attend this week’s Professional Developers Conference at microsoftpdc.com. Using Silverlight and Windows Azure, Microsoft is providing many of the features NBC used when broadcasting the Olympics online.
…
With the player, Microsoft is introducing a new way of bringing a live, in-person event to a much broader audience, said Eric Schmidt, Microsoft’s senior director of Developer Platform Evangelism. “The goal is to narrow the gap between audience and speaker,” he said.
Schmidt heads up the team that has helped stream a number of major events recently, including the 2010 U.S. Open Golf Championship, the 2010 Wimbledon Championship, and NBC’s Sunday Night Football. The team’s objective has been to reach large online audiences with immersive and interactive experiences. Along the way, they developed new ways of delivering multi-camera video and built new interactive models inside what has traditionally been just a video player. The team also built out frameworks so that customers and partners can create similar experiences leveraging Microsoft’s platform technologies in a turnkey manner.
With the PDC10 virtual player, Microsoft is doing things it couldn’t have done just a few years ago, said Schmidt. All session content will be available live and on-demand in HD quality, and viewers will have the ability to pause and rewind the video at any point. They also can toggle back and forth between different camera feeds, allowing a viewer to cut between a presenter and the presentation material.
The PDC player has a number of built-in interactive features. Real-time polling will enable speakers to query both the online and in-person audience for live feedback. Live Q&A will help the audience interact with the presenters while they’re delivering a session. And an inline Twitter feed will extend the conversation beyond the online player and into the Twitter domain.
…
– NBC SPORTS GROUP COLLECTS 11 SPORTS EMMY AWARDS, MOST OF ANY SPORTS MEDIA COMPANY [press release, May 7, 2013]
London Olympics Garners Five Awards, Including Outstanding Live Event Turnaround
Sunday Night Football Wins Fifth Consecutive Emmy for Outstanding Live Sports Series; Super Bowl XLVI Wins for Outstanding Live Sports Special
Bob Costas, Al Michaels, Cris Collinsworth and Pierre McGuire Honored
NEW YORK – May 7, 2013 – NBC Sports Group won 11 Sports Emmy Awards, the most of any sports media company for the third straight year; the London Olympics received five Emmys, including Outstanding Live Event Turnaround; Super Bowl XLVI won for Outstanding Live Sports Special; Sunday Night Football won its fifth consecutive award for Outstanding Live Sports Series; and Bob Costas, Al Michaels, Cris Collinsworth and Pierre McGuire were all honored in their respective categories at the 34th Annual Sports Emmy Awards, presented tonight by the National Academy of Television Arts and Sciences at Frederick P. Rose Hall, Home of Jazz at Lincoln Center.
MARK LAZARUS, NBC SPORTS GROUP CHAIRMAN: “We could not be more proud of our dedicated team. Tonight is particularly special because we were recognized for our coverage of the London Olympics and the NFL, two properties that touch virtually everyone in the NBC Sports Group – and our on-air commentators. It’s rewarding to know that our talent continues to be recognized year in and year out by our peers.”
Formed in January, 2011, the NBC Sports Group consists of NBC Sports, NBC Sports Network, Golf Channel, NBC Olympics, 11 NBC Sports Regional Networks, two regional news networks, NBC Sports Radio and NBCSports.com.
NBCUniversal’s coverage of the London Olympics was honored with a total of five Emmy Awards in the following categories:
Outstanding Live Event Turnaround;
The George Wensel Technical Achievement Award – NBC, NBC Sports Network, NBCOlympics.com, Bravo, CNBC, MSNBC, Telemundo;
Outstanding Technical Team Studio;
The Dick Schaap Outstanding Writing Award;
Outstanding New Approaches, Sports Programming – NBCOlympics.com.
For the fifth consecutive year, NBC Sports won Outstanding Live Sports Series for Sunday Night Football. NBC Sports has now won the award in six of the last seven years, also winning in 2007 for its NASCAR coverage.
NBC Sports was also honored with the Emmy for Outstanding Live Sports Special for its coverage of Super Bowl XLVI. NBC Sports also received the Emmy in this category for its coverage of Super Bowl XLIII.
Bob Costas was awarded his 25th career Emmy and fifth consecutive for Outstanding Sports Personality-Studio Host. Costas hosted the London Olympics, is the host of Football Night in America, NBC Sports’ acclaimed NFL studio show, and Costas Tonight, which airs on NBC Sports Network. He won the Emmy in the same category last year for his work on Football Night.
Al Michaels was awarded the Emmy Award for Outstanding Sports Personality – Play-by-Play, for his work on Sunday Night Football. For Michaels, who received the Lifetime Achievement Award at the 32nd Annual Sports Emmy Awards in 2011, this marks his seventh career Emmy Award.
Cris Collinsworth was awarded his fifth consecutive Emmy for Outstanding Sports Personality-Sports Event Analyst. This marks Collinsworth’s 14th career Emmy, which includes wins in 2007 and 2008 in the Studio Analyst category for work on Football Night in America.
Pierre McGuire, NBC Sports Group’s “Inside the Glass” analyst for its NHL coverage, was awarded his first career Emmy for Outstanding Sports Personality – Sports Reporter.
– Microsoft Teams Up With NBC Sports Group to Deliver Compelling Sports Programming Across Digital Platforms Using Windows Azure [press release, April 9, 2013]
New alliance aims to deliver live and on-demand programming of more than 5,000 hours of sporting events plus Sochi 2014 Olympic Games for NBC Sports’ digital platforms.
LAS VEGAS — April 9, 2013 — Today at the National Association of Broadcasters Show, Microsoft Corp. and NBC Sports Group announced they are partnering to use Windows Azure Media Services across NBC Sports’ digital platforms, including NBCSports.com, NBCOlympics.com and GolfChannel.com.
Through the agreement, which rolls out this summer, Microsoft will provide both live-streaming and on-demand viewing services for more than 5,000 hours of games and events on devices, such as smartphones, tablets and PCs. These services will allow sports fans to be able to relive or catch up on their favorite events and highlights that aired on NBC Sports Group platforms.
“NBC Sports Group is thrilled to be working with Microsoft,” said Rick Cordella, senior vice president and general manager of digital media at NBC Sports Group. “More and more of our audience is viewing our programming on Internet-enabled devices, so quality of service is important. Also, our programming reaches a national audience and needs to be available under challenging network conditions. We chose Microsoft because of its reputation for delivering an end-to-end experience that allows for seamless, high-quality video for both live and video-on-demand streaming.”
NBC Sports Group’s unique portfolio of properties includes the Sochi 2014 Winter Olympic Games, “Sunday Night Football,” Notre Dame Football, Premier League soccer, Major League Soccer, Formula One and IndyCar racing, PGA TOUR, U.S. Open golf, French Open tennis, Triple Crown horse racing, and more.
“Microsoft is constantly looking for innovative ways to utilize the power of the cloud, and we see Windows Azure Media Services as a high-demand offering,” said Scott Guthrie, corporate vice president at Microsoft. “As consumer demand for viewing media online on any available device grows, our partnership with NBC Sports Group gives us the opportunity to provide the best of cloud technology and bring world-class sporting events to audiences when and where they want them.”
Microsoft has a broad partner ecosystem, which extends to the cloud. To bring the NBC Sports Group viewing experience to life, Microsoft is working with iStreamPlanet Co. and its live video workflow management product Aventus. Aventus will integrate with Windows Azure Media Services to provide a scalable, reliable, live video workflow solution to help bring NBC Sports Group programming to the cloud.
NBC Sports Group and iStreamPlanet join a growing list of companies, including European Tour, deltatre, Dolby Laboratories Inc. and Digital Rapids Corp., which are working with Windows Azure to bring their broadcasting audiences or technologies to the cloud.
In addition to Media Services, Windows Azure core services include Mobile Services, Cloud Services, Virtual Machines, Websites and Big Data. Customers can go to http://www.windowsazure.com for more information and to start their free trial.
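As a rough illustration of the segmented HTTP streaming that services like this are built on (the release does not describe Azure Media Services internals, and the event and segment names below are invented), here is a minimal HLS-style media playlist in RFC 8216 syntax:

```python
# Illustrative sketch only: live video is cut into short segments and listed
# in a playlist that the player polls. Segment URLs here are hypothetical.
segment_duration = 6  # seconds per segment
segments = [f"event42/segment{i:05d}.ts" for i in range(3)]

lines = [
    "#EXTM3U",
    "#EXT-X-VERSION:3",
    f"#EXT-X-TARGETDURATION:{segment_duration}",
    "#EXT-X-MEDIA-SEQUENCE:0",
]
for url in segments:
    lines.append(f"#EXTINF:{segment_duration:.1f},")  # duration of the segment
    lines.append(url)

# For on-demand ("relive or catch up") playback the playlist is closed with
# ENDLIST; a live playlist omits it and keeps appending new segments.
lines.append("#EXT-X-ENDLIST")
playlist = "\n".join(lines)
```

The same segment files can back both the live stream (open playlist) and the on-demand replay (closed playlist), which is one reason this delivery model suits a live-plus-catch-up service.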
– Mary Jo Foley published the following about Shewchuk, the head of the team, in her Microsoft builds a deep-tech team to attract next-gen developers [ZDNet, May 13, 2013]:
“‘The platform’ is now a collection of capabilities across all of our products,” said John Shewchuk, the head of the recently formed technical evangelism and dev team. Our job is “helping devs stitch together solutions with these technologies.”
“Devs” also is a much broader target audience for Microsoft than it once was. Back in the early DPE days, devs meant professional, full-time programmers. The target audience for Microsoft’s new deep-tech team includes anyone who writes a consumer, business or hybrid application. That means startups, enterprise customers and top consumer and business independent software vendors (ISVs).
The Microsoft toolbox from which devs can choose to mix and match includes many technologies that didn’t exist a decade, or even just a few years, ago. They include everything from Windows Azure technologies, to Bing programming interfaces and datasets, to the WinRT framework underlying Windows 8 and Windows Server 2012. Microsoft’s next Xbox, Kinect, Windows Phones, Surfaces, and Perceptive Pixel multitouch displays are among the targets for these technologies.
“This is a playground. We get to work with stuff from all the different Microsoft business groups,” said Shewchuk. “It’s like geek heaven.”
The idea of creating this kind of deep-tech team has been percolating since October 2012, when Microsoft veteran Steve Guggenheimer returned to Microsoft to head up DPE, according to Microsoft execs. Guggenheimer, in conjunction with Server and Tools Business chief Satya Nadella and with the blessing of CEO Steve Ballmer, set out to recruit some deeply technical evangelists with far-flung specializations.
Shewchuk, a 20-year Microsoft veteran and one of the company’s Technical Fellows, agreed to spearhead the team. (Microsoft isn’t saying how large the new team is, but I’ve heard it could be over 100 people in size and growing.) Shewchuk, who is now the Chief Technology Officer for the Microsoft Developer Platform, was working for the last several years on Windows Azure, where he helped the company build Windows Azure Active Directory, Service Bus and SQL services. Shewchuk also was a key contributor to a number of other Microsoft dev technologies, including .Net, Visual Studio, Windows Communication Foundation and the Windows Identity Foundation.
“The idea is to bridge our inside developers to outside developers,” Shewchuk said. “We want to get the top developers to adopt our platform.”
Shewchuk described the new deep-tech team as a place where Microsoft pulls together its own “world-class” developers to exchange ideas among themselves and with the outside world. Because Microsoft’s new stack of technologies is at different points in the maturity cycle, the Microsoft tech team will do everything from building new frameworks, to developing code that ties together disparate products, to making code and templates available for external use through services like GitHub or CodePlex. In some cases, the “developers” who take advantage of these pieces may be Microsoft’s own product teams, which may want to incorporate code (and even the developers who wrote it) directly into their units.
More information:
– John Shewchuk’s Profile [MSDN, May 2013]
John Shewchuk is a Technical Fellow and the CTO for the Microsoft Developer Platform. John leads the team responsible for technical evangelism and development in DPE; his team partners with developers, designers, and IT pros to build next-gen applications using Microsoft’s devices and services, and they share those experiences with the developer community. John has been with Microsoft for almost 20 years. Most recently, John focused on Azure, developing key platform services including Windows Azure Active Directory, Service Bus, and SQL services. He has been a key contributor on a wide range of technologies, including Visual Studio, .NET, WCF, WIF, IE, and AD. John is an advocate of and contributor to open source and Web standards – most recently he drove many of the contributions Microsoft made to OAuth 2. John has a BS in Electrical Engineering from Union College and an MS in Computer Science from Brown University. He lives in Redmond with his wife and four children.
– Microsoft Big Brains: John Shewchuk [Mary Jo Foley for All About Microsoft blog of ZDNet, Nov 20, 2008]
Claim to Fame: One of the masterminds behind “Zurich,” a key component of Microsoft’s Azure cloud infrastructure, and a key player in Microsoft’s Federated Identity work [see also: Ozzie foreshadows ‘Zurich,’ Microsoft’s elastic cloud, same author, same place, July 24, 2008]
– Bytes by MSDN: John Shewchuk and Rob Bagby discuss “Project Dallas” [on YouTube MrAbdoul9 channel, Jan 29, 2010; on Channel 9, Aug 29, 2010] this is where OAuth is first mentioned
– Microsoft unveils AD Azure strategy, ID management reset [John Fontana for Identity Matters blog of ZDNet, May 25, 2012]
After two years of work, Microsoft has unveiled details and its strategy around Active Directory for the cloud, anointing it the centerpiece of a comprehensive online identity management services strategy it thinks will profoundly alter the ID landscape.
The company said changes to the current concepts around identity management need a “reset” to handle the “social enterprise.” Microsoft says it is “reimagining” how its Windows Azure Active Directory (WAAD) service helps developers create apps that connect the directory to SaaS apps and cloud platforms, corporate customers and social networks.
“The term ‘identity management’ will be redefined to include everything needed to provide and consume identity in our increasingly networked and federated world,” Kim Cameron, an icon in the identity field and now a distinguished engineer working on identity at Microsoft, said on his blog. “This is so profound that it constitutes a ‘reset’.”
At the center is WAAD, which is in use today mostly with Office 365 and Windows Intune customers. WAAD is a multitenant service designed for high availability and Internet scale.
In a companion blog post to Cameron’s, John Shewchuk [see also Part 2 of that], a Microsoft Technical Fellow and key cog in the company’s cloud identity engineering, provided some details on WAAD, including new Internet-focused connectivity, mobility and collaboration features to support applications that run in the cloud.
Shewchuk said the aim is to support technologies such as Java, and apps running on mobile devices including the iPhone or other cloud platforms such as Amazon’s AWS.
Shewchuk said WAAD will be the cloud extension to on-premises Active Directory deployments enterprises have already made. The two are married using identity federation and directory synchronization.
He said Microsoft made “significant changes to the internal architecture of Active Directory” in order to create WAAD.
As an example, he said, “Instead of having an individual server operate as the Active Directory store and issue credentials, we split these capabilities into independent roles. We made issuing tokens a scale-out role in Windows Azure, and we partitioned the Active Directory store to operate across many servers and between data centers.”
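A minimal sketch of the split Shewchuk describes – stateless token issuers scaled out in front of a partitioned directory store – might look like the following (all class, field and tenant names here are invented for illustration; this is not WAAD’s actual internal design):

```python
import hashlib

STORE_PARTITIONS = 4  # the directory store is split across several servers


def partition_for(object_id: str) -> int:
    """Map a directory object to a store partition by hashing its id."""
    digest = hashlib.sha256(object_id.encode("utf-8")).hexdigest()
    return int(digest, 16) % STORE_PARTITIONS


class PartitionedDirectoryStore:
    """Directory data spread over independent shards (stand-ins for servers)."""

    def __init__(self, partitions: int = STORE_PARTITIONS):
        self.shards = [dict() for _ in range(partitions)]

    def put(self, object_id: str, attrs: dict) -> None:
        self.shards[partition_for(object_id)][object_id] = attrs

    def get(self, object_id: str) -> dict:
        return self.shards[partition_for(object_id)][object_id]


class TokenIssuer:
    """Stateless scale-out role: any issuer instance can mint a token,
    because all durable state lives in the partitioned store."""

    def __init__(self, store: PartitionedDirectoryStore):
        self.store = store

    def issue(self, user_id: str) -> dict:
        user = self.store.get(user_id)
        return {"sub": user_id, "name": user["name"], "iss": "issuer-pool"}


store = PartitionedDirectoryStore()
store.put("alice@contoso.example", {"name": "Alice"})

# Two independent issuer instances produce identical tokens, so issuance
# can be scaled out behind a load balancer.
t1 = TokenIssuer(store).issue("alice@contoso.example")
t2 = TokenIssuer(store).issue("alice@contoso.example")
```

The point of the split is that the issuing role holds no per-user state, so adding issuer instances raises token throughput without touching the store layout.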
Some analysts are already noting the challenges Microsoft will have with its cloud directory.
Mark Diodati, a research vice president at Gartner focusing on identity issues, told me in a conversation about changes the cloud is forcing on enterprise ID management that, “the addition of tablets and smartphones into the enterprise device mix exceeds Active Directory’s management capabilities and there is an impedance mismatch using Kerberos across the cloud.”
While Shewchuk laid out the set-up for a Part 2 [see here: Part 2 where OAuth 2 is first mentioned as: “we currently support WS-Federation to enable SSO between the application and the directory. We also see the SAML/P, OAuth 2, and OpenID Connect protocols as a strategic focus and will be increasing support for these protocols”] of his blog that will focus on enhancements to WAAD, Kim Cameron painted the bigger picture on cloud identity going forward.
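For context on what “increasing support for these protocols” means in practice, this is the generic shape of an OAuth 2 authorization-code token request per RFC 6749 (the token endpoint, authorization code and client id below are hypothetical, not real WAAD values):

```python
from urllib.parse import urlencode, parse_qs

# Hypothetical tenant endpoint, for illustration only.
token_endpoint = "https://login.example.net/contoso.example/oauth2/token"

# RFC 6749 section 4.1.3: the client exchanges the authorization code it
# received from the directory for an access token.
request_body = urlencode({
    "grant_type": "authorization_code",
    "code": "SplxlOBeZQQYbYS6WxSbIA",          # code returned after sign-in
    "redirect_uri": "https://app.example.org/callback",
    "client_id": "my-registered-app",
})

# A token service would parse that form-encoded body back into fields:
fields = parse_qs(request_body)
```

In a real exchange the body is POSTed to the token endpoint over TLS and the service answers with a JSON access token; the value of a standard protocol is that the same request shape works against any compliant directory.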
He said companies adopting cloud technology will see dramatic changes over the next decade in the way identity management is delivered. “We all need to understand this change,” he stressed.
Cameron said identity management as a service “will use the cloud to master the cloud”, and will provide the most reliable and cost-effective options.
“Enterprises will use these services to manage authentication and authorization of internal employees, the supply chain, and customers (including individuals), leads and prospects. Governments will use them when interacting with other government agencies, enterprises and citizens.”
And he added that enterprises will have to move beyond concepts that have guided their thinking to date.
Identity & Access [MSFTws2012 YouTube channel, Nov 20, 2012]
Current state-of-the-art:
– Welcome to the Active Directory Team Blog [MSDN blogs, April 15, 2013]
– Announcing some new capabilities in Azure Active Directory Graph Service [Windows Azure Active Directory Graph Team blog on MSDN, May 15, 2013]
– BUILD 2013, Windows 8.1, and Microsoft’s Deep-Tech Team: Hopeful News for Devs [Tim Huckaby on DevPro, May 16, 2013]
It’s hard to change a culture. Having worked for or with Microsoft for over 20 years, I can tell you that I have a myriad of colleagues who are Microsoft employees, most of whom I call my friends and respect very much. Over the last several months, I’ve had several discouraging private conversations about where the developer goals, mission, and strategy were headed at Microsoft. I could see the problems and mistakes. Microsoft employees could see them, too. You probably saw them, too. It’s been frustrating. When the head guy in charge of Microsoft development ignores feedback, including internal feedback from Microsoft and external feedback from folks such as me and you, that builds a culture of secrecy and fear. Although that head guy is gone now [an obvious reference to Steven Sinofsky; see Microsoft: The victim of an extremely complex web of the “western world” high-tech interests, ‘Experiencing the Cloud’, Nov 13-20, 2012], it’s still taken a long time to change that culture back to where it should be.
In all honesty, I can tell you that I haven’t been encouraged about the developer platform at Microsoft in a while. However, today I’m encouraged for the first time in a long time. I see the culture changing. I hear people at Microsoft saying that the culture is changing. And there are several encouraging announcements emerging. Suddenly, I’m excited about Microsoft’s BUILD 2013 developer conference that’s being held in San Francisco from June 26 to June 28, and I’m not the old guy saying, “Get off my lawn!” However, I’d first like to present you all with some background that made me discouraged in the first place.
Microsoft’s Development Woes
I painfully read a recent blog post about Microsoft’s developer issues. I don’t even know who wrote it. This guy or gal didn’t put his or her name on the blog post. It’s painful because this person makes a ton of good points. Within this blog post, the author goes far enough back to put Win16 into perspective. It’s a very interesting read if you want to talk about the context of Microsoft’s developer problems through time and the speculation surrounding those problems. One of the main points in this article is that Microsoft has hung onto an obsolete Win32 API even though, a decade ago, Intel took a completely different tack with the GPU and multi-core processors; Microsoft could have picked any of several versions of Windows over time to start over, but it chose not to, which has caused developers a lot of pain.
Related: “Windows 8 Start Button Shenanigans“
Most recently that developer pain has manifested with the introduction of the modern API in Windows 8. The modern API has left many developers confused and angry. A lot of these developers are angry because the most successfully adopted and beloved developer technology in Microsoft history, Silverlight, was seemingly killed by this new modern API. XNA was seemingly killed as well. Several developers are also confused because Microsoft seems to be pushing the message that they should build enterprise applications in HTML5 and deliver them through the Windows Store.
But, alas, there is hope! Recent announcements and speculations have me really encouraged.
Encouraging Announcements from Microsoft
On May 14, Microsoft officially announced the long rumored Windows Blue, which is officially called Windows 8.1. It will be a free update to Windows 8. Windows 8.1 promises to fix several different problems that folks have been complaining about. It’s important to note that Windows 8.1 isn’t a service pack. It’s a full blown upgrade to the OS. Microsoft promises several exciting things for the developer to be announced at BUILD, which includes the public release of Windows 8.1.
This month a minor Internet hysteria phenomenon occurred with the revelation of the Microsoft deep-tech team. Mary Jo Foley described it best as Microsoft’s new plan for reaching out to top-tier developers of all sizes to get them to take a look at the new and expanded Microsoft toolbox. Several “big guns” will be leading the effort.
John Shewchuk is one of those “big guns.” I know John from a prior life at Microsoft. He’s a 20-year Microsoft veteran and one of the company’s Technical Fellows. He’s leading the team and serving as the Chief Technology Officer for the Microsoft Developer Platform. This is good news.
My guess is that the deep-tech team was the brainchild of Microsoft veteran Steve Guggenheimer, who took the reins of heading the Developer and Platform Evangelism (DPE) team in October 2012. Affectionately known as “Guggs,” Steve Guggenheimer has a long and storied career at Microsoft.
Patrick Chanezon is a new hire to Microsoft who will lead the enterprise evangelism efforts in Microsoft’s DPE unit from San Francisco. He joined Microsoft from VMware just weeks ago. This is a key hire that also seems to be really good news.
More about those Microsoft people I respect; the people who get it; the people who effect change. Scott Guthrie is one of them, but everyone who knows the Microsoft platform knows who Scott Guthrie is. Another is Gabor Fari. You probably don’t know his name, but Gabor is one of the many Microsoft folks who “gets it.” Internally, he’s willing to criticize the company he works for and loves when it deserves it. He’s also the first to offer praise where Microsoft deserves it. Gabor’s title is Director of Life Sciences Solutions, and the developer platform at Microsoft is his passion. When discussing the problems of the past and the excitement of the future, Gabor left me with this, and I believe it’s the perfect way to end this article:
“I am very excited about the latest developments and news that has been released, and I am eagerly anticipating additional news from the BUILD conference. The slumbering lion still has spectacular fangs and teeth; and now he has woken up and is ready to roar.”
Regarding Gabor Fari I will include here the following link:
– Sanofi: Global Healthcare Leader Deploys Intelligent Content Framework, Speeds Time-to-Market [Microsoft Case Study, April 16, 2013] from which the following excerpts describe Fari’s involvement and role in strategic developments the best:
In January 2011, Sanofi launched a program called CRUISE—Content Re-Use Information System for Electronic Health. Through CRUISE, the company set out to develop a content management solution that traverses the company’s research and development efforts. The program charter of CRUISE is to implement processes and tools that enable stakeholders to author, assemble, review, approve, reuse, publish, and deliver high-quality, consistent, and compliant content and documentation throughout the product development life cycle—aiding the submission to regulatory agencies and other industry audiences. “The idea is to find ways to intelligently and seamlessly manage content authoring and production,” says Bhanu Bahl, Senior Manager of Clinical Sciences and Operation Platform at Sanofi. “The key business objective is to reduce the effort required to prepare documents through a synergy of optimized processes and enabling technologies.”
CRUISE has three pillars. One pillar involves simplifying the documentation process in a way that makes it possible to reuse content in various materials. Another pillar revolves around services that involve the many different documentation deliverables. The third pillar focuses on the technology solution, which is designed as a content library that tags and classifies information so that it can be easily assembled and searched. “With CRUISE, we are not doing a process redesign,” says Bahl. “We’re building something more tangible, more simplified, and more standardized.”
To address the CRUISE mandate, Sanofi worked closely with Microsoft as well as two members of the Microsoft Partner Network, DITA Exchange and the ArborSys Group. Microsoft provided the Intelligent Content Framework (ICF) and underlying technologies based on Microsoft SharePoint Server and Microsoft Office. DITA Exchange delivered a solution that enables organizations to establish and maintain a “single source of truth” for their strategic content, and to deliver that content consistently across outputs. The ArborSys Group consulted on the tool and process redesign and helped achieve an end-to-end business and technology implementation for regulated industries.
…
Gabor Fari, Director of Life Sciences Solutions at Microsoft, served as an evangelist in helping to put together the CRUISE team. DITA Exchange had been working closely with Microsoft since 2008 to develop the ICF for regulated industries. It completed the first version of the XML-based solution in February 2009.
As the technology pillar of CRUISE and the engine of EnCORE, DITA Exchange software elevates SharePoint to an XML-based component content management and single-source publishing solution. It enables its customers to comply with regulatory requirements with tools for reusing content in a consistent and accurate way throughout the product development life cycle in the life sciences space. “Microsoft promoted our work to several pharmaceutical companies,” says Andersen. “It led the way in terms of bringing innovative ideas around SCM solutions.”
DITA Exchange began working on the CRUISE implementation in April 2011. The partner participated in planning and supplied the solution used to manage the document output maps, topics, and linking of topics to the maps. “DITA Exchange helped us with content design and the governance structures of information design,” says Allred. “The people at DITA Exchange are masters of their technological domain. They have experience in regulated industries and the knowledge required to get our vision into an operational model.”
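The map-and-topic model described here – content authored once as small topics, with “maps” assembling those topics into different deliverables – can be sketched generically (the topic ids and content below are invented; this illustrates the DITA idea, not DITA Exchange’s actual product):

```python
import xml.etree.ElementTree as ET

# Topics are authored once, each a self-contained unit of content.
# Ids and text are hypothetical examples.
topics = {
    "dosage": "<topic id='dosage'><title>Dosage</title><body>Take once daily.</body></topic>",
    "storage": "<topic id='storage'><title>Storage</title><body>Store below 25 C.</body></topic>",
}

# Two deliverables reuse the same source topics in different orders:
# a "map" is just an ordered list of topic references.
label_map = ["dosage", "storage"]
leaflet_map = ["storage", "dosage"]


def assemble(topic_ids):
    """Resolve a map into a single output document by pulling in each topic."""
    doc = ET.Element("document")
    for tid in topic_ids:
        doc.append(ET.fromstring(topics[tid]))
    return ET.tostring(doc, encoding="unicode")


label = assemble(label_map)
leaflet = assemble(leaflet_map)
```

Because both outputs resolve to the same topic source, a correction made once in a topic propagates to every deliverable that references it – the “single source of truth” the case study describes.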
The ArborSys Group joined the effort in April 2011. This partner provides business consultancy and technical implementation and helped Sanofi achieve measurable and sustainable results through the implementation of flexible IT solutions that can be adapted for change in a dynamic business climate.
The two partners collaborated on developing the EnCORE platform. The ArborSys Group scoped processes, integrated service management roles and extensions, and trained internal resources.
“Microsoft, DITA Exchange, and the ArborSys Group all provided expertise and leadership in terms of how we define processes and address the three pillars of CRUISE,” says Bahl. “The various disciplines they provided really helped us strategize our best opportunity in terms of development. We share a common vision that has resulted in a very rich, cutting-edge offering that other pharmaceutical companies will probably adopt three to five years from now.”
While many other regulated industries have embraced SCM in recent years, life science organizations have lagged. “It’s no secret that the pharmaceutical industry is conservative,” says Andersen. “People think very carefully before they start anything. Sanofi is absolutely the leader in innovating in the pharmaceutical content management space.”
…
New Asha platform and ecosystem to deliver a breakthrough category of affordable smartphone from Nokia
… by bringing premium experience to the entry-level smartphone market:
Update: In H2 CY12 we will witness whether it is possible to create a stable “bottom” smartphone segment with this exceptional added value on truly entry-level hardware or not!
The Nokia offensive of a year ago with “simple” Asha Touch was halted in Q1 CY2013.
(Note that Android smartphones have been in “free-fall” for the last 12 months and you can observe a “race to the bottom” among those vendors. See here, here and here.)
New Nokia Asha 501 Television commercial [nokia YouTube channel, June 26, 2013]
Fastlane – Nokia Asha [nokia YouTube channel, June 28, 2013]
Living with Fastlane on the Nokia Asha 501 [Nokia Conversations, July 5, 2013]
… You’ll now get two home screens: Fastlane, and ‘Home’, which is the main menu. All you have to do is swipe left or right to access one or the other. … You can still customise the main menu so icons and apps can be easily accessed, but once you’ve been using the Asha 501 for a while, Fastlane means that you rarely need to access the second screen.
[July 5] The current lowest price is with a coupon offer for Rs. 4731 [$78.5]
[June 22] Pre-order Asha 501 at Rs. 5,199 [$88]; [June 15, list price] Rs. 6000 [$101]
(at the same time Lumia 520 in India is from Rs. 8,893 [$150], at Rs. 10,097 [$170] at the same Nokia Shop as the Asha 501 pre-order where the list price is Rs. 11,289 [$190])
see also: Nokia Asha 501 starts worldwide rollout [Nokia Conversations, June 24, 2013]:
… [Asha 501] goes on sale this week in Thailand and Pakistan, … Next week, the rollout will continue in India and progress onto countries in Europe, the Middle East and Africa, and Asia Pacific. In late summer, the Nokia Asha 501 will start selling in Latin American countries like Brazil. …
End of update
Peter Skillman (Head of Ux Design for Mobile Phones & HERE at Nokia) demonstrating
Swipe and Fastlane experiences on a greatly enlarged touchscreen,
actually from a ladder, at the May 9, 2013 launch in New Delhi, India
- At its heart is a landmark new feature called Fastlane which was inspired by the much-loved swipe motion gestures on the iconic Nokia N9. Fastlane is designed so that you’re never more than a swipe away.
- Fastlane was inspired by how people really use their phone. Recently accessed contacts, social networks and apps, unique to each person, are stored and presented in Fastlane.
- Fastlane is an interactive second home screen which tracks your past, present and future, showing up to 50 of your most recent activities. It brings all the different elements of your smartphone experience together.
- It continues Nokia’s focus on the ‘smarter Internet’ with an updated version of the Nokia Xpress browser with a fresh new user experience
- There is Nokia Xpress Now, a new Web application that recommends content based on location, preferences and trending topics.
- Fully leverages Nokia’s investments in Smarterphone, which it acquired in 2012 and builds on the best aspects of Series 40 to create something fresh and innovative. It also comes with design cues from Lumia.
- Nokia gives developers the chance to make more money through the global reach of Nokia Store and tools like Nokia In-App Payment and Nokia Advertising Exchange (NAX), as well as Nokia’s unparalleled operator billing network. So developers will be incentivized to deliver quality apps, previously found only on high-end smartphones.
At the launch in New Delhi, India, the following notable remarks were also made:
- ~80M people are using the Nokia Xpress browser now
- 20M Asha Touch devices were sold since its launch 10 months ago
- Nokia expects to sell 100 million of the new generation Asha smartphones over the coming years, beginning with the Nokia Asha 501
- Nokia gives developers the chance to make more money through the global reach of Nokia Store and tools like Nokia In-App Payment and Nokia Advertising Exchange (NAX), as well as Nokia’s unparalleled operator billing network.
- There are 120 ad agencies involved in NAX in 200+ countries
- There are 158 operators involved in Nokia’s operator billing network in 59 markets
- All that will provide a 2.5X increase in terms of developers’ revenue
- Nokia is the first manufacturer to bundle Facebook for free with Nokia Asha 501
- Such a partnership is quite important to Facebook as the company sees its biggest opportunity in getting the 5 billion people online who were not before (so far “only” 750M people access Facebook from their mobile devices)
Happy Nokia presenters posing for photos
at the end of the launch in India
Making of the New Nokia Asha [nokia YouTube channel, May 9, 2013]
First hands-on with the Nokia Asha 501 [nokia YouTube channel, May 9, 2013]
The best thing is to watch The Nokia Asha 501 – Peter Skillman, Nokia Design Team [nokia YouTube channel, May 9, 2013]
Meet the next generation: Nokia Asha 501 [Nokia Conversations, May 9, 2013]
The aspirational meets the affordable in Nokia’s beautiful new touchscreen smartphone with social networking and a smarter Internet at its very core
The Nokia Asha 501 is set to break down a lot of barriers and smash people’s expectations of just how much ‘smartphone’ their money can buy.
It’s a touchscreen experience with social networks, content sharing and connectivity deeply integrated into a wonderful, responsive and revamped operating system.
Design and Colours
However, the first thing you will notice about the trail-blazing Nokia Asha 501 is the gorgeous design. Its lines and shapes are streamlined, compact and clean.
The seamless look and feel is of a premium product that is part of a unified modern design family, from the Lumia 920 to the Nokia 105.
After you’ve admired the durable two-part construction with the removable monobody, the next thing you’ll have to do is make a choice.
The Asha 501 is available in bright red, bright green, cyan, yellow, white and black.
The colour story continues with the red headphones that are included in the box. It’s sure to become a signature look!
Nokia Asha platform
The Asha 501 is powered by a new software platform that fully leverages Nokia’s investment in Smarterphone, which it acquired in 2012, and builds on the best aspects of Series 40 to create something fresh and innovative.
The result is an evolutionary operating system that is fast, responsive and easy to use.
The Asha platform is faster, more responsive and more flexible too. This means new features and functionalities can be anticipated with future updates.
Developers will be able to create apps for the Nokia Asha 501 that will also be compatible with future Asha platform-based devices.
Living in the Fastlane
The forward-thinking approach to the Asha 501 extends to the user experience.
At its heart is a landmark new feature called Fastlane. Inspired by the much-loved swipe motion gestures on the iconic Nokia N9, Fastlane makes it faster and easier to access whatever is most important to you.
Whether it is the applications you use the most, the latest images you’ve captured or your social network updates, Fastlane is designed so that you’re never more than a swipe away.
Think of it as intelligent multitasking, or think of it as an interactive second home screen. Either way, Fastlane tracks your past, present and future, showing up to 50 of your most recent activities. It brings all the different elements of your smartphone experience together.
Smarter Internet
In just a few short years, more people will be accessing the Internet on a mobile phone than on any other kind of electronic device.
This is why the Asha 501 continues Nokia’s focus on the ‘smarter Internet’ with an updated version of the Nokia Xpress browser with a fresh new user experience.
Of course, it still uses cloud-compression technology to reduce data by up to 90 per cent, making it both faster and cheaper for people to get online.
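Nokia has not published the Xpress pipeline, but the idea behind such proxy browsers – recompressing page content server-side before it crosses the radio link – can be illustrated with ordinary zlib compression (a stand-in for whatever Nokia’s cloud actually does):

```python
import zlib

# Illustrative only: a proxy browser's cloud tier fetches the page, then
# sends a compressed (and, in practice, also transcoded) version to the
# handset. Here plain zlib stands in for that whole pipeline.
page = (
    b"<html><body>"
    + b"<p>repetitive markup compresses well</p>" * 200
    + b"</body></html>"
)

compressed = zlib.compress(page, level=9)
saving = 1 - len(compressed) / len(page)

# Highly repetitive markup shrinks dramatically; real-world savings depend
# on the content mix (text compresses far better than JPEG images do).
```

The claimed “up to 90 per cent” figure is plausible for text-heavy pages precisely because HTML is so repetitive, though actual savings vary with the page.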
Straight out of the box, there will be Facebook, Twitter, instant messaging and Weather Channel apps installed, together with premium games from Gameloft, such as Little Big City and Real Football 2013.
There’s also the now-legendary offer of 40 free EA games for you to download and keep forever from the Nokia Store.
Hardware matters
The Asha 501 will be the first Nokia device at such a low price point to use a micro-SIM. Furthermore, it will come in a single-SIM variant and a Dual-SIM version with Nokia’s unique Easy-Swap SIM technology, which allows people to switch SIM cards without having to power off the device.
It features a 3.2-megapixel camera, WiFi, a lock screen with a glanceable clock, and a 3-inch capacitive touchscreen made of hardened glass. There’s 4GB of internal memory and support for micro-SD cards up to 32GB.
The battery life offers an incredible 48 days in standby and 17 hours of talk time – that means you could talk from 7am to midnight non-stop!
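The "7am to midnight" line in the press copy is easy to verify; the dates below are arbitrary placeholders:

```python
from datetime import datetime, timedelta

# 7:00 one morning through to midnight at the end of that same day
start = datetime(2013, 6, 1, 7, 0)
end = datetime(2013, 6, 2, 0, 0)  # midnight following June 1

talk_hours = (end - start) / timedelta(hours=1)
print(talk_hours)  # 17.0, matching the quoted talk time
```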
The Nokia Asha 501 will cost $99 before taxes and subsidies. It’ll be available in more than 90 countries worldwide from Q2.
See also: Nokia Asha 501: exclusive photos [Nokia Conversations, May 9, 2013]
Nokia Asha Platform Unlocks Sub-100 USD Smartphone Opportunity for Developers [press release, May 9, 2013]
New Asha platform delivers developers a consistent quality application experience in the world’s fastest growing smartphone category
New Delhi, India and Espoo, Finland – Nokia today announced a global initiative to unlock the sub-100 USD smartphone market for developers with the release of its Nokia Asha platform. Nokia also announced the Nokia Asha 501, the first smartphone built for the new platform.
Developers who write applications for the Nokia Asha 501 will reach all smartphones based on the new Asha platform without having to re-write code. Nokia expects to sell 100 million of the new generation Asha smartphones over the coming years, beginning with the Nokia Asha 501.
“We’ve seen a tremendous increase in consumer demand for apps for our Asha smartphones, as witnessed by the growth of downloads in Nokia Store,” said Marco Argenti, head of Developer Experiences at Nokia. “Consumers expect quality apps at every price point. With the new Asha platform, developers will be incentivized to deliver those quality apps, previously found only on high-end smartphones, thanks to unprecedented volumes and reach opportunities through one distribution channel and a single platform.”
Many of the most popular applications are already available or in development for the Nokia Asha platform, including CNN, eBuddy, ESPN, Facebook, Foursquare, Line, LinkedIn, Nimbuzz, Pictelligent, The Weather Channel, Twitter, WeChat, World of Red Bull and games from Electronic Arts, Gameloft, Indiagames, Namco-Bandai and Reliance Games. WhatsApp and other key partners continue to explore new Asha.
Developers will also get easy-to-use development tools and more ways to sell and promote apps, including the new Nokia In-App Payment tool.
New Nokia Asha SDK 1.0 and Nokia Asha web app tools
The new Nokia Asha Software Development Kit 1.0 is a suite of tools that support the development, testing, packaging and deployment of Java apps on the Nokia Asha platform.
The new Nokia Asha web app tools include a Web Development Environment (WDE), an integrated development environment (IDE) that developers can use to create and edit their Nokia Asha web apps; Web Inspector to help developers to debug and inspect elements in their web apps; and a new Web Designer Tool for creating great user experience for their web apps.
Nokia In-App Payment
Nokia also announced the new Nokia In-App Payment tool, designed to make it easier for developers to sell content from within their apps. It provides a simple and secure purchase experience for consumers and transparent payments for developers. Nokia In-App Payment will also be available for existing Asha and Series 40 phones, such as the Nokia 301. Nokia will release a public beta of Nokia In-App Payment in the coming weeks. Developers can sign up for the beta at www.developer.nokia.com/inapppayment.
Developers voice support for new Nokia Asha platform
Dennis Crowley, CEO and co-founder of Foursquare: “Nokia continues to be a valued partner for Foursquare. The new Foursquare app on Asha delivers a fantastic search and discovery experience to help people make the most of where they are. As we head into the next wave of new Asha smartphones, we look forward to making Foursquare available for millions of Asha customers around the world.”
Michael Fisher, Director of Mobile Business Development, Twitter: “Twitter’s integration into the new Asha platform, along with preloaded Twitter application that ships on Nokia devices, offers people a richer Twitter experience. Whether you want to share a photo or news article, connect with people or find out what’s happening around the world, it’s now easier than ever to use Twitter on this family of devices.”
Sebastien Thevenet, General Manager SEA-Pacific, Gameloft: “As Nokia’s long term partner, with to date 200 million downloads recorded on Nokia Store, Gameloft is thrilled to offer four preloaded high quality games on the Nokia Asha 501 at launch (Assassin’s Creed 3, Bubble Bash 3, Real Football 2013, Little Big City) and overall more than 30 games to download on Nokia Store down the track. Those innovative titles are Try and Buy and Free to Play games making the most of Asha Full Touch capabilities and unique user interface, truly bringing a smartphone gaming experience at your fingertips.”
Akira Morikawa, CEO of Line Corporation: “Line’s partnership with Nokia is very important and it will continue on new Asha. Delivering Line on new Asha represents our commitment of ensuring that people around the world will experience the joy of communication through Line on Asha smartphones.”
Manish Agarwal, CEO, Reliance Games: “Reliance Games and Nokia have together demonstrated the combined power of localized content and a distribution platform in India. Our partnership with Nokia is a very cherished partnership for us to demonstrate the power of GoLocal. Reliance Games is committed to develop games on localized themes on the new Asha platform and entertain millions of people around the world by working closely with local Nokia teams in India, Asia Pacific, Latin America and other growth markets.”
Keshav Bajaj, VP Business Development, Nimbuzz: “Most of the 150 million and counting Nimbuzz users are from markets where Nokia Asha continues to gain momentum, including India, South East Asia, Middle East and Africa. We are very excited to have an application exclusively built for the new Asha platform to ensure the best user experience. This is yet another initiative from Nimbuzz for one of its most exclusive partners, Nokia.”
Alex Adjadj, Director of Strategic Development, Mobile Sales & Marketing, Namco-Bandai: “NAMCO BANDAI has been developing mobile games for over 10 years but there are still regions of the world where users haven’t seen or played PAC-MAN. Our 22 titles available in 13 languages for the Nokia Asha 501 is a testament to our commitment to Nokia to bring a great experience to mobile users of all demographics and budgets.”
Ramesh Kumar, Head of ESPNcricinfo and ESPN Digital Media India: “Given the popularity of Asha devices, the ESPNcricinfo app on the Asha 2013 platform is a dynamic way to reach growing numbers of mobile users in emerging markets. It is a rich platform where the ESPNcricinfo app can provide comprehensive cricket coverage tailored to suit on-the-go consumption of today’s passionate fans, including its famed match coverage, the latest news stories, insightful editorial pieces covering International & domestic cricket – all tailor-made for mobile consumption.”
New Nokia Asha 501 Dual SIM – One swipe to access everything you love [nokia YouTube channel, May 9, 2013]
Nokia introduces the Nokia Asha 501 [press release, May 9, 2013]
Nokia Asha 501 and Asha platform reinvent the affordable smartphone category
New Delhi, India and Espoo, Finland – Nokia today unveiled the first of a new family of Asha smartphones with the introduction of the Nokia Asha 501. The handset pushes the boundaries of affordable smartphone design with bold color, a high-quality build and an innovative user interface. The Nokia Asha 501 is the first device to run on the new Asha platform, which is designed to make the experience faster and more responsive. The Asha platform also helps developers to create, publish and make more money from apps made specifically for the new generation of Asha devices.
Standout design, innovative user interface
The Nokia Asha 501 makes high-end design and quality accessible to more people. The device is available in a choice of six striking colours that complement the elegant design. It comes in just two parts: a durable, removable casing and the scratch-resistant glass display, which features a three-inch, capacitive touchscreen and a single ‘back’ button. The compact new Asha weighs only 98 grams, for the ultimate portability.
The Nokia Asha 501 is built to make it easier for people to access everything they love, with a simple swipe and a choice of two main screens: Home and Fastlane. Home is a traditional, icon-based view for launching individual apps or accessing a specific feature, like the dialler or phone settings. The new Fastlane view was inspired by how people really use their phone. Recently accessed contacts, social networks and apps, unique to each person, are stored and presented in Fastlane. It provides a record of how the phone is used, giving people a glimpse of their past, present and future activity, and helping them multi-task by providing easy access to their favorite features.
Smarter and more personal Internet experiences
The new Asha comes with Nokia Xpress Browser pre-loaded, which compresses Internet data by up to 90%. This is aimed at making mobile browsing faster and more affordable. Nokia also announced the availability of Nokia Xpress Now, a new Web application that recommends content based on location, preferences and trending topics. It will be available via the Browser homepage or as a download from Nokia Store.
“Nokia has surpassed expectations of what’s achievable in the sub-100 USD phone category with a new Asha handset that is unlike any other, with design cues from Lumia and a mix of features, services and affordability that is valued by price-conscious buyers,” said Neil Mawston, executive director, Global Wireless Practice, Strategy Analytics. “This is a welcome addition to the market and a refreshing option for consumers looking to upgrade from feature phones.”
Asha platform for next-generation family of devices
The new Nokia Asha 501 was purpose-built to give people the best possible mobile experiences at an affordable price. It is highly efficient, with an industry-leading standby time of up to 48 days*. The Asha 501 is the first smartphone built on the new Asha platform, which leverages Nokia’s investments in Smarterphone, a company which Nokia acquired in 2012.
The new Asha platform provides developers with an open, standards-based environment for creating quality apps for consumers. Developers can create apps for the Nokia Asha 501 that will be compatible with future Asha platform-based devices. Nokia gives developers the chance to make more money through the global reach of Nokia Store and tools like Nokia In-App Payment and Nokia Advertising Exchange (NAX), as well as Nokia’s unparalleled operator billing network.
Many of the most popular applications are already available or in development for the Nokia Asha platform, including CNN, eBuddy, ESPN, Facebook, Foursquare, Line, LinkedIn, Nimbuzz, Pictelligent, The Weather Channel, Twitter, WeChat, World of Red Bull and games from Electronic Arts, Gameloft, Indiagames, Namco-Bandai and Reliance Games. WhatsApp and other key partners continue to explore new Asha.
The HERE experience, based on Nokia’s leading location-based platform, will also be available as a download for the Nokia Asha 501, starting in Q3 2013 and will initially include basic mapping services.
“The new Nokia Asha 501 raises the bar for what is possible in affordable smartphone design and optimization,” said Timo Toikkanen, executive vice president, Mobile Phones, Nokia. “The synergy between the physical design and the engine that is the new Asha platform has created a smartphone with both style and substance at a great price.”
Facebook and global operators to support Nokia Asha 501 with free data plans
The Nokia Asha 501 is expected to start shipping in June 2013. It is expected to be available through approximately 60 operators and distributors in more than 90 countries worldwide.
“We are very happy to offer the new Nokia Asha 501 through our subsidiaries in the continent. We are certain that this innovative device will follow the successful footprint of the Nokia Asha family, combining affordability with the best communication and Internet browsing capabilities,” said Marco Quatorze, Value Added Services Director for America Movil.
A leading operator in the Asia-Pacific region, Telkomsel is also supporting the arrival of the new Nokia Asha. “The Nokia Asha 501 will help us to boost the mobile Internet in Indonesia. It is powered by innovations like the Nokia Xpress Browser, based on a very efficient data consumption technology which allow us to offer best data plan tariff for people,” said Alistair Johnston, Chief Marketing Officer (CMO) of Telkomsel. “We have a billing agreement with Nokia that supports the creation of local applications absolutely relevant to Indonesian consumers.”
The popularity of the Nokia Asha family has also prompted innovative approaches to bundled mobile services. Nokia, Facebook and mobile network operator Airtel announced they have joined forces to offer data-free access to the standalone Facebook app, as well as the mobile site m.facebook.com. By the end of the second quarter, current Airtel subscribers in Africa and India** will be able to enjoy unlimited, data-free access to Facebook from their Nokia Asha 501 for a limited period of time.
Commenting on the partnership, Andre Beyers, Chief Marketing Officer for Airtel Africa, said: “The collaboration with Nokia is in line with our strategy of enabling people to access data in Africa as we seek to bridge the digital divide across the continent. We’re already witnessing tremendous growth in data use across the 17 countries where we operate. The provision of free Facebook access is an excellent proposition to the millions of Airtel consumers. We are extremely delighted to partner with Nokia to give our consumers an even better mobile experience.”
Telkomsel will provide a specific Nokia Telkomsel Asha data plan that offers up to 500 MB of data use and includes 60 minutes of calls and 60 SMS. The company will also provide a one month free data plan to consumers using Nokia Asha 501 that can be used for all mobile Internet activities including access to Facebook or downloading apps.
“This bundle is a great way to discover Facebook on your Nokia Asha and enjoy the experience for longer without worrying about data charges,” said Vaughn Smith, VP mobile partnerships, Facebook. “Working in close partnership with Nokia and global operators made this offer possible and we’re excited to help connect the world on Facebook.”
MTN, a leading operator across Africa, said it will also offer the Nokia Asha 501 and ease access to Facebook. “We are excited to support this initiative with Facebook in Nigeria and Zambia and we are looking forward to expand it to other markets,” says Pieter Verkade, group chief commercial officer at MTN.
Product specifications and availability
The Nokia Asha 501 is available in single or EasySwap Dual SIM models. All come with WiFi and Bluetooth. Other specifications:
– Dimensions: 99.2 x 58 x 12.1 mm; 98 grams
– Camera: 3.2 MP
– Single SIM standby time: up to 48 days***
– Dual SIM standby time: up to 26 days***
– Talk time: up to 17 hours
– Additional memory of 4GB (card included in box), expandable up to 32GB
– Forty free EA Games worth €75 downloadable from Nokia Store
– Available colours: Bright Red, Bright Green, Cyan, Yellow, White and Black
– Suggested pricing is 99 USD before taxes and subsidies.
Read more about the Nokia Asha 501 on Nokia Conversations: http://conversations.nokia.com/?p=120951.
* When using the single-SIM model
** Timing differs by country
*** Under test conditions; actual results may vary, depending on use.
Intel CEO (Krzanich) and president (James) combo to assure manufacturing and next-gen cross-platform lead
Update: excerpts from Intel’s CEO Presents at Annual Shareholder Meeting Conference (Transcript) [Seeking Alpha, May 17, 2013]
Andy D. Bryant – Chairman of the Board:
In his most recent role as Chief Operating Officer, Brian [Krzanich] led an organization of more than 50,000 people. This included Intel’s technology and manufacturing group, its foundry and memory businesses, its human resources and information technology groups, and its China strategy.
Brian M. Krzanich – Chief Executive Officer:
I thought I would start off our conversation this morning by talking about three main topics. First, I’ll give a brief update on our business conditions, a quick financial look at the company, and what it returns to shareholders.
Next, I’ll talk about the mega trends that are driving our industry and technology. That leads into the final section: what are our imperatives for growth as a company in response to these mega trends? So hopefully today you’ll get a picture of a great foundation, how we see the trends driving where we’re headed, and what it takes for us to grow moving forward.
Let’s start with where we are as a business. As you probably saw in our earnings announcement, and as we’ve been watching the company over the last couple of years, we really have a solid foundation. We had net revenue of over $53 billion, 62% margin, and an operating profit of almost $15 billion. That puts us in the top 15 of the S&P 500 for net income.
…. So this foundation, this financial picture is what we will use now to move forward and really drive additional growth. And so I’d like to transition now to what are these mega trends? Where is the industry headed? And as a result, how does that drive our imperatives for growth moving forward?
I don’t think we can start a discussion like that without first having a quick discussion about one of the key trends of the last couple of years, and that’s the ultra-mobile move to tablets and phones that has occurred in our industry. We’ve been a bit slow to move into that space, but what I want to show you today is that we see the movement, we’re well positioned already, and the base of assets we have will allow us to grow in this area at a much faster rate moving forward.
So let’s start with mega trend number one, which is just that: ultra-mobile. We see the world becoming more and more a connected computing environment. People want their computing next to them; they want to carry it with them. That means you have to have connectivity, more power, integration, and you have to be in the new markets and new devices that are moving towards more and more connectivity. We see it, and we believe we are well positioned: we have 12 phones in 22 countries already, and 15 tablets, both Android and Windows, so we’ve got a good base. We see this trend, and I’ll show you in a little bit, with our imperatives, that we’re well positioned to move forward.
The next one is one that I think is really driving great growth and is a great opportunity, in some place we’ve really established well, is really that the Datacenter is continuing to grow at phenomenal rates. It’s growing because of the move to cloud and tied to that connective computing environment, people want to keep more and more and have more and more access to the cloud.
And then you’re also seeing a move in the Datacenter around big data, that as all of these connective devices continue to grow, it provides a relative information that companies can now use to offer better services and better understanding of what consumers want, and that’s really what big data is about. It’s about providing answers as you increase the data rate that’s available to you. We see that, again, we believe our products and our services are well positioned for this, and we’ll talk a little bit about that in our imperatives moving forward.
And the third trend is really around the foundation of Intel. It’s around integration and innovation, and I believe this is really what Intel does best. When you look at our name and where we came from, Intel is Integrated Electronics, that’s what the name stands for and this is what we’ve always done best. This allows us to combine our silicon technology, our architecture, our software and services to really drive the SOC or the System-On-A-Chip environment to levels that nobody has seen before we believe moving forward.
It means really going out and bringing in new innovations, new technologies, new communication capabilities, bringing those into silicon and using that more as long leading edge technology to allow us to drive these in a way faster than anybody else on the planet can. So those are the three big mega trends that we see driving technology and the industry moving forward.
And what I’m going to show you now is that we have the assets we can apply towards these mega trends, and how those drive the imperatives for the company moving forward. Let’s first take a look at the assets. I believe this is an asset base that any company in the world would envy.
We have our manufacturing assets, something that’s been near and dear to my heart over the years: 4 million square feet of manufacturing clean room. We have leading-edge technology. We have 22-nanometer in production, the world’s only Tri-Gate (FinFET) technology and our third generation of High-k Metal Gate. We’re in the final stages of development, prior to production, of 14-nanometer, our second generation of Tri-Gate transistors and our fourth generation of High-k Metal Gate. That’s an asset everybody on the planet would love to have to apply towards the mega trends we just talked about.
We have our architecture, which ranges from the Xeon architecture for the data center and servers all the way down to the Atom architecture, which takes us not only into microservers but into that connected computing space. What you will see as we go forward is a move to drive that continuum of computing capability into more and more markets. That’s an asset, again, that very few companies, if any, have.
And the last, to tie it all together, is software and services. You’ve seen our acquisitions of McAfee and Wind River, and we’ve built a services business. What this allows us to do is take all of those assets and apply them to each of the markets I talked about in the mega trends. It allows us to provide more than just silicon: a platform and a user experience that nobody else can, a secure and user-friendly experience, providing everything to an OEM who wants to bring a product to market.
All of those are surrounded by our 105,000 employees, who are always Intel’s greatest asset. The ability of these employees, when we apply them towards these markets and the imperatives you will see in a second, is by far the greatest asset Intel has, and it will continue to be moving forward. So I’ve shown you our base, I’ve shown you the mega trends, and I’ve shown you what I believe are the greatest assets in the world to apply to them. So let’s talk about what the imperatives are moving forward.
The first one is to drive PC innovation. We’ve talked a bit about this. It’s the foundation of that financial picture I showed you at the beginning. With Haswell launching right now and throughout the year, and with ultrabooks, we have the greatest level of innovation in the PC in its history. You’re going to see ultrabooks, and you’ll see two-in-ones, convertibles that bring the tablet and the PC together.
And with Haswell, you see the largest improvement in battery life and computing capability that Intel has ever brought to production. So we believe we are well positioned for what will truly be the PC’s greatest time of innovation that any of us have seen.
The next imperative is to aggressively move into this ultra-mobile space. As I said at the beginning, we’re well positioned. We’re already shipping 12 phones in 22 countries. We have 15 tablets out there, both Windows and Android. We’ve got products specifically designed for this ultra-mobile space that have been in the works for a couple of years; you saw the Silvermont announcement [SEE SECTION 6. ON ‘Low-Power, High-Performance Silvermont Microarchitecture’ IN THE DETAILS PART BELOW] earlier this week.
You’re going to see Bay Trail come out in the fourth quarter, a product targeted towards tablets, low-power clamshells and convertible devices. You’ll see Merrifield, which is our next-generation phone device. And just as important is our LTE technology, which is critical for the second part of connected computing: the communication. We have data-only LTE coming out this summer, and multimode LTE, which allows voice, data, and voice-over-data, at the end of this year, and that really opens up the rest of the markets to our phones and connected devices. So we believe we’re well positioned. We’ve made the move, but we also believe our architecture and the moves we’ve made allow us to move even more quickly into this market moving forward.
The third one, again tied to the trends I showed you at the beginning, is to accelerate growth in the Datacenter. We have a great position in the Datacenter already. We believe that trends like big data, the movement to the cloud, and software-defined networks all allow for phenomenal growth in this space, and we believe our product line is well positioned to let us lead there.
We have Haswell, which I talked about, our second generation of 22-nanometer architecture; we’ll be shipping Xeon-level, server-class product in mid-2013. We have Avoton, which is Atom for microservers. We’ll be the first to this microserver trend. You hear a lot of people talking about it; you should know that Intel was first to this space. We didn’t wait for it to be created. We’re going to go move that space.
We’re going to go define that microserver space, and we have Rangeley, a product for networking and comms infrastructure, which allows us to move into the other side of the Datacenter, where communications and that networking infrastructure occur. With those products combined, we believe we are well positioned to accelerate growth in the Datacenter.
And lastly, to continue our silicon leadership. I talked early on about 22-nanometer, the first technology to bring out the Tri-Gate transistor, but more importantly we have a roadmap on which Moore’s Law continues, and we see ourselves growing further along the Moore’s Law transitions. We have 14-nanometer in its final stages of development, ready for production at the end of this year and moving into next year.
We understand what is beyond 14-nanometer for Moore’s Law. That silicon leadership allows us to drive the innovation in every one of these other areas and bring it together in that trifecta of cost, battery life, and performance that allows us to bring products to any of these markets that requires them.
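For a rough sense of what the 22- to 14-nanometer step being discussed buys, the ideal first-order area scaling between nodes is the square of the feature-size ratio. Real process nodes do not track their nominal dimensions exactly, so treat this as an illustration of the scaling argument, not a density claim about Intel's processes:

```python
def ideal_area_scale(old_nm: float, new_nm: float) -> float:
    """First-order 'ideal' area scaling between two nodes: (new/old)^2."""
    return (new_nm / old_nm) ** 2


scale = ideal_area_scale(22, 14)
# (14/22)^2 is roughly 0.405: in the ideal case, the same circuit occupies
# about 40% of its former area, i.e. ~2.5x the transistors in the same space.
```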
So to bring this to a close (this is my first presentation as CEO, I guess): I’ve shown you that we have a great base from which to grow, and that financially the company is sound and in a very strong position. I’ve shown you that we understand the mega trends and exactly how the market is moving into the data center, connected computing and ultra-mobility, and I’ve tried to show you that we have laid out the imperatives and assets to allow us to move into these new areas.
And with that, I would just like to close by saying that I believe we’re as well positioned as Intel has been in a long while to grow into these areas, and we really look forward to the coming years.
And with that, I would like to call back up Andy and Renée for Q&A.
Q: Question one, it has been two years since we purchased McAfee. How has McAfee contributed to the bottom line? What is the long-term plan with this company?
A: from Renée James – President
The acquisition of McAfee is part of a broader strategy we’ve had to increase overall security, not only of our products, but of cloud-based computing and the ultra-mobility that Brian talked about. We believe that one of the opportunities for Intel is to provide more secure solutions and more secure platforms around your data, around the devices we build, and around your own personal identity and privacy. So McAfee is one of many assets we have acquired. They have been doing a very good job, and you may have read that we’ve added to McAfee over the course of the last two years. We announced a week ago that we made an additional acquisition, which was always part of our strategy, to grow what McAfee offers around the network and the cloud. We continue to evolve their product line, and this week we made an announcement around personal identity and data security products for consumers that are bundled with our new platforms. So we’re very happy with them. It is part of a much broader strategy that’s consistent with what Brian just talked about, and you should look for more in that area.
Q: Over the last decade, our stock has been flat. It has more or less tracked Microsoft and underperformed the S&P 500, while QUALCOMM is up 300% and Apple is up 6,000%. QUALCOMM, for example, is now worth as much as Intel. Apple and QUALCOMM focus on communication and mobile products, whereas we mostly missed that market.
What’s worse is that we have the huge manufacturing capability you talked about, maybe a 3.5-year lead on competitors. So if we’re only now coming out with Haswell, Silvermont products, et cetera, our design side of the house must be behind by 3.5 years or so, and that’s not good, because now we’re in catch-up mode, and that’s risky. And this isn’t the first time in the last dozen years we’ve missed an industry trend. So I’m very concerned about the product design side of the house. This company has been very focused on manufacturing from Bob Noyce on down; the microprocessor, the 4004, was an afterthought.
The products mattered to this company. So I’m wondering whether you think the Board, top management and the comp packages focus on product development well enough, and whether you’ve seen any improvements in the last few years in the effectiveness of product design?
A: from Brian M. Krzanich – Chief Executive Officer
So I started my presentation with an acknowledgment that we were slow to the mobile market. And I wanted to do that purposely to let the shareholders know we saw, but they were moving much more aggressively now moving forward, and we believe we have the right products. What we have to do is really make some decisions around; you see we bought assets to allow us to get into the LTE space. We’ve made transitions in what we design for Atom, and we’ve looked at how do we design our silicon technologies to allow integration of those, because COMs and the CPU are a little bit different in the silicon technologies they require.So we do believe we are positioned well moving forward. But you are asking a more fundamental question about how do we see market trends and how do we really make sure that we understand how the market is moving. And actually we spent a lot of time with the board over the last several months, partly in just the normal discussions with the board, and partly in this process of selection. And both Renée and I talked about how we’re going to build a much more outward sensing environment for Intel, so that we understand where our architecture needs to move first.
We actually understand that integration is occurring more and more, that it’s more about integration than almost anything else right now, and that’s really how these new devices are coming about. We have plans to build a structure that allows us to have consultants and people from the outside help us look at these trends and at our architectural choices and make sure we’re making the right decisions. And we’re trying to build a much closer relationship with our customers, so that we understand where they want to go. Renée and I actually spent a lot of time with customers over the last week, and they are all showing us where the market is moving and where they need Intel to move.
We are going to make adjustments in our architecture and our product choices to align to those much, much closer moving forward. So we do believe we see what you’re talking about in how we made those choices, but we believe we’ve made the right decisions and we have the right process moving forward to make sure they are aligned.
Q: … My question is about the Software and Services Group as compared to the PC Client Group. Software and Services is certainly expected to grow, and I’m particularly interested in the gross margin contribution, not just today: I’m interested in your vision three to five years from now, how you see the gross margin contribution of the software group comparing, and either increasing or decreasing, relative to PCCG?
A: from Renée James – President
The Software and Services Group, as you know, is a new reportable segment for us in the last several years. Software businesses, in general, are good opportunities for growth, and ones that are aligned with the market segments that we provide products into today are a good opportunity for us to enhance our offering to our customers.

In general, we have a very, very good business. Brian talked about the margin profile of the business we have today. The businesses that we are pursuing in Software and Services are equally good opportunities, and we expect that those businesses will continue to contribute the way software companies do in the market today.
Q: For the first time as a shareholder of Intel, I’m wondering about and curious to look forward a decade from now, and here is the context to the question.
CapEx spending has more than doubled in the last two years and R&D has gone up by 53%; you are making the really significant investment in the future that you talked about, CEO Brian. And you’ve made the transition over to FinFETs. Last week, in preparation for the meeting, I looked at the ITRS road map, and around 2020 it indicates that gate lines would be running at around 10 nanometers.
When I look realistically at that, the questions I have are: one, what device architecture would you most likely be using there? And two, isn’t it time for a transition, an inflection point as Andy might have said, to either switching to photons or quantum computing or something else? So maybe part of the question is directed toward you, Brian, and for the other part could we possibly hear from your CTO or Head of TD?
A: from Brian M. Krzanich – Chief Executive Officer
I’ll start. It was a pretty long question, so I’m going to see if I can get most of your points. Your first point was that CapEx has gone up, we’re spending a lot more on technology, and is there a time for a transition in that technology. I would tell you that we typically have about a 10-year view of Moore’s Law, and we’ve always had a 10-year view. If you went back 10 years ago, we had a 10-year view. If you went back five years ago, we had a 10-year view; that’s about as far out as you can see, and we believe that we have the right architectures to continue to grow Moore’s Law in a silicon environment for at least that period of time.

That’s not to say we don’t have efforts in photonics; we actually do, and we’re going to bring photonics products to market, mostly about switching in the datacenter [SEE SECTION 7. ON ‘PHOTONIC ARCHITECTURES’ IN THE DETAILS PART BELOW]. But the fundamental silicon technology and our ability to continue to drive it beyond 10 nanometers (to be honest with you, we plan to be on 10 nanometers much earlier than 2020, I can tell you that) is, we believe, sound and fundamental. It’s why you saw us make an investment in ASML last year for almost $4 billion in total: that was really to drive EUV technology for lithography, to allow us to keep pushing well below 10 nanometers from the Moore’s Law standpoint. So we think we are pretty well positioned to keep moving for at least the next decade in the current technologies. I don’t know if Bill…
A: from William M. Holt – Executive Vice President
General Manager, Technology and Manufacturing Group [“semiconductor CTO”]

If you look back at the last three or four generations, each one has come with a substantial innovation or change; there is no simple scaling in our business anymore. And that will continue, so each time we plan to advance the technology we have to make changes. Relative to photonics and quantum computing: we do, as Brian said, have efforts in those, but those are clearly not anything on the near horizon. There is lots of interesting work going on there, but none of it really is practical to turn into real computing devices.
Q: How do you expect the foundry market to impact margins short and long-term?
A: from Brian M. Krzanich – Chief Executive Officer
So I think Stacy has said in some of the earnings calls that we currently see margins looking forward to be in the range of 55% to, I believe, 65%. Those were inclusive of our foundry business. So I would tell you that we’ve already built the foundry growth into our current projections for margin. And we actually believe we are being selective: we’re not going into the general foundry business, we’re not opening up to anybody. We’re really looking for partners that can utilize and take advantage of our leading-edge silicon, and that’s why we believe we are able to stay in that range moving forward.

Q: I agree with the President’s vision that the future is the customer interface, and having LTE and good processing all makes sense. [SEE ‘TRANSPARENT COMPUTING’ AS THE OVERALL VISION, AND PERCEPTUAL COMPUTING AS AN ADDITIONAL ONE IN THE BELOW DETAILS, PARTICULARLY SECTIONS 5.+8. AND SECTION 4. RESPECTIVELY.] My worry is rather with the execution. If you look at the mobile world right now, ARM Holdings has 95% of the market share. I understand Intel has, I think, 1,000 researchers doing purely basic research.
And how come Intel didn’t see this mobile wave coming, with Intel taking maybe 5% market share? On top of that, Microsoft is going to RT, this Windows RT, which is ARM-based, and HP just announced a new tablet with an NVIDIA tablet processor, also based on ARM. So everybody is trying to take the CPU share away from you. And I understand Intel has this Haswell coming out in June. So the question is: are you confident this Haswell can hold ARM Holdings back?
A: from Brian M. Krzanich – Chief Executive Officer
First, I’d say, in my presentation I talked about the fact that yes, we missed it. We were slow to tablets and some of the mobile computing. We do believe we have a good base right now: 12 phones, 20 countries, 15 tablets, on Android and Windows 8 (it’s important that we’ve looked at both of those), and we have these products moving forward. I would tell you that it’s more than just Haswell.
Haswell is a key product. It’s going to extend Core much further on both ends, from the high-performance Xeon space to the low-power space. You are going to see single-digit power levels on a Core product, which will allow it to move into very mobile spaces, but that alone would not beat ARM or beat the competition in those spaces you talked about. What you really have to do is extend into that Atom space as well, and that’s where you see products like Clover Trail and Clover Trail+ today, Silvermont [SEE SECTION 6. ON ‘Low-Power, High-Performance Silvermont Microarchitecture’ IN THE DETAILS PART BELOW], and then, moving into the rest of this year, Bay Trail.
Bay Trail will be one of the biggest advances we made in Atom that allows us to move into the mobile space much stronger.
And then thirdly, there are the assets we purchased a few years back, the Infineon mobile group, which gave us the comms side of this. And I told you that we will have LTE data in the middle of this summer and multimode at the end of this year. We’ll actually be a leading player in the LTE space, and that’s critical to get into those markets. You don’t want to be dependent on others to provide that comms capability. And then, as we move into next year, you’ll see us integrating that, which we believe will allow us to move back onto that leading edge. So, to switch back to your question: do we have a good product roadmap to allow us to go win share in that space? We believe we do.
The next question is: do we have a good ability to view that space moving forward? Because whatever it is today won’t be what it is five years from now, and that’s what Renée and I are committed to putting in place together, because we absolutely believe this connected computing will continue to move on, and we’ll continue delivering the products going forward.
End of [May 17, 2013] update
Intel Chairman Interview on New Intel CEO Brian Krzanich [SBARTSTV YouTube channel, May 2, 2013]
Intel’s CEO Pick Is Predictable, but Not Its No. 2 [The Wall Street Journal, May 2, 2013]
The selection of Mr. Krzanich, who is 52 and joined Intel in 1982, suggests that Intel will continue to try to use its manufacturing muscle to play a broader role in mobile chips.
But he said that the board was mainly convinced by a new strategy—devised with Ms. James—to help take Intel chips into new devices.
“That is absolutely what won them the job,” said Andy Bryant, the Intel chairman and former finance chief who led the search. “Brian and Renee delivered a strategy for Intel that is pretty dramatic.”
…
While Mr. Krzanich doesn’t expect the “full strategy” to become visible until later this year, he said it would help move Intel chips beyond computers and mobile devices into more novel fields, including wearable technology.
The strategy “went from the very low end of computing to the very top end of computing,” Mr. Bryant said.
…
Intel directors met last weekend for a final round of interviews and then voted on Mr. Krzanich’s selection, the person close to the situation said.
On Tuesday, Mr. Krzanich suggested to Mr. Bryant the appointment of Ms. James, which the board approved Wednesday, the Intel spokesman said.
Mr. Bryant, who is 63 years old, said he has helped mentor both executives and agreed to stay on in his position for an indefinite period to help them in their new roles.
What is already available from the strategy recently accepted by the Intel board is detailed in the sections of this post below, namely:
- Intel® XDK (cross platform development kit) with the Intel® Cloud Services Platform (CSP)
- Porting native code into HTML5 JavaScript
- Parallel JavaScript (the River Trail project)
- Perceptual Computing
- HTML5 and transparent computing
- Low-Power, High-Performance Silvermont Microarchitecture
- Photonic architectures to drive the future of computing
- The two-person Executive Office and Intel’s transparent computing strategy as presented so far
I am quite impressed with all of those pieces, just to give my conclusion ahead.
There is, however, a huge challenge for the management, as the new two-person Executive Office of Brian M. Krzanich as CEO and Renée J. James as president has to lead the company:
– out of Intel’s biggest flop: at least 3-month delay in delivering the power management solution for its first tablet SoC [‘Experiencing the Cloud’, Dec 20, 2012]
– then Saving Intel: next-gen Intel ultrabooks for enterprise and professional markets from $500; next-gen Intel notebooks, other value devices and tablets for entry level computing and consumer markets from $300 [‘Experiencing the Cloud’, April 17, 2013] in short-term
– also capitalising on Intel Media: 10-20 year leap in television this year [‘Experiencing the Cloud’, Feb 16, 2013] as a huge mid-term opportunity (with Windows Azure Media Services OR Intel & Microsoft going together in the consumer space (again)? [‘Experiencing the Cloud’, Feb 17, 2013] or not)
– as well as further strengthening its position in the Software defined server without Microsoft: HP Moonshot [‘Experiencing the Cloud’, April 10, 2013] effort
– but first and foremost proving that the Urgent search for an Intel savior [‘Experiencing the Cloud’, Nov 21 – Dec 11, 2012] did indeed end with this decision by the Intel board
– for which the litmus test is the company success against the phenomenon of the $99 Android 4.0.3 7” IPS tablet with an Allwinner SoC capable of 2160p Quad HD and built-in HDMI–another inflection point, from China again [‘Experiencing the Cloud’, Dec 3, 2012] which is based on The future of the semiconductor IP ecosystem [‘Experiencing the Cloud’, Dec 13, 2012] being a more and more viable alternative to the closed Intel system of design and manufacturing.
Indeed, Intel completely missed the huge opportunities presented by the explosion at the mobile computing end of the market during the last 3 years, which resulted in entry-level smartphone prices as low as $72+, only 77% higher than Intel’s latest available-in-products Atom Z2760 processor chip for smartphones and tablets at $41, and 71% lower than Intel’s latest available Core™ i3-3229Y processor chip for lowest-power-consumption ultrabooks at $250. By now Intel’s whole business model is in jeopardy:
despite sufficiently early warnings:
More information: Apple’s Consumer Computing System: 5 years of “revolutionary” iPhone and “magical” iPad [‘Experiencing the Cloud’, July 9, 2012]:
1. Overall picture at the moment
2. Current iPhone and iPad products
3. Earlier products
4. iCloud
5. iTunes
6. App Store
Let’s see now in detail how the Intel Board decision could be the right one based on deep analysis of the available information so far:
1. Intel® XDK (cross platform development kit) with the Intel® Cloud Services Platform (CSP)
The Intel® XDK (cross platform development kit) can be used to create applications using HTML5 and web services. One such set of services are the Intel® Cloud Services Platform (CSP). The Intel® XDK supports the full spectrum of HTML5 mobile development strategies, including:
- Classic Web Apps – No device interface, no on-device caching (only works online)
- Mobile Web Apps – HTML5 Caching (works online/offline), some device interface (GPS, Accelerometer)
- Hybrid Native Apps – Full device interface, identical to native apps
Each of these strategies has pros and cons – Intel makes it easy to develop using HTML5 and JavaScript, regardless of the precise deployment strategy you choose. Intel’s App Dev Center makes it easy to build and manage deployments to all popular app stores.
With the Intel® XDK, developers really can “write it once, deploy to many.” Currently you can build for iOS Tablets, iOS Smartphones, Android Tablets, Android Smartphones, the Google Play Store, the Amazon App Store, the Mozilla App Store, Facebook App Center, and the Google Chrome store.
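The trade-offs among the three strategies listed above can be captured in a few lines of JavaScript. The helper below is purely illustrative and not part of the Intel® XDK API; it picks the lightest strategy that still satisfies a hypothetical app’s capability needs:

```javascript
// Illustrative only, not part of the Intel XDK API: choose the lightest
// HTML5 deployment strategy that satisfies an app's capability needs.
function chooseStrategy(needs) {
  // Full device interface (camera, contacts, file system): hybrid native app.
  if (needs.fullDeviceAccess) return "hybrid-native";
  // Offline caching or limited sensors (GPS, accelerometer): mobile web app.
  if (needs.offline || needs.gps || needs.accelerometer) return "mobile-web";
  // Otherwise a classic web app (online only) is enough.
  return "classic-web";
}

console.log(chooseStrategy({ offline: true }));          // "mobile-web"
console.log(chooseStrategy({ fullDeviceAccess: true })); // "hybrid-native"
```

The point of the sketch is simply that each step up the list buys more device capability at the cost of a heavier packaging and deployment story.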
Intel® HTML5 XDK Demo [intelswnetwork YouTube channel, March 25, 2013]
More information:
– Create World Class HTML5 Apps & Web Apps with the XDK [Intel’s App Learning Center, March 1, 2013]
– The XDK turbocharges PhoneGap [Intel’s App Learning Center, March 1, 2013]
– Developing Applications for Multiple Devices [Intel HTML5 development documentation, March 15, 2013]
It is likely that any of your apps will fall into one of two broad categories. The first category includes fixed-position apps, like a game or interactive app where the layout is fixed and all the assets are placed in a static position. The second category is dynamic-layout apps, like an RSS reader or similar app where you may have content in a long list, and viewing a specific item just shows a scrolling view to accommodate varying content size. For the second category, positioning and scrolling can usually be handled by simple CSS. Setting your div and body widths to “width=100%” instead of “width=768px” is an example of an approach that should help you use the entire screen regardless of resolution and aspect ratio.
The first category is a lot more complicated and we have added some functions to help you deal with this issue. It should be noted that there is no magic “silver bullet” solution. However, if you design your app with certain things in mind and have a plan for other resolutions, we can take care of some complicated calculations and make sure things are scaled for the best user experience possible.
Before we explain how to use our functions to help with these issues, let’s look at some real devices and their resolutions to get a clearer picture of the issues.
…
Conclusion
Scaling a single codebase for use on multiple devices and resolutions is a formidable challenge, particularly if your app is in the category of apps that are fixed position apps rather than an app that uses a dynamic layout. By designing your app’s layout for the smallest screen ratio expected, you can rely on us to help by performing proper scaling and letting you know the new virtual available screen size. From there you can easily pad your app’s background or reset your application’s world bounds to adapt to different screens on the fly.
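As a sketch of the kind of calculation involved (my own illustration, not the XDK’s actual scaling code), a fixed-position app can derive a uniform scale factor and the resulting virtual screen size from its design resolution and the device’s real resolution:

```javascript
// Sketch only: uniform scaling of a fixed-position layout designed for
// designW x designH onto a screen of screenW x screenH, preserving aspect ratio.
function computeScale(designW, designH, screenW, screenH) {
  // Scale by the more constrained dimension so nothing is clipped.
  var scale = Math.min(screenW / designW, screenH / designH);
  return {
    scale: scale,
    // The "virtual" screen the app can actually use, in design units.
    virtualW: screenW / scale,
    virtualH: screenH / scale
  };
}

// A 768x1024 (iPad-ratio) design on an 800x1280 Android screen:
var r = computeScale(768, 1024, 800, 1280);
// r.scale is about 1.04; r.virtualH exceeds 1024, leaving vertical
// space the app can pad with background art.
```

Designing for the smallest expected ratio and then padding the leftover virtual area is exactly the strategy the conclusion above recommends.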
For more information, documentation is available at http://www.html5devsoftware.intel.com/documentation. Please email html5tools@intel.com with any questions or post on our forums at http://forums.html5dev-software.intel.com .
App Game Interfaces is a JavaScript execution environment that includes a minimal DOM, primarily to provide access to a partial implementation of HTML5 canvas that is optimized for the Apple iOS and Google Android platforms. The App Game Interfaces augment the Canvas object with multi-channel sound, accelerated physics, and accelerated canvas to provide more realistic modeling and smoother gameplay, more like native capabilities and performance – with HTML5!
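To give a feel for the per-frame work such an accelerated canvas and physics layer takes over (a generic sketch, not the App Game Interfaces API itself), here is a minimal fixed-timestep update of the kind Box2D performs natively:

```javascript
// Generic sketch (not the App Game Interfaces API): a minimal fixed-timestep
// update for a moving body, the kind of work an accelerated physics engine
// such as Box2D performs natively every frame.
function stepBody(body, dt) {
  var GRAVITY = 9.8; // m/s^2, pulling in the +y direction
  return {
    x: body.x + body.vx * dt,
    y: body.y + body.vy * dt,
    vx: body.vx,
    vy: body.vy + GRAVITY * dt
  };
}

var b = { x: 0, y: 0, vx: 1, vy: 0 };
for (var i = 0; i < 60; i++) b = stepBody(b, 1 / 60); // one second at 60 fps
// After one simulated second, b.x is about 1 and b.vy is about 9.8.
```

Doing this in JavaScript for hundreds of bodies per frame is exactly where a native-accelerated layer pays off.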
The Intel® HTML5 Game Development Experience at GDC 2013 [intelswnetwork YouTube channel, April 5, 2013]
More information:
– HTML5 and Mobile are the Future of Gaming [Intel’s App Learning Center, March 1, 2013]
– Graphics Acceleration for HTML5 and Java Script Engine JIT Optimization for Mobile Devices [Intel Developer Zone article, Jan 4, 2013]
– Convert an App Using HTML5 Canvas to Use App Game Interfaces [Intel HTML5 development documentation, March 4, 2013]
– Application Game Interfaces [Intel HTML5 development Readme, March 1, 2013]
App Game Interfaces uses:
1. Ejecta - Dominic Szablewski - MIT X11 license (http://opensource.org/licenses/MIT)
2. Box2D - Erin Catto - Box2D License
3. JavaScriptCore - The WebKit Open Source Project - GNU LGPL 2.1 (http://opensource.org/licenses/LGPL-2.1)
4. V8 JavaScript Engine - Google - New BSD license (http://opensource.org/licenses/BSD-3-Clause)
5. IJG JPEG - Independent JPEG Group - None (http://www.ijg.org/files/README)
6. libpng - PNG Development Group - zlib/libpng License (http://opensource.org/licenses/Zlib)
7. FreeType - The FreeType Project - The FreeType License (http://git.savannah.gnu.org/cgit/freetype/freetype2.git/tree/docs/FTL.TXT)
8. v8 build script - Appcelerator Inc - Apache License 2.0 (http://www.apache.org/licenses/LICENSE-2.0)
The Intel Cloud Services Platform beta provides a set of identity-based services designed for rich interoperability and seamless experiences that cut across devices, operating systems, and platforms. The initial set of services, accessed via RESTful APIs, provides key capabilities such as identity, location, and context to developers for use in server, desktop, and mobile applications aimed at both consumers and businesses.
For more information, please visit the Intel Cloud Services Platform beta.
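To make the “RESTful APIs” point concrete, here is a hedged sketch of how a client might assemble a request to such an identity-based service. The endpoint host, path, and token below are invented for illustration; they are not the real Cloud Services Platform API:

```javascript
// Hypothetical sketch of calling an identity-based RESTful service such as
// the Intel Cloud Services Platform. The host, path, and token shapes are
// invented for illustration and are not the real CSP API.
function buildLocationRequest(accessToken, userId) {
  return {
    method: "GET",
    url: "https://api.example-csp.test/location/v1/users/" +
         encodeURIComponent(userId) + "/current",
    headers: {
      // An identity service would typically issue an OAuth-style bearer token.
      "Authorization": "Bearer " + accessToken,
      "Accept": "application/json"
    }
  };
}

var req = buildLocationRequest("token123", "user/42");
// req.url ends with "/users/user%2F42/current"
```

The key idea is that identity (the bearer token) is the thread tying the location, context, and commerce services together across devices.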
Intel® Developer Zone Cloud Services Platform [intelswnetwork YouTube channel, March 26, 2013]
Plucky rebels: Being agile in an un-agile place – Peter Biddle at TED@Intel [TEDInstitute YouTube channel, published May 6, 2013, filmed March 2013]
Intel® Cloud Services Platform Demo at GDC 2013 [intelswnetwork YouTube channel, April 5, 2013]
Intel® Cloud Services Platform [CSP] Technical Overview [intelswnetwork YouTube channel, May 3, 2013]
More information:
– Intel® Cloud Services Platform Overview (video by Norman Chou on Intel Developer Zone, March 19, 2013)
– Intel® Cloud Service Platform beta Overview (presentation by Norman Chou on GSMA OneAPI Developer Day, Feb 26, 2013), see the GSMA page as well
Build apps that seamlessly span devices, operating systems, and platforms.
Learn how you can easily build apps with this collection of identity-based, affiliated services. Services available include Intel Identity Services, Location Based Services, Context Services and Commerce Services. This session will cover the RESTful APIs available for each service, walk you through the easy sign up process and answer your questions. Want to know more? Visit http://software.intel.com/en-us/cloud-services-platform.
2. Porting native code into HTML5 JavaScript
Currently, porting native iOS code to HTML5 is supported, but via an abstract format which potentially will allow porting from other OS code in the future as well.
This app porting relies (or would soon rely, see later) on App Framework (formerly jqMobi) as the “definitive JS library for HTML5 app development” for which Intel is stating:
Create the mobile apps you want with the tools you are comfortable with. Build hybrid mobile apps and web apps using the App Framework and App UI Library, a jQuery-compatible framework that gives developers all the UX you want in a tight, fast package.
The Intel® HTML5 App Porter Tool Demo at GDC 2013 [intelswnetwork YouTube channel, April 5, 2013]
More information: Intel HTML5 Porter Tool Introduction for Android Developer [Intel Developer Zone blog post, April 5, 2013] which presents the tool as:
![]()
and adds the following important information (note here that instead of App Framework/jqMobi that version relies on the less suitable jQuery Mobile):
The next release is expected to have better integration with Intel® XDK (Intel’s HTML5 cross platform development kit) and have more iOS API coverage in terms of planned features.
2. Porting translated application to different OSs
A translated HTML5 project includes a .jsproj file for a Visual Studio 2012 JavaScript Windows Store app project, which you can open on Windows* 8 in order to run the application if it was successfully translated (100% of the APIs translated), or to continue development if there are placeholders in the code.
While in the associated Technical Reference – Intel® HTML5 App Porter Tool – BETA [Intel Developer Zone article, Jan 17, 2013] you will find all the relevant additional details, from which it is important to add here the following section:
About target HTML5 APIs and libraries
The Intel® HTML5 App Porter Tool – BETA both translates the syntax and semantics of the source language (Objective-C*) into JavaScript and maps the iOS* SDK API calls into an equivalent functionality in HTML5. In order to map iOS* API types and calls into HTML5, we use the following libraries and APIs:
The standard HTML5 API: The tool maps iOS* types and calls into plain standard objects and functions of HTML5 API as its main target. Most notably, considerable portions of supported Foundation framework APIs are mapped directly into standard HTML5. When that is not possible, the tool provides a small adaptation layer as part of its library.
- The jQuery Mobile library: Most of the UIKit widgets are mapped jQuery Mobile widgets or a composite of them and standard HTML5 markup. Layouts from XIB files are also mapped to jQuery Mobile widgets or other standard HTML5 markup.
The Intel® HTML5 App Porter Tool – BETA library: This is a ‘thin-layer’ library built on top of jQuery Mobile and HTML5 APIs; it implements functionality that is not directly available in those libraries, including Controller objects, Delegates, and logic to encapsulate jQuery Mobile widgets. The library provides a facade very similar to the original APIs that should be familiar to iOS* developers. This library is distributed with the tool and included as part of the translated code in the lib folder.

You should expect that future versions of the tool will incrementally add more support for API mapping, based on further statistical analysis and user feedback.
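As a hedged illustration of what such a ‘thin-layer’ adaptation could look like (my own sketch, not the porter tool’s real output or library), an Objective-C Foundation call such as [NSString stringWithFormat:@"%@ %d", name, count] might map onto a small JavaScript facade:

```javascript
// Sketch of a Foundation-to-JavaScript adaptation layer of the kind the
// porter tool's 'thin-layer' library provides. This exact object and its
// format support are hypothetical, not the tool's real output.
var NSString = {
  // Minimal stand-in for +stringWithFormat:, supporting only %@ and %d.
  stringWithFormat: function (fmt) {
    var args = Array.prototype.slice.call(arguments, 1);
    return fmt.replace(/%[@d]/g, function () { return String(args.shift()); });
  }
};

console.log(NSString.stringWithFormat("%@ %d", "items:", 3)); // "items: 3"
```

A facade like this keeps translated call sites looking like the original Objective-C, which is exactly the familiarity the documentation above promises iOS developers.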
3. Parallel JavaScript (the River Trail project)
RiverTrail Wiki [on GitHub, edited by Stephan Herhut, April 23, 2013 version]
Background
The goal of Intel Lab’s River Trail project is to enable data-parallelism in web applications. In a world where the web browser is the user’s window into computing, browser applications must leverage all available computing resources to provide the best possible user experience. Today web applications do not take full advantage of parallel client hardware due to the lack of appropriate programming models. River Trail puts the parallel compute power of client’s hardware into the hands of the web developer while staying within the safe and secure boundaries of the familiar JavaScript programming paradigm. River Trail gently extends JavaScript with simple deterministic data-parallel constructs that are translated at runtime into a low-level hardware abstraction layer. By leveraging multiple CPU cores and vector instructions, River Trail achieves significant speedup over sequential JavaScript.
Getting Started
To get a feeling for the programming model and experiment with the API, take a look at our interactive River Trail shell. The shell runs in any current version of Firefox, Chrome and Safari. If you are using Firefox and have installed the River Trail extension (see below on how to), your code will be executed in parallel. If you are using other browsers or have not installed the extension for Firefox, the shell will use a sequential library implementation and you won’t see any speedup.
You need to install our Firefox extension to use our prototype compiler that enables execution of River Trail on parallel hardware. You can download a prebuilt version for Firefox 20.x [April 23] running on Windows and MacOS (older versions for older browsers can be found here). We no longer provide a prebuilt Linux version. However, you can easily build it yourself. We have written a README that explains the process. If you are running Firefox on Windows or Linux, you additionally need to install Intel’s OpenCL SDK (Please note the SDK’s hardware requirements.).
…
River Trail – Parallel Computing in JavaScript [by Stephan Herhut from Intel Labs, delivered on April 2, 2012 at JSConf 2012, published on JSConf EU YouTube channel on Jan 20, 2013]
River Trail Demos at IDF 2012 [intelswnetwork YouTube channel, Sept 24, 2012]
More information:
– River Trail – Parallel Programming in JavaScript [Stephan Herhut on InfoQ, March 29, 2013] a collection which is based on his latest recorded presentation (embedded there) that was delivered at Strange Loop 2012 on Sept 24, 2012 (you can follow his Twitter for further information)
– River Trail: Bringing Parallel JavaScript* to the Web [Intel Developer Zone article by Stephan Herhut, Oct 17, 2012]
– Tour de Blocks: Preview the Benefits of Parallel JavaScript* Technology by Intel Labs [Intel Developer Zone article by Stephan Herhut, Oct 17, 2012]
– Parallel JS Lands [Baby Steps blog by Niko Matsakis at Mozilla, March 20, 2013], see all of his posts in PJs category since January 2009, particularly ‘A Tour of the Parallel JS Implementation’ Part 1 [March 20] and Part 2 [April 4], while from the announcement:
The first version of our work on ParallelJS has just been promoted to mozilla-central and thus will soon be appearing in a Nightly Firefox build near you. … Once Nightly builds are available, users will be able to run what is essentially a “first draft” of Parallel JS. The code that will be landing first is not really ready for general use yet. It supports a limited set of JavaScript and there is no good feedback mechanism to tell you whether you got parallel execution and, if not, why not. Moreover, it is not heavily optimized, and the performance can be uneven. Sometimes we see linear speedups and zero overhead, but in other cases the overhead can be substantial, meaning that it takes several cores to gain from parallelism. …
…
Looking at the medium term, the main focus is on ensuring that there is a large, usable subset of JavaScript that can be reliably parallelized. Moreover, there should be a good feedback mechanism to tell you when you are not getting parallel execution and why not.
…
The code we are landing now is a very significant step in that direction, though there is a long road ahead.
I want to see a day where there are a variety of parallel APIs for a variety of situations. I want to see a day where you can write arbitrary JS and know that it will parallelize and run efficiently across all browsers.
– Parallel javascript (River Trail) combine is not a function [Stack Overflow, April 16-25, 2013] from which it is important to include Stephan Herhut’s answer:
There are actually two APIs:
the River Trail API as described in the GitHub prototype documentation
the Parallel JavaScript API described in the ECMAScript proposal
The two differ slightly, one difference being that the ECMAScript proposal no longer has a combine method but uses a flavor of map that offers the same functionality. Another difference is that the GitHub prototype uses index vectors whereas the proposal version uses multiple scalar indices. Your example, for the prototype, would be written as
```javascript
var par_A = new ParallelArray([3,3], function(iv) {return iv[1]});
par_A.combine(2, function(i) {return this.get(i) + 1});
```

In the proposal version, you instead would need to write

```javascript
var par_A = new ParallelArray([3,3], function(i,j) {return j});
par_A.map(2, function(e, i) { return this.get(i) + 1; });
```

Unfortunately, multi-dimensional map is not implemented in Firefox yet. You can watch bug 862897 on Mozilla’s bug tracker for progress on that front.
Although we believe that the API in the proposal is the overall nicer design, we cannot implement that API in the prototype for technical reasons. So, instead of evolving the prototype half way, we have decided to keep its API stable.
One important thing to note: the web console in Firefox seems to always use the builtin version of ParallelArray and not the one used by a particular website. As a result, if you want to play with the GitHub prototype, you best use the interactive shell from our GitHub website.
Hope this clears up the confusion.
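For readers without the Firefox extension, the shape-and-kernel construction of the prototype’s ParallelArray can be mimicked with a plain sequential stand-in (my own illustration of the semantics only; the real River Trail library runs the elemental function in parallel):

```javascript
// Sequential stand-in illustrating ParallelArray([w, h], kernel) semantics
// from the River Trail prototype; not the real implementation, which
// executes the elemental function in parallel via OpenCL.
function buildParallelArray2D(shape, kernel) {
  var rows = [];
  for (var i = 0; i < shape[0]; i++) {
    var row = [];
    for (var j = 0; j < shape[1]; j++) {
      // The prototype passes an index vector [i, j] to the elemental function.
      row.push(kernel([i, j]));
    }
    rows.push(row);
  }
  return rows;
}

var parA = buildParallelArray2D([3, 3], function (iv) { return iv[1]; });
// parA is [[0,1,2],[0,1,2],[0,1,2]], matching the prototype example above.
```

Because the kernel sees only its index vector and returns a value, the runtime is free to evaluate the elements in any order, which is what makes the construct safely parallelizable.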
4. Perceptual Computing
Intel is supporting developers interested in adding perceptual computing to their apps with the Intel® Perceptual Computing SDK 2013 Beta. This allows developers to use perceptual computing to create immersive applications that incorporate close-range hand and finger tracking, speech recognition, facial analysis, and 2D/3D object tracking on 2nd and 3rd generation Intel® Core™ processor-powered Ultrabook devices and PCs. Intel has also released the Creative Interactive Gesture Camera as part of the SDK, which allows developers to create the next generation of natural, immersive, innovative software applications on Intel Core processor-powered Ultrabook devices, laptops, and PCs.
How to drive experience with perceptual computing – Achin Bhowmik at TED@Intel [TEDInstitute YouTube channel, published May 6, 2013, filmed March 2013]
Head Coupled Perspective with the Intel® Perceptual Computing SDK [intelswnetwork YouTube channel, March 25, 2013]
Perceptual Computing Challenge Phase 1 Trailer [IntelPerceptual YouTube channel, March 28, 2013]
More information:
– GDC 2013: Perceptual Computing, HTML5, Havok, and More [Intel Developer Zone blog post, April 2, 2013]
– Introducing the Intel® Perceptual Computing SDK 2013 [Intel Developer Zone blog post, April 5, 2013]
– Perceptual Computing: Ten Top Resources for Developers [Intel Developer Zone blog post, Jan 4, 2013]
5. HTML5 and transparent computing
Why Intel Loves HTML5 [intelswnetwork YouTube channel, Dec 20, 2012]
App Development Without Boundaries [Intel Software Adrenaline article, April 1, 2013]
HTML5 Reaches More Devices and More Users, More Effectively
There are a lot of reasons to like HTML5. It’s advanced. It’s open. It’s everywhere. And, it’s versatile.
But Intel loves HTML5 because our vision for the future is a world where developers can create amazing cross-platform experiences that flow freely from device to device, and screen to screen—a world where apps can reach more customers and get to market faster, without boundaries.
HTML5 helps make that world possible.
…
Many Devices, One Platform [Intel Software Adrenaline article, Dec 11, 2012]
The Three Design Pillars of Transparent Computing
Welcome to the new, transparent future, where users expect software apps to work equally well no matter what device they run on, whether on an Ultrabook™ device or an Android* phone, a netbook or a tablet. This is the concept of transparent computing: with the assumed level of mobility expected, today’s consumers demand seamless transitions for a single app on multiple platforms. Developers must deliver code that works just about everywhere, with standard usability, and with strong security measures.
It’s a tall order, but help is available. As long as teams understand some of the simple design considerations and usability frameworks, which are outlined in this article, they can expand their app appeal across many profitable niches and embrace transparent computing.
There are three key design principles that comprise the transparent computing development model:
Cross-platform support
Standard usability themes
Enhanced security features
If developers can think in these broad strokes and plan accordingly, multiple-platform revenues and word-of-mouth marketing can generate income streams that benefit your entire app portfolio.
…
More information:
– Transparent Computing: One Platform to Develop Them All [Intel Developer Zone blog post, Sept 13, 2012]
– Transparent Computing with Freedom Engine – HTML5 and Beyond [Intel Developer Zone blog post, Oct 15, 2012]
– Intel Cloud Services Platform Private Beta [Intel Developer Zone blog post, Oct 18, 2012]
– App Show 33: A Recap of Day Two at IDF 2012 [Intel Developer Zone blog post, Nov 9, 2012]
– Cross-Platform Development: What The Stats Say [Intel Developer Zone blog post, March 7, 2013]
– Intel’s Industry Expert Examines Cross-platform Challenges and Solutions [Intel Software Adrenaline article, April 16, 2013]
– Security Lets You Make the Most of the Cloud [Intel Software Adrenaline infographic, April 10, 2013]
– Mechanisms to Protect Data in the Open Cloud [Intel Software Adrenaline whitepaper, April 10, 2013]
– Intel and VMware security solutions for business computing in the cloud [Intel Software Adrenaline solution brief, April 10, 2013]
– The Intel® HTML5 Game Development Experience at GDC 2013 [Intel Developer Zone blog post, April 5, 2013]
– Intel Developer Forum 2012 Keynote, Renée James Transcript (PDF 190KB)
… transparent computing is really about allowing experiences to seamlessly cross across different platforms, both architectures and operating system platform boundaries. It makes extensive use of technologies like HTML5 – which we’re going to talk a lot more about in a second – and in house cloud services. It represents for us the direction that we believe we need to go as an industry. And it’s the next step really beyond ubiquitous computing.
…
We need three things. We need a programming environment that crosses across platforms and architectures and the boundaries. We need a flexible and secure cloud infrastructure. And we need a more robust security architecture from client to the data center.
…
We believe that HTML5 as the application programming language is what can deliver a seamless and consistent environment across the different platforms – across PCs, tablets, telephones, and into the car.
… transparent computing obviously relies on the cloud to provide the developer and the application transparent services that move across platforms and ecosystem boundaries.
…
Intel is working on an integrated set of cloud services for developers that we would host that would give some of the core elements required to really realize our vision around transparent computing. Some of them would be location services, like Peter demonstrated this morning; digital storefronts, federated identity attestation, some of the things that are required to know who’s where on which device, sensor and context APIs for our platforms, and, of course, business analytics and business intelligence.
We will continue to roll these things out over the course of the year, so you should look for more from us on that. And as I said, these will be predominantly developer services, backend services for developers as they create application.
…
For the cloud, as we migrate resources across these different datacenters and different environments, as we move applications and workloads, we have to do it in a secure way. And one of the ways that you can do that on our platforms, on Intel’s servers, is using Trusted Execution, or TXT. TXT allows data operations to occur isolated in their own execution environment from the rest of the system and safe from malware.
…
In transparent computing, the security of the device is going to be largely around identity management. In addition to device management and application and software security, which we’ve been working on for a while, we have a lot of work to do in the area of identity and how we protect people – not only their data, but who they are at transactions, as they move these experiences across these different devices.
Identity and attestation we believe will become key underpinnings for all mobile transparent computing across different platforms and the cloud. Underneath it all, we’re going to have to have a very robust set of hardware features, which we plan to have, to secure that information. It’s going to be even more critical especially as we think about mobile devices and we think about identity and attestation that we’re able to truly secure and know that it is as safe and as known good as possible.
…
We will continue to provide direct distribution support for your applications and services through AppUp, and those of you that know about it, fabulous. If you don’t, AppUp is the opportunity to distribute through a digital storefront across 45 countries, around Intel platforms. We support Windows and Tizen and HTML5, both native and other apps.
In addition to all of that, we will be revitalizing the software business network, which we’ve used to pair you up with other Intel distributors and Intel hardware partners for exclusive offers and bundles. As we see more and more solutions in our industry, we want to make sure our developers are able to connect with people building on Intel platforms. And other additional marketing programs and that kind of thing are all going to be in the same place.
And in Q4, we will have a specific program launched on HTML5. That program will help you write applications across multiple environments. We’ll be doing training, we’ll have SDKs, there will be tools. We will be working on how you run across IOS, Android, Windows, Linux, and Tizen. So, please stay tuned and go to the developer’s center for that.
Finally, today is just the start of our discussion on transparent computing. In the era of ubiquitous computing, we had that industry vision for a decade, and now that’s become a reality. And just like when we first predicted there was going to be a billion connected computers – I still remember it, it sounded so farfetched at that point in time decades ago – transparent computing seems pretty far away from where we stand today, but we have always believed that the future of computing is what we make it. And we believe that the developers, our developers around our platform, can embrace a new paradigm for computing, a paradigm that users want us to go solve. And we look forward to being your partner for the next era of computing, and delivering it transparently.
Chip Shot: Intel Extends HTML5 Capabilities for App Developers [Intel Newsroom, Feb 25, 2013]
To complement and grow its HTML5 capabilities, Intel has acquired the developer tools and build system from appMobi. Intel also hired the tool-related technical staff to help extend Intel’s existing HTML5 capabilities and accelerate innovation and delivery of HTML5 tools for cross platform app developers. Software developers continue to embrace HTML5 as an easy to use language to create cross platform apps. Evans Data finds 43 percent of all mobile developers indicate current use of HTML5 and an additional 38 percent plan to use HTML5 in the coming year. App developers can get started building HTML5 cross-platform apps today at: software.intel.com/html5. Visit the Intel Extends HTML5 Capabilities blog post for more information.
Intel extends HTML5 capabilities [Intel Developer Zone, Feb 22, 2013]
Developers continue to tell Intel they are looking to HTML5 to help improve time to market and reduce cost for developing and deploying cross-platform apps. At the same time, app developers want to maximize reach to customers and put their apps into multiple stores. Intel is dedicated to delivering software development tools and services that can assist these developers. I am pleased to let you know that Intel recently acquired the developer tools and build system from appMobi. While we’ve changed the names of the tools, the same capabilities will be there for you. You can check these tools out and get started writing your own cross platform apps now by visiting http://software.intel.com/html5 and registering to access the tools. Developers already using the appMobi tools will be able to access their work and files as well. If you weren’t already using appMobi development tools, I invite you to try them out and see if they fit your HTML5 app development needs. You will find no usage or licensing fees for using the tools.
We are also excited to bring many of the engineers who created these tools to Intel. These talented tool engineers complement Intel’s existing HTML5 capabilities and accelerate innovation and delivery of HTML5 tools for cross platform app developers.
I hope you will visit http://software.intel.com/html5 soon to check out the tools and return often to learn about the latest HTML5 developments from Intel.
One Code Base to Rule Them All: Intel’s HTML5 Development Environment [Intel Developer Zone, March 12, 2013]
If you’re a developer searching for a great tool to add to your repertoire, you’ll want to check out Intel’s HTML5 Development Environment, an HTML5-based development platform that enables developers to create one code base and port it to multiple platforms. Intel recently purchased the developer tools and build system from appMobi:
“While we’ve changed the names of the tools, the same capabilities will be there for you. You can check these tools out and get started writing your own cross platform apps now by visiting http://software.intel.com/html5 and registering to access the tools. Developers already using the appMobi tools will be able to access their work and files as well. If you weren’t already using appMobi development tools, I invite you to try them out and see if they fit your HTML5 app development needs. You will find no usage or licensing fees for using the tools.”
You can view the video below to see what this purchase means for developers who have previously used AppMobi’s tools:
For appMobi Developers: How Does Intel’s Acquisition Affect Me? [appMobi YouTube channel, Feb 22, 2013]
What is the HTML5 Development Environment?
Intel’s HTML5 Development Environment is a cloud-based, cross-platform HTML5 application development interface that makes it as easy as possible to build an app and get it out quickly to a wide variety of software platforms. It’s easy to use, free to get started, and everything is based right within the Web browser. Developers can create their apps, test functions, and debug their projects easily, putting apps through their virtual paces in the XDK which mimics real world functionality from within the Web browser.
This environment makes it as simple as possible to develop with HTML5, but by far the biggest advantage of using this service is the ability to build one app on whatever platform that developers are comfortable with and then deploy that app across multiple platforms to all major app stores. The same code foundation can be built for iOS, Web apps, Android, etc. using just one tool to create, debug, and deploy.
As appMobi is also the most popular HTML5 application development tool on the market, with over 55,000 active developers using it every month to create, debug, and deploy, this tool is especially welcome. The HTML5 Development Environment makes it easy to create one set of code and deploy it across multiple platforms, making the process of development – including getting apps to market – more efficient for developers.
HTML5 is quickly becoming a unifying code platform for both mobile and desktop development. Because of this, Intel and appMobi have teamed up to support quick HTML5 app development for both PCs and Ultrabook™ devices. The XDK makes developing apps as easy as possible, but the best part about it is how fast apps can go from the drawing board to consumer-facing stores. Developers can also employ the XDK to reach an ever-growing base of Ultrabook users with new apps that utilize such features as touch, accelerometer, and GPS.
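Apps that want to light up device features such as touch, accelerometer, and GPS typically start with feature detection at runtime. A minimal sketch follows; the property names are common browser conventions, not any Intel-specific API, and the function takes its environment as parameters so it can run outside a browser (in a page you would pass window and navigator):

```javascript
// Detect which input capabilities a runtime exposes, so an HTML5 app
// can enable touch, motion, or location features only where they exist.
function detectCapabilities(win, nav) {
  return {
    touch: "ontouchstart" in win || (nav.maxTouchPoints || 0) > 0,
    geolocation: "geolocation" in nav,
    motion: "DeviceMotionEvent" in win  // accelerometer/gyro events
  };
}

// Example: a bare environment with no touch or sensors reports
// everything as unavailable.
var caps = detectCapabilities({}, {});
// caps = { touch: false, geolocation: false, motion: false }
```

In a real page the result would gate which UI paths the app enables, rather than assuming a single input model across phones, tablets, and Ultrabook devices.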
The Intel HTML5 XDK tools can be used to create apps for a whole new market of consumers looking to access all the best features that an HTML5-based app for Ultrabook devices has to offer. For example, every 16 seconds, an app is downloaded via Intel’s AppUp store, and there are over 2.6 billion potential PCs reachable from this platform. Many potential monetization opportunities exist for developers by utilizing Intel Ultrabook-specific features in their apps such as touch, accelerometer, and GPS, features traditionally seen only in mobile and tablet devices. Intel’s HTML5 development tools give developers the tools to quickly create, test, and deploy HTML5-based apps that in turn can be easily funneled right into app stores and thus into the hands of PC and Ultrabook device users.
Easy build process
The App Starter offers an interactive wizard to guide developers gently through the entire build process. This includes giving developers a list of the required plugins, any certificates that might be lacking, and any assets that might need to be pulled together. It will generate the App Framework code for you.
Developers can upload their own projects; a default template is also available. A demo app is automatically generated. Once an app is ready to build, developers are given an array of different services to choose from. Click on “build now”, supply a title, description and icon in advance, and the App Starter creates an app bundle that can then be submitted to different app stores/platforms.
The XDK
One of the HTML5 Development Environment’s most appealing features is the XDK (cross-platform development kit). This powerful interface supports robust HTML5 mobile development, which includes hybrid native apps, enhanced Web apps, mobile Web apps, and classic Web apps to give developers the full range of options.
The XDK makes testing HTML5 apps as easy as possible. Various form factors – phones, tablets, laptops, etc. – can be framed around an app to simulate how it would function on a variety of devices. In addition to tablet, phone, and PC emulations, there is also a full screen simulation of different Ultrabook device displays within the XDK. This is an incredibly useful way to test specific Ultrabook features in order to make sure that they are at maximum usability for consumers. The XDK for Ultrabook apps enables testing for mouse, keyboard, and touch-enabled input, which takes the guesswork out of developing for touch-based Ultrabook devices.
One tool, multiple uses
Intel’s HTML5 Development Environment is a cross-platform development service and packaging tool. It enables HTML5 developers to package their applications, optimize those applications, test with features, and deploy to multiple services.
Rather than building separate applications for all the different platforms out there, this framework makes it possible to build just one with HTML5 and port an app to multiple platforms. This is a major timesaver, to say the very least. Developers looking for ways to streamline their work flow and get their apps quickly to end users will appreciate the user-friendly interface, rich features, and in-browser feature testing. However, the most appealing benefit is the ability to build one app instead of several different versions of one app and deploy it across multiple platforms for maximum market exposure.
Chip Shot: Intel Expands Support of HTML5 with Launch of App Development Environment [Intel Newsroom, April 10, 2013]
At IDF Beijing, Intel launched the Intel® HTML5 Development Environment that provides a cross-platform environment to develop, test and deploy applications that can run across multiple device types and operating system environments as well as be available in various application stores. Based on web standards and supported by W3C, HTML5 makes it easier for software developers to create applications once to run across multiple platforms. Intel continues to invest in HTML5 to help mobile application developers lower total costs and improve time-to-market for cross-platform app development and deployment. Developers can access the Intel HTML5 Development Environment from the Intel® Developer Zone at no cost.
Intel Cloud Services Platform Open beta [Intel Developer Zone blog post, Dec 13, 2012]
Doors to our beta open today. Welcome! For those who participated in our private beta, thank you. Your feedback and ideas were awesome and will clearly make our services more useful for other developers. We are continuing to work out the kinks in our Wave 1 Services (Identity, Location and Context) and your ideas help us build what you want to use. We are at a point where we feel ready to invite others to try our services. So, today we open the doors to the broader developer community.
Our enduring mission with the Intel Cloud Services Platform beta is to give you key building blocks to deliver transparent computing experiences that seamlessly span devices, operating systems, stores and even ecosystems. With this release, “Wave 2”, we introduce a collection of Commerce Services that provide a common billing provider for apps and services deployed on the Intel Cloud Services Platform. Other cool stuff we’ve added includes Geo Messaging and Geo Fencing to Location Based Services and Behavioral Models for cuisine preferences and destination probability to Context Services.
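Geo Fencing of the kind mentioned above boils down to testing whether a device's coordinates fall within a given radius of a center point. A minimal haversine-based sketch (this is a generic illustration, not the actual Intel Location Based Services API):

```javascript
// Great-circle distance between two lat/long points (haversine formula),
// used to decide whether a device sits inside a circular geofence.
function haversineKm(lat1, lon1, lat2, lon2) {
  var R = 6371; // mean Earth radius in km
  var toRad = function (d) { return d * Math.PI / 180; };
  var dLat = toRad(lat2 - lat1);
  var dLon = toRad(lon2 - lon1);
  var a = Math.sin(dLat / 2) * Math.sin(dLat / 2) +
          Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) *
          Math.sin(dLon / 2) * Math.sin(dLon / 2);
  return 2 * R * Math.asin(Math.sqrt(a));
}

// Inside the fence if the distance from the center is within radiusKm.
function insideGeofence(lat, lon, centerLat, centerLon, radiusKm) {
  return haversineKm(lat, lon, centerLat, centerLon) <= radiusKm;
}

// Example: a point about 1 km from a 5 km fence centered near Santa Clara.
var inFence = insideGeofence(37.363, -121.968, 37.354, -121.969, 5);
// inFence is true
```

A Geo Messaging service would layer on top of such a check, firing a notification whenever a tracked device crosses the fence boundary.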
For the open beta, we are introducing a Technical Preview of Curation, Catalog and Security. These are early releases, so some features may change, but we want to get you coding around these, so you can tell us what you think. We know building apps that provide users with a high degree of personalization often means spending WEEKS of valuable development time. Also, developing apps that are truly cross platform, cross domain and cross industry is still extremely difficult to do. So, our objective with Curation and Catalog Services is to make it really easy for you to create complex functionalities such as schemaless catalogs, developer- or user-curated lists, and secure client-side storage of data at rest. Play around with these services and give us feedback.
In addition to new services, we have invested heavily in a scalable and robust infrastructure. You need to be able to trust that our services will just work. To help you out, we have created a support team that you’ll want to call and talk to. We have 24×7 support and various ways you can reach out to us. You can contact us by phone (1-800-257-5404, option 4), email or our community forums.
To get the latest on what’s new and useful, check out our community. If you haven’t checked out our Services – remember the door is open. Try them. If you have thoughts about our platform, I want to hear them. Find me on twitter (@PNBLive).
6. Low-Power, High-Performance Silvermont Microarchitecture
Intel’s new Atom chips peak on performance, power consumption [computerworld YouTube channel, May 7, 2013]
Intel Launches Low-Power, High-Performance Silvermont Microarchitecture [press release, May 6, 2013]
NEWS HIGHLIGHTS:
- Intel announces Silvermont microarchitecture, a new design in Intel’s 22nm Tri-Gate SoC process delivering significant increases in performance and energy efficiency.
- Silvermont microarchitecture delivers ~3x more peak performance or the same performance at ~5x lower power over current-generation Intel® Atom™ processor core.1
- Silvermont to serve as the foundation for a breadth of 22nm products targeted at tablets, smartphones, microservers, network infrastructure, storage and other market segments including entry laptops and in-vehicle infotainment.
SANTA CLARA, Calif., May 6, 2013 – Intel Corporation today took the wraps off its brand new, low-power, high-performance microarchitecture named Silvermont.
The technology is aimed squarely at low-power requirements in market segments from smartphones to the data center. Silvermont will be the foundation for a range of innovative products beginning to come to market later this year, and will also be manufactured using the company’s leading-edge, 22nm Tri-Gate SoC manufacturing process, which brings significant performance increases and improved energy efficiency.
“Silvermont is a leap forward and an entirely new technology foundation for the future that will address a broad range of products and market segments,” said Dadi Perlmutter, Intel executive vice president and chief product officer. “Early sampling of our 22nm SoCs, including “Bay Trail” and “Avoton” is already garnering positive feedback from our customers. Going forward, we will accelerate future generations of this low-power microarchitecture on a yearly cadence.”
The Silvermont microarchitecture delivers industry-leading performance-per-watt efficiency.2 The highly balanced design brings increased support for a wider dynamic range and seamlessly scales up and down in performance and power efficiency. On a variety of standard metrics, Silvermont also enables ~3x peak performance or the same performance at ~5x lower power over the current-generation Intel® Atom™ processor core.1
Silvermont: Next-Generation Microarchitecture
Intel’s Silvermont microarchitecture was designed and co-optimized with Intel’s 22nm SoC process using revolutionary 3-D Tri-gate transistors. By taking advantage of this industry-leading technology, Intel is able to provide a significant performance increase and improved energy efficiency.
Additional highlights of the Silvermont microarchitecture include:
A new out-of-order execution engine enables best-in-class, single-threaded performance.1
A new multi-core and system fabric architecture scalable up to eight cores and enabling greater performance for higher bandwidth, lower latency and more efficient out-of-order support for a more balanced and responsive system.
New IA instructions and technologies bringing enhanced performance, virtualization and security management capabilities to support a wide range of products. These instructions build on Intel’s existing support for 64-bit and the breadth of the IA software installed base.
Enhanced power management capabilities including a new intelligent burst technology, low-power C states and a wider dynamic range of operation taking advantage of Intel’s 3-D transistors. Intel® Burst Technology 2.0 support for single- and multi-core offers great responsiveness scaled for power efficiency.
“Through our design and process technology co-optimization we exceeded our goals for Silvermont,” said Belli Kuttanna, Intel Fellow and chief architect. “By taking advantage of our strengths in microarchitecture development and leading-edge process technology, we delivered a technology package that enables significantly improved performance and power efficiency – all while delivering higher frequencies. We’re proud of this accomplishment and believe that Silvermont will offer a strong and flexible foundation for a range of new, low-power Intel SoCs.”
Architecting Across a Spectrum of Computing
Silvermont will serve as the foundation for a breadth of 22nm products expected in market later this year. The performance-per-watt improvements with the new microarchitecture will enable a significant difference in performance and responsiveness for the compute devices built around these products.
Intel’s quad-core “Bay Trail” SoC is scheduled for holiday 2013 tablets and will more than double the compute performance capability of Intel’s current-generation tablet offering1. Due to the flexibility of Silvermont, variants of the “Bay Trail” platform will also be used in market segments including entry laptop and desktop computers in innovative form factors.
Intel’s “Merrifield” [aimed at high-end smartphones, successor to Medfield] is scheduled to ship to customers by the end of this year. It will enable increased performance and battery life over current-generation products1 and brings support for context aware and personal services, ultra-fast connections for Web streaming, and increased data, device and privacy protection.
Intel’s “Avoton” will enable industry-leading energy efficiency and performance-per-watt for microservers2, storage and scale out workloads in the data center. “Avoton” is Intel’s second-generation Intel® Atom™ processor SoC to provide full server product capability that customers require including 64-bit, integrated fabric, error code correction, Intel virtualization technologies and software compatibility. “Rangeley” is aimed at the network and communication infrastructure, specifically for entry-level to mid-range routers, switches and security appliances. Both products are scheduled for the second half of this year.
Concurrently, Intel is delivering industry-leading advancements on its next-generation, 22nm Haswell microarchitecture for Intel® Core™ processors to enable full-PC performance at lower power levels for innovative “2-in-1” form factors, and other mobile devices available later this year. Intel also plans to refresh its line of Intel® Xeon® processor families across the data center on 22nm technology, delivering better performance-per-watt and other features.
“By taking advantage of both the Silvermont and Haswell microarchitectures, Intel is well positioned to enable great products and experiences across the full spectrum of computing,” Perlmutter said.
1 Based on the geometric mean of a variety of power and performance measurements across various benchmarks. Benchmarks included in this geomean are measurements on browsing benchmarks and workloads including SunSpider* and page load tests on Internet Explorer*, FireFox*, & Chrome*; Dhrystone*; EEMBC* workloads including CoreMark*; Android* workloads including CaffineMark*, AnTutu*, Linpack* and Quadrant* as well as measured estimates on SPECint* rate_base2000 & SPECfp* rate_base2000; on Silvermont preproduction systems compared to Atom processor Z2580. Individual results will vary. SPEC* CPU2000* is a retired benchmark. *Other names and brands may be claimed as the property of others.
2 Based on a geometric mean of the measured and projected power and performance of SPECint* rate_base2000 on Silvermont compared to expected configurations of main ARM*-based mobile competitors using descriptions of the architectures; assumes similar configurations. Numbers may be subject to change once verified with the actual parts. Individual results will vary. SPEC* CPU2000* is a retired benchmark; results are estimates. *Other names and brands may be claimed as the property of others.
Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. For more information go to: www.intel.com/performance.
For more information see the “Intel Atom Silvermont” Google search between May 6 and 8. From the accompanying Intel Next Generation Low Power Micro-Architecture webcast presentation I will include here the following slide only:
about which it was noted in the Deep inside Intel’s new ARM killer: Silvermont [The Register, May 8, 2013] report that:
Now that Intel has created an implementation of the Tri-Gate transistor technology specifically designed for low-power system-on-chip (SoC) use – and not just using the Tri-Gate process it employs for big boys such as Core and Xeon – it’s ready to rumble.
Tri-Gate has a number of significant advantages over tried-and-true planar transistors, but the one that’s of particular significance to Silvermont is that when it’s coupled with clever power management, Tri-Gate can be used to create chips that exhibit an exceptionally wide dynamic range – meaning that they can be turned waaay down to low power when performance needs aren’t great, then cranked back up when heavy lifting is required.
This wide dynamic range, Kuttanna said, obviates the need for what ARM has dubbed a big.LITTLE architecture, in which a low-power core handles low-performance tasks, then hands off processing to a more powerful core – or cores – when the need arises for more oomph.
“In our case,” he said, “because of the combination of architecture techniques as well as the process technology, we don’t really need to do that. We can go up and down the range and cover the entire performance range.” In addition, he said, Silvermont doesn’t need to crank up its power as high as some of those competitors to achieve the same amount of performance.
Or, as Perlmutter put it more succinctly, “We do big and small in one shot.”
Equally important is the fact that a wide dynamic range allows for a seamless transition from low-power, low-performance operation to high-power, high-performance operation without the need to hand off processing between core types. “That requires the state that you have been operating on in one of the cores to be transferred between the two cores,” Kuttanna said. “That requires extra time. And the long switching time translates to either a loss in performance … or it translates to lower battery life.”
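The handoff cost Kuttanna describes can be made concrete with a toy model (all numbers below are illustrative assumptions, not measured Silvermont or big.LITTLE figures): a workload that alternates between low- and high-performance phases pays a fixed state-transfer delay at every switch between core types, while a single wide-dynamic-range core scaling via frequency pays none.

```javascript
// Toy model of core-handoff overhead. Each phase runs on a core type
// for some milliseconds; switching core types adds a fixed delay for
// transferring architectural state between cores.
function totalRuntimeMs(phases, switchCostMs) {
  var t = 0;
  var prev = null;
  phases.forEach(function (p) {
    if (prev !== null && prev !== p.core) t += switchCostMs; // handoff
    t += p.ms;
    prev = p.core;
  });
  return t;
}

// Illustrative workload: four alternating phases (assumed durations).
var phases = [
  { core: "little", ms: 10 },
  { core: "big",    ms: 5  },
  { core: "little", ms: 10 },
  { core: "big",    ms: 5  }
];

var bigLittle  = totalRuntimeMs(phases, 2); // 3 handoffs at 2 ms each
var singleCore = totalRuntimeMs(phases, 0); // one core scaling up/down
// bigLittle = 36, singleCore = 30
```

With these assumed numbers the handoffs add 20 percent to the runtime; the overhead would instead show up as lost battery life if the big core stays powered to hide the switching latency.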
Intel’s 1h20m long Intel Next Generation Low Power Micro-Architecture – Webcast is available online for further details about Silvermont. The technical overview starts at [21:50] (Slide 15) and you can also read a summary of some of the most interesting points by CNXSoft.
7. Photonic architectures to drive the future of computing
TED and Intel microdocumentary – Mission (Im)possible: Silicon photonics featuring Mario Paniccia [TEDInstitute YouTube channel, published May 6, 2013; first shown publicly in March 2013]
[2:14] You can do now a 100 gig, you can do 200 gig. You can imagine doing a terabit per second in the next couple of years. At a terabit per second you’re talking about transferring or downloading a season of HDTV from one device to another in less than a second. It’s going to allow us to keep up with Moore’s law, and allow us to move information and constantly feed Moore’s law in our processors and so we will not be limited anymore by the interconnect, or the connectivity. [2:44]
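Paniccia's claim is easy to sanity-check with back-of-the-envelope arithmetic. The 100 GB season size below is our own rough assumption (roughly 20 episodes of high-bitrate HD video); the link rate is the 1 Tbit/s he mentions:

```python
# Back-of-the-envelope check: at 1 Tbit/s, how long does a season of
# HDTV take to transfer? The season size is an assumption of ours.

TERABIT_PER_S = 1e12   # link rate in bits per second
SEASON_BYTES = 100e9   # assumed size of an HDTV season: 100 GB

transfer_s = SEASON_BYTES * 8 / TERABIT_PER_S  # bytes -> bits, then divide
print(f"{transfer_s:.2f} s")  # 0.80 s
```

Even with a generous season size, the transfer comes in under a second, consistent with the claim.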
Intel already considered this innovation an inflection point back in 2010; see:
Justin Rattner, Mario Paniccia and John Bowers describe the impact and significance of the 50G Silicon Photonics Link [channelintel YouTube channel, July 26, 2010]
Now that the technology is ready for commercialisation this year, Intel is even more enthusiastic: Justin Rattner IDF Beijing 2013 Keynote-Excerpt: Silicon Photonics [channelintel YouTube channel, May 6, 2013]
Silicon photonics uses light (photons) to move huge amounts of data at extremely high speeds over a thin optical fiber rather than using electrical signals over a copper cable. But that is not all: Silicon Photonics: Disrupting Server Design [DataCenterVideos YouTube channel, Jan 22, 2013, Recorded at the Open Compute Summit, Jan 17, 2013, Santa Clara, California]
More information:
– Intel, Facebook Collaborate on Future Data Center Rack Technologies [press release, Jan 16, 2013]
New Photonic Architecture Promises to Dramatically Change Next Decade of Disaggregated, Rack-Scale Server Designs
- Intel and Facebook* are collaborating to define the next generation of rack technologies that enables the disaggregation of compute, network and storage resources.
- Quanta Computer* unveiled a mechanical prototype of the rack architecture to show the total cost, design and reliability improvement potential of disaggregation.
- The mechanical prototype includes Intel Silicon Photonics Technology, distributed input/output using Intel Ethernet switch silicon, and supports the Intel® Xeon® processor and the next-generation system-on-chip Intel® Atom™ processor code named “Avoton.”
- Intel has moved its silicon photonics efforts beyond research and development, and the company has produced engineering samples that run at speeds of up to 100 gigabits per second (Gbps).
– Silicon Photonics Research [Intel Labs microsite]
– The Facebook Special: How Intel Builds Custom Chips for Giants of the Web [Wired, May 6, 2013]
– Meet the Future of Data Center Rack Technologies [Data Center Knowledge, Feb 20, 2013] by Raejeanne Skillern, Intel’s director of marketing for cloud computing
… Let’s now drill down into some of the all-important details that shed light on what this announcement means in terms of the future of data center rack technologies.
What is Rack Disaggregation and Why is It Important?
Rack disaggregation refers to the separation of resources that currently exist in a rack, including compute, storage, networking and power distribution, into discrete modules. Traditionally, a server within a rack would each have its own group of resources. When disaggregated, resource types can then be grouped together, distributed throughout the rack, and upgraded on their own cadence without being coupled to the others. This provides increased lifespan for each resource and enables IT managers to replace individual resources instead of the entire system. This increased serviceability and flexibility drives improved total cost for infrastructure investments as well as higher levels of resiliency. There are also thermal efficiency opportunities by allowing more optimal component placement within a rack.
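The "upgraded on their own cadence" argument can be sketched numerically. The refresh intervals below are illustrative assumptions of ours, not Intel data; they only show why decoupling resources avoids discarding memory and storage that still have useful life left:

```python
# Toy illustration of the upgrade-cadence argument for rack disaggregation.
# Refresh intervals (in years) are illustrative assumptions, not Intel data.

CADENCE = {"compute": 2, "memory": 4, "storage": 6}  # useful life per resource
HORIZON = 12  # planning horizon in years

# Coupled server: the whole box is replaced on the fastest cadence, so
# memory and storage are discarded before their useful life is up.
coupled_replacements = {r: HORIZON // min(CADENCE.values()) for r in CADENCE}

# Disaggregated rack: each resource pool is refreshed on its own cadence.
disagg_replacements = {r: HORIZON // life for r, life in CADENCE.items()}

print(coupled_replacements)  # {'compute': 6, 'memory': 6, 'storage': 6}
print(disagg_replacements)   # {'compute': 6, 'memory': 3, 'storage': 2}
```

Under these assumed cadences, disaggregation halves the memory replacements and cuts storage replacements to a third, which is the total-cost and serviceability benefit described above.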
Intel’s photonic rack architecture, and the underlying Intel silicon photonics technologies, will be used for interconnecting the various computing resources within the rack. We expect these innovations to be a key enabler of rack disaggregation.
Why Design a New Connector?
Today’s optical interconnects typically use an optical connector called MTP. The MTP connector was designed in the mid-1980s for telecommunications and not optimized for data communications applications. At the time, it was designed with state-of-the-art materials manufacturing techniques and know-how. However, it includes many parts, is expensive, and is prone to contamination from dust.
The industry has seen significant changes over the last 25 years in terms of manufacturing and materials science. Building on these advances, Intel teamed up with Corning, a leader in optical fiber and cables, to design a totally new connector that takes advantage of state-of-the-art manufacturing techniques; includes a telescoping lens feature to make dust contamination much less likely; supports up to 64 fibers in a smaller form factor; and uses fewer parts – all at lower cost.
What Specific Innovations Were Unveiled?
The mechanical prototype includes not only Intel silicon photonics technology, but also distributed input/output (I/O) using Intel Ethernet switch silicon, and supports Intel Xeon processor and next-generation system-on-chip Intel Atom processors code named “Avoton.” …
In fact this will lead to a CPU – Memory – Storage … disaggregation as shown by the following Intel slide:
which will lead to new “Photonic Architectures”, or more precisely “Photonic Many-Core Architectures” (or later on even “Photonic/Optical Computing”), much more efficient than anything so far. For possibilities see these starting documents in academic architecture research:
– Photonic Many-Core Architecture Study Abstract [HPEC 2008, May 29, 2008]
– Photonic Many-Core Architecture Study Presentation [HPEC 2008, Sept 23, 2008]
– Building Manycore Processor-to-DRAM Networks Using Monolithic Silicon Photonics Abstract [HPEC 2008, Sept 23, 2008]
– Building Manycore Processor-to-DRAM Networks Using Monolithic Silicon Photonics Presentation [HPEC 2008, Sept 23, 2008]
Intel made available the following Design Guide for Photonic Architecture Draft Document v 0.5 [Jan 16, 2013] where we can find the following three architectures:
3.2 Interconnect Topology with a ToR [Top of Rack] Switch
One particular implementation of the Photonically Enabled Architecture which is supported by the New Photonic Connector is shown below in Figure 3.1. In this implementation the New Photonic Connector cables are used to connect the compute systems arrayed throughout the rack to a Top of Rack switch. These intra-rack connections are currently made through electrical cabling, often using Ethernet signaling protocols at various line rates. The Photonically Enabled Architecture envisions a system where the bandwidth density, line rate scalability and easier cable routing provide value in this implementation model. One key feature of this architecture is that the line rate and optical technology are not dictated; rather the lowest cost technology which can support the bandwidth demands and provide the functionality required to support future high speed and dense applications can be deployed in this model consistent with the physical implementation model. This scalability of the architecture is a key value proposition of the design. Not only is the architecture scalable for data rate in the optical cable, but scalability of port count in each connection is also possible by altering the physical cabling and optical modules.
Figure 3.1: Open Rack with Optical Interconnect.
In this architectural concept the green lines represent optical fiber cables terminated with the New Photonic Connector. They connect the various compute systems within the rack to the Top of Rack (TOR) switch. The optical fibers could contain up to 64 fibers and still support the described New Photonic Connector mechanical guidelines.

One key advantage of the optically enabled architecture is that it supports disaggregation in the rack based design of the various system functionality, which means separate and discrete portions of the system resources may be brought together. One approach to disaggregation is shown below in Figure 3.2; in the design shown here the New Photonic Connector optical cables are still connecting a computing platform to a Top of Rack switch, but the configuration of the components has been altered to allow for a more modular approach to system upgrade and serviceability. In this design the computing systems have been configured in ‘trays’ containing a single CPU die and the associated memory and control, while communication is aggregated between three of these trays through a Silicon Photonics module to a Top of Rack switch. The Top of Rack switch now communicates to the individual compute elements through a Network Interface Chip (NIC) while also supporting an array of Solid State Disk Drives (SSD’s) and potentially additional computing hardware to support the networking interfaces. This approach would allow for the modular upgrade of the computing and memory infrastructure without burdening the user with the cost of upgrading the SSD infrastructure simultaneously provided the IO infrastructure remains constant. Other options for the disaggregated system architecture are of course also possible, potentially leading to the disaggregation of the memory system as well.
Figure 3-2: Disaggregated Photonic Architecture Topology
with a ToR Switch.
This design shows 3 compute trays connected through a single New Photonic Connector enabled optical cable to a Top of Rack (TOR) switch supporting Network Interface Chip (NIC) elements, Solid State Disk Drives (SSD’s), Switching functionality and additional compute resources.

3.3 Interconnect Topology with Distributed Switch Functionality
The Photonically Enabled Architecture which is supported by the New Photonic Connector cable and connector concept can support several different types of architectures, each with specific advantages. One particular type of architecture, which also takes advantage of the functionality of another Intel component, an Intel Switch Chip, is shown in Figure 3.3 below. In this architecture the Intel Switch Chip is configured in such a way as to support both aggregation of data streams to reduce overall fiber and cabling burden as well as a distributed switching functionality.
The distributed switch functionality supports the modular architecture which was discussed in previous sections. This concept allows for a very granular approach to the deployment of resources throughout the data center infrastructure which supports greater resiliency through a smaller impact from a failure event. The concept also supports a more granular approach to upgradability and potentially could enable re-partitioning of the architecture in such a way that system resources can be better shared between different compute elements.
In Figure 3.3 an example is shown of 100Gbps links between compute systems and a remote storage node. Both PCIe and Ethernet networking protocols may be used in the same rack system, all enabled by the functionality of the Intel Switch Chip (or Device). It should be understood that the components in this vision could be swapped dynamically and asymmetrically so that improvements in bandwidth between particular nodes could be upgraded individually or new functionality could be incorporated as it becomes available.
Figure 3.3: An example of a Photonically Enabled Architecture
relying upon the New Photonic Connector concept, Silicon Photonics
and the Intel Switch Chip (or Device).
In this example the switching between the rack nodes is accomplished in a distributed manner through the use of these switch chips.
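The cabling reduction that the Figure 3.2-style aggregation buys can be sketched with a few lines of Python. The tray count is our own assumption; the three-trays-per-photonics-module grouping follows the design guide's description:

```python
# Minimal sketch of the Figure 3.2-style topology: compute trays are
# aggregated in groups of three through a silicon-photonics module onto
# a single optical cable to the Top-of-Rack (ToR) switch. The tray count
# is an assumption chosen only to illustrate the cabling reduction.

TRAYS = 24             # compute trays in the rack (assumed)
TRAYS_PER_MODULE = 3   # trays aggregated per photonics module


def cables_to_tor(trays, trays_per_module=1):
    """Optical cables reaching the ToR switch.

    trays_per_module=1 models one cable per tray (no aggregation);
    larger values model aggregation through a photonics module.
    """
    return -(-trays // trays_per_module)  # ceiling division


print(cables_to_tor(TRAYS))                    # 24: direct cabling
print(cables_to_tor(TRAYS, TRAYS_PER_MODULE))  # 8: aggregated cabling
```

Fewer, higher-bandwidth cables per rack is precisely what makes the 64-fiber New Photonic Connector and the scalable line rates described in section 3.2 attractive.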
Note that there is very little information about Krzanich’s manufacturing-technology winning cards. I found only this one, although there might be several others as well.
8. The two-person Executive Office and Intel’s transparent computing strategy as presented so far
Newly Elected Intel CEO, Brian Krzanich Talks About His New Job [channelintel YouTube channel, May 2, 2013]
Intel Board Elects Brian Krzanich as CEO [Intel Newsroom, May 2, 2013]
SANTA CLARA, Calif., May 2, 2013 – Intel Corporation announced today that the board of directors has unanimously elected Brian Krzanich as its next chief executive officer (CEO), succeeding Paul Otellini. Krzanich will assume his new role at the company’s annual stockholders’ meeting on May 16.
Krzanich, Intel’s chief operating officer since January 2012, will become the sixth CEO in Intel’s history. As previously announced, Otellini will step down as CEO and from the board of directors on May 16.
“After a thorough and deliberate selection process, the board of directors is delighted that Krzanich will lead Intel as we define and invent the next generation of technology that will shape the future of computing,” said Andy Bryant, chairman of Intel.
“Brian is a strong leader with a passion for technology and deep understanding of the business,” Bryant added. “His track record of execution and strategic leadership, combined with his open-minded approach to problem solving has earned him the respect of employees, customers and partners worldwide. He has the right combination of knowledge, depth and experience to lead the company during this period of rapid technology and industry change.”
Krzanich, 52, has progressed through a series of technical and leadership roles since joining Intel in 1982.
“I am deeply honored by the opportunity to lead Intel,” said Krzanich. “We have amazing assets, tremendous talent, and an unmatched legacy of innovation and execution. I look forward to working with our leadership team and employees worldwide to continue our proud legacy, while moving even faster into ultra-mobility, to lead Intel into the next era.”
The board of directors elected Renée James, 48, to be president of Intel. She will also assume her new role on May 16, joining Krzanich in Intel’s executive office.
“I look forward to partnering with Renée as we begin a new chapter in Intel’s history,” said Krzanich. “Her deep understanding and vision for the future of computing architecture, combined with her broad experience running product R&D and one of the world’s largest software organizations, are extraordinary assets for Intel.”
As chief operating officer, Krzanich led an organization of more than 50,000 employees spanning Intel’s Technology and Manufacturing Group, Intel Custom Foundry, NAND Solutions group, Human Resources, Information Technology and Intel’s China strategy.
James, 48, has broad knowledge of the computing industry, spanning hardware, security, software and services, which she developed through leadership positions at Intel and as chairman of Intel’s software subsidiaries — Havok, McAfee and Wind River. She also currently serves on the board of directors of Vodafone Group Plc and VMware Inc. and was chief of staff for former Intel CEO Andy Grove.
Additional career background on both executives is available at newsroom.intel.com.
The prominent first external reaction to that: Intel Promotes From Within, Naming Brian Krzanich CEO [Bloomberg YouTube channel, May 2, 2013]
Intel’s Krzanich the 6th Inside Man to Be CEO [Bloomberg YouTube channel, May 2, 2013]
Can Intel Reinvent Itself… Again? [Bloomberg YouTube channel, May 3, 2013]
Brian M. Krzanich, Chief Executive Officer (Elect), Executive Office
Brian M. Krzanich will become the chief executive officer of Intel Corporation on May 16. He will be the sixth CEO in the company’s history, succeeding Paul S. Otellini.
Krzanich has progressed through a series of technical and leadership roles at Intel, most recently serving as the chief operating officer (COO) since January 2012. As COO, his responsibilities included leading an organization of more than 50,000 employees spanning Intel’s Technology and Manufacturing Group, Intel Custom Foundry, supply chain operations, the NAND Solutions group, human resources, information technology and Intel’s China strategy.
His open-minded approach to problem solving and listening to customers’ needs has extended the company’s product and technology leadership and created billions of dollars in value for the company. In 2006, he drove a broad transformation of Intel’s factories and supply chain, improving factory velocity by more than 60 percent and doubling customer responsiveness. Krzanich is also involved in advancing the industry’s transition to lower cost 450mm wafer manufacturing through the Global 450 Consortium as well as leading Intel’s strategic investment in lithography supplier ASML.
Prior to becoming COO, Krzanich held senior leadership positions within Intel’s manufacturing organization. He was responsible for Fab/Sort Manufacturing from 2007-2011 and Assembly and Test from 2003 to 2007. From 2001 to 2003, he was responsible for the implementation of the 0.13-micron logic process technology across Intel’s global factory network. From 1997 to 2001, Krzanich served as the Fab 17 plant manager, where he oversaw the integration of Digital Equipment Corporation’s semiconductor manufacturing operations into Intel’s manufacturing network. The assignment included building updated facilities as well as initiating and ramping 0.18-micron and 0.13-micron process technologies. Prior to this role, Krzanich held plant and manufacturing manager roles at multiple Intel factories.
Krzanich began his career at Intel in 1982 in New Mexico as a process engineer. He holds a bachelor’s degree in Chemistry from San Jose State University and has one patent for semiconductor processing. Krzanich is also a member of the Board of Directors of Lilliputian Corporation and the Semiconductor Industry Association.
Renée J. James, President (Elect), Executive Office
Renée J. James is president of Intel Corporation and, with the CEO, is part of the company’s two-person Executive Office.
James has broad knowledge of the computing industry, spanning hardware, security, software and services, which she developed through product R&D leadership positions at Intel and as chairman of Intel’s software subsidiaries — Havok, McAfee and Wind River.
During her 25-year career at Intel, James has spearheaded the company’s strategic expansion into providing proprietary and open source software and services for applications in security, cloud-based computing, and importantly, smartphones. In her most recent role as executive vice president and general manager of the Software and Services Group, she was responsible for Intel’s global software and services strategy, revenue, profit, and product R&D. In this role, James led Intel’s strategic relationships with the world’s leading device and enterprise operating systems companies. Previously, she was the director and COO of Intel Online Services, Intel’s datacenter services business. James was also part of the pioneering team working with independent software vendors to port applications to Intel Architecture and served as chief of staff for former Intel CEO Andy Grove.
James began her career with Intel through the company’s acquisition of Bell Technologies. She holds a bachelor’s degree and master’s degree in Business Administration from the University of Oregon.
James also serves as a non-executive director on the Vodafone Group Plc Board of Directors and is a member of the Remuneration Committee. She is an independent director on the VMware Inc. Board of Directors and is a member of the Audit Committee. She is also a member of the C200.
Chip Shot: Renée James Selected as Recipient of C200’s STEM Innovator Luminary Award [IntelPR in Intel Newsroom, April 13, 2013]
Renée J. James, Intel executive vice president and general manager of the Software and Services Group, has earned the prestigious honor of being the recipient of the STEM Innovator Luminary Award, presented by the Committee of 200 (C200). C200 is an international, non-profit organization of the most powerful women who own or run companies, or who lead major divisions of large corporations. A STEM Innovator is the leader of a technology-based business who has exemplified unique vision and success in science, technology, engineering or math-based industries, which James has continually demonstrated throughout her career at Intel. This includes growing Intel’s software and services business worldwide, driving open standards within the software ecosystem and providing leadership as chairman for both McAfee and Wind River Systems, Intel wholly owned subsidiaries.
Renée James keynote delivering Intel’s new strategy called ‘Transparent Computing’ at the IDF 2012 [TomsHardwareItalia YouTube channel, Sept 13, 2012]
IDF 2012 Day 2:
– Intel Developer Forum 2012 Keynote, Renée James Transcript (PDF 190KB)
– Intel Developer Forum 2012 Keynote, Renée James Presentation (PDF 7MB)
Intel to Software Developers: Embrace Era of Transparent Computing [press release, Sept 12, 2012]
NEWS HIGHLIGHTS
- Intel reinforces commitment to ensuring HTML5 adoption accelerates and remains an open standard, providing developers a robust application environment that will run best on Intel® architecture.
- New McAfee Anti-Theft product is designed to protect consumers’ property and personal information on Ultrabook™ devices.
- The Intel® Developer Zone is a new program designed to provide software developers and businesses with a single point of access to tools, communities and resources to help them engage with peers.
INTEL DEVELOPER FORUM, San Francisco, Sept. 12, 2012 – Today at the Intel Developer Forum (IDF), Renée James, senior vice president and general manager of the Software and Services Group at Intel Corporation, outlined her vision for transparent computing. This concept is made possible only through an “open” development ecosystem where software developers write code that will run across multiple environments and devices. This approach will lessen the financial and technical compromises developers make today.
“With transparent computing, software developers no longer must choose one environment over another in order to maintain profitability and continue to innovate,” said James. “Consumers and businesses are challenged with the multitude of wonderful, yet incompatible devices and environments available today. It’s not about just mobility, the cloud or the PC. What really matters is when all of these elements come together in a compelling and transparent cross-platform user experience that spans environments and hardware architectures. Developers who embrace this reality are the ones who will remain relevant.”
Software developers are currently forced to choose between market reach, delivering innovation or staying profitable. By delivering the best performance with Intel’s cross-platform tools, security solutions and economically favorable distribution channels, the company continues to take a leadership position in defining and driving the open software ecosystem.
Develop to Run Many Places
While developers regularly express their desire to write once and run on multiple platforms, currently there is little incentive for any of the curators of these environments to provide cross-platform support. Central to Intel’s operating system of choice strategy, the company believes a solution to the cross-platform challenge is HTML5. With it, developers no longer have to make trade-offs between profitability, market participation or delivering innovation in their products. Consumers benefit by enabling their data, applications and identity to seamlessly transition from one operating system or device environment to another.
During her keynote, James emphasized the importance of HTML5 and related standards and that the implementation of this technology by developers should remain open to provide a robust application development environment. James reinforced Intel’s commitment to HTML5 and JavaScript by announcing that Mozilla, in collaboration with Intel, is working on a native implementation of River Trail technology. It is available now for download as a plug-in and will become native in Firefox browsers to bring the power of parallel computing to Web applications in 2013.
Security at Intel Provides an Inherent Advantage
Security at Intel provides an inherent advantage in terms of its approach. For over a decade, Intel has applied its technology leadership to security platform features aimed at keeping computing safe, from devices and networks to the data center. Today, the company extends the efficacy of security by combining hardware and software security solutions and co-designing products with McAfee. James invited McAfee Co-President Michael DeCesare to join her onstage to emphasize the important role security takes as the threat landscape continues to become more complex both in terms of volume and sophistication. DeCesare also highlighted the opportunity for developers to participate in securing the industry.
Touching on where McAfee is heading with Intel, DeCesare discussed the importance of understanding where computing is going overall. He noted examples including applications moving to the cloud, as well as IT seeking ways to reduce power consumption and wrestling with challenges associated with big data and the consumerization of IT. DeCesare also highlighted the value of maintaining the user experience and introduced McAfee Anti-Theft security software. Designed to protect consumers’ property and personal information for Ultrabook™ devices, this latest product enhancement is a collaborative effort with Intel to develop anti-theft software using Intel technologies that provide device and data protection.
DeCesare reiterated the opportunity for developers through the McAfee Security Innovation Alliance (SIA). The technology partnering program helps accelerate development of interoperable security products, simplify integration of these products and deliver solutions that maximize the value of existing customer investments. The program is also intended to reduce both time-to-problem resolution and operational costs.
Developers’ Access to Resources Made Easy
James also announced the Intel® Developer Zone, a program designed to provide software developers and businesses with a single point of access to tools, communities and resources to help them engage with peers. Today’s software ecosystem is full of challenges and opportunities in such areas as technology powering new user experiences, expectations from touchscreens, battery life requirements, data security and cloud accessibility. The program is focused on providing resources to help developers learn and embrace these evolving market shifts and maximize development efforts across many form factors, platforms and operating systems.
Development Resources: Software tools, training, developer guides, sample code and support will help developers create new user experiences across many platforms. In the fourth quarter of this year, Intel Developer Zone will introduce an HTML5 Developer Zone focused on cross-platform apps, guiding developers through actual deployments of HTML5 apps on Apple* iOS*, Google* Android*, Microsoft* Windows* and Tizen*.
Business Resources: Global software distribution and sales opportunities will be available via the Intel AppUp® center and co-marketing resources. Developers can submit and publish apps to multiple Intel AppUp center affiliate stores for Ultrabook devices, tablets and desktop systems. The Intel Developer Zone also provides opportunities for increased awareness and discoverability through the Software Business Network, product showcases and marketing programs.
Active Communities: With Intel Developer Zone, developers can engage with experts in their field – both from Intel and the industry – to share knowledge, get support and build relationships. In the Ultrabook community, users will find leading developers sharing ideas and recommendations on how to create compelling Microsoft* Windows* 8 apps for the latest touch- and sensor-enabled Ultrabook devices.
Mobile Insights: Emerging Technologies [channelintel YouTube channel, Feb 26, 2013]
Mobile Insights: Software Development in Africa [channelintel YouTube channel, March 5, 2013]
Intel Developer Forum: Executives Talk Evolution of Computing with Devices that Touch People’s Daily Lives [press release, April 11, 2011]
…
Renée James: Creating the Ultimate User Experience
During her keynote, James discussed Intel’s transition from a semiconductor company to a personal computing company, and emphasized the importance of delivering compelling user experiences across a range of personal computing devices. To develop and enable the best experiences, James announced a strategic relationship with Tencent*, China’s largest Internet company, to create a joint innovation center dedicated to delivering best-in-class mobile Internet experiences. Engineers from both companies will work together to further the mobile computing platforms and other technologies.

James also announced new collaborations for the Intel AppUpSM center and the Intel AppUp Developer Program in China to help assist in the creation of innovative applications for Intel Atom processor-based devices. Chinese partners supporting this effort include Neusoft*, Haier*, Hasee* and Shenzhen Software Park*.
…
Related presentation: Renee James: The Intel User Experience (English PDF 9.1MB)
How Intel’s new president Renee James learned the ropes from the legendary Andy Grove [VentureBeat, May 2, 2013]
Renee James became the president of Intel today. That’s the highest position a woman has ever held at the world’s largest chip maker. Alongside new CEO Brian Krzanich, James will be part of the two-person executive office running Intel. She rose to that position through tenacity and leadership during a career at Intel, but she was also part of a very exclusive club.
The 25-year Intel veteran was one of the early young employees who served as “technical assistant” to former chief executive Andy Grove, the hard-charging leader who went by the motto “Only the Paranoid Survive.” In that position, she was not just an executive assistant. Rather, her job was to make sure that Grove always looked good and was up-to-speed on his personal use of technology. She helped him prepare his PowerPoint presentations and orchestrated his speeches. As a close confidant, she had direct access to one of the most brilliant leaders of the tech industry.
Intel’s executives needed technical assistants in the way that contemporaries like Bill Gates, who grew up as a programmer, did not. Intel’s leaders were technically savvy manufacturing and chip experts, but they were not born as masters of the ins and outs of operating PowerPoint. So the company developed the technical assistant as a formal position, and each top executive had one. That position has turned out to be an important one; executives mentored younger, more promising employees. These employees then moved on to positions of great authority within Intel.
What makes James’s career so interesting — and a standout — is that unlike Intel’s early leaders, she wasn’t a chip engineer or manufacturing executive. She has an MBA from the University of Oregon, and she pitched non-chip businesses for Intel to enter and became chief operating officer of Intel Online Services.
James will start her new position on May 16 and will report to Krzanich.
James served under Grove for a longer time than most technical assistants did, as she proved indispensable to him. James said that she learned a huge amount from Grove, and she took lots of notes on the things that he said that made an impression on her. Paul Otellini, the retiring CEO of Intel, also served as a technical assistant for Grove. The technical assistant job was one of those unsung positions that required a lot of wits. James had to pull together lots of Intel resources to set up, rehearse, and execute Grove’s major keynote speeches.
She was eventually given the more impressive title of “chief of staff.” During the dotcom era, she moved out on her own to set up an ill-fated business. She was in charge of Intel’s move into operating data centers that could be outsourced to other companies.
Under James’ plan, Intel would set up data centers with the same discipline and precision that it did with its chip manufacturing plants. It would build out the huge server rooms in giant warehouses and then rent the computing power to smaller companies. The business was much like Amazon’s huge web services business today. But Intel was too early and on the wrong side of the dotcom crash. When things fell apart in 2001, so did Intel’s appetite for noncore businesses. Intel shut down James’ baby.
But she went on to manage a variety of other businesses, including Intel’s security, software, services, and other non-chip businesses that have become more important as Intel takes on its mantle as a leader of the technology industry rather than just a component maker. That’s one of the legacies of Grove, who saw that Intel had to do a lot of the fundamental research and development in the computer industry, in part because nobody except Microsoft had the profits to invest in R&D.
As executive vice president of software and services, James managed Intel software businesses, including Havok, McAfee, and Wind River. During her tenure over software, Intel struggled in its alliance with Nokia to create the Meego mobile operating system, and it eventually gave up on it.
Among the other technical assistants at Intel were Sean Maloney, a rising star who retired last year after having a stroke in 2010; venture capitalist Alex Wong; and Anand Chandrasekher, who left Intel and is now the chief marketing officer at rival Qualcomm.
Nokia’s non-Windows crossroad
Update: 3” display with 240 x 320 pixels, not AMOLED screen, 3.2 MP camera. More information:
– New Asha platform and ecosystem to deliver a breakthrough category of affordable smartphone from Nokia [‘Experiencing the Cloud’, May 9, 2013] my composite post of all the relevant launch information
– New Nokia Asha platform for developers [‘Experiencing the Cloud’, May 9, 2013] my composite post of all the relevant development platform information End of update
There was a question as to why I was so affirmative with the headline of Temporary Nokia setback in India [‘Experiencing the Cloud’, April 28, 2013]. The quite remarkable cross-platform development story for current and future Nokia Asha devices is the major part of my affirmative approach. Take a look and convince yourself as well!
Nokia’s cross-platform strategy is aimed at the following value proposition to developers (see in the “Nokia’s own Asha cross-platform efforts for developers (so far)” section):
Consider Co-Development, instead of classic “porting”
As the Category:Silverlight [Nokia Developer Wiki, April 22, 2013] is stating:
Deprecated Category. Please move any articles across to Category:XAML.
the rumor below about the Asha 501, expected on May 9th, that its design will be like the Nokia Lumias, would mean that programmatically the same XAML interface would be delivered by Nokia for a further enhanced Nokia Asha Touch S40 operating system. This is even more likely as the J2ME platform of the Nokia Asha Touch S40 operating system was enhanced a few days ago with the Lightweight User Interface Toolkit (LWUIT) in Nokia SDK 2.0 for Java™. LWUIT is supported by the fully cross-platform Codename One development kit from the third-party company of the same name, which is also preparing a XAML-based 1.1 version of this toolkit for Windows Phone 8/7 (and presumably for Windows 8 as well), thus allowing the same standard Java programming by providing (see the “Codename One cross-platform offerings for Java developers” section):
1 Java API which is the same for J2ME, Android, iOS, RIM and Win8.
It is also quite probable that Nokia’s own Asha cross-platform offerings will be extended with C#/XAML oriented cross-platform toolkit[s] on May 9th. Then we would have a complete cross-platform story for Nokia’s non-Windows offerings. We’ll see.
Nokia launching Asha 501 on 9th May? [mobile indian, May 1, 2013]
Nokia has sent out press invites for an event on May 9, which could possibly be about the Asha 501 launch, and we have strong reasons to believe so.
Nokia will probably launch new phone(s) in the Asha series lineup on May 9th, the day for which it has organized an event and sent out invites to various media organisations. And while the invitation does not specify the subject of the launch, we are pretty sure it is an Asha series phone, as the invite was sent by the team that looks after the Asha lineup.
Most probably, Nokia will launch the Asha 501, which has been in the news of late.
According to rumors, the Nokia Asha 501 is to come with a design like the Nokia Lumia phones.
Further, the Asha 501 is said to come with a 5 megapixel camera with LED flash, and a slightly larger display than the Asha 311, which has a 3 inch touchscreen. Most likely this handset will have at least a 1 GHz processor.
Nokia is reemphasizing its Asha series of phones to strengthen its market hold. Recently Stephen Elop, Nokia’s chief executive officer, also emphasized that, saying, “We have to make sure the product portfolio is as competitive as possible. We are due for a significant refresh.”
#Breaking “Nokia 501” & “Nokia 210” Passed Testing Process by Directorate Post & Telecommunication Indonesia [nokianesia blog, April 9, 2013]
Today, April 9, 2013, the Directorate of Post & Telecommunication Indonesia published two new Nokia devices that have already passed the testing process for certification.
They are the Nokia 501 RM-902, which may be the next generation of Nokia Asha, and the Nokia 210 RM-924, which should be the Nokia Asha 210.
Right now we don’t have any information about specifications. We will post when there is any information about the Nokia 501 and Nokia Asha 210.
Source postel.go.id
Compare Nokia Asha 501 vs Micromax A51 Bolt [91mobiles, March 16, 2013]
| | Nokia Asha 501 | Micromax A51 Bolt [$79+] |
| --- | --- | --- |
| Display | 3.5”, AMOLED capacitive touchscreen | 3.5”, TFT LCD capacitive touchscreen, 262K colors |
| Resolution | 320 x 480 pixels | 320 x 480 pixels |
| Processor | 1 GHz | 832 MHz, BCM21552 [ARM11] |
| Memory | 512 MB RAM | 512 MB ROM, 256 MB RAM |
| Rear camera | 5MP with LED flash | 2MP with flash |
| Front camera | yes | 0.2MP |
| Video recording | yes | VGA @30fps |
| Video playback | yes | 720×486 |
| Connectivity | GPRS, EDGE, HSDPA/HSUPA, WiFi 802.11 b/g/n, Bluetooth, USB | 3G, Bluetooth, Wi-Fi, USB |
| OS | Nokia Asha Touch OS | Android V2.3.7 (Gingerbread) |
Sections of this post:
– Codename One cross-platform offerings for Java developers
– Nokia’s own Asha cross-platform efforts for developers (so far)
Codename One cross-platform offerings for Java developers
Developers Guide [Version 1.0.1, Jan 24, 2013]
Introduction
Codename One is a set of tools for mobile application development that derives a great deal of its architecture from Java. It stands both as the name of the startup that created the set of tools and as a prefix to the distinct tools that make up the Codename One product.
The goal of the Codename One project is to take the complex and fragmented task of mobile device programming and unify it under a single set of tools, APIs and services to create a more manageable approach to mobile application development without sacrificing development power/control.
History
Codename One was started by Chen Fishbein & Shai Almog, who authored the open source LWUIT project at Sun Microsystems starting in 2007. The LWUIT project aimed at solving the fragmentation within J2ME/Blackberry devices by targeting a higher standard of user interface than the common baseline at the time. LWUIT received critical acclaim and traction within multiple industries but was limited by the declining feature phone market.
In 2012 the Codename One project took many of the basic concepts developed within the LWUIT project and adapted them to the smartphone world, which is experiencing issues similar to the device fragmentation of the old J2ME phones.
How Does It Work
Codename One has 4 major parts: API, Designer, Simulator, Build/Cloud server.
API – abstracts platform specific functionality
Designer – allows developers/designers to design the GUI/theme and package various resources required by the application
Simulator – allows previewing and debugging applications within the IDE
Build/Cloud server – the server performs the build of the native application, removing the need to install additional software stacks.
Limitations & Capabilities
J2ME & RIM are very limited platforms. To achieve partial Java 5 compatibility, Codename One automatically strips the Java 5 language requirements from bytecode and injects its own implementation of Java 5 classes. Not everything is supported, so consult the Codename One JavaDoc when you get a compiler error to see what is available.
Due to the implementation of the NetBeans IDE, it is very difficult to properly replace and annotate the supported Java APIs, so the completion and error marking might not represent correctly what is actually working and implemented on the devices. However, the compilation phase will not succeed if you use unsupported classes.
Lightweight UI
The biggest differentiation for Codename One is the lightweight architecture, which enables a great deal of the capabilities within Codename One. A lightweight component is a component written entirely in Java; it draws its own interface and handles its own events/states.
This has huge portability advantages since the same code executes on all platforms, but it carries many additional advantages.
The components are infinitely customizable just by using standard inheritance and overriding paint/event handling. Theming and the GUI builder allow for live preview and accurate reproduction across platforms since the same code executes everywhere.
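The lightweight pattern described above can be sketched in a few lines of plain Java (hypothetical names, not the actual Codename One API): every widget paints itself through one portable graphics abstraction and handles its own events, so customizing a component is just inheritance plus a paint override, and the same code behaves identically on every platform.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal stand-in for a portable graphics context that records draw calls.
class Graphics {
    final List<String> ops = new ArrayList<>();
    void fillRect(int x, int y, int w, int h) { ops.add("fillRect " + w + "x" + h); }
    void drawString(String s, int x, int y)   { ops.add("drawString " + s); }
}

// A lightweight component draws itself and handles its own events/state.
abstract class Component {
    int width = 100, height = 30;
    abstract void paint(Graphics g);
    void pointerPressed(int x, int y) { }  // default: ignore input
}

// Overriding paint() is all that is needed to customize a widget;
// the identical drawing code then runs on every supported platform.
class Button extends Component {
    final String label;
    boolean pressed;
    Button(String label) { this.label = label; }
    @Override void paint(Graphics g) {
        g.fillRect(0, 0, width, height);   // background
        g.drawString(label, 4, 4);         // caption
    }
    @Override void pointerPressed(int x, int y) { pressed = true; }
}

public class LightweightDemo {
    public static void main(String[] args) {
        Graphics g = new Graphics();
        Button ok = new Button("OK");
        ok.paint(g);
        ok.pointerPressed(10, 10);
        System.out.println(g.ops + " pressed=" + ok.pressed);
    }
}
```

Because nothing here touches native widgets, a desktop simulator can render the exact same pixels the device would, which is the portability advantage the text describes.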
…
Codename One Benchmarked With Amazing Results [Codename One – Reinventing the Mobile Development blog, Dec 7, 2012]
Steve Hannah, who ported Codename One to Avian, has just completed a set of benchmarks on Codename One’s iOS performance, putting Codename One at 33% slower than native C and faster than Objective-C!
I won’t spoil his research results so please read his full post here.
A small disclaimer is that the Objective-C benchmark is a bit heavy on the method/message calls which biases the benchmark in our favor. Method invocations in Codename One are naturally much faster than the equivalent Objective-C code due to the semantics of that language.
With 100,000 SDK Downloads, Mobile Development Platform Codename One Comes Out of Beta With 1.0 Launch [Codename One – Reinventing the Mobile Development blog, Jan 29, 2013]
Tel Aviv, Israel – Mobile development platform Codename One is announcing the launch of its 1.0 version on Tuesday, January 29. After releasing in beta last June, Codename One – the first software development kit that allows Java developers to create true high performance native mobile applications across multiple mobile operating systems using a single code base – has garnered over 100,000 downloads and emerged as one of the fastest toolkits of its kind, on par with native OS toolkits.
The platform to date has been used to build over 1,000 native mobile applications and has been touted by mobile developers and enthusiasts as the best write-once-run-everywhere solution for building native mobile apps.
“I have been developing with Codename One for a couple of months now. When you line up all of the other options for development, whether native SDKs, Appcelerator, ADF or others, Codename One wins on almost every front,” said software developer Steve Hannah.
Codename One has received widespread, viral acclaim in technology and business media including InfoWorld, Slashdot, Hacker News, VentureBeat, Business Insider, The Next Web, Dr. Dobbs and Forbes, which named the company one of the 10 greatest industry disrupting startups of 2012.
“We have been thrilled with the success of our beta launch and are very excited to release the much-awaited 1.0 version,” said co-founder and CEO Shai Almog.
Almog, along with co-founder Chen Fishbein, decided to launch the venture after noticing a growing inefficiency within mobile application development. By enabling developers to significantly cut time and costs in developing native applications for iOS, Android, Blackberry, Windows Phone 7 and other devices, Almog and Fishbein hope to make mobile application development increasingly feasible.
The Java-based platform is open-source and utilizes lightweight technology, allowing it to produce unique native interfaces highly differentiated from competitive cross-platform mobile development toolkits, which typically use HTML5 or heavyweight technology.
By drawing all components from scratch rather than utilizing native widgets, Codename One enables developers to avoid fragmentation – a major hindrance found in the majority of competitors – and additionally allows accurate desktop simulation of mobile apps.
The startup’s founders are recognized for engineering Sun Microsystems’s famous Lightweight User Interface Toolkit, a mobile platform used by leading mobile carriers and industry leaders to this date.
Codename One is available for download free of charge.
About Codename One
Codename One, named by Forbes as “one of the 10 greatest industry disrupting startups of 2012,” is an Israel-based technology company that has created a powerful cross-platform software development kit for mobile applications. The technology enables developers to create native applications across multiple operating systems using a single code base. Codename One was founded by renowned software engineers Shai Almog and Chen Fishbein in 2012.
Windows Phone 8 And The State Of 7 [Codename One – Reinventing the Mobile Development blog, April 2, 2013]
Codename One’s Windows Phone port is close to a public release. A preliminary Windows Phone 8 build has been available on our servers for the past couple of days. We differentiate between a Windows Phone 7 and 8 version by a build argument that indicates the version (win.ver=8); this will be exposed in the GUI in the next update of the plugin. But now I would like to discuss the architecture and logic behind this port, which will help you understand how to optimize it and maybe even help us with the actual port.
The Windows Phone 7 and 8 ports are both based on the XMLVM translation to C# code. We picked this approach because all other automated approaches proved to be duds. iKVM, which seemed like the most promising option, isn’t supported on mobile, so that left only the XMLVM option.
The Windows Phone 7 port was based on XNA (a 3D C#-based API), which has its share of problems but was more appropriate to our needs in Codename One. Unfortunately Microsoft chose to kill off XNA for Windows Phone 8, which put us in a bit of a bind when trying to build the Windows Phone 8 port.
While externally Windows Phone 8 and 7 look very similar, their underlying architecture is completely different and very incompatible. You cannot compile a universal binary that will work on all of Microsoft’s platforms, so just to make order within this mess:
- Windows Phone 7 – based on the old Windows CE kernel. Allows only managed runtimes (e.g. C#, not C++); graphics can be done using XAML or XNA (more on that later).
- Windows Phone 8 – based on an ARM port of the Windows 8 kernel. Allows unmanaged apps (C# or C++); graphics can be done in XAML, or in Direct3D when using C++ (but not Silverlight).
- Windows RT/Desktop – the full Windows 8 kernel, either for ARM or for PC. They are partially compatible with one another, so I’m putting them together. This is actually pretty similar to the Windows Phone 8 port, but incompatible, so a different build and slightly different API usage are needed.
As you can see, we can’t use XNA since it isn’t supported by the new platforms. We toyed a bit with the idea of using Direct3D, but integrating it with text input, fonts, etc. seemed like a nightmare. Furthermore, doing another C++ port would mean a HUGE amount of work!
So Codename One is based on the XAML API. Most people think of XAML as an XML-based API, but you can use it from C# and just ignore most of the XML aspects of it, which is what we need since our UI is constructed dynamically. However, this is more complicated than it seems.
To understand the complexity you need to understand the idea of a Scene Graph. If you have used Codename One, you are used to a more immediate-mode graphics API, where the paint method is invoked and just paints the component whenever it’s needed. This is the simplest, most portable way of doing graphics and is pretty common; it’s used natively by Android, OpenGL, Direct3D, etc. and is very familiar to developers.
In recent years many Scene Graph APIs have sprung up; XAML is one of them, and so are JavaFX, Flash, SVG and many others. In a Scene Graph world you construct a graphics hierarchy and then let it be rendered; the whole paint() sequence is hidden from the developer. The best way to explain it is that our components in Codename One are really a scene graph, only at a higher abstraction level. Windows/Flash placed the scene graph on the graphics level as well, so to draw a rectangle you would just add it to the tree (and remove it when you no longer need it).
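The contrast can be sketched in plain Java (hypothetical names, neither the XAML nor the Codename One API): in immediate mode one call paints right now, while in a retained-mode scene graph you describe a tree of nodes and a hidden loop renders it for you.

```java
import java.util.ArrayList;
import java.util.List;

// Retained-mode node: added to a tree once, rendered by a hidden loop.
class Node {
    final String name;
    final List<Node> children = new ArrayList<>();
    Node(String name) { this.name = name; }
    Node add(Node child) { children.add(child); return this; }
    // The framework walks the tree; the developer never calls this directly.
    int render() {
        int painted = 1;
        for (Node c : children) painted += c.render();
        return painted;
    }
}

public class SceneGraphSketch {
    // Immediate mode: one call draws exactly what is asked, right now.
    static String paintImmediate() { return "rect+label painted now"; }

    public static void main(String[] args) {
        System.out.println(paintImmediate());
        // Retained mode: describe the scene, then let the engine render it.
        Node scene = new Node("form")
                .add(new Node("rect"))
                .add(new Node("label"));
        System.out.println("scene nodes rendered: " + scene.render());
    }
}
```

The port's difficulty, as the post explains, is making the first style's API drive the second style's machinery without the tree growing out of control.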
This is actually pretty powerful, you can do animations just by changing component values in trees and performance can be pretty spectacular since the paint loop can be GPU optimized.
However, the reality is that most developers find these APIs harder to work with (since they need to keep track of a rather complex, unintuitive tree), and the APIs aren’t portable at all since the hierarchies are so different. Performance is also very hard to tune since so much is hidden by the underlying paint logic.
For Codename One this is a huge problem: we need our API to act as if it’s painting in immediate mode while constructing/updating a scene! When we initially built this, the performance was indeed as bad as you might imagine. While we are not in the clear yet, the performance is much improved…
How did we solve this?
There are several different issues involved. The first is the number of elements on the screen: we noticed that if we have more than 200 elements on the screen, performance quickly degrades. This was a HUGE problem since we have thousands of paint operations happening just in the process of transitioning into a new form. To solve this we associate every graphics element with a component, and when the component is repainted we remove all operations related to it; we also try to reuse graphics resources such as images from the previous paint operation.
When painting a component in Codename One we normally traverse up the component tree and paint from the first opaque component forward (known as the painter’s algorithm). However, since the scene already has the parent component, painting it again would result in many copies of the image being within the scene graph. E.g. I have a background image on a form; when painting a translucent label I have to paint the background image within a clipping region matching the label… In the Windows Phone port we have a special hook that just disables this functionality, and this hook alone pushed us over the top to reasonable graphics performance!
We are working on getting additional performance-oriented features into place and fixing some issues related to this approach. It’s not a simple task, since the API wasn’t designed with this in mind, but it is doable. We would appreciate you taking the time to review the port.
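The recycling trick described above can be sketched like this (hypothetical names, not the actual port's code): each scene element is keyed by the component that produced it, so repainting a component replaces that component's elements instead of piling new ones onto the scene.

```java
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Sketch of keeping a scene graph bounded: every scene element is
// tagged with the component that produced it, so a repaint swaps
// that component's elements rather than growing the scene.
public class SceneRecycler {
    final Map<String, List<String>> sceneOps = new LinkedHashMap<>();

    // Repaint: drop the stale elements for this component, keep the rest.
    void repaint(String component, List<String> ops) {
        sceneOps.put(component, ops);
    }

    int totalElements() {
        int n = 0;
        for (List<String> ops : sceneOps.values()) n += ops.size();
        return n;
    }

    public static void main(String[] args) {
        SceneRecycler scene = new SceneRecycler();
        scene.repaint("label", Arrays.asList("rect", "text"));
        scene.repaint("label", Arrays.asList("rect", "text")); // repaint: no growth
        System.out.println(scene.totalElements());             // stays at 2, not 4
    }
}
```

Without the keying, every transition's thousands of paint operations would accumulate in the tree, which is exactly the >200-element slowdown the post describes.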
Build Java Application for Mobile Devices [Shai Almog YouTube channel, Jan 10, 2013]
Codename One Executive Overview [Shai Almog YouTube channel, Jan 6, 2013]
Developer Introduction To Codename One [Shai Almog YouTube channel, Jan 6, 2013]
Series 40 Webinar: LWUIT for Nokia Asha app development [nokiadevforum YouTube channel, April 16, 2013]
More information:
– Swing into Mobile – Use the Lightweight UI Toolkit on Nokia Series 40 phones [pp. 81–84 of Java Magazine, January/February 2013]
– LWUIT for Series 40 out of beta [Nokia Developer News, Feb 26, 2013]
Great news for those of you wanting to deliver superior UIs in your Series 40 apps: the Lightweight UI Toolkit (LWUIT) for Series 40 has graduated from beta to a full initial release.
LWUIT is an open source Java ME toolkit that supports a comprehensive range of visual UI components, and other user interface elements such as theming, transitions, and animation among others. It helps you create applications with appealing UIs that closely follow the native Series 40 UIs. It also helps speed up development by significantly reducing the need to create custom UI components, which might be needed when creating an app’s UI using LCDUI. LWUIT for Series 40 can be used in combination with selected Nokia UI APIs and all the JSR APIs available on the platform.
Since the last LWUIT for Series 40 release made available in the Nokia SDK 2.0 for Java, development of the toolkit has been continuing at a rapid pace. A number of new APIs have been introduced, including PopUpChoiceGroup, ContextMenu, NokiaListCellRenderer, theme selection, and full-screen mode. There have also been significant improvements in performance, particularly in lists, themes loading, and HTMLComponent. Compatibility with the native full-touch UI has been fine-tuned and many bugs fixed, particularly in command handling and text input.
The toolkit also includes all the new examples created since the last release. These include code examples that provide demonstrations of the Category bar, gestures, and lists. There are also new application examples for birthdays, showing use of the calendar component and PIM API; a slide puzzle; tourist attractions, showing the use of HERE maps and in-app purchasing APIs; and a Reddit client showing the use of a custom theme and JSON. In addition, updated version of two of the original LWUIT examples applications, LWUITDemo and LWUITBrowser, are also included.
The final component in the full release of LWUIT for Series 40 is the inclusion of comprehensive documentation in the toolkit. This is based on the LWUIT Developer’s Library, a library consisting of:
Developer’s Guide, which is based on the original LWUIT Developer Guide and provides technical information about using the LWUIT components
LWUIT UX overview, which is a new section providing a guide to designing app UIs with LWUIT for Series 40 components
If you have the Nokia SDK 2.0 for Java installed, you will receive an automatic notification of the availability of LWUIT for Series 40 1.0. You can then simply follow the instructions to install the update. If you are using LWUIT with the Nokia SDK 1.1 for Java, you can download the update from LWUIT for Series 40 project.
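To illustrate why toolkit-level theming (one of the LWUIT features listed above) needs no per-component code, here is a conceptual plain-Java sketch (hypothetical names, not the actual LWUIT API): widgets look up their style in a central theme at paint time, so swapping the theme restyles every widget at once.

```java
import java.util.HashMap;
import java.util.Map;

// Conceptual sketch of toolkit-level theming: components resolve their
// style from a shared theme, so "theme selection" is one swap, not a
// per-widget code change.
public class ThemeSketch {
    static Map<String, String> theme = new HashMap<>();

    // Each component type falls back to a default when unthemed.
    static String styleOf(String componentType) {
        return theme.getOrDefault(componentType, "fgColor=black");
    }

    public static void main(String[] args) {
        theme.put("Button", "fgColor=white;bgColor=blue");
        System.out.println(styleOf("Button"));  // themed
        System.out.println(styleOf("Label"));   // default style

        theme.clear();                          // "select another theme"
        theme.put("Button", "fgColor=black;bgColor=gray");
        System.out.println(styleOf("Button"));  // restyled, no widget code touched
    }
}
```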
J2ME, Feature Phones & Nokia Devices [Codename One – Reinventing the Mobile Development blog, April 24, 2013]
Is J2ME dead or dying?
How many times have we heard this for the past 3 years or so? Sadly the answer is: Yes!
Unfortunately there is no active owner for the J2ME standard and thus no new innovation around J2ME for quite some time (MIDP 2.0 came out in 2004, 3.0 never really materialized). Android is/was the biggest innovation since and became the unofficial successor to J2ME.
Well, if J2ME is dead what about Feature Phones? Should we care about them?
The answer is: Yes! very much so!
Feature Phones are still selling in the millions and still beat Android sales in the developing world. Recently Nokia shipped the Asha series devices, which are quite powerful and capable pieces of hardware; they are very impressive. Nokia’s revenue is driven mainly by the Feature Phone market.
There is a real battle in the developing countries between Feature Phones and Android devices: Feature Phones are still cheaper and more efficient, while Android has more and better content (apps & games).
How long will it take Android to catch up? We will see…
In the meantime there is money on the table and a real opportunity for developers to make some money (and gain loyal users who will migrate to Android or another platform at some point).
To win over the competition, or at least to maintain its dominant player position, Nokia must bring new quality content to its devices. It’s not enough to ship cool new feature phones; the new phones need to connect to Facebook, Twitter, Gmail and WhatsApp, and have all the cool new games/apps Android has, and more.
So how should you write your apps for the cool new Nokia Feature Phone if J2ME is dead? Luckily there is an option: Codename One ;-).
In Codename One you have 1 Java API which is the same for J2ME, Android, iOS, RIM and Win8.
Below are some of the J2ME highlights:
Facebook Connect – did you notice there aren’t many social apps on OVI?
There is a reason: Facebook uses OAuth2, which is a huge pain without a browser API. This is solved and working in Codename One.
Java 5 features – You can use generics and other Java 5 features in your app and it will work on your J2ME/RIM devices. You don’t have to limit yourself to CLDC.
Rich UI – If you know or knew LWUIT (a Swing-like API), Codename One UI is effectively LWUIT 2.0.
Built-in Asha skins and themes
The most important thing is the fact that your skills are not wasted on an old/dying J2ME API. By joining our growing community and writing the next amazing app, your skills can target the emerging platforms of the present and future.
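The Java 5 point above can be made concrete. The snippet below is ordinary Java 5 source, nothing Codename-One-specific in it; the claim in the post is that the Codename One build rewrites such constructs (generics, the enhanced for loop) in the bytecode so they run on CLDC-era J2ME/RIM VMs that predate them.

```java
import java.util.ArrayList;
import java.util.List;

public class Java5OnJ2me {
    // Generics and the enhanced for loop: Java 5 syntax that, per the
    // post above, the Codename One toolchain strips/rewrites for old VMs.
    static String join(List<String> items) {
        StringBuilder out = new StringBuilder();
        for (String item : items) {          // enhanced for loop
            if (out.length() > 0) out.append(',');
            out.append(item);
        }
        return out.toString();
    }

    public static void main(String[] args) {
        List<String> feeds = new ArrayList<String>();  // generic collection
        feeds.add("news");
        feeds.add("sports");
        System.out.println(join(feeds));               // news,sports
    }
}
```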
Codename One JavaOne Session Screencast [Shai Almog YouTube channel, Oct 25, 2012]
Nokia’s own Asha cross-platform efforts for developers (so far)
Series 40 Webinar: How to develop cool apps for Nokia Asha smartphones [nokiadevforum YouTube channel, April 5, 2013]
[25:01] Porting Resources at Nokia Developer
– Porting and Guide for Android Developers:
>>> http://www.developer.nokia.com/Develop/Porting/ [27:46]
Related to the porting vis-à-vis Android & cross-platform slides:
[27:46 > 28:50 > 29:40 > 30:20 > 30:50 > 31:15 > 31:40 > 32:25 > 33:20 Demo: Android porting Frozen Bubble: see https://projects.developer.nokia.com/frozenbubble and the video coming below > 34:24]
Tantalum Mobile [January 1, 2013] Summary
Tantalum is a set of mobile Java tools for high performance and development speed on Android and J2ME. The focus is on practical use cases which can be included in a project to solve frequent needs in an elegant manner.
Life is many asynchronous tasks chained together and running concurrently on background threads with UI callbacks. The result may look like black magic or star wars, but as you become one with the source, the patterns emerge as ecstatic moments of clarity.
Tantalum Cross Platform Library
Tantalum 5 is nearing beta release
As the Tantalum team works hard on the new Tantalum 5 release and on increasing support for the Android community, you can track that and possibly help at https://github.com/TantalumMobile/ More on that, and on the great support Nokia is giving to this open source effort, as we release. Happy changes and momentum.
* NEW 4.0 RELEASE January 1, 2013 *
New release 4.0 including cross-platform Android and J2ME app development support, simple fork-join concurrency, simple 3 layer caching and Android AsyncTask and more is now available!
Quick Start Guide and JAVADOC: Tanalum4_doc.zip
Source code and examples: Tantalum4.zip
Cross platform Series40-Android example using Tantalum4: Picasa_Viewer
JavaOne San Francisco talk and demos of Tantalum4: JavaOne_Extreme_Mobile_Java_Performance.mp4
Tantalum is a light-weight metal used to keep mobile phone electronics compact and powerful. Tantalum4 is the 4th major release of a very light and elegant back-end utility library for mobile Java. With mobile applications, less is more.
This is _not_ a framework. It is a clean and light tool set which, at 8–40 kB, will _not_ bloat your application. Obfuscation of your release build automatically removes those features you do not use. We do just a few things really well:
The exact same JAR library runs on J2ME and Android: save time and money by reusing your code and add a native UI for each platform
Clean, fast utility model threading with Java7 fork-join-cancel and Android Java5 AsyncTask patterns
Unique async task chaining to feed the output of one Task to the input of the next is easier than overriding existing classes
WeakReference heap and persistent flash memory caching to easily make online-offline apps which start fast and run reliably in real-world mobile networks
Async HTTP GET and POST with automatic retry
Simplified async XML parsing directly into model objects
Simplified async JSON parsing directly into model objects
Logging convenience classes including J2ME USB debug and app profile from phone
The above capabilities work cleanly together to simplify your development. There is no UI assumption in Tantalum4: pick what works best for you on each platform. The bundled example applications are an RSS reader for
Forms
Nokia Series40 Asha touch devices
LWUIT 1.5
Download the sample apps and give it a try. We hope you are amazed at the results and the speed with which you can achieve them.
Apache 2 license. Please return your fixes and suggestions to the community here.
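The async task chaining Tantalum advertises above can be sketched conceptually (hypothetical names, not the actual Tantalum API): each Task runs off the UI thread on a worker, and chain() feeds the output of one Task into the input of the next, e.g. an HTTP fetch followed by a parse.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// A Task runs on a worker thread; chain() forwards its result onward.
abstract class Task {
    private Task next;
    Task chain(Task nextTask) { next = nextTask; return nextTask; }
    abstract Object exec(Object in);
    // Walk the chain once we are already on the worker thread.
    Object run(Object in) {
        Object out = exec(in);
        return next == null ? out : next.run(out);
    }
}

public class ChainSketch {
    public static void main(String[] args) throws Exception {
        Task fetch = new Task() {   // stand-in for an async HTTP GET
            Object exec(Object in) { return "{\"title\":\"hello\"}"; }
        };
        fetch.chain(new Task() {    // stand-in for JSON parsing into a model
            Object exec(Object in) {
                String s = (String) in;
                return s.substring(s.indexOf(':') + 2, s.length() - 2);
            }
        });
        // The whole chain executes off the caller's (UI) thread.
        ExecutorService pool = Executors.newSingleThreadExecutor();
        Object result = pool.submit(() -> fetch.run(null)).get();
        System.out.println(result); // hello
        pool.shutdown();
    }
}
```

Chaining like this avoids the nested-callback style: each stage stays a small, testable unit, which matches the "output of one Task to the input of the next" claim in the Tantalum feature list.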
* NEW 3.0 RELEASE June 18, 2012 *
WHAT IS NEW
Many, many stability improvements, especially to caching and flash memory usage
Shutdown work tasks and low-priority work tasks are now supported
Support for Nokia LWUIT in the example applications
Support for Nokia full touch phones in the example applications.
Speed. Tantalum3 is wired and optimized even more than before to run well also on slower devices.
You can find a series of nice, short training videos covering Tantalum3 at https://projects.developer.nokia.com/videotraining
CONTENTS OF THE ARCHIVE (Download link on right side of this page)
/prebuilt_examples
Pre-built example applications, run to test on various devices. Testing is mostly on Nokia SDK 1.1 and 2.0 with profiling of the S40 example tested in Oracle SDK.
/lib
Pre-built libraries you can include in your application if you don’t want to mess with the source code. There are three flavors: debug including unit tests and verbose errors, usb-debug, and release optimized. To use the usb-debug variant, connect your phone by USB and open a terminal emulator such as puttytel to the serial port you find in Windows Device Manager. Use the max baud rate and hardware flow control RTS/CTS.
/src
Everything you need to build the libraries and examples yourself
/doc
Javadoc for Tantalum3 library
/json_doc
Javadoc for the optional JSON supplement
* NEW 2.2 RELEASE February 7 2012 *
Example updates with minor bug fix, reorganization of the source into 3 projects make release builds easier, added unit tests.
* NEW 2.1 RELEASE January 24 2012 *
Latest announcements
Tantalum 4 is out! – January 7th, 2013 by paul.houghton
Tantalum 4, almost ready… – December 11th, 2012 by paul.houghton
See all announcements >
Related videos:
– Series 40 Webinar: Porting Android apps to the Series 40 platform [nokiadevforum YouTube channel, Dec 17, 2012]
– Porting Android and Blackberry apps to Series 40 [Nokia Developer News, Nov 30, 2012]
If you’ve got an application for Android or BlackBerry (up to BlackBerry OS 7.1), your existing Java code puts you in a great position to take advantage of the growing demand for apps from Series 40 phone owners.
To help you take advantage of this opportunity, we’ve started to gather a collection of resources to guide you through the porting process in the Porting to Series 40 library section.
If you are starting with an Android app, the wiki provides basic information on the tools and technology needed, platform comparisons, porting considerations, code snippets, and example porting cases along with the all-important guidelines you need for an efficient port.
For your future apps, you can even consider creating a Series 40 and Android version at the same time, our Picasa Viewer example application will show you how.
If a little hands-on guidance could help even more, why not check out the Android porting webinar sessions we have on 4 December at 8 a.m. San Francisco; 10 a.m. Mexico City; 4 p.m. London and 13 December, 8 a.m. London; 1:30 p.m. New Delhi; 4 p.m. Singapore.
Life could be even easier if you have a BlackBerry app. Most generic Java ME MIDlets can be deployed to both BlackBerry and Series 40 with little more than platform-specific repackaging. However, you might want to adapt the user interface and the look & feel of the app to fit the Series 40 screen size and UI style. Again, the wiki gives you a pointer to the porting article with code samples, which will be enhanced in later updates of the library.
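Adapting a layout to the Series 40 screen size often comes down to rescaling coordinates from the source platform’s resolution. A minimal sketch, assuming a 480×800 source layout (a common Android resolution of the time) and the 240×400 Series 40 full-touch screen; `LayoutScaler` is a hypothetical helper, not a platform API:

```java
// Sketch of rescaling layout coordinates when porting to Series 40 full touch.
// Assumed resolutions: 480x800 source layout, 240x400 Series 40 target.
public final class LayoutScaler {
    private final int srcW, srcH, dstW, dstH;

    public LayoutScaler(int srcW, int srcH, int dstW, int dstH) {
        this.srcW = srcW; this.srcH = srcH;
        this.dstW = dstW; this.dstH = dstH;
    }

    // Integer math keeps the code friendly to constrained Java ME targets
    public int scaleX(int x) { return x * dstW / srcW; }
    public int scaleY(int y) { return y * dstH / srcH; }

    public static void main(String[] args) {
        LayoutScaler s = new LayoutScaler(480, 800, 240, 400);
        // A control at (40, 200) in the source layout lands at (20, 100)
        System.out.println(s.scaleX(40) + "," + s.scaleY(200)); // → 20,100
    }
}
```

Scaling positions this way handles geometry only; font sizes and touch-target dimensions usually still need a manual pass to match the Series 40 UI style guidelines.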
You can also get practical guidance from an expert: check out our BlackBerry porting webinar on 18 December, 8 a.m. London; 1:30 p.m. New Delhi; 4 p.m. Singapore, or view a recording of one of the earlier sessions on our webinars page.
Using our latest Nokia SDK 2.0 for Java, and its integrated Nokia IDE for Java ME, combined with the guidance of the updated porting library, we think you’ll find porting your app easier than you ever imagined.
We’re looking forward to welcoming you to the family of developers who have found success on the Series 40 platform.
– Designing & Optimising Graphics for your Series 40 app [nokiadevforum YouTube channel, Nov 8, 2012] https://projects.developer.nokia.com/frozenbubble
– UI Clinic – Series 40 full touch, April 2013 [nokiadevforum YouTube channel, April 24, 2013]
– Introduction to the Nokia Premium Developer Program for Asha [nokiadevforum YouTube channel, April 19, 2013]
Asha Premium Developer Program introduced [Nokia Developer News, March 26, 2013]
We’ve been having a lot of fun lately—we launched the Nokia Premium Developer Program for Lumia back in October, and it proved to be our most successful developer program ever. Our rewards program, DVLUP, has also proven extremely popular with developers, and we recently expanded it to include developers in the UK.
So we decided it was time to bring some “Premium goodness” to Asha development. Today we are excited to introduce the Nokia Premium Developer Program for Asha.
The Asha Opportunity
The Asha ecosystem has a growing installed base of superior but affordable smartphones (such as the Nokia Asha 308, 310, and 311), and with these great devices comes an increased demand for apps. The Asha Premium Developer Program is designed to provide you with tools and services to make developing for Asha faster and easier, increase the discoverability of your apps, and bring you closer to the millions of Nokia Asha users around the world.
By providing you with high-value support and tools beyond what’s provided by your standard registration with Nokia Developer, the Asha Premium Developer Program will help you fast-track your success.
The Nokia Premium Developer Program for Asha comprises two levels: enhanced productivity tools and app promotion opportunities. We know that it’s easier not only to be inspired but also to develop and test when you have a great device in hand, so the productivity tools start with a free Nokia Asha 310 smartphone. To help you with testing, we’re also offering expanded Remote Device Access with more Nokia Asha devices available to you. Finally, you’ll get two free tech tickets for Asha development support, a value of $198 (USD).
Program members who submit a new, high quality full touch Asha app to Nokia Store can apply for app promotional opportunities: greater visibility on Nokia Store, or a $500 (USD) credit to run paid ad campaigns on Nokia Ad Exchange.
Best of all, membership in the Nokia Premium Developer Program for Asha is free, although you’ll need to meet certain criteria.
Explore the Nokia Premium Developer Program for Asha, and apply for membership today.