Intel shows off in-memory database biz

Intel’s Developer Forum 2014 annual meeting at San Francisco’s Moscone Center wound down yesterday. My assignment is to continue research on a technology that’s now ramping.

The computer industry is at the beginning of a major architectural shift. “In-Memory Database” (IMD) systems, originally aimed at delivering near real-time answers to analytic problems, have been successfully applied to cognitive computing problems as well. The nascent application of cognitive computing intelligence and predictive analytics toolsets to IMD-equipped servers is thought to be the first step in a new era in computing – quite possibly the next big thing.

The Google Effect
At the 2000 Intel Developer Forum in San Francisco a relatively unknown entrepreneur, during a keynote fireside chat with Andy Grove, said he’d like to take the entire Internet and put it in memory to speed it up. “The Web, a good part of the Web, is a few terabits. So it’s not unreasonable,” he said. “We’d like to have the whole Web in memory, in random access memory.”

The comment received a rather derisive reception from the audience and was quickly forgotten. The speaker was Larry Page, an unknown at the time, as was his startup, Google – whose backbone then consisted of some 2,400 computers.

Fast forward to the present: system vendors have found that their future in Big Data has a lot of the look and feel of Google’s free-to-the-public offering. Google was the first to commercially deploy a massively parallel processing (MPP) network built from commodity servers – one delivering real-time data access on a worldwide basis. Its competitors realized they could no longer stay competitive with systems that relied on high-latency rotating magnetic media as the main store; in fact, even solid state drives (SSDs) are considered somewhat slow for the new realities of Big Data analytic computing.

The development, called “In-Memory Database”, mounts the entire database – a single system image, even an enormous one – into large-scale memory arrays of registered DIMMs closely coupled with multi-core processors. The resulting increase in throughput accelerates not only transaction processing but also analytic application performance into real time. The frosting on the cake is that this architectural change applies to good advantage in the emerging cognitive computing space.
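
To see why the approach pays off for analytics, consider the columnar layout these systems typically pair with main-memory residency. Below is a minimal, hypothetical Python sketch (toy data and names of our own, not vendor code) contrasting a row store scan with a column store scan for a single-column aggregate.

# Toy sketch: why an in-memory column store speeds up analytic scans.
# A row layout forces the query to walk every record; a column layout
# keeps each attribute contiguous in RAM, so an aggregate touches only
# the bytes it needs. (Illustrative only.)

rows = [
    {"order_id": 1, "region": "EMEA", "revenue": 120.0},
    {"order_id": 2, "region": "APAC", "revenue": 75.5},
    {"order_id": 3, "region": "EMEA", "revenue": 200.0},
]

# Row store: every field of every record is visited to sum one column.
total_rows = sum(r["revenue"] for r in rows)

# Column store: "revenue" is one contiguous array, a cache-friendly scan.
columns = {
    "order_id": [1, 2, 3],
    "region": ["EMEA", "APAC", "EMEA"],
    "revenue": [120.0, 75.5, 200.0],
}
total_cols = sum(columns["revenue"])

assert total_rows == total_cols  # same answer, far less data touched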

SAP – HANA, In-Memory Database Computing
In 2006 Hasso Plattner, co-founder of SAP AG, took a bottle of red wine, a wine glass, some writing implements and paper to the garden behind his house. By the time he reached the bottom of the bottle there wasn’t much written on the paper, but he had reached the conclusion that in-memory systems were the future. Mr. Plattner had realized that for SAP to remain competitive it needed to innovate. He believed that a server design accommodating massively parallel processing, with enough memory to load an entire database, combined with columnar storage software, would have a revolutionizing effect on processing speeds for OLTP and OLAP applications.

Gathering a small group of PhDs and undergraduates at the Hasso Plattner Institute, Plattner set out the in-memory idea he wanted them to explore. The first prototype was shown in 2007 before an internal audience at the company’s headquarters in Walldorf, Germany. SAP management was skeptical that the idea would work – the team needed to prove that an in-memory database would hold up under real-world conditions.

Using his contacts to advance the project, Mr. Plattner persuaded Colgate-Palmolive Co. to provide transaction data, and persuaded Intel’s Craig Barrett to supply the latest microprocessors for the lab’s ongoing effort. The company also set up an R&D facility in Palo Alto to be in close proximity to its innovation and research partner, Stanford University.

SAP HANA was officially announced in May 2010, with shipments commencing on the release of SAP HANA 1.0 in November. The market was slow to adopt the technology, convinced it was still at an early stage of development. Analytics, and the need to give customers a real reason to move their IT to the cloud, provided the market conditions HANA needed to press its adoption. Over time SAP adapted HANA to the cloud through successful partnering with a wide array of vendors, making it the company’s fastest growing segment.

During the development of HANA, SAP discovered that the amount of physical memory required to store an entire database could be reduced substantially through compression – in some cases by 100X. This had the effect of reducing power (less memory required) and making database searches more efficient (fewer empty entries to scan). The market implication was that the price of memory per gigabyte had finally reached a price/performance breakeven point for an application that could not be served at that price any other way. DRAM producers have found their next killer application.
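
Dictionary encoding is one of the standard columnar compression techniques behind claims like that 100X figure. The sketch below is our own hypothetical illustration of the idea, not HANA internals: repeated values are stored once, and each row keeps only a small integer code.

# Hypothetical dictionary-encoding sketch: a low-cardinality column
# compresses to one small integer per row plus a shared dictionary.
column = ["EMEA", "APAC", "EMEA", "EMEA", "AMER", "APAC"] * 1000

dictionary = sorted(set(column))          # each distinct value stored once
code_of = {v: i for i, v in enumerate(dictionary)}
encoded = [code_of[v] for v in column]    # 6,000 small ints, 3-entry dictionary

# Decoding is a plain array lookup, so scans over compressed data stay fast.
decoded = [dictionary[c] for c in encoded]
assert decoded == column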

IBM’s Watson – Cognitive Computing Public Debut
IBM’s Watson is a Big Data analytics system running on 2,880 PowerPC cores with 16 TBytes of DRAM. Its estimated cost is reportedly just over $3 million, and it requires 200 kW of power to operate. Watson’s inner workings have not been publicly released; what is known is that it runs a tool IBM calls DeepQA, implemented in conjunction with Hadoop (a Java implementation of MapReduce), under the SUSE Linux Enterprise Server operating system.

IBM introduced Watson to the public by pitting it against human opponents on the game show “Jeopardy!” in February 2011, establishing IBM and the Watson brand in the public mind when it won the $1 million prize for charity.

Watson’s ability to semantically interpret language implies a native ability to understand the context of questions, including the puns and word plays it handled amazingly well; questions of this nature typically remain a significant challenge for machine-based systems.

Watson’s creators have stated that its algorithms are “embarrassingly” parallel, the implication being that the core engine is MapReduce-like in nature rather than taking the more traditional graph analytics approach. Conventional network control is adequate for such an engine, which reduces costs and falls within a Software Defined Networking (SDN) framework.
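
For readers unfamiliar with the pattern, here is a minimal, self-contained MapReduce-style word count in Python – our own toy sketch of the paradigm, not DeepQA code. Mappers run independently over input shards (the “embarrassingly parallel” part), a shuffle groups intermediate keys, and reducers fold each group.

from collections import defaultdict

def mapper(shard):
    # Each mapper processes its shard with no coordination needed.
    for word in shard.split():
        yield word.lower(), 1

def shuffle(pairs):
    # Group intermediate values by key before the reduce phase.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reducer(key, values):
    return key, sum(values)

shards = ["Watson plays Jeopardy", "Watson wins Jeopardy"]
mapped = [pair for s in shards for pair in mapper(s)]
counts = dict(reducer(k, v) for k, v in shuffle(mapped).items())
print(counts)  # {'watson': 2, 'plays': 1, 'jeopardy': 2, 'wins': 1}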

IBM previously missed the industry shift in data management from ISAM files to relational databases in the 1970s, even though it invented the RDBMS. Oracle took full advantage of this colossal gaffe, much to IBM’s dismay.

IBM announced the establishment of the Watson business unit in early March, investing upwards of $1 billion in the new entity. What is surprising is that the company already had a fully established cloud-based offering, replete with a supporting ecosystem, around Watson (which now physically occupies three rack cabinets instead of the original nine). There is no lack of customer interest, with over 1,000 third-party developers signed on to date.

IBM emphasizes Watson’s natural language capabilities and analytics, which process and synthesize information in a manner similar to the way humans think – enabling quick comprehension and evaluation of large amounts of human-style communication data to generate and evaluate evidence-based hypotheses, and to adapt and learn from training, interaction and outcomes.

Server Commoditisation – IBM Going Fabless?
“Watson” marks the beginning of a bundling strategy by IBM that is in line with its continued separation from its hardware origins. IBM’s internal politics sometimes show in decisions made by disparate groups within the company in efforts to preserve their own silos.

The persistent and widely spread rumor that IBM was selling its low-end server division began circulating in April 2013, with Lenovo the most likely buyer – it passed into obscurity before becoming a reality in January 2014. The trend toward server hardware commoditisation was the driving force behind the sale. Margins in the low-end server space have decreased to the point where economies of scale must come into play, requiring ever-larger investments at ever-decreasing margins and draining capital away from the company’s core business strategy. Watson, on the other hand, is viewed as a “maximum best-fit scaling technology” for capitalizing on IBM’s capabilities as a company.

Recent rumors that IBM is accepting bids for its semiconductor operations are being taken seriously, with GlobalFoundries the favored bidder. IBM has announced that it is investing $3 billion over five years in semiconductor research, a move to reassure its customer base that the company is continuing the basic research needed to advance hardware and software technology. The company has entered talks to sell the East Fishkill, N.Y. fab to GlobalFoundries Inc., though a definitive agreement has yet to be announced.

IBM is slowly being transformed into a mostly software and services company running on commodity, software-defined hardware. That it is going fabless is no surprise; the question is who will fill the void in developing next-generation semiconductor processes and the attendant processor architectures. In 2013 the odds were firmly on Intel. The lack of any further commitment at IDF 2014 shakes this conclusion, but remember that the E7 version will not be ready for prime time until next year, or at best very late this calendar year.

Collaboration
In deciding to take Watson to market, IBM set out to solve cost, power and footprint issues through industry collaboration. That collaboration will have far-ranging effects on the company, its hardware product line and its industry partners.

IBM’s larger than usual presence at the 2013 Intel Developer Forum, with a keynote delivered by Diane Bryant, Intel senior vice president and general manager of the Data Center Group, further signaled IBM’s continued move with Intel toward high-end servers.

Intel’s Rack Scale Architecture
Intel has been developing its version of the “disaggregated server”, which it calls Rack Scale Architecture (RSA).

At the core of Rack Scale Architecture is a technology Intel calls “Silicon Photonics”, developed under the premise that a system-wide, integrated silicon-photonic data highway, woven into a hierarchical communication fabric, will support massively parallel computing systems into the foreseeable future and remain a baseline architectural model for future growth. Copper interconnects do not scale reliably in server systems at data rates much above 10 Gbps per channel; multiple channels (ten, in the case of 100 Gbit Ethernet) are combined to establish faster interconnects.

The idea of a silicon photonic highway offers system architects the freedom to allocate computational resources at will. This blends well with Software Defined Networking down to the computational element – essentially making an entire data center a virtual machine.

Key to this idea is fiber optic cabling capable of carrying data channels of 100 Gbps and up (a cluster of four fibers at 25 Gbps each) – the part Intel brands “Silicon Photonics”.
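
The lane arithmetic is worth spelling out, since it is what makes the photonic option attractive: the same 100 Gbps link needs ten 10 Gbps copper lanes but only four 25 Gbps optical ones. A trivial sanity-check sketch:

# Back-of-envelope check of the link arithmetic described above.
copper_lane_gbps, copper_lanes = 10, 10    # 100 GbE from ten copper lanes
optical_lane_gbps, optical_lanes = 25, 4   # same bandwidth from four fibers

assert copper_lane_gbps * copper_lanes == 100
assert optical_lane_gbps * optical_lanes == 100
print("lanes saved per 100G link:", copper_lanes - optical_lanes)  # 6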

Diane Bryant brought Andy Bechtolsheim, founder, chief development officer and chairman of Arista Networks, on stage to announce the company’s first shipments of its top-of-rack switch. Bechtolsheim stated that Intel’s Silicon Photonics solved the cost issue, allowing Arista’s TOR switch to enter the market. He added that switches extending transmission distance from 200 meters to 2 kilometers, as required for cloud data centers, would be shipping in volume in Q1 CY2015.

Intel’s Big Data Analytics Market Outlook
Diane Bryant saved the best for last in her keynote segment. She stated that McKinsey reports big data analytics can improve margins by up to 60% through increased sales per visit, improved management of inventory and optimized product pricing. The cost of compute has declined 40% and the cost of storage has declined 100%, she said, making it truly cost-feasible to deploy these big data analytic solutions. She added that the E5 v3 analytic server units were unveiled in a separate announcement on Monday. Unfortunately, nothing was said about the massive E7s now in development.

Hadoop
Bryant went on: “Within a couple of years Hadoop will be the number one application. It will be running on more servers than any other single application. It will be more common for enterprise IT than the enterprise ERP system. The big data market is growing at 35% CAGR and is projected to be a $150 billion business in silicon, systems, software and professional services by 2020.”

TechEye Take Away
We’re not sure what happened between IBM and Intel; the company’s presence at this year’s IDF was completely different from last year’s. Relationships between companies can take wild swings over internal problems kept far from the public eye, and we suspect that may well be operative here. IBM is most interested in the E7 version, which remains unannounced, though sources report it is scheduled for some time in Q1 2015. We think the apparent lack of mutual devotion is temporary, and that it helps quiet internal silo wars at IBM for the time being.

Do not be surprised if Intel’s Data Center Group breaks out into a separate, standalone forum next year.

Intel is working on multiple technology fronts to develop next-generation data center architectures capable of real-time transaction processing and analytical processing. Keep in mind, too, that these machines are entirely capable of running cognitive intelligent computing – currently the domain of IBM – which will first ramp in 2015 in an application span called cognitive analytics.

Remember, too, that analytics includes voice and real-time voice translation, which has wide implications for a number of consumer applications – think of a gatekeeper service melded into cellular phone contracts.

In any regard, Mark Bohr is still holding court over Intel’s process development – one of the solid IDF anchors still left at the company. The news is that Intel can build 14 nm FinFET 300 mm wafers in volume and is well on its way to 7 nm, with a stop at 10 nm.

Intel replays marketing card

Because Intel has so few products to show at its expensive upcoming Intel Developer Forum in September in San Francisco, it will play its old three card trick and show off new logos and marketing plans instead. Ailing Intel, it seems, has run out of “innovation”.

That’s according to reliable sources within the corporation who told the Eyes that the newly minted CEO needs marketing ideas because product ideas are thin on the ground.

The source – based in Asia – told the Eyes that it had attempted to convince ex-CEO Paul Otellini that the marketing needed changing to a retro kind of thing, but had come up against determined opposition from the then CEO.

But facing ruin because it was slow off the mark with chips for tablets and smartphones, Intel will instead attempt to bamboozle the world with marketing. The newly minted CEO – and the INTC board – are up for it.

The re-branding will re-position Intel as a 21st century company that doesn’t really invent technology any more – it just manufactures it.

Although we don’t have the new logos and all that yet, expect a blast of marketing publicity that talks a lot about not very much at all in the face of the opposition. Oh, that’s not AMD, by the way.