
Intel shows off in-memory database biz

Intel’s Developer Forum 2014 annual meeting at San Francisco’s Moscone Center wound down yesterday. My assignment was to continue researching a technology that’s now ramping.

The computer industry is at the beginning of a major architectural shift. “In-Memory Database” (IMD) systems, originally aimed at delivering near real-time answers to analytic problems, have been successfully applied to cognitive computing problems as well. The nascent application of cognitive computing intelligence and predictive analytics toolsets to IMD-equipped servers is thought to be the first step in a new era of computing – quite possibly the next big thing.

The Google Effect
At the 2000 Intel Developer Forum in San Francisco a relatively unknown entrepreneur, during a keynote fireside chat with Andy Grove, said he’d like to take the entire Internet and put it in memory to speed it up – “The Web, a good part of the Web, is a few terabits. So it’s not unreasonable,” he said. “We’d like to have the whole Web in memory, in random access memory.”

The comment received a rather derisive reception from the audience and was quickly forgotten. The speaker was Larry Page, as unknown at the time as his startup, Google – whose backbone then consisted of some 2,400 computers.

Fast forward to the present: system vendors have found that their future in Big Data has much of the look and feel of Google’s free-to-the-public offering. Google was the first to deploy a massively parallel processing (MPP) network commercially using commodity servers – one delivering real-time data access on a worldwide basis. Its competitors realized they could no longer remain competitive with systems that relied on high-latency rotating magnetic media as the main store – indeed, even solid state disks (SSDs) are considered somewhat slow for the new realities of Big Data analytic computing.

The development – called “In-Memory Database” – mounts the entire database (as a single system image, even an enormous one) into large-scale memory arrays of registered DIMMs closely coupled with multi-core processors. The resulting increase in throughput accelerates not only transaction processing but also analytic applications into real time. The frosting on the cake is that this architectural change applies to good advantage in the emerging cognitive computing space.
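To make the architecture concrete, here is a minimal Python sketch (ours, not anyone’s shipping engine) contrasting a row-oriented layout with the column-oriented one that in-memory databases favour – an analytic aggregate only has to touch the single array it needs:

```python
# Illustrative only: row vs. columnar layout for an in-memory table.
# Real IMD engines (HANA and friends) add compression, vectorised
# execution and NUMA-aware memory placement on top of this basic idea.

# Row store: each record is a tuple carrying all of its fields.
rows = [
    ("widget", 3, 9.99),
    ("gadget", 5, 24.50),
    ("widget", 2, 9.99),
]

# Column store: one contiguous array per attribute.
columns = {
    "product": ["widget", "gadget", "widget"],
    "qty":     [3, 5, 2],
    "price":   [9.99, 24.50, 9.99],
}

# Analytic query: total quantity sold.
# The row layout walks every record to pick out one field...
total_rows = sum(r[1] for r in rows)

# ...while the column layout scans a single dense array.
total_cols = sum(columns["qty"])

assert total_rows == total_cols == 10
```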

SAP – HANA, In-Memory Database Computing
In 2006 Hasso Plattner, co-founder of SAP AG, took a bottle of red wine, a wine glass, some writing implements and paper to the garden behind his house. By the time he reached the bottom of the bottle there wasn’t much written on the paper, but he had reached the conclusion that in-memory systems were the future. Plattner had realized that for SAP to remain competitive it needed to innovate. He believed that changing the server design to accommodate massively parallel processing, with enough memory to load an entire database, combined with columnar storage software, would have a revolutionary effect on processing speeds for OLTP and OLAP applications.

Gathering a small group of PhDs and undergrads at the Hasso Plattner Institute, Plattner set out the in-memory idea he wanted them to explore. The first prototype was shown in 2007 to an internal audience at the company’s headquarters in Walldorf, Germany. SAP management was skeptical that the idea would work – the team needed to prove that the concept of an in-memory database would hold up under real world conditions.

Using his contacts to advance the project, Plattner persuaded Colgate-Palmolive Co. to provide transaction data, and persuaded Intel’s Craig Barrett to secure the latest microprocessors for the lab’s ongoing effort. The company also set up an R&D facility in Palo Alto to be in close proximity to its innovation and research partner, Stanford University.

SAP HANA was officially announced in May 2010, with shipments commencing on the release of SAP HANA 1.0 in November. The market was slow to adopt the technology, convinced it was still at an early stage of development. Analytics, and the need for a real reason for customers to move their IT to the cloud, provided the market conditions HANA needed to press its adoption. Over time SAP adapted HANA to the cloud through successful partnering with a wide array of vendors, making it the company’s fastest growing segment.

During the development of HANA, SAP discovered the amount of physical memory required to store an entire database could be reduced substantially through compression – in some cases by 100X. This had the effect of reducing power (less memory required) and made database searches more efficient. The market implication was that the price of memory per gigabyte had finally reached a price/performance breakeven point in an application that could not be served at that price any other way. DRAM producers have found their next killer application.
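The compression trick at work here is typically dictionary encoding on columns – an illustrative sketch of the idea follows (ours, not HANA’s actual storage format):

```python
# Dictionary encoding, a workhorse of columnar compression: store each
# distinct value once and replace the column with small integer codes.
# A low-cardinality column (country, status, product) shrinks sharply,
# which is one reason an entire database can fit in RAM.

def dict_encode(column):
    dictionary = sorted(set(column))              # each value stored once
    code = {value: i for i, value in enumerate(dictionary)}
    return dictionary, [code[value] for value in column]

country = ["DE", "US", "DE", "DE", "FR", "US", "DE"]
dictionary, codes = dict_encode(country)

print(dictionary)   # ['DE', 'FR', 'US']
print(codes)        # [0, 2, 0, 0, 1, 2, 0]

# Decoding is a plain array lookup, so scans stay fast:
assert [dictionary[c] for c in codes] == country
```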

IBM’s Watson – Cognitive Computing Public Debut
IBM’s Watson is a Big Data analytics system running on 2,880 PowerPC cores with 16TB of DRAM. Its estimated cost is reportedly just over $3 million, and it requires 200kW of power to operate. Watson’s inner workings have not been publicly released – what is known is that it runs under a tool IBM calls DeepQA, implemented in conjunction with Hadoop (a Java implementation of MapReduce), under the SUSE Linux Enterprise Server operating system.

IBM introduced Watson to the public by pitting it against human opponents on the game show “Jeopardy” in February 2011 – establishing IBM and the Watson brand in the public mind when it won the $1 million prize for charity.

Watson’s ability to semantically interpret language implies a native ability to understand the context of questions – including puns and word plays, which it handled amazingly well. Questions of this nature typically remain a significant challenge for machine-based systems.

Watson’s creators have stated that the algorithms are “embarrassingly” parallel – the implication being that the core engine is MapReduce in nature rather than taking the more traditional graph analytics approach. Conventional network control is adequate for such an engine, reducing costs, and falls within a Software Defined Networking (SDN) framework.
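For the uninitiated, “embarrassingly parallel” means each unit of work needs nothing from its neighbours, which is exactly the shape MapReduce likes. A toy sketch of the pattern (ours – no claim about DeepQA’s internals):

```python
# "Embarrassingly parallel" in miniature: every candidate answer is
# scored with no knowledge of the others, so the work fans out across
# cores (or a Hadoop cluster) and one cheap reduce picks the winner.
# A toy sketch of the pattern, not a description of DeepQA's internals.
from multiprocessing import Pool

def score(candidate):
    # Stand-in for expensive, fully independent evidence scoring.
    return sum(map(ord, candidate)) % 100, candidate

candidates = ["Toronto", "Chicago", "Boston", "Seattle"]

if __name__ == "__main__":
    with Pool() as pool:
        scored = pool.map(score, candidates)   # the "map" phase
    print(max(scored))                         # the "reduce" phase
```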

IBM previously missed the industry shift in data management from ISAM files to relational databases in the 1970s, even though it invented the RDBMS. Oracle took full advantage of this colossal gaffe, much to IBM’s dismay.

IBM announced the establishment of the Watson business unit in early March, investing upwards of $1 billion in the new entity. What is surprising is that the company already had a fully established cloud-based offering, replete with a supporting ecosystem, around Watson (now physically occupying three rack cabinets instead of the original nine). There is no lack of customer interest in Watson, with over 1,000 third party developers signed on to date.

IBM emphasizes Watson’s natural language capabilities and analytics, which process and synthesize information in a manner similar to the way humans think – enabling quick comprehension and evaluation of large amounts of human-style communication data to generate and evaluate evidence-based hypotheses, and to adapt and learn from training, interaction and outcomes.

Server Commoditisation – IBM Going Fabless?
“Watson” is at the beginning of a bundling strategy by IBM that’s in line with its continued separation from its hardware origins. IBM’s internal politics sometimes show in decisions made by disparate groups within the company in efforts to preserve their own silos.

The persistent and widely spread rumor that IBM was selling its low-end server division began circulating in April 2013, with Lenovo the most likely buyer – it faded into obscurity before becoming a reality in January 2014. The trend toward server hardware commoditization is the driving force behind the sale. Margins in the low-end server space have decreased to the point where economies of scale must come into play – requiring ever-larger investments with ever-decreasing margins, draining capital away from the company’s core business strategy. Watson, on the other hand, is viewed as a “maximum best-fit scaling technology” for capitalizing on IBM’s capabilities as a company.

Recent rumors that IBM is accepting bids for its semiconductor operations are being taken seriously, with GlobalFoundries the favored bidder. IBM has announced that it is investing $3 billion over five years in semiconductor research, in a move to reassure its customer base that the company is continuing basic research to advance hardware and software technology. The company has entered talks to sell the East Fishkill, N.Y. fab to GlobalFoundries, though a definitive agreement has yet to be announced.

IBM is slowly being transformed into a mostly software and services company using commodity, software-defined hardware. That it’s going fabless is no surprise – the question is who will fill the void in developing next generation semiconductor processes and the attendant processor architectures.

In 2013 the odds were firmly on Intel – the lack of further commitment at IDF 2014 shakes this conclusion, but remember that the E7 version will not be ready for prime time until next year, or at best very late this calendar year.

Collaboration
Deciding to take Watson to market, IBM set out to solve cost, power and footprint issues through industry collaboration. This collaboration will have far-ranging effects on the company, its hardware product line and its industry partners.

IBM’s larger than usual presence at the 2013 Intel Developer Forum – with a keynote delivered by Diane Bryant, Intel senior vice president and general manager of the Data Center Group – further signaled IBM’s continued segue with Intel toward high-end servers.

Intel’s Rack Scale Architecture

Intel has been developing its version of the “Disaggregated Server” named “Rack Scale Architecture” or RSA.

At the core of the Rack Scale Architecture is a technology Intel calls “Silicon Photonics” – developed under the premise that a system-wide, integrated silicon photonic data highway, woven into a hierarchical communication fabric, will support massively parallel computing systems into the foreseeable future and remain a baseline architectural model for future growth. Copper interconnects do not scale reliably in server systems at data rates much above 10 Gb/s per channel – ten such channels are combined to establish interconnects like 100 Gbit Ethernet.

The idea of a silicon photonic highway offers system architects the freedom to allocate computational resources at will. This blends well with Software Defined Networking down to the computational element – essentially making an entire data center a virtual machine.

Key to this idea is the use of fiber optic cable capable of carrying data channels of 100 Gbps and up (a cluster of four fibers at 25 Gbps each) – what Intel calls “Silicon Photonics”.
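The lane arithmetic is worth spelling out – a throwaway check of the two approaches mentioned above:

```python
# Lane arithmetic for the links described above: classic 100 Gbit
# Ethernet gangs ten 10 Gb/s channels together, while the photonic
# link reaches 100 Gb/s with four 25 Gb/s fibres - fewer, faster lanes.
def aggregate_gbps(lanes, gbps_per_lane):
    return lanes * gbps_per_lane

assert aggregate_gbps(10, 10) == 100   # copper-era 100GbE
assert aggregate_gbps(4, 25) == 100    # Intel's 4 x 25 Gb/s photonics
```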

Diane Bryant brought Andy Bechtolsheim – founder, chief development officer and chairman of Arista Networks – on stage to announce the company’s first shipments of its “Top of Rack” switch. Bechtolsheim stated that Intel’s Silicon Photonics solved the cost issue, allowing Arista’s TOR switch to enter the market. He added that switches extending transmission distance from the 200 meters of today to the 2 kilometers required by cloud data centers would be shipping in volume in Q1 CY2015.

Intel’s Big Data Analytics Market Outlook
Diane Bryant saved the best for last in her keynote segment. She stated that McKinsey reports big data analytics can improve margins by up to 60 percent through increased sales per visit, improved inventory management and optimized product pricing. The cost of compute has declined 40 percent and the cost of storage has declined 100 percent, she said, making it truly cost feasible to deploy these big data analytic solutions. She added that the E5 v3 analytic server units were covered in a separate announcement on Monday. Unfortunately nothing was said about the massive E7s now in development.

Hadoop
Bryant went on: “Within a couple of years Hadoop will be the number one application. It will be running on more servers than any other single application. It will be more common in enterprise IT than the enterprise ERP system. The big data market is growing at a 35 percent CAGR and is projected to be a $150 billion business in silicon systems, software and professional services by 2020.”
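As a back-of-envelope check on those figures (our arithmetic, using the 35 percent CAGR she quotes):

```python
# What the quoted 35% CAGR implies: the size a market must be in 2014
# to reach $150bn by 2020 at that growth rate.
cagr, years, target = 0.35, 6, 150e9

implied_2014 = target / (1 + cagr) ** years
print(f"Implied 2014 market: ${implied_2014 / 1e9:.1f}bn")
# -> roughly $25bn, compounding to about six times that by 2020.
```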

TechEye Takeaway
We’re not sure what happened between IBM and Intel: IBM’s presence at this year’s IDF was completely different from last year’s. Relationships between companies can take wild swings over internal problems kept far from the public eye, and we suspect that may well be operative here. IBM is most interested in the E7 version, which remains unannounced, though sources report it is scheduled for some time in Q1 2015. We think the apparent lack of mutual devotion is temporary and helps quiet internal silo wars at IBM for the time being.

Do not be surprised if Intel’s Data Centre Group breaks out into a separate, standalone forum next year.

Intel is working on multiple technology fronts to develop next generation data center architectures capable of real-time transaction and analytical processing. Keep in mind, too, that these machines are entirely capable of running cognitive intelligent computing – currently the domain of IBM – which will first ramp in 2015 in an application span called cognitive analytics.

Remember that analytics also includes voice and real-time voice translation, which has wide implications for a number of consumer applications – think of a gatekeeper service melded into cellular phone contracts.

In any case, Mark Bohr is still holding court over Intel’s process development – one of the solid IDF anchors left at the company. The news is that Intel can build 14nm FinFET on 300mm wafers in volume and is well on its way to 7nm, with a stop at 10nm.

Future HP servers will learn from smartphones

Here in sunny Barcelona at the ETSS conference, Mark Potter – HP’s CTO for its Enterprise Group – was expounding his company’s vision of how the future will look for file servers.

Basically, the IT industry appears to have woken up to a technology which smartphones have been using for ages – the SoC (System on a Chip). Not a revolutionary concept, except that Potter suggested HP would use ARM cores in addition to Intel-based processors. Given that Intel and AMD were major sponsors of this show, we’re sure he couldn’t elaborate further. Anyway, Potter was keen to pump up what HP is calling The Machine as a new era in server technology.

HP seems a bit ambivalent in its approach to The Machine, because Dr Tom Bradicich, VP for Moonshot engineering with HP servers, suggested that Moonshot was the first step towards The Machine model. Except it isn’t The Machine.

Curiously, HP appears to have announced the Moonshot last month at an event hosted by our old friends – Citrix.

Well, the Moonshot is now a reality and shipping, but for some reason Bradicich revealed that initially this product would only be made available through his company’s “US partners”.

Which is a bit strange because most resellers that ChannelEye was speaking to here claimed to have a US presence. So they should all be able to source the product.

Besides SoCs, the other technology which Potter was bigging up was photonics.

We’ve all heard of it before, but HP seems to be suggesting that you’ll see products actually utilising photonics available from HP before Q1 2015.

Which will be great, because photonics are a key part of HP’s dream of The Machine.

Thanks to photonics, a server won’t have to write information anywhere – it will simply reside in shared memory.

Sounds impressive, but the major advantage of photonics is the energy saving. Potter quoted somebody else as saying that “copper is an energy-sucking devil”.

But he’s got a point. Potter revealed that if cloud computing were a country, it would be the fifth largest electricity consumer – above the UK and just below Japan.

If The Machine really can deliver the massive energy savings through photonics that HP is suggesting, then the ROI for customers of its channel partners will be significant.

As Potter hinted, it is a great way of getting enterprises – which have recently been low spenders – to update their data centres.

HP’s channel vision hits Catalan capital

Whereas it used to be that every conference and exhibition for North America had to be held in Lost Vagueness [Las Vegas], these days every major European event worth its salt has to be held in Barcelona. So this ChannelEye hack found himself in the Catalan capital listening to HP wax lyrical about the importance of the channel to its future business. And given that today sees the start of its ETSS (ExpertOne Technology & Solutions Summit) conference, there were some heavy duty HP personnel in attendance. They were all keen to stress how the channel can grow not only HP’s revenues but its own as well in the process.

One of the key speakers at the press event was Alessandra Brambilla, HP’s VP for EMEA Enterprise Group channels. She was keen to explain the benefits of HP Unison.

What came across very clearly from all of the presentations was HP’s firm belief that the IT landscape has changed beyond recognition.

We are now in an era of ‘SmartIT’. But there is no cause for panic, because the four key customer demands of SmartIT are built on modules which the channel already knows thoroughly, namely client/server, legacy systems and PCs.

Today, however, clients are asking for solutions based on mobility, social networking, cloud storage and Big Data. But not to worry, because HP is keen to share its knowledge with its channel partners.

In fact, it will achieve this objective by sharing the same kind of training it gives its own internal pre-sales workforce with employees of selected partners.

But HP promises more. Brambilla listed the key engagement points with Unison today:

  • Partner portal
  • Ease of use and quick access to customized information
  • Faster, more competitive quotes
  • The right support to empower partners to win more deals
  • Demand generation
  • Automated and personalised co-marketing assets
  • Market development funds (MDF)

This will apparently lead to an increase in marketing ROI with a simpler, more efficient MDF process. Ok?

One thing was blatantly obvious – behind the hype and marketing speak, HP is keen not to lose the advantage it presently enjoys thanks to a broad channel partnership programme. SmartIT or no SmartIT.


Left to right: Matt Latter, Logicalis; Alessandra Brambilla; Andres Miramontes Miras, Taisa Syvalue

 

UK inflation fell in May

Against a background of rising house prices and modest signs of economic recovery, the Office for National Statistics (ONS) said today that the consumer price index fell to 1.5 percent in May. The other measure, the retail prices index, showed a drop to 2.4 percent during the month.

That compares to 1.8 percent in April and represents the sixth month in a row that inflation stayed below the Bank of England’s target of two percent.

Food prices fell during May but other figures from the ONS showed that house prices in the UK jumped by 9.9 percent in April 2014 compared to April 2013, with an average abode costing £260,000.

London showed a rise of 18.7 percent, but prices rose across all regions of the UK.

Pay is failing to keep up with inflation.

Notebook sales down

It is hardly a surprise given that one in two UK households now has a swipy style tablet, but independent research shows top X86 models aren’t exactly the flavour of the month.

According to Digitimes Research, branded notebook vendors and top original design manufacturers (ODMs) recorded month on month drops of 12 percent and 11 percent respectively in December.

Dell and Toshiba did better than the rest of the brand names, with the former, in particular, showing a bit of a surge because Microsoft will deck the long in the tooth but reliable Windows XP this spring.

The ODMs were hit because HP was hit – Quanta and Inventec supply Hewlett Packard with most of its notebook boxes.

While the X86 mob hopes that enterprises are still likely to plump for Windows based boxes, there is evidence that large corporations are seriously contemplating the bring your own device route, which will further erode Intel’s market share.

John Lewis up. Debenhams down

High street stores showed mixed results in their bids to win the hearts, souls and wallets of people over the Yuletide season.

Debenhams didn’t do at all well and that caused its chief beancounter, Simon Herrick, to fall on his sword this morning.

The John Lewis Partnership, which is a sort of cooperative, said its sales for the period were up 6.9 percent on the same Christmas period the year before. It did particularly well on the interwibble front – in the five weeks to the 28th of December, its sales rose by over 22 percent.

Debenhams is in the slough of despond, however. It issued a profits warning for the next six months.

Obliquely, the John Lewis news is bad news for chip giant Intel too. Many people are using smartphones and tablets to buy online rather than wait for their X86 based machines to boot up.

Quanta slashes tablet forecast by a quarter

Quanta Computer, the world’s biggest laptop maker for hire, has slashed its tablet shipment forecast for 2013 from 20 million units to just 15 million. The reason? Cheap white-box tablets.

“We were optimistic about the company’s tablet shipments this year and didn’t expect that our clients’ products would face pricing competition from Chinese white-brands,” Quanta vice chairman C.C. Leung said in a conference call, the Taipei Times reports.

In other words, it wasn’t exactly Quanta’s fault, it was their clients’ fault. Amazon and Google account for the majority of Quanta’s tablet orders and they obviously underestimated the impact of cheap white-box tablets on Nexus 7 and Kindle Fire sales.

However, Quanta still believes it will be able to ship 20 million tablets – next year, of course.

Luckily Quanta did not see a dip in laptop shipments and its annual forecast of 44 million units still stands. In addition, Quanta is hoping to see plenty of growth in server shipments next year thanks to growing demand for cloud servers.

Smartphone subscriptions to hit 5.9bn by 2019


The findings of the latest Ericsson Mobility Report indicate that the smartphone craze has not peaked just yet. The report found that the number of mobile subscriptions will reach 9.3 billion by 2019 and more than 60 percent of all subscriptions will be for smartphones.

An estimated 90 percent of the world’s population will be covered by current-generation WCDMA/HSPA networks, while 65 percent will have LTE coverage. Smartphone data traffic is expected to increase tenfold over the next six years.
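For what it’s worth, tenfold over six years works out to a steep but believable annual rate – a one-line check:

```python
# "Tenfold over six years" expressed as a compound annual growth rate:
annual = 10 ** (1 / 6) - 1
print(f"{annual:.1%} per year")   # ~46.8% a year, compounded
```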

“The rapid pace of smartphone uptake has been phenomenal and is set to continue. It took more than five years to reach the first billion smartphone subscriptions, but it will take less than two to hit the 2 billion mark,” said Douglas Gilstrap, Senior Vice President and Head of Strategy at Ericsson.

“Between now and 2019, smartphone subscriptions will triple. Interestingly, this trend will be driven by uptake in China and other emerging markets as lower-priced smartphone models become available.”

At the moment, smartphones account for about 25 to 30 percent of all mobile phone subscriptions, but they are already outpacing feature phones in terms of new sales.

Currys and PC World to peddle Tegra Note 7

In a rather surprising development, Currys and PC World will be selling Nvidia’s Tegra Note 7 tablet under their own brand.

The Advent Vega Tegra Note 7, as it’s known, will be available in stores and online from November 15, with pre-orders opening tomorrow. The price is £179 – slightly less than the Nexus 7, but quite a bit more than Tesco’s Hudl.

Although it has a different brand, the hardware is identical – Tegra 4 SoC, 7-inch 1280×800 IPS display, 1GB of RAM, 16GB of storage and a stylus to boot. The only difference is the sticker.

Here is the interesting part. Although we speculated that some players could sell the Tegra Note 7 under their own brand when it was announced, Nvidia said it would be sold by five AIBs, with one or two handling each major region. Nvidia never mentioned retailers.

If Currys and PC World are willing to give it a go, and if Nvidia is happy to offer such an arrangement, they might be the first of many retailers to adopt Nvidia tablets and sell them under their own brands.

In addition to the Tegra Note 7, Nvidia is also working on a few other Android devices. The Shield console is already out, although it lacks the mass market appeal of tablets. However, Nvidia is also working on a reference phone platform (Phoenix) and rumour has it that other tablets are in the works, too.

AMD bullies Nvidia with $399 Hawaii card

A few weeks ago AMD introduced its Volcanic Islands products at an event in Hawaii. Most of the line-up were just rebrands, but the flagship R9 290X and R9 290 weren’t.

The Hawaii cards are based on all-new silicon – 6.2bn transistors crammed onto a 28nm die. AMD did not announce prices at the event, but a couple of weeks later it launched the R9 290X at $549. The price was lower than expected and it forced Nvidia to slash the price of the GTX 780 by $150.

Just as Nvidia countered the R9 290X, AMD decided to make its life miserable once again. The Hawaii Pro version of the card, the R9 290, launched at $399 – making it $100 cheaper than the GTX 780, which went down from $649 to $499 last week.

There is, however, a slight problem for Nvidia. The R9 290 ends up significantly faster than the GTX 780 and in some cases it can even give the $999 GTX Titan a run for its money.

So the new card is $100 cheaper than what Nvidia has to offer, yet it’s faster. There is one problem, though: reviewers report the R9 290 can get very loud, but that seems a small price to pay considering the price/performance ratio. In addition, it’s only a matter of time before AIB partners come up with non-reference designs with custom coolers to keep the noise down.

Nvidia has already been forced into two price cuts following AMD’s launch. First it slashed the prices of its sub-$199 products to compete with AMD’s rebranded R7 series. Then it slashed the prices of the GTX 780 and GTX 770, only to be undercut by AMD’s new $399 card. Most punters were expecting the R9 290 to end up at around $449, but as we said last month, AMD had a couple of good reasons to launch it at $399 – and it did.

Nvidia simply doesn’t have much wiggle room left. Perhaps it’s feeling a bit like Guy Fawkes, and hoping bonfire night is merely a damp squib.

Industry experts talk up R2B, R2D2

A group of executives behind the Retail to Business (R2B) initiative is warning retailers that they could be in a world of trouble if they don’t start targeting businesses.

The R2B initiative was formed by Context and it’s backed by execs from Lenovo, AMD, Lexmark, Tech Data and other companies. The ultimate goal is to make retailers more competitive and capable of taking on B2B resellers.

“Let’s stop the decline – or stores will end up being showrooms,” Global MD for Retail at Context Adam Simon told PCR. “Don’t just focus on consumers and tablets – blur the consumer and SMB. Support the small business people and their entourage.”

The consumerisation of IT and trends like BYOD are already blurring the line between SMBs and average consumers. Context argues British retailers could learn a thing or two from the telcos, which have dedicated in-store corners in their shops for business users. Germany is also setting an interesting example, as its retailers are already selling heaps of laptops to businesses.

Retailers are pulling an NSA on shoppers

Smartphones and tablets have not just changed the way we shop online; they are also having an impact in brick-and-mortar shops, as many shoppers use them to compare prices and read product reviews. But shoppers aren’t the only ones doing a bit of intelligence work on the ground – the retailers are responding in kind.

More and more retailers, or click-and-mortar outfits, are gathering data from smartphone users in stores, reports AFP. They are simply using the smartphones to check what shoppers are up to, where they are moving and what they are looking for. The practice is not going down well with privacy groups, but shops seem to like what they are getting and there are even a number of start-ups specialising in the field.

Of course, the data shops can collect is rather limited, but it is useful nonetheless. They can track users’ visits and identities, learn how frequently shoppers return, see what they are looking for in the shops and so on. The data allows them to better understand customer behaviour and to come up with ways of getting more return business and making better offers to potential customers.

Although privacy concerns are rather fashionable these days, thanks to America’s attempts to beat East Germany in spying on its own citizens, most of the data collected by the shops seems relatively harmless, as it doesn’t include any truly personal data, such as phone numbers, emails or credit card info. In fact, anyone who swipes a credit card in the shop is likely to be providing the shop with more valuable information.

It sounds like a benign and relatively harmless practice, but if it catches on it will undoubtedly draw more scrutiny. Not because it is dangerous or unethical, but because talking about privacy and data security is a pretty good way of getting on the telly and getting some free publicity.

Cheap tablets start to make their mark

Ever since Google launched its $199 Nexus 7 last year, tablet makers have been looking for ways to come up with even cheaper devices to undercut Google and other brands targeting the sub-$200 space. Smaller form factors were popularised by Apple’s iPad mini, too. As a result, the tablet market underwent a massive transformation over the last 12 to 15 months in what can only be described as a race to the bottom. However, we’re not at the bottom just yet.

Big brands have started rolling out cheaper devices, first hitting the $149 mark and now going towards $99. The white-box gang is already there and cheap tablets are slowly making their presence felt. According to Bloomberg, sales of sub-$149 tablets will account for almost 35 percent of the US market next year, up from 25 percent in 2011.

However, cheap tablets have evolved. The average $199 or $149 tablet two years ago was absolutely horrible, but this is no longer the case. Here are a few examples proving that cheap tablets have come a long way.

The cheap white-box tablet, anno 2011, usually shipped with 512MB of memory, a single-core A8 processor and a low-res 1024×600 or 1024×768 TN panel. Some even featured outdated resistive touchscreens. Today, however, 1GB of RAM is the bare minimum, while many cheap tablets already pack 2GB. Practically all cheap tablets now sport IPS panels and it’s even possible to get a QXGA (2048×1536) tablet for as little as $200, or about €160 in Euroland. Dual-core A9 or quad-core A7 processors are standard, and there are even some A9 quads available for that sort of money.

Components are getting ridiculously cheap, allowing vendors to add more for less. This is especially true of processors and displays.

Several companies are churning out cheap ARM SoCs and it is estimated that Rockchip can sell a SoC for as little as $5. MediaTek is currently shipping one in five SoCs on the planet and most of them are cheap, A7-based parts. Prices of relatively high-quality IPS displays have tumbled as well and many now cost less than $10. Prices of RAM and NAND have gone down too, though the drop wasn’t as drastic. All in all, Bloomberg reckons the cost of components used in today’s cheap and cheerful tablets is $60, down from $175 in 2011.

It should be noted that cheap tablets, or the companies behind them, don’t get nearly as much press as they should. After all, cheap tablets will make up a third of all tablet shipments next year, but tech sites focus on clickbait: pricey high-end models churned out by brands that tend to advertise on the same sites.

It’s all somewhat reminiscent of the vanilla PC boom in the mid eighties, although we don’t believe cheap tablets can replicate the success of cheap PCs three decades ago.

Facebook marketing works after all

It appears that Facebook is finally starting to make sense for marketers. For years Facebook users have complained about every single redesign and the inclusion of more ads, especially the intrusive ones that appear in newsfeeds.

However, it appears the ads are working. The Drum reports that 12 percent of Facebook users in Britain have already made a purchase after seeing a product in their newsfeed. So for all the talk of hating Facebook ads, the same people who complain seem to be falling for them.

What’s more, Facebook EMEA veep Nicola Mendelsohn said mums spend three times as much time on Facebook during the holiday season, and they account for the vast majority of Christmas gift purchases.

Facebook has recently announced the launch of a new SMB content hub that should help small businesses promote goods and services on the social network.

Old PCs are costing SMBs time and money

Small businesses are bleeding cash thanks to their reliance on antiquated PCs. According to Intel’s Small Business PC Refresh Study, the average small business worker loses one working week per year to old PCs.

The Techaisle survey covered 736 businesses across six countries and found that more than 36 percent of them use PCs that are over four years old. The old boxes require more maintenance and repairs, and they exhibit security and performance issues, all of which have a negative impact on productivity.

Worse, the average repair cost for an older PC can equal or exceed the cost of buying a brand new one. On average, small businesses spend a staggering $427 to repair a PC that’s four years old or more – 1.3 times the repair cost of PCs less than four years old. Almost half of respondents did not even know that Redmond is planning to cut off support for XP next year.
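Unpacking the survey’s ratio (our arithmetic, from the figures quoted above):

```python
# The survey's ratio, unpacked: if $427 is 1.3x the repair bill for a
# PC under four years old, the younger machines cost about $328 to fix.
older_repair, ratio = 427, 1.3
print(round(older_repair / ratio))   # -> 328
```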

Curiously, businesses in the US tend to use the oldest PCs on the planet – 8 percent of them use PCs that are five years old or more. In India, just one percent of small businesses use such ancient PCs.

The results aren’t very surprising. A couple of months ago Intel released another survey which found that the average age of PCs is going up and it’s now at four years or more. The upgrade cycle is getting longer and there’s practically no incentive to upgrade for many users.