Tag: GPU

AMD to release life preserver

The troubled chipmaker AMD is about to fight back against Nvidia's dominance with its upcoming Radeon R9 300-series graphics cards.

Specifications of the cards started to leak last month and now there are new rumours of an official announcement at the Computex show.

AMD is planning to introduce the new cards at Computex in June; before that there will be a single new card at CeBIT and a few rebranded cards.

Managers at AMD apparently want a “full line up” of cards to be released at the same time. It needs this to cover the ground lost to rival Nvidia over the last year.

Nvidia took a significant share of the GPU market, but AMD CFO Devinder Kumar was confident that the launch of a new graphics product in the second half of 2015 would set everything right.

“We will gain back the market share which is low from my standpoint and historically,” he claimed.

There is nothing really on the roadmap other than the AMD Radeon R9 300-series that can do that, so it looks like this rumour has legs.

Nvidia adds a new Agent of Shield

Nvidia might not have had many nibbles for its Shield concept, but today it added a TV console to the range for lots of people to ignore.

In this case it has released an Android-based living room device capable of 4K playback and capture.

During its Game Developers Conference 2015 press event, Nvidia CEO Jen-Hsun Huang insisted that the world wanted an Android-based TV console.

“First, it’s based on the most popular OS in the world. Second, the richness of the Google Play store, with its huge range of applications,” Huang said. “Third, it syncs with all my other devices. Finally, it has an incredible voice search capability.”

He claimed it was the “world’s first Android 4K TV” and that it would be able to receive and capture 4K video at up to 60fps “very soon”. The living room entertainment device also packs Android TV functionality, so users can access a library of movies, TV and more.

Nvidia Shield is part of the GPU maker’s cunning plan to make gear outside its traditional markets.

The Nvidia Shield can run both local games and titles streamed through Nvidia’s Grid subscription-based game streaming service, at up to 1080p and 60fps. The gaming-centric device will launch with more than 50 games on the Grid store.

The set-top box and console hybrid is based around a Tegra X1 processor with a 256-core Maxwell GPU and 3GB of memory. Its body features a MicroSD slot, a Micro USB 2.0 port, two USB 3.0 Type-A ports, a gigabit Ethernet port, and an HDMI port.

The device will ship bundled with the Shield Controller for $199.

Intel shows off 14nm processor

Intel has taken the wraps off its first 14nm processor, the Core M, at the CES show in Las Vegas.

The CPU is designed for the ultramobile, low-power market and is an SoC die shrink of the existing architecture.

Where it does something interesting is on the GPU side, with shedloads of improvements and advances.

The video decoder block has been beefed up with dual bit stream decoders. We are expecting to see other improvements, but Intel has been quiet on exactly what it has tweaked and changed to date.

Intel does appear to have clear ground ahead of it. TSMC’s 20nm is only a slight improvement on die size and power consumption, while Intel’s new Broadwell cores are 37 percent smaller.

What is odd is that although more than half the chip seems dedicated to its GPU, Intel has made no announcement about supporting DirectX 12. Nvidia and AMD have announced full DX12 support for their current hardware.

Intel claims its new chip can boost battery life by 1.5 hours, speed up video conversions, and offer a whopping 22 percent improvement in 3D performance. It has been a while since we have seen a significant productivity jump, and we were not expecting much in the way of improved CPU performance from this chip anyway.

We are expecting Intel to announce the shipment of its Cherry Trail platform today too. This is the 14nm Bay Trail die shrink that has been on the roadmap for a little while and is not really news. What is strange is how quiet Intel has been about the specifics of Cherry Trail so far. We have no benchmarks, statistics, or anything.

This is odd given Intel’s commitment to tablets, and that pushing the Atom down to 14nm is a significant achievement. It should mean some performance gains, and it is not clear why Chipzilla is not trumpeting it more.

Nvidia puts high-end graphics in car

Nvidia unveiled a new processor aimed at powering high-end graphics on car dashboards as well as auto-pilot systems.

Ahead of the Consumer Electronics Show, Nvidia Chief Executive Jen-Hsun Huang said the Tegra X1 chip would provide enough computing power for automobiles with displays built into mirrors, dashboards, navigation systems and passenger seating.

“The future car is going to have an enormous amount of computational ability,” Huang said. “We imagine the number of displays in your car will grow very rapidly.”

The Tegra X1 has twice the performance of its predecessor, the Tegra K1, and will come out in early 2015, Nvidia said.

A platform combining two of the X1 chips can process data collected from up to 12 high-definition cameras monitoring traffic, blind spots and other safety conditions in driver assistance systems, Huang said.

The chips can help detect and read road signs, recognise pedestrians and detect braking vehicles before the driver does.

Nvidia has been struggling to compete against larger chipmakers like Qualcomm in smartphones and tablets. It thinks its Tegra mobile chips will be better off in cars, and it is already supplying companies including Audi, BMW and Tesla.

In the third quarter, revenue from Tegra chips for automobiles and mobile devices jumped 51 percent to $168 million. While this is not bad, it is a Fiat 500 next to the Mack Truck of Nvidia’s total revenue of $1.225 billion.


Cambridge boffin dismisses the existence of a general processor

David Chisnall of Cambridge University has become the general-purpose processor’s first atheist, refusing to believe in the existence of a general-purpose chip.

Writing in Queue, he said that there is a general trend to categorise processors and accelerators as “general purpose”. Of the papers published at this year’s International Symposium on Computer Architecture (ISCA 2014), nine out of 45 referred to general-purpose processors; one additionally referred to general-purpose FPGAs (field-programmable gate arrays), and another referred to general-purpose MIMD (multiple instruction, multiple data) supercomputers, stretching the definition to breaking point.

However, Chisnall has nailed a statement to the door of the Intel research division saying that there is no such thing as a truly general-purpose processor and that the belief in such a device is harmful.

Risking the wrath of the great Intel God, he said that many of the papers presented at ISCA 2014 that did not explicitly refer to general-purpose processors or cores instead referred to general-purpose programs, typically in the context of a GPGPU (general-purpose graphics processing unit), a term with an inherent contradiction.

He said that a modern GPU has I/O facilities, can run programs of arbitrary sizes (or, if not, can store temporary results and start a new program phase), supports a wide range of arithmetic, has complex flow control, and so on. Implementing Conway’s Game of Life on a GPU is a common exercise for students, so it’s clear that the underlying substrate is Turing complete.
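For the curious, here is a minimal sketch of that student exercise: one Game of Life generation per CUDA kernel launch. The grid size, the toroidal wrap-around and the glider test pattern are our assumptions for illustration, not anything from Chisnall's article.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

#define W 64
#define H 64

// One Game of Life generation: each thread owns one cell and counts its
// eight neighbours, with toroidal wrap-around at the edges.
__global__ void lifeStep(const int *in, int *out)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= W || y >= H) return;

    int n = 0;
    for (int dy = -1; dy <= 1; ++dy)
        for (int dx = -1; dx <= 1; ++dx) {
            if (dx == 0 && dy == 0) continue;
            n += in[((y + dy + H) % H) * W + ((x + dx + W) % W)];
        }

    // Standard rules: a live cell with 2-3 neighbours survives,
    // a dead cell with exactly 3 is born.
    out[y * W + x] = (n == 3) || (in[y * W + x] && n == 2);
}

int main()
{
    static int host[W * H] = {0};
    host[1 * W + 2] = host[2 * W + 3] = 1;                  // a glider
    host[3 * W + 1] = host[3 * W + 2] = host[3 * W + 3] = 1;

    int *a, *b;
    cudaMalloc((void **)&a, sizeof host);
    cudaMalloc((void **)&b, sizeof host);
    cudaMemcpy(a, host, sizeof host, cudaMemcpyHostToDevice);

    dim3 block(16, 16), grid(W / 16, H / 16);
    for (int gen = 0; gen < 100; ++gen) {
        lifeStep<<<grid, block>>>(a, b);
        int *t = a; a = b; b = t;                           // ping-pong the buffers
    }

    cudaMemcpy(host, a, sizeof host, cudaMemcpyDeviceToHost);
    int live = 0;
    for (int i = 0; i < W * H; ++i) live += host[i];
    printf("live cells after 100 generations: %d\n", live); // a glider stays at 5
    return 0;
}
```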

Chisnall argues that it is not enough for a processor to be Turing complete in order to be classified as general purpose; it must be able to run all programs efficiently. The existence of accelerators (including GPUs) indicates that all attempts thus far at building a general-purpose processor have failed. If they had succeeded, then they would be efficient at running the algorithms delegated to accelerators, and there would be no market for accelerators.

Assassins bemoan poor console chip performance

Assassin’s Creed Unity senior producer Vincent Pontbriand has waded into AMD’s console performance, saying that his new game’s 900p resolution and 30fps target on consoles are the result of weak CPU performance.

He said that while the GPUs in the machines are really powerful and the graphics look good, the game was technically CPU-bound: the CPU has to process the AI and the number of NPCs on screen, with all these systems running in parallel.

Speaking to Hot Hardware, Pontbriand said game designers were quickly bottlenecked and that it was a bit frustrating.

“We thought that this was going to be a tenfold improvement over everything AI-wise, and we realised it was going to be pretty hard. It’s not the number of polygons that affect the framerate. We could be running at 100fps if it was just graphics, but because of AI, we’re still limited to 30 frames per second,” he said.

The comments are being seen as damning AMD’s APU. The Jaguar CPU inside both the Sony PS4 and Xbox One has a relatively low clock speed and, while both consoles may offer eight threads on paper, it appears that games cannot access all of them.

Pontbriand said that one thread is reserved for the OS and several more cores will be used for processing the 3D pipeline. Between the two, Ubisoft may have had only four or five cores for AI and other calculations. This means the performance is about the same as the last generation of Xbox 360 and PS3 CPUs, which were in fact clocked much faster than the 1.6/1.73GHz frequencies of their replacements.

To be fair, it is hardly AMD’s fault. Microsoft or Sony could have specced a variant of the core clocked at 2-2.4GHz and boosted total CPU throughput, but they didn’t. The programmable nature of the GCN architecture inside the Xbone and PS4 is meant to compensate for the relatively lightweight cores, but AI calculations may simply be beyond it. GPU calculations tend to be high latency, and AI typically requires fast response times.
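To put a rough number on that latency argument, here is a hedged sketch, ours rather than Ubisoft's or AMD's, that times the fixed cost of dispatching a trivial CUDA kernel and waiting for the result. On typical hardware each round trip costs tens of microseconds, which adds up quickly inside a 33ms frame budget if the AI needs many small answers per frame.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// A near-empty kernel: any time it takes is almost pure dispatch overhead.
__global__ void noop(int *x)
{
    if (threadIdx.x == 0) *x += 1;
}

int main()
{
    int *d;
    cudaMalloc((void **)&d, sizeof(int));
    cudaMemset(d, 0, sizeof(int));

    // Warm up once so driver/context initialisation isn't counted.
    noop<<<1, 32>>>(d);
    cudaDeviceSynchronize();

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    const int N = 1000;
    cudaEventRecord(start);
    for (int i = 0; i < N; ++i) {
        noop<<<1, 32>>>(d);
        cudaDeviceSynchronize();  // full CPU-GPU round trip, as a per-frame AI query would need
    }
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    printf("average round trip: %.1f microseconds\n", 1000.0f * ms / N);
    return 0;
}
```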

Nvidia does rather well

Nvidia posted higher second-quarter earnings and gave a forecast for current-quarter revenues that exceeded what the cocaine nose jobs of Wall Street predicted.

This was surprising given that some analysts were worried that PC shipments were flat in the June quarter.

Nvidia’s graphics chips for PCs make up most of its business, but what appears to have saved the company’s bottom line is that it has been selling to carmakers and data centres.

In the second quarter, revenue from Tegra chips for automobiles and mobile devices jumped 200 percent to $159 million.

After struggling to compete against larger chipmakers like Qualcomm in smartphones and tablets, Nvidia has increased its focus on using its Tegra chips to power entertainment and navigation systems in cars made by companies including Volkswagen’s Audi, BMW and Tesla.

Nvidia in July launched its own tablet aimed at game enthusiasts, called Shield, with Tegra chips and other high-end components. This went against the industry trend toward commoditized, inexpensive devices.

Nvidia has been doing well in the cloud by flogging its chips to IBM, Dell and HP as part of their datacentre product range.

Predictions are that Nvidia’s GRID graphics technology for data centres will also do well once it has been tested by other potential enterprise customers.

Nvidia reported second-quarter revenue of $1.1 billion, up 13 percent from the year-ago quarter as it expanded its focus on cars and cloud computing.

For the current quarter, Nvidia said it expects revenue of $1.2 billion, plus or minus 2 percent. Analysts on average expected second-quarter revenue of $1.1 billion and third-quarter revenue of $1.16 billion.

Nvidia’s net income in the second quarter, which ended on July 27, added up to $128 million, compared with $96 million in the year-ago quarter.

AMD Hawaii event has aggression written all over it

Huge GPU launches are back. An AMD webcast dragged on for more than two and a half hours as the ailing chip firm went to great lengths to explain a few new technologies, showcase games and, of course, talk GPUs. In fact, the event was so big that many complained the presentations were too long and too elaborate.

Let’s take a quick look at what AMD graphics boss Matt Skynner announced last night. As expected, AMD has a new naming scheme: mid-range products now come with an R7 prefix, while the R9 prefix is reserved for high-end and performance boards.

The line-up starts with the R7 250, with 1GB of GDDR5 memory. AMD says the card scores 2,000 in the Firestrike benchmark and it’s priced at $89, so AMD calls it the king of sub-$100 gaming. The R7 260X comes with 2GB of memory and hits 3,700 in Firestrike.

The R9 series starts at $199, with the R9 270X with 2GB of RAM and a Firestrike score of 5,500. The R9 280X is priced at $299 and scores 6,800 in Firestrike. AMD says it was designed for 1440p gaming.

The flagship R9 290X boasts 4GB of RAM and a 512-bit memory bus, with bandwidth of over 300GB/s. AMD did not reveal its price or the exact specs and clocks of any of the cards. Rumours point at a $599 price tag, but rumours can be wrong. The general consensus is that the R9 290X should outpace Nvidia’s GTX 780, perhaps even the pricey Titan.

All in all the new generation looks very competitive, at least on paper. We’ll have to wait for some proper reviews before jumping to conclusions. The cards are expected to hit retail in the second half of October, but this isn’t official, either.

AMD also used the opportunity to talk up TrueAudio, a new spatial audio technology incorporated in some of the new cards. The tech press had a chance to see it, or rather hear it, in action at the event and the first impressions are very positive.

Then there were games, loads and loads of games. AMD scored an exclusive bundle deal with EA and it will sell a limited edition bundle of the R9 290X with Battlefield 4. AMD also promised to deliver even better Never Settle bundles, which could help the new cards gain a bit more traction.

But in our opinion, the biggest news wasn’t a new card or a new game – it was the fact that AMD chose to stage such an event at all. AMD and Nvidia are now talking about a renaissance in PC gaming and analysts seem to agree. Sales of gaming gear are bucking the industry trend and they are still growing. This year the sector will net $18 billion, but by 2016 the figure should hit $21 billion.

AMD and Nvidia believe consoles are no longer directly competing with PC gaming, not to mention smartphone and tablet gaming.

Vivante announces Vega GPU

Right up front, Vivante states that it designed its GPU architecture to scale so it can compete with Nvidia and ATI. It plans to vie with Nvidia’s GTX/Maxwell parts in the next generation of ultra-mobile GPUs, John Oram writes from San Francisco.

A fledgling start-up once assisted by semiconductor angel investors and corporate investment from Fujitsu, Vivante was profitable five years after opening its doors. It is now headquartered in Sunnyvale, California, with offices in Shanghai and Chengdu, China. Over its nine-year history, Vivante Corporation has infiltrated many markets.

The company flaunts its “firsts”: first to ship OpenGL ES 3.0 silicon and first to ship embedded OpenCL 1.1 silicon. It has shipped over 120 million units. Currently, Vivante is inside products from the majority of the top SoC vendors, mobile OEMs, TV OEMs, and automotive OEMs.

At IDF, Vivante was heralding its advantage over its competitors, referring to benchmark ratings in its slides. See, for example, the GC1000 versus Mali-400 MP2 comparison, where it also pictorially points out the difference in size between the Mali and the smaller Vivante product.

Smart TVs from the likes of Vizio, LG U+, Lenovo, TCL, Hisense, and Changhong rely on Vivante. Google’s Chromecast internet-to-TV streamer uses Vivante acceleration for 3D gaming, composition, and its user interface. Set-top boxes from Toshiba in Japan and from three companies out of Shenzhen, China, namely Huawei, Himedia and GIEC, all use Vivante’s GPU acceleration.

Tomorrow’s cars will never be the same, and Vivante is everywhere. Drivers will check their positioning with ADAS (Advanced Driver Assistance Systems) displays, reverse guidance, pedestrian detection, and object distance indicators. In fact, Vivante was awarded the 2013 Frost & Sullivan Best Practices Award for Advanced Driver Assistance Systems.

Vivante used IDF to announce Vega. Vega is the culmination of seven years of architecture refinements and the experience of more than 100 SoC integrations. It is optimized to balance the big three: performance, power, and area. The GPU delivers best-in-class performance at clock speeds of more than 1GHz. It even touts patented logarithmic-space full-precision math units. Vega is optimized and configured from the production GPU cores GC2000, GC4000, and GC5000. Vega GPUs have been delivered to lead customers for tapeout.

Vivante’s SDK is ready for GUI, gaming, and navigation applications. Vivante provides full API support across the GPU product line: OpenGL ES 3.0, OpenCL 1.2, and DirectX 11 (feature level 9_3). The company prides itself on its Scalable Ultra-threaded Unified Shader, which offers up to 32 SIMD Vec-4 shaders with up to 256 independent threads per shader operating on discrete data in parallel. Shaders facilitate the creation of an endless range of effects by tweaking the hue, brightness, contrast and saturation of the pixels, vertices and textures that make up an image. Shaders provide a programmable alternative to the hard-coded approach known as the fixed-function pipeline.
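As a concrete, if hedged, illustration of that kind of per-pixel shader work, the sketch below adjusts the brightness, contrast and saturation of an RGB buffer. It is written as a CUDA kernel purely for illustration; Vivante's own parts would run this sort of thing through OpenGL ES or OpenCL, and the parameter values and luma weights are our choices, not Vivante's.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// The sort of per-pixel work a fragment shader does: each thread adjusts
// one RGB pixel's brightness, contrast and saturation.
__global__ void adjust(float3 *pix, int n,
                       float brightness, float contrast, float saturation)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    float3 p = pix[i];

    // Brightness offset and contrast stretch around mid-grey.
    p.x = (p.x - 0.5f) * contrast + 0.5f + brightness;
    p.y = (p.y - 0.5f) * contrast + 0.5f + brightness;
    p.z = (p.z - 0.5f) * contrast + 0.5f + brightness;

    // Saturation: blend between the pixel's luma and the pixel itself.
    float luma = 0.299f * p.x + 0.587f * p.y + 0.114f * p.z;
    p.x = luma + (p.x - luma) * saturation;
    p.y = luma + (p.y - luma) * saturation;
    p.z = luma + (p.z - luma) * saturation;

    // Clamp back to the [0,1] range.
    p.x = fminf(fmaxf(p.x, 0.0f), 1.0f);
    p.y = fminf(fmaxf(p.y, 0.0f), 1.0f);
    p.z = fminf(fmaxf(p.z, 0.0f), 1.0f);
    pix[i] = p;
}

int main()
{
    const int n = 256;
    float3 host[n];
    for (int i = 0; i < n; ++i)
        host[i] = make_float3(i / 255.0f, 0.5f, 1.0f - i / 255.0f);

    float3 *d;
    cudaMalloc((void **)&d, sizeof host);
    cudaMemcpy(d, host, sizeof host, cudaMemcpyHostToDevice);
    // Slightly brighter, more contrast, slightly desaturated.
    adjust<<<(n + 127) / 128, 128>>>(d, n, 0.05f, 1.2f, 0.8f);
    cudaMemcpy(host, d, sizeof host, cudaMemcpyDeviceToHost);
    printf("pixel 0 after adjust: %.3f %.3f %.3f\n", host[0].x, host[0].y, host[0].z);
    return 0;
}
```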

Vivante isn’t shy about pointing out its edge over the competition. On performance-per-area, it is taking on Tegra, Adreno, Mali, and IMG.

In conclusion, Vivante indicated that it isn’t overlooking the mass market either, with its Vega Lite version, which still promises the smallest silicon area matched with extremely low power.

AMD, Intel gain share in GPU market

GPU shipments are recovering and, according to Jon Peddie Research, the graphics market increased 4.6 percent last quarter, while the PC market as a whole took a 2.5 percent sequential dip. Intel and AMD upped their market share, at Nvidia’s expense, of course.

The increase in overall GPU shipments reveals that many customers are choosing to “double-attach”, or add a discrete GPU to a system with integrated graphics. This doesn’t really paint the full picture, as practically all Intel non-server chips ship with integrated graphics and the same goes for more than two thirds of AMD chips. As a result, the average PC today has 1.4 GPUs on board; in other words, roughly 140 GPUs ship for every 100 PCs, since nearly every machine has integrated graphics and a good chunk add a discrete card on top.

On a year-on-year basis, total graphics shipments in the second quarter dropped 6.8 percent. Once again this was better than PC shipments, which slipped 11.2 percent. JPR expects total shipments of graphics chips in 2016 to hit 319 million units, and the CAGR from 2012 to 2016 now stands at -1.4 percent.


AMD’s overall PC graphics shipments increased 10.9 percent and the company upped its market share to 21.9 percent. However, shipments of APUs declined 9.6 percent. Shipments of APUs in notebooks increased 47.1 percent, but it should be noted that AMD’s presence in notebooks is rather limited; with that in mind, all it takes to get such a high figure is a few design wins.

Intel’s desktop graphics shipments dropped 1.4 percent, but notebooks were up 12.1 percent. Intel’s overall shipments increased 6.2 percent.

Nvidia was the big loser last quarter. Its desktop discrete shipments were down 8.9 percent, while discrete mobile shipments were down 7.1 percent. It should be noted that Nvidia scored the vast majority of Haswell notebook design wins, but new notebooks aren’t exactly flying off shop shelves right now.

Although some of the numbers are encouraging, total GPU shipments were down 5.2 percent from the same quarter last year. The trend for discrete GPUs remains negative, with a CAGR to 2016 of -2.2 percent.