Author: Nick Farrell

AMD’s R9 390X will put the wind up Nvidia

Figures leaked to the great unwashed by deep throats within AMD show that its next-gen flagship gaming graphics card will really put the wind up its envious competitor, Nvidia.

Dubbed the R9 390X, the card has numbers which make Nvidia’s Maxwell architecture-based GeForce GTX 980 and GTX 970 look a bit weak.

Let’s be clear, the GTX 980 and GTX 970 are damn fine cards – they are both faster and more power efficient than their predecessors, a tough act for AMD to follow. AMD was already behind in the power consumption stakes with its R9 290X performing well, but consuming much more power than the GeForce GTX 980.

It appears that AMD has cracked the high power consumption of its previous generation graphics cards.

Leaked benchmarks claim to show that an as-yet-unidentified graphics card is over 15 per cent faster than Nvidia’s GeForce GTX 980, yet consumes only 12W more power on average.
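As a rough, back-of-the-envelope illustration of what those leaked numbers would mean for efficiency, here is a short Python sketch. The 165W baseline is the GTX 980's official TDP and is our assumption rather than part of the leak; the 15 per cent and 12W figures are taken straight from the leaked slides, so treat the output as indicative only.

# Back-of-the-envelope performance-per-watt comparison based on the leaked figures.
# Assumption: GTX 980 board power of roughly 165W (its official TDP); the GTX 980's
# performance is normalised to 1.0. The "+15%" and "+12W" numbers come from the leak.
gtx_980_perf = 1.00                  # normalised performance
gtx_980_power = 165.0                # watts (official TDP, our assumed baseline)

leaked_perf = gtx_980_perf * 1.15    # "over 15 per cent faster"
leaked_power = gtx_980_power + 12.0  # "only 12W more power on average"

print(f"GTX 980 perf/W:     {gtx_980_perf / gtx_980_power:.4f}")
print(f"Leaked card perf/W: {leaked_perf / leaked_power:.4f}")
print(f"Efficiency gain:    {(leaked_perf / leaked_power) / (gtx_980_perf / gtx_980_power) - 1:.1%}")

On those numbers the unnamed card would be roughly seven per cent more efficient than the GTX 980 as well as faster, which is what makes the leak notable given AMD's previous power consumption woes.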

Several websites, including WCCFTech, claim that the style of the leaked slides matches those that appeared ahead of several previous GPU launches, which lends the results some credibility.

This means that AMD could well be launching a stunning graphics card early in 2015 which will give Nvidia a good kicking.

It certainly needs to do something. Nvidia is in charge of the above-$300 market, with its GeForce GTX 970 and GTX 980 out-performing AMD’s equivalents both in terms of speed and power efficiency. Nvidia is expected to launch its GTX 960 soon too, which will further cement its dominance a little lower down the price range.

 

Dell finds itself patronised

Computer maker Dell is planning to expand its manufacturing facility in India.

Top Dell officials have apparently spoken with Telecom and IT Minister Ravi Shankar Prasad and expressed keen interest in expanding the company's Chennai plant. We assume that they were tapping Prasad for some sort of sweetener for the deal.

Judging by what he told the media, Prasad did not really seem to know what Dell is. Prasad said, “Dell is very interested in electronic manufacturing”, something he was clearly surprised about. “Dell has a facility here which they want to expand and have expressed need to have complete ecosystem of components suppliers as well.”

“They told me that Dell employed 27,000 people in the country. They are very positive under (Narendra) Modiji led new government and with growing interest of investors in the country,” Prasad said.

Who would have thunk it? Dell wants to expand its computer manufacturing and already hires a lot of staff in India; we would have thought that Prasad might have heard of it.

Dell has actually invested $30 million in a manufacturing facility at Sriperumbudur, near Chennai in Tamil Nadu.

India’s proximity to the Middle East, Africa and Eastern Europe makes it a superior location for exports compared to other parts of Asia, Dell officials said.

“They also want to work on Digital India programme which is now a buzz word across the world. They have said that the programme should also be linked to e-commerce expansion in the country,” Prasad said.

 

How Apple destroyed sapphire glass

MIT Technology Review has been going through the bankruptcy documents of GT Advanced and seems to have found out what went wrong – and why the iPhone 6 bends.

Apple invested more than $1 billion in an effort to make sapphire one of the device’s big selling points. Making screens out of the nearly unscratchable material would have helped set the new phone apart from its competitors. It would also have made the phone structurally stronger.

When Apple announced the iPhone 6 this September, however, it had only a regular glass screen and was structurally weak enough to bend in your pocket.

GT Advanced Technologies declared bankruptcy, as without Apple it was doomed.

Apple had been using sapphire to cover the cameras and fingerprint sensors in some iPhones since October 2013. But making large pieces of sapphire—enough for a smartphone screen—would normally cost 10 times as much as using glass.

In 2013, GT claimed it could cut the cost by two thirds by increasing the size of its equipment and adapting the crystal growth procedures to make cylindrical crystals—called boules—that are more than twice as large as ordinary sapphire crystals.

Apple originally offered to buy sapphire growing furnaces from GT. But according to sources familiar with negotiations, after five months Apple demanded a major change in terms, requiring GT to supply the sapphire itself. Apple wanted to drive costs down by having GT build the world’s largest factory to produce the stuff.

Apple moaned in the court documents that GT failed to produce “any meaningful quantity of useable sapphire”.

However GT’s bankruptcy filing said that was mostly Apple’s fault.

Producing sapphire requires a very clean environment, but construction at the factory meant that sapphire was grown “in a highly contaminated environment that adversely affected the quality of sapphire material,” according to GT.

It also needs uninterrupted supplies of water and electricity to regulate the temperature of the molten aluminium oxide used to form the boule. GT said that to save costs, Apple decided not to install backup power supplies, and multiple “outages” ruined whole batches of sapphire.

GT said in the documents that there were problems with much of the sawing and polishing equipment used to slice the boule—equipment that it says Apple selected. For example, a diamond-wire saw that was supposed to cut sapphire in 3.6 hours took 20 hours to do it and had to be replaced. According to GT, problems like these increased the costs of processing the sapphire boule by 30 percent.

Then came the worst of it. The terms Apple negotiated committed GT to supplying a huge amount of sapphire, but put Apple under no obligation to buy it.

 

Former HP goddess aims for US presidency

Former HP boss and peddler of expensive printer ink Carly Fiorina is apparently going to have a crack at being the next Republican president of the US and is standing on a ticket that she is the only woman candidate and the only CEO.

Fiorina has been talking privately with potential donors, recruiting campaign staffers, courting grass-roots activists in early caucus and primary states and planning trips to Iowa and New Hampshire starting next week.

She would be considered an outsider. She has sought but never held public office, and her last campaign was in such disarray that it could hurt her current one. After all, how can you claim to be an effective manager when you owe nearly $500,000 to consultants and staffers from your failed 2010 Senate bid in California?

Republicans have spoken about Fiorina with disdain, saying she has an elevated assessment of her political talents and questioning her qualifications to be commander in chief.

However analysts say that she might make a better candidate than the suited men that the GOP traditionally chooses.

She is also a free-market advocate who would act as an antidote to the “left wing” views of Elizabeth Warren.

The GOP also has a problem attracting women voters, with some of its prominent members favouring stances on issues like rape, abortion and glass ceilings which are so backward they were out of date when the book of Leviticus was written. The party claimed to support women during the mid-terms, only to award all the committee chair roles to white men after the election.

Helping Fiorina chart her political future are consultants Frank Sadler, who once worked for Koch Industries, and Stephen DeMaura, a strategist who heads Americans for Job Security, a pro-business advocacy group in Virginia.

When Fiorina was CEO at HP she was famously described by an aide as more important than the King of Spain.

EU wants to widen “right to be forgotten”

European privacy regulators want Internet search engines such as Google and Microsoft’s Bing to scrub results globally, not just in Europe, when people invoke their “right to be forgotten”.

The European Union’s privacy watchdogs agreed on a set of guidelines on Wednesday to help them implement a ruling from Europe’s top court that gives people the right to ask search engines to remove personal information that is “inadequate, irrelevant or no longer relevant”.

Google has been scrubbing results only from the European versions of its website such as Google.de in Germany or Google.fr in France, but they still appear on Google.com.

Isabelle Falque-Pierrotin, the head of France’s privacy watchdog and of the Article 29 Working Party of EU national data protection authorities, told a news conference: “From the legal and technical analysis we are doing, they should include the ‘.com’.”

Google said the company had not yet seen the guidelines but would “study them carefully” when they are published.

Google has previously said that search results should be removed only from its European versions since Google automatically redirects people to the local versions of its search engine.

However some feel that Google’s current approach waters down the effectiveness of the court ruling, given how easy it is to switch between different national versions.

The search engine has problems in Europe. Google is facing multiple investigations into its privacy policy and is bogged down in a four-year EU antitrust inquiry.

The EU ruling has pitted privacy advocates against free speech campaigners, who say allowing people to ask search engines to remove information would lead to a whitewashing of the past.

BT censors sites without court order

BT started blocking access to 24 torrent sites this past weekend, including IPTorrents and TorrentDay.

It is the first time that a UK ISP has blocked private torrent sites without a court order requiring it to do so.

The High Court has ordered six UK ISPs to block subscriber access to dozens of the world’s largest torrent sites. The latest order was issued last month after a complaint from the major record labels. It expands the UK blocklist by 21 torrent sites, including limetorrents.com, nowtorrents.com, picktorrent.com, seedpeer.me and torlock.com.

Over the weekend, BT and Sky implemented the new changes, making it harder for their subscribers to reach these sites. But BT appears to have gone above and beyond the court order, limiting access to various other sites.

According to TorrentFreak several users of private torrent sites get an “error blocked” message instead of their favourite sites. These include the popular IPTorrents.com and TorrentDay.com trackers, as well as scene release site Scnsrc.me.

The fact that BT has targeted IPTorrents and TorrentDay is significant. Both sites require prospective users to obtain an invite from a current member, yet between them they have over a hundred thousand active users.

BT used the same error message that is returned when users try to access sites covered by High Court injunctions. The move is also significant because it indicates that ISPs are now starting to accept that they are not safe havens and will have to censor the web.

IPTorrents is still accessible via https and via the site’s alternative .me and .ru domains. In addition, VPNs and proxy servers are often cited among suggested workaround techniques.

NSF spends a fortune on cloud-based supercomputers

The National Science Foundation (NSF) has invested $16 million to build cloud-based and data-intensive advanced computing systems for the open science community.

In a statement, the NSF said that as high performance computing (HPC) has become central to the work and progress of researchers in all fields, from genomics and ecology to medicine and education, new kinds of computing resources and more inclusive modes of interaction are required.

It has splashed out on two new supercomputing acquisitions for the open science community that it says will complement existing resources with capabilities that allow advanced computing to be available to a broader portfolio of emerging scientific frontiers and communities. The new resources are anticipated to come online in early 2016.

The “Bridges” system will be housed at the Pittsburgh Supercomputing Center (PSC) and the “Jetstream” computer will be co-located at the Indiana University Pervasive Technology Institute (PTI) and The University of Texas at Austin’s Texas Advanced Computing Center (TACC).

Irene Qualters, division director for Advanced Cyberinfrastructure at NSF, said that Bridges and Jetstream will expand the capabilities of the NSF-supported computational infrastructure, pushing the frontiers of science forward in the life sciences, the social sciences and other emerging computational fields by exploiting interactive and cloud systems.

“Bridges and Jetstream offer a mix of new capabilities and usage modalities, from large memory nodes to virtualization technologies that allow a PC-like experience via the cloud. Together, these technologies will let a broader swath of researchers use advancing computing while making new kinds of scientific inquiry possible.”

A $9.6 million NSF grant will fund the acquisition of Bridges, beginning in November 2014, with a target production date of January 2016.

Jetstream, led by Indiana University’s Pervasive Technology Institute (PTI), will add cloud-based computation to the national cyberinfrastructure. Researchers will be able to create virtual machines on the remote resource that look and feel like their lab workstation or home machine, but are able to harness thousands of times the computing power.

Craig Stewart, PTI executive director and associate dean for research technologies at Indiana University, said that the new Jetstream cloud system will operate at the border between the existing NSF-funded cyberinfrastructure and thousands of researchers and research students who will be new to using NSF XD program resources. Jetstream will give researchers access to cloud computing and data analysis resources interactively, when they need them.

Jetstream is supported by a $6.6-million NSF grant and will go into production in January 2016.

100TB hard drives to arrive by 2025

An industry consortium today released a roadmap for new recording technologies that could yield 100TB hard drives in about 10 years.

The Advanced Storage Technology Consortium’s (ASTC) figures show hard drives with 10 times the capacity of today’s biggest drives. Apparently, this will be achieved using up-and-coming techniques such as laser-assisted recording technology.

The ASTC’s roadmap shows technologies such as Bit Patterned Media Recording (BPMR) and Heat-Assisted Magnetic Recording (HAMR) combining to grow bit areal densities to as much as 10 terabits per square inch (Tbpsi) by 2025, compared with today’s 0.86 Tbpsi.

Industry analyst Tom Coughlin wrote in his blog that this implies a 3.5-inch HDD built with that technology could have about 10 times the capacity of today’s 10TB HDDs by 2025, or 100TB.
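Coughlin's arithmetic is easy enough to check. Here is a minimal Python sketch, assuming capacity scales roughly linearly with areal density and taking today's 10TB helium-filled drives as the baseline; both assumptions come from the figures above rather than from the ASTC roadmap itself.

# Rough capacity projection assuming drive capacity scales linearly with areal density.
current_density_tbpsi = 0.86   # today's areal density, terabits per square inch
roadmap_density_tbpsi = 10.0   # ASTC roadmap figure for 2025
current_capacity_tb = 10       # today's largest 3.5-inch HDD (HGST's 10TB helium drive)

scaling = roadmap_density_tbpsi / current_density_tbpsi
projected_capacity_tb = current_capacity_tb * scaling

print(f"Areal density scaling:       {scaling:.1f}x")                  # about 11.6x
print(f"Projected 3.5-inch capacity: ~{projected_capacity_tb:.0f}TB")  # about 116TB

Strictly, the density scaling comes out at around 11.6 times, which lands a little above the 100TB headline figure; Coughlin's "about 10X" is rounding down.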

Western Digital’s HGST division has been sealing helium gas in its enterprise drives to reduce friction created by spinning platters, thereby allowing it to pack them more tightly together. Its Ultrastar HelioSeal product line now has 8TB and 10TB hard drives.

Using helium instead of air, HGST is able to pack more platters into a hard drive.

Seagate’s largest capacity drive using conventional recording is 6TB. The company has been using a technology called Shingled Magnetic Recording (SMR), which overlaps data tracks on a disk platter like shingles on a rooftop to increase that to 8TB.

However, the scope for SMR and helium to keep adding areal density is limited.

The problem is that as disk drive densities increase, the potential for data errors also increases due to a phenomenon known as superparamagnetism. This is when the magnetic pull between bits on a platter’s surface can randomly flip them, changing their value from one to zero or zero to one.

Seagate believes it can produce a 30TB drive by 2020 using HAMR, which integrates a semiconductor laser onto a hard drive recording transducer. The lasers are able to set down smaller bits, but ones that are also harder to overwrite, which makes the media more stable by reducing overwrite errors.

Microsoft done for tax evasion in China

Chinese mandarins have got the pip with Microsoft and fined the software giant more than $140 million in back taxes.

The case is being seen as the first major case concerning cross-border tax evasion in the country, as regulators ramp up pressure on US corporations doing business there.

According to China’s Xinhua official news agency, Microsoft must pay the Chinese government $137 million in back taxes and interest, as well as more than 100 million yuan in additional taxes a year in the future.

Microsoft did not confirm the report but said that in 2012 the tax authorities of China and the United States agreed to a bilateral advanced pricing agreement about Microsoft’s operations in China.

China receives tax revenue from Microsoft consistent with the terms of the agreed advanced pricing agreement.

An advanced pricing agreement sets the tax treatment of transfer pricing, or methods of booking prices and sales between subsidiaries, which Microsoft uses across the globe.

According to its fiscal 2014 annual report, Microsoft’s overall effective tax rate was 21 percent, still lower than the US corporate rate of 35 percent, because it funnels earnings through “foreign regional operations centres” in Ireland, Singapore, and Puerto Rico.

China was less patient than Western powers about this sort of thing. Microsoft had been reporting losses in China of more than two billion yuan over six years while its peers enjoyed profits. The taxman decided that this was unreasonable. It said the US company fessed up to tax evasion and its mainland subsidiary had agreed to pay the central government.

Ruby on Rails is dying

Ruby on Rails is dying off, despite the fact that those with the skill can make a killing, according to a new report from Quartz.

Quartz found that, by using US job listing data collected by Burning Glass and the Brookings Institution, and dividing by its shoe size, the most valuable programming skill to have today is Ruby on Rails. If you have the skill you can take home an average salary of $109,460.

But other data indicates that Rails (and Ruby) usage is not trending upwards.

Quartz homed in on the demand for workers with programming-specific skills and based it all on a large data set which was nearly two years old. Phil Johnson at ITworld had a look at some other figures collected by MS Gooroo, which has collected data from over 300,000 job listings in the US, UK and Australia.

While this confirmed that Ruby on Rails experts were still getting paid a bomb, the percentage of US job listings mentioning Rails in July 2014 was 1.1 percent, down from 1.8 percent in December 2013, an almost 40 percent drop. While the pay for Rails engineers is high, demand over the last year seems to be dropping.
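The "almost 40 percent" figure checks out as a relative decline; here is a quick back-of-the-envelope sanity check in Python using only the two percentages quoted above.

# Relative decline in the share of US job listings mentioning Rails.
dec_2013_share = 1.8   # percent of listings, December 2013
jul_2014_share = 1.1   # percent of listings, July 2014

relative_drop = (dec_2013_share - jul_2014_share) / dec_2013_share
print(f"Relative drop: {relative_drop:.0%}")   # about 39%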

Rails is a framework, not a programming language, but Ruby is the language upon which Rails was built. In the most recent TIOBE index of programming language popularity, which is based on web searches for languages and was published this month, Ruby was ranked 14th, down from 13th in November 2013 and 11th in January 2013.

The PYPL index of programming languages, which ranks languages based on searches for web tutorials about them, ranked Ruby 10th, the same spot it held one year ago and down slightly from January 2013, when it ranked 9th.

It looks like demand by U.S. employers for engineers with Rails skills, however, has been on the decline, at least for the last year.

If use of the Ruby programming language itself can be considered a reliable proxy for the use of Rails, its use by engineers has also been dropping at least moderately since the beginning of 2013.

 

Verizon uses Dalek diplomacy on FCC

US telco Verizon has revealed its compromise on net neutrality which appears to have been drawn up by a Dalek PR – it has promised not to sue the FCC if the watchdog does exactly what it tells it to.

Verizon has promised that it will not sue to block net neutrality rules as long as they’re issued without reclassifying broadband providers as utilities.  There is a concern that Verizon would do this because that was exactly what it did last time.

In 2010, the FCC issued rules preventing Internet service providers from blocking or discriminating against traffic by relying on Section 706 of the Telecommunications Act, rather than the stronger powers the FCC has under Title II, which covers utilities or “common carriers”.

Verizon sued and won, with a federal appeals court stating that the FCC could not issue what amounted to common carrier rules without first reclassifying broadband service as a utility, similar to the traditional phone network.

Ironically, that is why the FCC is now considering reclassifying broadband. It wants the next set of net neutrality rules to survive a court challenge.

FCC Chairman Tom Wheeler said last week that whatever the FCC did, Verizon would sue it.

However, Verizon Executive VP Randal Milch said that statement was completely unfair. Verizon will not sue if the FCC uses Section 706, he promised, even though suing over rules based on Section 706 is exactly what Verizon did last time.

Section 706 requires the FCC to encourage the deployment of advanced telecommunications capability to all Americans, and it can be used to govern broadband providers’ treatment of Internet traffic.

Milch’s e-mail was made public in an ex parte letter filed with the commission yesterday. Milch wrote that rules based on Section 706 “will not be the object of a successful court challenge—by Verizon or anyone else”.

So, in other words, the FCC will be OK provided it obeys Verizon in all things. It is a bit like burglars threatening to sue you if your watchdog bites them as they break into your house. In this case the burglars are telling you in advance that you will not be sued if your watchdog does not bite them while they make off with your TV. You cannot be fairer than that.

 

HP sees profits plummet

The maker of expensive printer ink, HP, has surprised the cocaine nose jobs of Wall Street by telling them that its quarterly revenue fell in almost every business segment over the year.

The numbers highlight weaknesses ahead of the company’s planned 2015 separation of its enterprise services from its traditional computer and printing units.

Sales fell 2.5 percent in the fourth quarter to $28.41 billion, from $29.13 billion a year earlier, HP said. Analysts had expected $28.76 billion. Profit declined 2.7 percent to $2.01 billion.

Chief Executive Meg Whitman cryptically told analysts that “turnarounds were not linear”, which will be news to any driver who uses a roundabout or attempts a three-point turn. She insists that after three years of her rule, HP is exactly where she thought it would be.

The enterprise group and enterprise services, areas that Whitman had previously flagged as growth drivers, showed revenue declines of four percent and seven percent respectively.

On the call, Whitman said she expected a slower decline in enterprise revenue next year. Enterprise services would be the biggest “swing factor” in the company’s 2015 growth projections, she said.

The company’s personal computer division grew by four percent after a 12 percent jump in the prior quarter. Much of the growth in PCs was driven by a Microsoft decision to quit supporting older software, and Whitman said that was pretty much over now.

The high-margin printer business shrank by five percent.

Whitman is pinning her hopes on splitting the company into two next year, separating its computer and printer businesses from its faster-growing corporate hardware and services operations, and eliminating another 5,000 jobs as part of its turnaround plan.

“This separation was totally the right thing to do for this company,” Whitman said. “It is remarkable how it focuses the mind on overhead.” Well if turnarounds are not linear then you have to keep an eye on what is above you otherwise a turnaround might fall on you.

Cameron claims that ISPs are protecting terrorists

UK Prime Minister David “one is an ordinary bloke” Cameron has been claiming that ISPs are responsible for the death of Fusilier Lee Rigby.

Responding to a report by the intelligence and security committee, Cameron said that internet companies are allowing their networks to be used to plot “murder and mayhem”.

He demanded that internet companies live up to their social responsibilities to report potential terror threats and said there was no reason for such firms to be willing to cooperate with state agencies over child abuse but not over combatting terrorism, or anything else that he might happen not to like that week.

The parliamentary intelligence and security committee (ISC) concluded that the brutal murder of Rigby could have been prevented if a US internet company had passed on an online exchange in which one of the killers expressed “in the most graphic terms” his intention to carry out an Islamist jihadi attack.

However, Cameron’s blaming of the ISPs was probably more to cover up the fact that the agencies had made a cock-up in their monitoring of Rigby’s murderers, Michael Adebolajo and Michael Adebowale. He focused on a comment which said that if MI5 had had access to the exchange between one of the killers, Adebowale, and an extremist overseas, Adebowale would have become a top surveillance priority. By failing to alert the authorities, the company had, “however unintentionally,” provided a “safe haven for terrorists.”

The report did say that MI6 and MI5 had made errors, but that the murder would have happened even if those errors had not been made.

This is one of the problems – terrorists are darn hard to spot at the best of times, and these two were known as self-starter terrorists, whose connections with other terrorist groups were slight. The report says the two men appeared between them in seven different agency investigations but were for the most part regarded as low-level subjects of interest.

“Adebolajo was a high priority for MI5 during two operations: they put significant effort into investigating him and employed a broad range of intrusive techniques. None of these revealed any evidence of attack planning,” the report said.

Adebowale was never more than a low-level subject of interest and the agencies took appropriate action based on the rigorous threshold set down in law: they had not received any intelligence that Adebowale was planning an attack and, based on that evidence, more intrusive action would not have been justified.

However Cameron said there was no possible justification for US internet providers not to inform agencies of terrorist activity since they already cancelled the accounts of suspected terrorists.

This summer, the government updated its legislation to require internet companies to cooperate with the state and report potential terrorist activity, but he said the level of cooperation was not satisfactory, mostly because of a reluctance among ISPs to be involved in what would be a police state.

Given that the ISP in this case was based in the US, it is incredibly unlikely that Cameron cracking down on UK ISPs would have made the slightest difference.

However, he admitted there was legal uncertainty about the duty of internet companies based in the US to cooperate with UK agencies due to conflicting laws in the US.

“There were errors in these operations, where processes were not followed, decisions not recorded, or delays encountered. However, we do not consider that any of these errors, taken individually, were significant enough to have made a difference,” the report says.

Homeland Security deletes Einstein files

The Department of Homeland Security has promised to delete records from a controversial network monitoring system called Einstein.

The files to be deleted are at least three years old, and the reason for the deletion is not exactly altruistic.

DHS thinks the files, which include data about traffic to government websites, agency network intrusions and general vulnerabilities, are a waste of space.

The irony is that some security experts claim that the DHS would be deleting a treasure chest of historical threat data, while privacy experts claim that destroying it could eliminate evidence that the government-wide surveillance system does not work.

Either way it appears that the spooks cannot win.

According to Homeland Security’s rationale, a three-year retention period for reference purposes is sufficient and “the records have no value beyond that point”, although they can be kept longer if needed, appraisers said.

Some incident reports, which include records on catastrophic cyber events, must be kept permanently. Apparently the spooks are keen to save space on their servers. Keeping too much data costs an arm and a leg.

Johannes Ullrich, dean of research at the SANS Technology Institute, warned that older intrusion-detection records provide insight into the evolution of threats. Analysts there sometimes need even older data to answer today’s research questions.

He thinks the intrusion records should be made available to the public in some form. The Einstein data would likely be a goldmine for researchers, as it documents attacks against very specific networks in a consistent way over a long period of time, he said.

Lee Tien, senior staff attorney with the Electronic Frontier Foundation, said that getting rid of data about people’s activities is a pro-privacy, pro-security step. But “if the data relates to something they’re trying to hide, that’s bad.”

It is possible the records could reveal that the monitoring tools make mistakes when attempting to spot threats.

According to Nextgov, the public has until December 19 to request a copy of the records retention plan. Comments are due within 30 days of receipt.

IT pros don’t want to live in America

A new study of the worldwide migration of IT professionals to the US shows a sharp drop-off in its proportional share of those workers.

The study used social media site LinkedIn to track the movement of professional people and is the first to monitor global migrations of professionals to the U.S.

Co-author Emilio Zagheni, a University of Washington assistant professor of sociology and fellow of the UW eScience Institute, presented the study at the recent SocInfo conference in Barcelona, Spain.

While 27 percent of migrating professionals among the sample group chose the U.S. as a destination in 2000, in 2012 just 13 percent did.

The biggest drop was among those in the science, technology, engineering and math (STEM) fields, from 37 to 15 percent.

Instead, all the brains are going to Asian countries, which saw the highest increase in professional migrants worldwide, attracting a cumulative 26 percent in 2012, compared with just 10 percent in 2000.

Australia, Oceania, Africa and Latin America also saw an uptick in their share of the world’s professional migration flows.

The Land of the Free attracted 24 percent of graduates from the top 500 universities worldwide in 2000, but just 12 percent in 2012.

The US is still the top destination for migrations, but the study indicated that was something that should not be taken for granted.

The study suggests numerous possible reasons for the proportional migration decline, including the US’s Byzantine visa system, a greater demand for professionals in other countries, and fewer opportunities for immigrants due to the dot-com collapse of the early 2000s and the 2008 recession.