Tag: search

Euro Parliament votes for Google breakup

The European Parliament has sort of voted to break up Google into little bits, separating the search function from its other businesses.

The resolution passed today with 384 votes in favour and 174 against – but it is more of a gesture than anything binding, because the Parliament has no power to split the company up.

What it does mean is that additional pressure is put on the European Commission to step up its now four-year-long inquiry into Google’s alleged dominance of the market – dominance that the Mountain View, California-based company denies.

Google maintains it has plenty of competition from companies such as Amazon and Expedia.

The parliamentarians are supported by a number of lobbyists and by publishers in Europe such as the Axel Springer group, which alleges Google has way too much power to influence the market.

Google had not formally replied to the vote at press time, but it has mounted a spin offensive in Europe over the last few months in an attempt to show that it is not evil but a force for good.

Brown Dog snuffles the 99 percent

A team of boffins is developing a search engine that can find the data on the world wide web which cannot be seen by search bots.

The engine, dubbed Brown Dog, searches the web for uncurated data and makes it accessible to scientists.

Kenton McHenry, who along with Jong Lee leads the Image and Spatial Data Analysis division at the National Center for Supercomputing Applications (NCSA), said that the information age has made it easy for anyone to create and share vast amounts of digital data, including unstructured collections of images, video and audio, as well as documents and spreadsheets.

But the ability to search and use the contents of digital data has become far more difficult, because the data is often trapped in outdated, difficult-to-read file formats and because metadata – the critical data about the data, such as when, how and by whom it was produced – is often nonexistent.

McHenry and his team at NCSA have been given a $10 million, five-year award from the National Science Foundation (NSF) to manage and make sense of the vast amounts of digital scientific data currently trapped in outdated file formats.

So far they have come up with a Data Access Proxy (DAP), which transforms unreadable files into readable ones by chaining together a series of computing and translation operations behind the scenes.

Similar to an internet gateway, the configuration of the Data Access Proxy would be entered into a user’s machine settings and then forgotten. Data requests over HTTP would first be examined by the proxy to determine if the native file format is readable on the client device. If not, the DAP would be called in the background to convert the file into the best possible format readable by the client machine.
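In rough Python, that decision might look something like the sketch below – an illustration only, with the format list and helper names made up for the example rather than taken from Brown Dog’s actual code:

```python
# Hypothetical sketch of the DAP idea, not Brown Dog's real API:
# pass files the client can already read straight through, and hand
# anything else to a background converter.

KNOWN_READABLE = {"image/png", "image/jpeg", "text/plain", "application/pdf"}

def convert_to_readable(payload: bytes, source_type: str) -> tuple[bytes, str]:
    """Stand-in for the background conversion pipeline, which would
    chain real translation tools (e.g. a legacy raster format to PNG)."""
    converted = payload  # a real converter would transcode here
    return converted, "image/png"

def serve_through_proxy(payload: bytes, content_type: str) -> tuple[bytes, str]:
    """Mimic the proxy: readable formats pass through untouched,
    unreadable ones are converted before reaching the client."""
    if content_type in KNOWN_READABLE:
        return payload, content_type
    return convert_to_readable(payload, content_type)

# Example: a legacy raster file the browser cannot display natively.
body, ctype = serve_through_proxy(b"...legacy bytes...", "image/x-pcx")
print(ctype)  # image/png
```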

The second tool, the Data Tilling Service (DTS), lets individuals search collections of data, possibly using an existing file to discover other, similar files.

Once the machine and browser settings are configured, a search field is appended to the browser into which the user can drop example files. Doing so triggers the DTS to search the contents of all the files on a given site for those similar to the one provided.

While browsing an online image collection, a user could drop an image of three people into the search field, and the DTS would return images in the collection that also contain three people. If the DTS encounters a file format it is unable to parse, it will use the Data Access Proxy to make the file accessible.
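Here is a hedged sketch of how such search-by-example could work in principle, assuming a generic feature-vector approach; the toy byte-histogram extractor below merely stands in for whatever real extractors (face detectors and the like) the DTS would plug in:

```python
# Hypothetical sketch of search-by-example, the core DTS idea: turn
# each file into a feature vector, then rank the collection by how
# close each file sits to the example.
import math

def extract_features(path: str) -> list[float]:
    """Toy extractor: a 256-bin byte histogram of the file. A real
    extractor might instead count the people detected in an image."""
    counts = [0.0] * 256
    with open(path, "rb") as fh:
        for byte in fh.read():
            counts[byte] += 1
    return counts

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def search_by_example(example: str, collection: list[str], top_k: int = 10):
    """Rank every file in the collection by similarity to the example."""
    query = extract_features(example)
    scored = [(cosine_similarity(query, extract_features(f)), f)
              for f in collection]
    return sorted(scored, reverse=True)[:top_k]
```

Swapping the extractor is the whole trick: plug in a face counter and the “three people” query above falls out of the same ranking code.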

The Data Tilling Service will also perform general indexing of the data and extract and append metadata to files to give users a sense of the type of data they are encountering.
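As an illustration only, that indexing step might look like the sketch below – every field name here is invented for the example:

```python
# Hypothetical sketch of the indexing step: gather what is cheaply
# knowable about a file and append it as metadata, so users get a
# sense of what kind of data they are looking at.
import json, mimetypes, os

def index_file(path: str) -> dict:
    return {
        "size_bytes": os.path.getsize(path),
        "guessed_type": mimetypes.guess_type(path)[0],
        "modified": os.path.getmtime(path),
        # real extractors would add richer fields here, e.g. detected
        # objects, audio duration or spreadsheet column names
    }

catalogue = {__file__: index_file(__file__)}
print(json.dumps(catalogue, indent=2))
```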

McHenry said the two services are like the Domain Name System (DNS) in that they translate inaccessible, uncurated data into information.

According to IDC, a research firm, up to 90 percent of big data is “dark,” meaning the contents of such files cannot be easily accessed.

Brown Dog is not only useful for searching the Deep Web; it could one day help individuals manage their ever-growing collections of photos, videos and other unstructured, uncurated data on the web.

Google serves up tat

Search engine Google has come up with a novel way of getting people to update their browsers to something a little more recent.

Users of browsers old enough to remember shoulder pads and Duran Duran are suddenly being served up a Google homepage which is just as old.

An Opera 12.17 user complained on a Google help desk thread that Google’s homepage had reverted to the old version for him. If he searched for something, the results appeared with the current Google look, but the homepage itself showed the old look with the black bar across the top. The problem seemed to affect only the Google homepage and image search; he still got the latest news.

Opera is currently on version 24; version 12 was brought out on June 7, 2011.

A Google spokesman said there was no fault with Google; in fact, it was proof that the “encouragement” to upgrade was working perfectly. He politely suggested that if users wanted a modern Google, they should run a modern browser to support it.

“We’re continually making improvements to Search, so we can only provide limited support for some outdated browsers. We encourage everyone to make the free upgrade to modern browsers — they’re more secure and provide a better web experience overall,” the spokesman wrote on the thread.

Strangely, the help desk thread continued with people using old browsers insisting that there must be a fault with Google’s programming. After all, there is nothing wrong with running the same version of Safari that was blessed by Steve Jobs while he was still young and healthy, is there?