Tag: ai

Salesforce wants to bring AI to the channel

While most channel partners might not be too interested in AI trends, Salesforce has a cunning plan to use the concept to spice up its partner relationship management software.

Salesforce’s PRM software, which is delivered as a service, uses an Einstein AI engine that the software outfit thinks could change the way the channel is run.

At the moment, PRM applications are loaded with data about customer transactions, but sorting through all that data to make the best decision is laborious. Einstein is supposed to instantly identify, for example, which combination of products and services will yield the most profit for a partner.
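
Salesforce has not said how Einstein actually weighs the options, but the decision it is automating is easy to sketch. Here is a minimal, hypothetical Python example (the transaction data and bundle names are invented, not Salesforce's) that ranks product/service bundles by average historical profit:

```python
from statistics import mean

# Invented historical data: (bundle of products/services, profit on the deal)
transactions = [
    (("server", "support"), 1200),
    (("server", "support"), 1450),
    (("storage",), 300),
    (("server", "storage", "support"), 2100),
]

def best_bundle(history):
    """Group past deals by bundle and return the one with the
    highest average profit."""
    by_bundle = {}
    for bundle, profit in history:
        by_bundle.setdefault(bundle, []).append(profit)
    return max(by_bundle.items(), key=lambda kv: mean(kv[1]))

bundle, profits = best_bundle(transactions)
print(bundle, mean(profits))  # ('server', 'storage', 'support') 2100
```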

Channel management teams can identify which partners make the best use of marketing development funds (MDFs) or have higher customer satisfaction ratings with a specific product or technology.

This means that the vendor can better identify when a customer is most likely to upgrade an existing product or service.

While this will not mean the end of the days when a nice lunch would improve vendor standing, it will mean that sales teams come to the table with some solid facts about what the client wants.

The technology is still limited, but AI could eventually cut both ways in the channel. Instead of just a PRM application, there will inevitably be a vendor relationship management (VRM) application infused with AI capabilities. Solution providers would then be able to instantly compare which vendor in a category, such as servers and storage, is giving them the best deal at a given time.
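
No such VRM application exists yet, but the underlying query is trivial once the data is in one place. A sketch with invented vendors and prices:

```python
# Hypothetical VRM-style comparison: given current quotes per vendor for a
# category, pick the vendor offering the best (lowest) price right now.
quotes = {
    "servers": {"VendorA": 9200, "VendorB": 8700, "VendorC": 9900},
    "storage": {"VendorA": 4100, "VendorB": 4600},
}

def best_deal(category):
    vendor_prices = quotes[category]
    vendor = min(vendor_prices, key=vendor_prices.get)
    return vendor, vendor_prices[vendor]

print(best_deal("servers"))  # ('VendorB', 8700)
```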

Nvidia installs $10,000 computer in car

Chipmaker Nvidia introduced a $10,000 computer that it says will allow cars to learn the right and wrong reactions to different situations.

Basically, Nvidia thinks the machine can work out what to do from experience rather than from a rigid set of pre-defined situations. If this is applied to the roads of Italy, it will mean that your car never leaves the garage and simply quakes with fear.

Jen-Hsun Huang, CEO of Nvidia, claimed that real driving is not about detection but is a skill of learned behaviour.

Speaking at the company’s GTC 2015 conference in San Jose, Huang said his Drive PX computer was based on two of the company’s Tegra X1 processors and would crunch video from up to 12 cameras.

Over time the computer should learn, for example, to slow down for dogs and water buffalo crossing the road but not jam on the brakes for a coke can.
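
Nvidia has not published how Drive PX makes that call; as an illustration of the general approach, here is a minimal PyTorch sketch of a network that maps a camera frame to a brake/ignore decision. The architecture, class labels and sizes are all assumptions, and a real system would need training on labelled road footage:

```python
import torch
import torch.nn as nn

class ObstacleNet(nn.Module):
    """Toy convolutional classifier: camera frame in, brake/ignore out."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, 2)  # 0 = ignore (coke can), 1 = brake (dog, buffalo)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

net = ObstacleNet()
frame = torch.rand(1, 3, 224, 224)   # one RGB camera frame
decision = net(frame).argmax(dim=1)  # untrained, so the answer is arbitrary
print("brake" if decision.item() == 1 else "ignore")
```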

Today’s commercial autonomous systems are largely related to detecting when cars stray from their lanes or preventing collisions. Several fully self-driving cars have been developed as part of research projects, but they rely on highly detailed maps and are generally restricted to operating in controlled environments.

A DARPA project already proved the learning technology on a lower level, said Huang. A small autonomous robot was fed with 225,000 images of a backyard. When it started out, the robot ran straight into an obstacle, but after analyzing the images, it managed to successfully scoot around the yard without hitting objects, figuring out for itself how to get around.

While it is not really designed for the great unwashed, Nvidia thinks its Drive PX will find a home in the R&D departments of car makers.

One proponent of autonomous driving, Tesla Motors CEO Elon Musk, said the most difficult part of realizing the technology was at speeds between 10 and 50 miles per hour.

“It’s fairly easy to deal with things that are sub five or 10 miles per hour; you just make sure it hits nothing,” said Musk, who was speaking alongside Huang at the event.

“From 10 to 50 miles per hour in complex suburban environments, that’s when you can get a lot of unexpected things happening. Once you’re above 50 miles per hour, it gets easier again.”

An additional element of Drive PX will ensure that actions learned in one car are shared with others, which should mean that cars will start to recognise bad drivers and get out of their way.

Nvidia didn’t say which auto makers would be using the platform, which will be available from May, but did say that it’s already receiving enquiries from car companies about the technology.
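
Nvidia didn’t describe the sharing mechanism either. One common pattern, offered here purely as an assumption, is averaging model parameters across the fleet, federated-learning style:

```python
def average_weights(fleet_weights):
    """fleet_weights: list of per-car models, each a dict mapping
    layer name -> list of float parameters. Returns the element-wise
    mean across the fleet (an assumed mechanism, not Nvidia's)."""
    n = len(fleet_weights)
    return {layer: [sum(ws) / n for ws in zip(*(w[layer] for w in fleet_weights))]
            for layer in fleet_weights[0]}

car_a = {"conv1": [0.5, 1.0], "fc": [1.0]}
car_b = {"conv1": [1.5, 2.0], "fc": [3.0]}
print(average_weights([car_a, car_b]))  # {'conv1': [1.0, 1.5], 'fc': [2.0]}
```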


Facebook stops you posting pics drunk

Social notworking site Facebook is creating an artificial intelligence tool that will warn people when they are about to do something they might regret.

Apparently the tool can detect if you are about to upload an embarrassing photo of yourself or other people.

Yann LeCun, who heads the Facebook Artificial Intelligence Research lab, and his team are laying the basic groundwork for the tool.

He sees the AI tool as a Facebook digital assistant that recognises when you are uploading an embarrassing photo from a late-night party and tells you that this is probably not a good idea.

LeCun said this assistant would tap you on the shoulder and say: “Uh, this is being posted publicly. Are you sure you want your boss and your mother to see this?”

The tool would rely on image recognition technology that can distinguish between your drunken self and your sober self. Which is interesting, because a few users can’t remember their sober selves, not having met them for a long time.
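
Facebook has given no implementation details, but the flow LeCun describes boils down to a classifier score gating the upload. A hypothetical sketch, where score_embarrassment() stands in for a real trained image-recognition model:

```python
def score_embarrassment(photo_bytes: bytes) -> float:
    """Placeholder for a trained image-recognition model: returns the
    probability in [0, 1] that the photo is regrettable."""
    return 0.9  # assume the late-night party shot scores high

def post_photo(photo_bytes: bytes, threshold: float = 0.8) -> bool:
    """Ask for confirmation before posting anything the model flags."""
    if score_embarrassment(photo_bytes) >= threshold:
        answer = input("This is being posted publicly. Sure you want "
                       "your boss and your mother to see it? [y/N] ")
        if answer.strip().lower() != "y":
            return False  # upload cancelled
    # ...hand the photo to the real upload pipeline here...
    return True
```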

The larger aim, LeCun said, is to create a tool that helps people self-censor the stuff they post to Facebook. The lab is also looking at AI systems that can understand Facebook data in more complex ways.

LeCun also envisions a Facebook that instantly notifies you when someone you do not know posts a photo of you to the social network without your approval. Or posts adverts for sunglasses.

The only problem with this tool is that if you are off your face, you will think posting the picture is a good idea even if your AI assistant tells you it is a bad one.


Scientists teach computer legerdemain

Artificial intelligence reached its zenith today after researchers at Queen Mary University of London taught a computer to create magic tricks.

The computer’s even been clever enough to create a card trick called Phoney, which is available in the Google Play Store.

According to the scientists, they wrote a computer program showing how magic jigsaw puzzles and a mind-reading card trick work, along with a database of how humans understand magic tricks.

But, the researchers point out, the magic trick created by the computer doesn’t use prestidigitation but rather mathematical techniques, which are, apparently, a core part of many conjurors’ stage acts.
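
QMUL hasn’t spelled out how Phoney works, but the sort of self-working mathematical trick the researchers mean is easy to demonstrate. Here is the classic 21-card trick as a sketch: deal three columns, ask which column holds the spectator’s card, stack that column between the other two, repeat three times, and the card always ends up 11th from the top:

```python
import random

def deal_columns(cards):
    """Deal the deck row by row into three columns."""
    return [cards[0::3], cards[1::3], cards[2::3]]

def gather(columns, chosen):
    """Stack the piles with the chosen column in the middle."""
    others = [c for i, c in enumerate(columns) if i != chosen]
    return others[0] + columns[chosen] + others[1]

cards = list(range(21))
secret = random.choice(cards)          # the spectator's card
for _ in range(3):
    cols = deal_columns(cards)
    chosen = next(i for i, col in enumerate(cols) if secret in col)
    cards = gather(cols, chosen)

assert cards[10] == secret             # always 11th from the top
print("Your card is", secret)
```

Each round narrows the card’s possible position: after the first gather it sits in the middle seven, after the second in the middle three, and after the third it must be the middle card. No sleight of hand required.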

Howard Williams, who co-created the project, said that AI can use psychological and mathematical principles to “create lots of different versions and keep audiences guessing”.  Its stage presence might need a little tweaking.

Professor Peter McOwan, a member of the team, said: “Using AI to create magic tricks is a great way to demonstrate the possibilities of computer intelligence and it also forms a part of our research into the psychology of being a spectator.”

At press time we searched the Google Play Store for Phoney but only found Phoney Girlfriend from Baller Industries.  We suspect this isn’t the app QMUL means…

IBM bets on Watson

Big Blue is hoping that its AI-based supercomputer Watson can come up with a few ideas which will help turn the company around.

IBM is taking a kicking from cheap cloud computing services and the outfit is facing an uncertain future.

Apparently, IBM’s research division is building on the research effort that led to Watson, the computer that won the game show Jeopardy! in 2011. The hope is that this effort will lead to software and hardware that can answer complex questions by looking through vast amounts of information containing subtle and disparate clues.

John Kelly, director of IBM Research, told MIT Technology Review that IBM is betting billions of dollars, and that a third of the division is now working on artificial intelligence techniques related to Watson.

Earlier this year the division was reorganised to ramp up efforts related to cognitive computing. The push began with the development of the original Watson, but has expanded to include other areas of software and hardware research aimed at helping machines provide useful insights from huge quantities of often-messy data.

So far, the research has produced new recipes, created by analysing thousands of ingredients and popular meals, and, less appetisingly, electronic components known as neurosynaptic chips, which have features modelled on the workings of biological brains and are more efficient at processing sensory information.

The hope is that the technology will be able to answer complicated questions in different industries, including health, financial markets, and oil discovery; and that it will help IBM build its new computer-driven consulting business.

There is a growing belief that machine-learning techniques may provide ways to make sense of big data. Already Google, Facebook, and Amazon have their own methods for hunting through vast quantities of data for useful insights.

So far, though, Watson has proved a bit elementary. Some companies and researchers testing Watson systems have reported difficulties in adapting the technology to work with their data sets. However, that has not stopped IBM’s CEO, Virginia Rometty, who said in October last year that she expects Watson to bring in $10 billion in annual revenue within 10 years, even though that figure then stood at around $100 million.

IBM is aggressively commercialising the technology. Last week the company announced it had teamed up with Twitter and the Chinese social network Tencent to offer a service that will try to find useful insights from the messages sent daily through these services, as we reported here. A company that sells phones might, for example, learn about a possible problem with one of its products from comments made by irate customers.
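
The internals of the IBM/Twitter service aren’t public, but the general shape of the job (filter a message stream by product mention, flag complaint language, count the offenders) looks something like this sketch with invented data:

```python
from collections import Counter

# Illustrative complaint vocabulary; a real system would use a trained model.
COMPLAINTS = {"broken", "crash", "overheats", "refund", "terrible"}

def product_problems(tweets, product):
    """Flag messages that mention the product and use complaint language,
    then count which complaint words come up most."""
    flagged = [t for t in tweets
               if product in t.lower()
               and COMPLAINTS & set(t.lower().split())]
    return Counter(w for t in flagged for w in t.lower().split()
                   if w in COMPLAINTS)

tweets = [
    "my acme phone overheats every time i charge it",
    "love the new acme phone camera",
    "acme phone screen broken after one week, want a refund",
]
print(product_problems(tweets, "acme phone"))
# Counter({'overheats': 1, 'broken': 1, 'refund': 1})
```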


Google hires Oxford boffins to provide AI

Google is finding itself a little short on intelligence and has been seen snuffling around near the Oxford headquarters of TechEye.

When cornered near one of the wheelie bins at the back of the Kite public house, a Google staffer explained that the search engine was expanding its artificial intelligence initiative. Apparently, it is hiring more than half a dozen leading academics and experts in the field and announcing a partnership with Oxford University to “accelerate” its efforts.

Apparently, Google will make a “substantial contribution” to establish a research partnership with Oxford’s computer science and engineering departments, and Oxford will return the favour by holding one of its famous dinners.

Google did not provide any financial details about the partnership, saying only in a post on its blog that it will include a program of student internships and a series of joint lectures and workshops “to share knowledge and expertise.”

Google is building up its artificial intelligence capabilities as it strives to maintain its dominance in the Internet search market and to develop new products such as robotics and self-driving cars. In January, Google acquired artificial intelligence company DeepMind for $400 million, according to media reports.

The Oxford boffins will be joining Google’s DeepMind team, including three artificial intelligence experts whose work has focused on improving computer visual recognition systems. Among that team is Oxford Professor Andrew Zisserman, a three-time winner of the Marr Prize for computer vision.

The four founders of Dark Blue Labs will also be joining Google, where they will be leading efforts to help machines “better understand what users are saying to them.”

Google said that three of the professors will hold joint appointments at Oxford, continuing to work part time at the university.