Tag: ai

Cisco pushes into the compute space with intuitive switches

Networking giant Cisco is having a crack at pushing itself into the compute space with “intuitive” boxes which can analyse and control network traffic.

The new Catalyst 9000 series switches are shipping with a management platform called “DNA Centre”, and the aim is to make the switches better able to apply security policies and controls for devices on a network.

The control centre can manage access policies and privileges for devices and apply specific security controls via the Talos security tools.

According to Cisco, the tools can analyse traffic and recognise things like malware infections from the way they move packets over the network.
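
Cisco has not published the detection logic, but behavioural traffic analysis of this kind often starts from simple statistical baselining. As a toy illustration (the hosts, counts and threshold below are all invented), a machine contacting far more destinations than its peers, worm-style, stands out immediately:

```python
from statistics import median

def flag_suspect_hosts(dest_counts, factor=10):
    """Flag hosts that contact vastly more destinations than the
    population median -- a crude stand-in for the behavioural
    traffic profiling described above.

    dest_counts: dict mapping host name -> number of distinct
    destinations it contacted in some time window.
    """
    baseline = median(dest_counts.values())
    return sorted(h for h, c in dest_counts.items() if c > factor * baseline)

# A scanning host stands out against its well-behaved peers:
counts = {"pc-1": 10, "pc-2": 12, "pc-3": 11, "pc-4": 9, "pc-5": 500}
print(flag_suspect_hosts(counts))  # ['pc-5']
```

Real systems layer far more signal on top (ports, timing, payload entropy), but the principle of flagging deviation from a learned baseline is the same.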

Meanwhile, Cisco claims machine learning components in DNA Centre will let the switches adapt their policies as they learn to recognise devices and users.

All this means Cisco partners can push its gear deeper into IT management, letting the network hardware handle things like managing cloud apps and maintaining access policies for mobile devices and guest connections.

CEO Chuck Robbins said the gear can translate customers’ business intent into the network.

The switches are based on custom ASICs from Cisco that will be customisable and reprogrammable to use with private clouds or specific applications and stacks. The 9000 series comes in three models for enterprises: the 9300, 9400, and 9500. The smaller 9300 and 9500 switches are shipping this month, while the larger 9400 switches will be making their way to customers in July.

The 9000 series uses subscription services. Cisco says that from now on, customers will have to take out a software subscription, choosing either pre-bundled Cisco ONE software tools or packages built around the DNA Centre software.

This is all part of Cisco’s long-term goal to turn from a network hardware outfit into an IT management vendor.

Dell claims client computing is still its core

At the inaugural Dell EMC World event in Las Vegas this week, Dell pledged that client computing “remains core to our business” and said that the PC will be back with augmented reality, subscription payment models, and wireless charging.

Dell Technologies chief marketing officer Jeremy Burton outlined how the PC will evolve in the coming years, as well as some of the technology the vendor is ready to bring to market now.

He showed off the Latitude 7285, which Dell claims is the world’s first wireless-charging two-in-one laptop, and the Canvas, a 27-inch monitor pitched at the design industry. The latter comes with a stylus for drawing and a ‘totem’ dial-style tool for cursor control. He said:

“The PC is smack bang in the centre of what is going on in the world. Augmented reality and virtual reality are technologies that are now at the tipping point. This will be a $45bn [annual] hardware market, and a $35bn software market by 2025.”

Dell is also looking to forge partnerships with other AR and VR players, and is launching a partner programme through which to formalise ties with industry specialists.

“We want to work with a broad ecosystem to make this a reality,” said Burton.

Michael Dell showed off VMware’s AirWatch enterprise mobility management software, which has been embedded into some of the vendor’s client computing devices. He also revealed that Dell’s PC-as-a-service offering will be available across the world during 2017.

“We are announcing the global rollout of our PC-as-a-service, which combines the latest Dell PCs with financing services and support for a single predictable price per month,” he said.
Dell added: “To make it extremely clear: the PC remains core to our business and strategy – it is how work gets done.”

Robots are after your job

Smart machines and robots may replace skilled professionals in medicine, law and IT by 2022, warned beancounters at Gartner, who are presumably seeing R2D2 cleaning out their desks as we speak.

Analyst group Gartner has predicted that by 2022, smart machines and robots could replace highly trained professionals in tasks within medicine, the law and IT. CIOs need to prepare now to ensure that their organisations are ready for the impact that AI will have over the next five or ten years.

Stephen Prentice, vice president and Gartner fellow, suggested that the economics of AI and machine learning will lead to many tasks performed by highly paid professionals today becoming ‘low-cost utilities’.

All this will force organisations to adjust their business strategies. Many competitive, high-margin industries will become more like utilities as AI turns complex work into a metered service “that the enterprise pays for, like electricity,” he said.

Prentice cited the example of lawyers, who must spend a lot of time and money on education and training.

Any organisation that hires lawyers must therefore pay salary and benefits sufficient not only to compensate each lawyer for this training, but also a sum commensurate with their knowledge, expertise and experience.

A smart machine that could act as a substitute for a lawyer would also require a long, expensive period of training – or ‘machine learning’ – but once the first smart machine is ready, the enterprise could add as many similar machines as it wants for little extra cost.

Employment numbers would be hit in some industries, with some routine functions at risk of replacement, such as systems administration, help desk, project management and application-support roles.

Others would see the technology as a benefit as AI takes over routine and repetitive tasks, leaving more time for the existing workforce to improve in other areas of the business. The mix of AI and human skillsets will complement each other in these roles.

Prentice said that CIOs need to develop a plan that can run alongside the company’s current digital transformation strategy. He warned that too much AI-driven automation could leave the enterprise less flexible.

“The CIO should commission the enterprise architecture team to identify which IT roles will become utilities and create a timeline for when these changes become possible. Work with HR to ensure that the organisation has a plan to mitigate any disruptions that AI will cause, such as offering training and upskilling to help operational staff to move into more-creative positions,” he said.

Salesforce wants to bring AI to the channel

While most channel partners might not be too interested in AI trends, Salesforce has a cunning plan to use the concept to spice up its partner relationship management software.

Salesforce’s PRM software, which is delivered as a service, uses an Einstein AI engine that the software outfit thinks could change the way the channel is run.

At the moment, PRM applications are loaded with data about customer transactions, but sorting through all that data to make the optimal decision is laborious. Einstein is supposed to instantly identify what combination of products and services will, for example, yield the most profit.
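
Salesforce has not said how Einstein actually does this ranking, but the underlying idea can be illustrated with a toy sketch: group historical deals by product bundle and rank bundles by average margin. All the names and figures below are invented:

```python
from collections import defaultdict

def best_bundle(deals):
    """deals: list of (product_bundle, margin) records.
    Return the bundle with the highest average historical margin --
    a toy stand-in for the kind of ranking described above."""
    totals = defaultdict(lambda: [0.0, 0])  # bundle -> [margin_sum, count]
    for bundle, margin in deals:
        key = tuple(sorted(bundle))  # treat bundles as order-independent
        totals[key][0] += margin
        totals[key][1] += 1
    return max(totals, key=lambda k: totals[k][0] / totals[k][1])

deals = [
    (("server", "support"), 0.30),
    (("server", "support"), 0.34),
    (("server",), 0.18),
    (("storage", "support"), 0.25),
]
print(best_bundle(deals))  # ('server', 'support')
```

A production system would obviously weigh far more than raw margin, but the win the article describes is exactly this: turning a laborious manual trawl through transaction data into an instant query.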

Channel management teams can identify what partners make the best use of marketing development funds (MDFs) or have higher customer satisfaction ratings with a specific product or technology.

This means that the vendor can better identify when a customer is most likely to upgrade an existing product or service.

While this will not mean the end of the days when a nice lunch would improve a vendor’s standing, it will mean that sales teams come to the table with some good facts about what the client wants.

The technology is still limited, but AI could start to cut both ways in the channel. Instead of just a PRM application, there will inevitably be a vendor relationship management (VRM) application infused with AI capabilities. Solution providers would then be able to instantly compare which vendor in a category, such as servers and storage, is providing them with the best deal at a given time.

Nvidia installs $10,000 computer in car

Chipmaker Nvidia introduced a $10,000 computer that it says will allow cars to learn the right and wrong reactions to different situations.

Basically they think it can work out what to do from experience rather than a rigid set of pre-defined situations. If this is applied to the roads of Italy this will mean that your car will never leave the garage and will simply quake with fear.

Jen-Hsun Huang, CEO of Nvidia, claimed that real driving is not about detection, but is a learned behaviour.

Talking at the company’s GTC 2015 conference in San Jose, Huang said his Drive PX computer was based on two of the company’s Tegra X1 processors and will crunch video from up to 12 cameras.

Over time the computer should learn, for example, to slow down for dogs and water buffalo crossing the road but not jam on the brakes for a coke can.

Today’s commercial autonomous systems are largely related to detecting when cars stray from their lanes or preventing collisions. Several fully self-driving cars have been developed as part of research projects, but they rely on highly detailed maps and are generally restricted to operating in controlled environments.

A DARPA project already proved the learning technology on a lower level, said Huang. A small autonomous robot was fed with 225,000 images of a backyard. When it started out, the robot ran straight into an obstacle, but after analyzing the images, it managed to successfully scoot around the yard without hitting objects, figuring out for itself how to get around.

While it is not really designed for the great unwashed, Nvidia thinks its Drive PX will find a home in the R&D departments of car makers.

One proponent of autonomous driving, Tesla Motors CEO Elon Musk, said the most difficult part of realizing the technology was at speeds between 10 and 50 miles per hour.

“It’s fairly easy to deal with things that are sub five or 10 miles per hour; you just make sure it hits nothing,” said Musk, who was speaking alongside Huang at the event.

“From 10 to 50 miles per hour in complex suburban environments, that’s when you can get a lot of unexpected things happening. Once you’re above 50 miles per hour, it gets easier again.”

An additional element of Drive PX will ensure that actions learned in one car are shared with others, which should mean that cars will start to recognise bad drivers and get out of their way.
Nvidia didn’t say which auto makers would be using the platform, which will be available from May, but did say that it’s already receiving enquiries from car companies about the technology.


Facebook stops you posting pics drunk

Social notworking site Facebook is creating an artificial intelligence tool that will warn people when they are about to do something they might regret.

Apparently the tool can detect if you are about to upload an embarrassing photo of yourself or other people.

Yann LeCun, who heads the Facebook Artificial Intelligence Research lab, and his team are laying the basic groundwork for the tool.

He sees the AI tool as a Facebook digital assistant that recognises when you are uploading an embarrassing photo from a late-night party and tells you that this is probably not a good idea.

LeCun said this assistant would tap you on the shoulder and say: “Uh, this is being posted publicly. Are you sure you want your boss and your mother to see this?”

The tool would rely on image recognition technology that can distinguish between your drunken self and your sober self. Which is interesting, because a few of us can’t remember our sober selves and haven’t met them in a long time.

The larger aim, LeCun said, is to create a tool that can self-censor stuff posted to Facebook. The team is also looking at AI systems that can understand Facebook data in more complex ways.

LeCun also envisions a Facebook that instantly notifies you when someone you do not know posts your photo to the social network without your approval. Or posts adverts for sunglasses.

The only problem with this tool is that if you are off your face, you will think posting a picture is a good idea even if your AI assistant tells you it is a bad one.


Scientists teach computer legerdemain

Artificial intelligence reached its zenith today after researchers at Queen Mary University of London taught a computer to create magic tricks.

The computer’s even been clever enough to create a card trick called Phoney which is available in the Google Play Store.

According to the scientists, they wrote a computer program to show how magic jigsaw puzzles and a mind-reading card trick work, along with a database showing how humans understand magic tricks.

But, the researchers point out, the magic trick created by the computer doesn’t use prestidigitation but rather mathematical techniques, which are, apparently, a core part of many conjurors’ stage acts.
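
The article doesn’t say which mathematical techniques the QMUL program leans on, but the classic 21-card trick shows the general flavour of a self-working, no-sleight-of-hand trick: deal 21 cards into three columns, ask the spectator which column holds their card, restack with that column in the middle, and after three rounds the chosen card always sits 11th from the top. A quick sketch (not the QMUL program itself):

```python
def deal(cards):
    # Deal 21 cards one at a time across three columns of seven.
    return [cards[0::3], cards[1::3], cards[2::3]]

def gather(cols, chosen):
    # Restack with the chosen column sandwiched in the middle.
    others = [c for i, c in enumerate(cols) if i != chosen]
    return others[0] + cols[chosen] + others[1]

def find_card(cards, locate):
    """locate(cols) -> index of the column holding the secret card.
    After three deal/gather rounds, index 10 (the 11th card) always
    holds the spectator's card, for any starting position."""
    for _ in range(3):
        cols = deal(cards)
        cards = gather(cols, locate(cols))
    return cards[10]

deck = list(range(21))
secret = 13
locate = lambda cols: next(i for i, c in enumerate(cols) if secret in c)
print(find_card(deck, locate))  # 13
```

The arithmetic behind it: a card at position p moves to position 7 + p // 3 each round, and three applications of that map send every one of the 21 starting positions to position 10.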

Howard Williams, who co-created the project, said that AI can use psychological and mathematical principles to “create lots of different versions and keep audiences guessing”. Its stage presence might need a little tweaking.

Professor Peter McOwan, a member of the team, said: “Using AI to create magic tricks is a great way to demonstrate the possibilities of computer intelligence and it also forms a part of our research into the psychology of being a spectator.”

At press time we searched the Google Play Store for Phoney but only found Phoney Girlfriend from Baller Industries.  We suspect this isn’t the app QMUL means…

IBM bets on Watson

Big Blue is hoping that its AI-based supercomputer Watson can come up with a few ideas which will help turn it around.

IBM is taking a kicking from cheap cloud computing services and the outfit is facing an uncertain future.

Apparently, IBM’s research division is building on the research effort that led to Watson, the computer that won the game show Jeopardy! in 2011. The hope is that this effort will lead to software and hardware that can answer complex questions by looking through vast amounts of information containing subtle and disparate clues.

John Kelly, director of IBM Research, told MIT Technology Review that IBM was betting billions of dollars on the effort, and that a third of the division is now working on artificial intelligence techniques related to Watson.

Earlier this year the division was reorganised to ramp up efforts related to cognitive computing. The push began with the development of the original Watson, but has expanded to include other areas of software and hardware research aimed at helping machines provide useful insights from huge quantities of often-messy data.

So far, the research has created new recipes by analysing thousands of ingredients and popular meals and, less interestingly, electronic components known as neurosynaptic chips, which have features modelled on the workings of biological brains and are more efficient at processing sensory information.

The hope is that the technology will be able to answer complicated questions in different industries, including health, financial markets, and oil discovery; and that it will help IBM build its new computer-driven consulting business.

There is a growing belief that machine-learning techniques may provide ways to use big data.  Already Google, Facebook, and Amazon have their own methods for hunting through vast quantities of data for useful insights.

So far, though, Watson has proved a bit elementary. Some companies and researchers testing Watson systems have reported difficulties in adapting the technology to work with their data sets. However, that has not stopped IBM CEO Virginia Rometty, who said in October last year that she expects Watson to bring in $10 billion in annual revenue within 10 years, even though that figure then stood at around $100 million.

IBM is aggressively commercialising the technology. Last week the company announced it had teamed up with Twitter and the Chinese social network Tencent to offer a service that will try to find useful insights from the messages sent daily through these services, as we reported here. A company that sells phones might, for example, learn about a possible problem with one of its products from comments made by restaurant patrons.


Google hires Oxford boffins to provide AI

Google is finding itself a little short on intelligence and has been seen snuffling around near the Oxford headquarters of TechEye.

When cornered near one of the wheelie bins at the back of the Kite public house, a Google staffer explained that the search engine outfit was expanding its artificial intelligence initiative. Apparently, it is hiring more than half a dozen leading academics and experts in the field and announcing a partnership with Oxford University to “accelerate” its efforts.

Apparently, Google will make a “substantial contribution” to establish a research partnership with Oxford’s computer science and engineering departments, and Oxford will return the favour by holding one of its famous dinners.

Google did not provide any financial details about the partnership, saying only in a post on its blog that it will include a program of student internships and a series of joint lectures and workshops “to share knowledge and expertise.”

Google is building up its artificial intelligence capabilities as it strives to maintain its dominance in the Internet search market and to develop new products such as robotics and self-driving cars. In January Google acquired artificial intelligence company Deep Mind for $400 million, according to media reports.

The Oxford boffins will be joining Google’s Deep Mind team, including three artificial intelligence experts whose work has focused on improving computer visual recognition systems. Among that team is Oxford Professor Andrew Zisserman, a three-time winner of the Marr Prize for computer vision.

The four founders of Dark Blue Labs will also be joining Google, where they will be leading efforts to help machines “better understand what users are saying to them.”

Google said that three of the professors will hold joint appointments at Oxford, continuing to work part time at the university.