Scientists at Aarhus University have devised a computer game that they say has given insight into the way people solve problems.
The game – called Quantum Moves – has been played 400,000 times by regular people. It involves moving atoms around a screen and scoring points for the best moves.
And that means ordinary people are helping research into quantum physics, according to research director Jacob Sherson. He said that a human’s way of solving a problem is very different from a computer’s approach to similar problems.
The whole idea is to help build quantum computers by providing data. Sherson said: “The players showed us that there’s an unexploited capacity for ingenuity in the human brain. We see solutions that a computer would never have allowed, and which optimise the process.”
Initial research suggests that females are better at solving these problems than males.
“It would be very interesting to find that the feminine brain has a different – and more efficient – approach than the masculine,” said Sherson.
Building on the insights from Quantum Moves, the university has developed another game called Quantum Minds, which the team hopes will give even better insight into the way our brains work.
And you can try it out yourself, by going to this web page.
Using mobile phones, laptops and other media devices at the same time could be changing the structure of our brains – and not in a good way.
University of Sussex research reveals that people who frequently use several media devices at the same time have lower grey-matter density in one particular region of the brain compared to those who use just one device occasionally.
This supports the view that high media-multitasking activity is linked to poor attention in the face of distractions, along with emotional problems such as depression and anxiety.
Neuroscientists Kep Kee Loh and Dr Ryota Kanai point out that their study reveals a link rather than causality and that a long-term study needs to be carried out before anyone can be certain.
The researchers at the University of Sussex’s Sackler Centre for Consciousness Science used functional magnetic resonance imaging (fMRI) to look at the brain structures of 75 adults, who had all answered a questionnaire regarding their use and consumption of media devices, including mobile phones and computers, as well as television and print media.
People who used a higher number of media devices concurrently also had lower grey-matter density in the part of the brain known as the anterior cingulate cortex (ACC), the region notably responsible for cognitive and emotional control functions.
Kep Kee Loh said his study was the first to reveal links between media multitasking and brain structure.
Scientists have previously demonstrated that brain structure can be altered by prolonged exposure to novel environments and experiences. Neural pathways and synapses can change based on our behaviours, environment and emotions; these changes can occur at the cellular level (as in learning and memory) or through cortical re-mapping, whereby specific functions of a damaged brain region are re-mapped to a remaining intact region.
Kep Kee Loh said that the mechanisms behind these changes are still unclear. It is conceivable that individuals with a smaller ACC are more susceptible to multitasking situations because of a weaker ability in cognitive control or socio-emotional regulation, but it is equally plausible that higher levels of exposure to multitasking situations lead to structural changes in the ACC.
Researchers at Cornell, Stanford and Brown universities and the University of California have come up with a method of teaching robots using the cloud.
Dubbed Robo Brain, the system is a large-scale computational system that learns from publicly available Internet resources. The data is translated and stored in a robot-friendly format that robots can draw on when they need it.
Ashutosh Saxena, assistant professor of computer science at Cornell University, said that since laptops and mobile phones don’t have access to all the information we want, the robot can query Robo Brain in the cloud.
Robo Brain will process images to pick out the objects in them, and by connecting images and video with text, it will learn to recognize objects and how they are used, along with human language and behaviour.
It speeds up the development time a robot needs to work out what to do. If a robot sees a teacup, it can learn from Robo Brain not only that it is a teacup and not a coffee mug, but also that liquids can be poured into or out of it, that it can be grasped by the handle, and that it must be carried upright when it is full.
The system employs what computer scientists call “structured deep learning,” where information is stored in many levels of abstraction. An easy chair is a member of the class of chairs, and going up another level, chairs are furniture. Robo Brain knows that chairs are something you can sit on, but that a human can also sit on a stool, a bench or the lawn.
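The layered representation described above can be illustrated with a short sketch. This is not Robo Brain’s actual code – the concept names and the `affordances` helper are purely illustrative – but it shows how knowledge attached at one level of abstraction (chairs can be sat on) carries down to members of that class (an easy chair):

```python
# Illustrative sketch only: a tiny concept hierarchy where each entry
# records its parent class and the actions ("affordances") known at
# that level of abstraction.
knowledge = {
    "furniture":  {"parent": None,        "affordances": set()},
    "chair":      {"parent": "furniture", "affordances": {"sit_on"}},
    "easy_chair": {"parent": "chair",     "affordances": set()},
    "stool":      {"parent": "furniture", "affordances": {"sit_on"}},
    "lawn":       {"parent": None,        "affordances": {"sit_on"}},
}

def affordances(concept):
    """Collect affordances from a concept and all of its ancestors."""
    result = set()
    while concept is not None:
        result |= knowledge[concept]["affordances"]
        concept = knowledge[concept]["parent"]
    return result

# An easy chair inherits "sit_on" from the chair class above it.
print(affordances("easy_chair"))
```

Because “sit on” is stored once at the level of chairs (and separately on stools and the lawn), every more specific concept below that level gets it for free – the point of storing information at many levels of abstraction.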
The robot stores the information in a mathematical model, which can be represented graphically as a set of points connected by lines. The nodes could represent objects, actions or parts of an image, and each one is assigned a probability – how much you can vary it and still be correct.
This means that when the robot faces a new situation, it builds its own chain of nodes and searches the knowledge base for a chain that matches within those probability limits.
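A minimal sketch of that matching idea, with invented objects and numbers: each attribute of a stored model carries a nominal value plus a tolerance (how much it can vary and still be correct), and a new observation matches a model only if every attribute falls within tolerance:

```python
# Hypothetical example: stored models map attributes to
# (nominal_value, tolerance) pairs. The names and numbers are
# illustrative, not taken from Robo Brain.
stored_models = {
    "teacup":     {"height_cm": (8.0, 2.0),  "handle_width_cm": (2.0, 1.0)},
    "coffee_mug": {"height_cm": (12.0, 2.0), "handle_width_cm": (3.0, 1.0)},
}

def matches(observation, model):
    """True if every observed attribute is within the model's tolerance."""
    return all(
        abs(observation[attr] - nominal) <= tolerance
        for attr, (nominal, tolerance) in model.items()
    )

def identify(observation):
    """Return the names of all stored models the observation matches."""
    return [name for name, model in stored_models.items()
            if matches(observation, model)]

# A cup 7.5 cm tall with a 2.4 cm handle falls inside the teacup's
# tolerances but outside the coffee mug's.
print(identify({"height_cm": 7.5, "handle_width_cm": 2.4}))
```

This is the “vary it and still be correct” idea in miniature: an observation does not need to match a stored node exactly, only to land within the allowed variation on every attribute.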