
Google researchers use quantum computing to help improve image classification


In a new tutorial, Google researchers demonstrate how quantum computing techniques can be used to classify 28-pixel-by-28-pixel images illuminated by a single photon. By transforming the quantum state of that photon, they show they’re able to achieve “at least” 41.27% accuracy on the popular MNIST corpus of handwritten digits, compared with 21.27% for the best classical approach.

The work, which the researchers say is intended to show how textbook quantum mechanics can shed new light on AI problems, considers the maximum classification accuracy achievable if an algorithm must make a decision after spotting the first “quantum” of light (i.e., a photon) passing through an LCD screen that displays an image from the data set. On MNIST, the best classical computing can do is detect which of the image’s pixels the photon lands on and guess the digit from the light intensity distribution, obtained by rescaling the brightness of every image to a unit sum.
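For illustration, here is a minimal NumPy sketch of that classical baseline (not the researchers’ code), assuming `images` is an (N, 28, 28) array of brightnesses and `labels` an integer array of digit classes: each rescaled image is treated as the distribution of where the photon lands, and each pixel is assigned the digit class with the most probability mass there.

```python
import numpy as np

def classical_single_photon_accuracy(images, labels, num_classes=10):
    """Best classical accuracy when only one photon's landing pixel is seen.

    Each image is rescaled to a unit brightness sum and treated as the
    probability distribution over which pixel the single photon hits.
    The optimal classical strategy is a lookup table: for every pixel,
    guess the digit class with the most total probability mass there.
    """
    # Flatten the 28x28 images and normalize each to a unit sum.
    probs = images.reshape(len(images), -1).astype(np.float64)
    probs /= probs.sum(axis=1, keepdims=True)

    # mass[c, j] = total probability that a photon from a class-c image
    # lands on pixel j (summed over the dataset, so class priors are kept).
    mass = np.zeros((num_classes, probs.shape[1]))
    for c in range(num_classes):
        mass[c] = probs[labels == c].sum(axis=0)

    # Bayes-optimal guess for a photon detected at pixel j.
    best_guess = mass.argmax(axis=0)

    # Expected accuracy: the probability mass each image puts on pixels
    # whose lookup answer matches its true label, averaged over images.
    correct = probs * (best_guess[None, :] == labels[:, None])
    return correct.sum(axis=1).mean()
```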

The researchers’ quantum mechanical approach employs beam splitters, phase shifters, and other optical elements to create a hologram-like interference pattern. The region of the interference pattern in which the photon lands can then inform the image classification, illustrating that it’s unnecessary to illuminate a scene with many photons simultaneously in order to produce interference.
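As a rough toy sketch of that idea (the actual optical setup and its trained parameters are described in the paper; the 784-mode unitary and the round-robin mode-to-class grouping below are placeholders), one can model the photon’s state as amplitudes over the pixel modes, mix them with a unitary matrix standing in for the beam splitters and phase shifters, and read the class off from where the photon is detected.

```python
import numpy as np

def random_unitary(n, rng):
    """Haar-random n x n unitary, a stand-in for a trained interferometer."""
    z = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

def quantum_single_photon_classify(image, unitary, rng):
    """Toy single-photon interference classifier.

    The photon's state is a vector of amplitudes over the 784 pixel modes
    (square roots of the unit-sum brightnesses). The unitary mixes the
    modes, so the detection probabilities |psi|^2 form an interference
    pattern; the class is read off from where the photon is detected.
    """
    # Amplitudes: square roots of the normalized brightness distribution.
    p = image.astype(np.float64).ravel()
    p /= p.sum()
    amplitudes = np.sqrt(p)

    # Interfere the modes and compute detection probabilities.
    psi_out = unitary @ amplitudes
    detection_probs = np.abs(psi_out) ** 2
    detection_probs /= detection_probs.sum()  # guard against rounding drift

    # The single photon is detected in exactly one output mode.
    mode = rng.choice(len(detection_probs), p=detection_probs)

    # Placeholder grouping of output modes into 10 digit classes; in the
    # paper the optical transformation itself is optimized so that the
    # readout is informative.
    return mode % 10
```

With a random unitary this toy version is no better than chance; the reported accuracy comes from learning the optical parameters so that the interference pattern concentrates photons from each digit class onto that class’s output modes.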

“Conceptually, exploiting interference to enhance the probability of a quantum experiment producing the sought outcome is the essential idea underlying all quantum computing,” wrote the researchers. “Apart from providing an easily accessible and commonly understandable toy problem for quantum and machine learning experts, this simple-quantum/simple-machine learning corner also may be of interest for teaching the physics of the measurement process … in a more accessible setting.”

Quantum computing is poised to significantly advance the field of AI and machine learning, some predict. For example, last March, researchers at IBM, MIT, and Oxford published a paper in Nature asserting that as quantum computers become more powerful, they’ll be able to perform feature mapping — i.e., the disassembly of data into non-redundant features — on highly complex data structures that classical computers cannot. Researchers would then be able to develop more effective AI that can, for example, identify patterns in data that are invisible to classical computers.
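To make the feature-mapping idea concrete with a purely classical analogue (the example below is illustrative only and is not drawn from the Nature paper), a feature map embeds raw data into a space where structure that was hidden becomes easy to separate; quantum feature maps play the same role, except the embedding lives in an exponentially large quantum state space.

```python
import numpy as np

def polynomial_feature_map(points):
    """Classical analogue of a feature map: embed 2-D points into 5-D.

    Points inside vs. outside a circle are not linearly separable in the
    plane, but after adding the squared coordinates a flat hyperplane
    (x1^2 + x2^2 = r^2) separates them in the feature space.
    """
    x1, x2 = points[:, 0], points[:, 1]
    return np.stack([x1, x2, x1 * x2, x1 ** 2, x2 ** 2], axis=1)
```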

“Machine learning and quantum computing are two technologies each with the potential for altering how computation is performed to address previously untenable problems,” the coauthors wrote. “A core element to computational speed-ups afforded by quantum algorithms is the exploitation of an exponentially large quantum state space through controllable entanglement and interference.”

A TensorFlow implementation of the Google researchers’ work is forthcoming.


