Portion control is something that everyone believes in but few practice. It's difficult to estimate what a single serving of food actually looks like; it's so much easier to see just how much of something we want to eat. And it's too much work to dirty a bunch of measuring cups or use a tiny kitchen scale every time we want to cook or eat. Now, we can count on augmented reality to do that for us.
Researchers from the University of Newcastle in Callaghan, Australia, have designed a system that uses AR to help people estimate how much food they should be eating at one time. Making such an estimation isn't an easy task, since portion size is determined by many factors, including the size of the person, their activity level, what else they've eaten that day, and any specific dietary restrictions or needs. The system described here tackles that problem by relying on visual cues to provide an accurate approximation of a single serving.
For this study, nine foods were selected: green beans, kidney beans, penne pasta, potatoes, broccoli, carrots, sweet corn, cauliflower, and rice. These foods were picked specifically because their portion sizes are difficult to estimate by eye. It's easy to grab a slice of bread or a single apple or banana, but foods made up of many small pieces are much harder to judge. Each food was first measured out to create a reference size equal to half of a standard measuring cup (125 mL). Then the system, named ServAR, was calibrated by overlaying an image of that amount on top of the test foods.
These images were processed and integrated into the software, which runs on a tablet or mobile phone with a camera. The system portrays a mixed reality of sorts, showing the appropriate quantity of food overlaid on the plate the person is about to use, so they can see how much should be served. It was tested on 90 adults sorted into three groups: those who had no guide to creating a portion-controlled plate; those who received a short verbal introduction to serving sizes; and those using ServAR as a measuring tool.
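The researchers haven't published ServAR's code, but the core overlay idea can be sketched in a few lines: blend a pre-captured image of the 125 mL reference portion onto the live camera feed, so the user sees a ghosted target serving on top of their real plate. Everything below (OpenCV, the file name, the blend weights) is a hypothetical stand-in for illustration, not the actual implementation:

```python
import cv2

# Hypothetical sketch of a ServAR-style overlay: composite a pre-captured
# reference-portion image onto each live camera frame. The image file and
# blend weights are assumptions, not details from the study.
reference = cv2.imread("reference_rice_125ml.png")  # pre-captured 125 mL portion

cap = cv2.VideoCapture(0)  # phone/tablet camera feed
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Match the overlay to the frame size, then blend it semi-transparently
    # so the user sees both the real plate and the ghosted target serving.
    overlay = cv2.resize(reference, (frame.shape[1], frame.shape[0]))
    blended = cv2.addWeighted(frame, 0.7, overlay, 0.3, 0)
    cv2.imshow("ServAR-style overlay (sketch)", blended)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```

A real system would also need to anchor the overlay to the detected plate rather than the whole frame, which is likely where much of ServAR's actual engineering effort went.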
The control group, with no help creating an adequate serving, achieved only 68.9% accuracy against a single serving. Those who had verbal instruction did a bit better, scoring 77.4%, but the ServAR group did by far the best, achieving 90.7%. The ServAR group also reported that they found the technology easy to use and agreed that it could really help people estimate serving sizes more accurately. Some suggested that more color contrast would help with white foods, which were difficult to see, especially on a white plate, and others said that it was a bit complicated to line up the overlaid image with the actual food.
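The write-up doesn't state exactly how those accuracy scores are calculated. One plausible metric, assumed here purely for illustration, scores a served portion at 100% when it hits the 125 mL target and falls off linearly with over- or under-serving:

```python
def serving_accuracy(served_ml: float, target_ml: float = 125.0) -> float:
    """Score one served portion against the target.

    Assumed metric for illustration only: 100% at the exact target,
    decreasing linearly with the relative error in either direction.
    The study may define its accuracy scores differently.
    """
    error = abs(served_ml - target_ml) / target_ml
    return max(0.0, 100.0 * (1.0 - error))

# Example: scooping 160 mL against the 125 mL reference
print(serving_accuracy(160))  # 72.0
```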
It's clear that ServAR needs more testing, especially on different kinds of food, from amorphous foods like mashed potatoes and salad to sugary foods like cookies. Pairing this technology with a solid online guide covering nutritional questions and dietary restrictions could make it even more useful, and maybe we'll see that in the future, all powered by augmented reality.
Let us know what you think in the comments section!