Sensors (Basel). 2022 Feb 15;22(4):1493. doi: 10.3390/s22041493.

A Novel Approach to Dining Bowl Reconstruction for Image-Based Food Volume Estimation


Wenyan Jia et al.

Abstract

Knowing the amounts of energy and nutrients in an individual's diet is important for maintaining health and preventing chronic diseases. As electronic and AI technologies advance rapidly, dietary assessment can now be performed using food images obtained from a smartphone or a wearable device. One of the challenges in this approach is to computationally measure the volume of food in a bowl from an image. This problem has not been studied systematically despite the bowl being the most utilized food container in many parts of the world, especially in Asia and Africa. In this paper, we present a new method to measure the size and shape of a bowl by adhering a paper ruler centrally across the bottom and sides of the bowl and then taking an image. When observed in the image, the distortions in the width of the paper ruler and the spacings between ruler markers completely encode the size and shape of the bowl. A computational algorithm is developed to reconstruct the three-dimensional bowl interior from the observed distortions. Our experiments using nine bowls, colored liquids, and amorphous foods demonstrate the high accuracy of our method for food volume estimation involving round bowls as containers. A total of 228 images of amorphous foods were also used in a comparative experiment between our algorithm and an independent human estimator. The results showed that our algorithm outperformed the human estimator, who utilized different types of reference information and two estimation methods, including direct volume estimation and indirect estimation through the fullness of the bowl.

Keywords: 3D reconstruction; food volume estimation; image-based dietary assessment; round bowl.


Conflict of interest statement

The authors declare no conflict of interest.

Figures

Figure 1
(a) A roll of the adhesive paper ruler as a tool for bowl measurement, (b) a bowl taped with an adhesive paper ruler centrally across the bottom and sides of the bowl, and (c) selected landmark points for computation (red asterisks).
Figure 2
(a) Extracted landmarks on the image plane. The dashed vertical black line represents the central vertical line of the image. (b) Pinhole camera model, where O and f are the optical center and the focal length of the camera, respectively, Wi represents the width (in pixels) of the observed ruler at the ith landmark location, Di is the physical distance on the ruler corresponding to the distance between the ith pair of landmarks in the image, and ri is the distance between optical center O and the center of the ith pair of landmarks on the bowl surface.
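The pinhole relation in this caption can be sketched in a few lines. This is an illustrative reconstruction, not the paper's code: the focal length in pixels, the ruler's physical width, and the assumption that the ruler patch is approximately fronto-parallel are all hypothetical.

```python
import numpy as np

def patch_range(f_px, patch_width_mm, observed_width_px):
    """Pinhole-model range estimate: a surface patch of known physical
    width that appears observed_width_px wide in the image lies at
    roughly r = f * w / W from the optical center O (treating the
    patch as approximately facing the camera)."""
    return f_px * patch_width_mm / observed_width_px

# Example: with a 3000 px focal length, a 15 mm wide ruler segment
# observed as 150 px wide lies about 300 mm from the camera.
r_i = patch_range(3000.0, 15.0, 150.0)
```

The same similar-triangles relation, applied per landmark pair, is what lets the ruler-width distortions in the image encode the distances ri along the bowl surface.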
Figure 3
(a) Pinhole camera model for the reconstruction of the cross-section curve, (b) reconstructed intersecting points of rays between optical center O and bowl interior surface, and (c) reconstructed cross-section curves after interpolation (red), shift, rotation, and averaging (blue).
Figure 4
(a) Computationally reconstructed interior surface of the bowl. (b) An image of the same bowl containing one cup (237 mL) of red tea. Six points on the rim of the bowl are manually specified to fit an ellipse. (c) Virtual volumetric levels (red ellipses) are superimposed in the image, where each level (upwards) represents a 50 mL increment.
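The ellipse fit to the six rim points in panel (b) can be done with an ordinary least-squares conic fit. A minimal sketch (not the paper's implementation; the conic parameterization and sample points below are illustrative):

```python
import numpy as np

def fit_ellipse(x, y):
    """Fit the conic A x^2 + B xy + C y^2 + D x + E y = 1 to the given
    points by least squares. At least five points are needed; six, as
    in the figure, give an overdetermined but stable fit."""
    M = np.column_stack([x * x, x * y, y * y, x, y])
    coeffs, *_ = np.linalg.lstsq(M, np.ones_like(x), rcond=None)
    return coeffs  # (A, B, C, D, E)

# Six points on a circle of radius 2 (a special case of an ellipse):
t = np.linspace(0.0, 2.0 * np.pi, 6, endpoint=False)
A, B, C, D, E = fit_ellipse(2.0 * np.cos(t), 2.0 * np.sin(t))
# The recovered conic is x^2/4 + y^2/4 = 1, i.e. A = C = 0.25.
```

Once the rim ellipse is known, the reconstructed bowl surface can be registered to the image and the volumetric level curves of panel (c) projected onto it.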
Figure 5
The user interface for estimating food with a flat surface (a) and without a flat surface (b).
Figure 6
(a) The actual image and the segmented liquid area, (b) simulated liquid volume in a bowl, and (c) relationship between the FAR and the liquid volume. The blue asterisk in (c) represents the FAR of the liquid in (b).
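Because the FAR-volume relationship in panel (c) is monotone, it can be inverted by simple interpolation. A sketch under assumed values; the lookup table below is made up for illustration, whereas the actual curve comes from simulating liquid levels in the reconstructed bowl:

```python
import numpy as np

# Hypothetical simulation output: fill the reconstructed bowl at
# increasing volumes and record the resulting food area ratio (FAR).
volumes_ml = np.array([0.0, 50.0, 100.0, 150.0, 200.0, 250.0])
far_values = np.array([0.00, 0.22, 0.38, 0.52, 0.63, 0.72])

def volume_from_far(observed_far):
    """Invert the monotone FAR-volume curve by linear interpolation:
    the FAR measured from the segmented liquid area in a real image
    maps back to an estimated volume."""
    return float(np.interp(observed_far, far_values, volumes_ml))

v = volume_from_far(0.45)  # falls between the 100 mL and 150 mL entries
```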
Figure 7
(a) Nine bowls used in the experiments and (b) reconstructed bowls.
Figure 8
Actual and estimated fullness values from both the smartphone and eButton images using manual estimation and simulation. Each group containing five bars corresponds to one liquid sample. The first bar represents the measured fullness, i.e., the ground truth, and the other four bars correspond to the results of the four estimation methods, respectively.
Figure 9
Examples of real food images.
Figure 10
Box plots of estimation errors of fullness estimated by two researchers using our software and by a registered dietitian with prior experience estimating volume from images using direct visualization. R represents the researchers’ estimation, DN represents the dietitian’s estimation with no image cue (i.e., water bottle in the bowl image), and DC represents the dietitian’s estimation with the cue. On each box, the central line represents the median of the errors over all the food samples. The bottom and top edges of the box are, respectively, the first and third quartiles; the distance between them is the interquartile range (IQR). Whiskers extend to the most extreme data point that is no more than 1.5× IQR from the edge of the box. Points outside the whiskers are plotted individually as pluses, representing potential outliers.
