Learning Manifold Patch-Based
Representations of Man-Made Shapes

¹Massachusetts Institute of Technology  ²Université de Montréal

Abstract

Choosing the right representation for geometry is crucial for making 3D models compatible with existing applications. Focusing on piecewise-smooth man-made shapes, we propose a new representation that is usable in conventional CAD modeling pipelines and can also be learned by deep neural networks. We demonstrate its benefits by applying it to the task of sketch-based modeling. Given a raster image, our system infers a set of parametric surfaces that realize the input in 3D. To capture piecewise smooth geometry, we learn a special shape representation: a deformable parametric template composed of Coons patches. Naively training such a system, however, is hampered by non-manifold artifacts in the parametric shapes and by a lack of data. To address this, we introduce loss functions that bias the network to output non-self-intersecting shapes and implement them as part of a fully self-supervised system, automatically generating both shape templates and synthetic training data. We develop a testbed for sketch-based modeling, demonstrate shape interpolation, and provide comparison to related work.


Method

Deformable patch-based templates

Our representation is composed of Coons patches (a) organized into a deformable template (b). A template provides hard topological constraints for our surfaces, an initialization of their geometry, and, optionally, a means of regularization. Templates are crucial for ensuring that the patches have consistent topology; without them, the result would be an unstructured, non-manifold collection of patches. While our method works with a generic sphere template, we can also define templates per shape category to incorporate category-specific priors. These templates capture only coarse geometric features and approximate scale. We present an algorithm for obtaining templates from the training data automatically.
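As a refresher on the building block of this representation, a bilinearly blended Coons patch interpolates four boundary curves by summing two ruled surfaces and subtracting the bilinear interpolant of the four corners. The sketch below (not the paper's code; the curve parameterization is an assumption for illustration) evaluates one such patch:

```python
import numpy as np

def coons_patch(c0, c1, d0, d1, u, v):
    """Evaluate a bilinearly blended Coons patch at (u, v) in [0, 1]^2.

    c0, c1: boundary curves along u (bottom, top), callables returning 3D points
    d0, d1: boundary curves along v (left, right), callables returning 3D points
    The four curves must agree at the patch corners.
    """
    # Ruled surface lofting the bottom/top boundary pair
    ruled_u = (1 - v) * c0(u) + v * c1(u)
    # Ruled surface lofting the left/right boundary pair
    ruled_v = (1 - u) * d0(v) + u * d1(v)
    # Bilinear interpolation of the four corner points (counted twice above)
    bilinear = ((1 - u) * (1 - v) * c0(0.0) + u * (1 - v) * c0(1.0)
                + (1 - u) * v * c1(0.0) + u * v * c1(1.0))
    return ruled_u + ruled_v - bilinear

# Example: the four edges of the unit square reproduce the flat square.
c0 = lambda u: np.array([u, 0.0, 0.0])
c1 = lambda u: np.array([u, 1.0, 0.0])
d0 = lambda v: np.array([0.0, v, 0.0])
d1 = lambda v: np.array([1.0, v, 0.0])
center = coons_patch(c0, c1, d0, d1, 0.5, 0.5)  # -> [0.5, 0.5, 0.0]
```

Because the patch interpolates its boundary curves exactly, stitching patches that share boundary curves (as the template enforces) yields a watertight surface by construction.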

Pipeline

An overview of our data generation and augmentation (a) and learning (b) pipelines.

Editing a 3D model

Editing a 3D model produced by our method. Because we output 3D geometry as a collection of consistent, well-placed NURBS patches, user edits can be made in conventional CAD software by simply moving control points. Here, we are able to refine the trunk of a car model with just a few clicks.
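To illustrate why control-point edits are local and predictable, here is a minimal tensor-product Bézier patch evaluator (a simplified stand-in for NURBS, with uniform weights assumed; this is not the paper's code). Nudging one interior control point deforms the nearby surface while the patch corners stay fixed, which is what makes click-and-drag refinement in CAD software feasible:

```python
import numpy as np
from math import comb

def bezier_patch(ctrl, u, v):
    """Evaluate an (n+1) x (m+1) Bezier tensor-product patch at (u, v).

    ctrl: array of shape (n+1, m+1, 3) of control points.
    """
    n, m = ctrl.shape[0] - 1, ctrl.shape[1] - 1
    # Bernstein basis values in each parameter direction
    bu = np.array([comb(n, i) * u**i * (1 - u)**(n - i) for i in range(n + 1)])
    bv = np.array([comb(m, j) * v**j * (1 - v)**(m - j) for j in range(m + 1)])
    return np.einsum('i,j,ijk->k', bu, bv, ctrl)

# A flat 4x4 bicubic patch over the unit square.
ts = np.linspace(0.0, 1.0, 4)
ctrl = np.array([[[ui, vj, 0.0] for vj in ts] for ui in ts])

# "Edit": lift one interior control point.
edited = ctrl.copy()
edited[1, 1, 2] += 1.0

bezier_patch(edited, 0.0, 0.0)  # corner is unchanged: [0, 0, 0]
bezier_patch(edited, 0.5, 0.5)  # interior bulges upward (z = 9/64 here)
```

Since the Bernstein basis functions vanish at the far corners, an interior control-point edit cannot move the patch boundary corners; with shared boundary control points across adjacent patches, such edits preserve the stitched structure our method outputs.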


Video


Paper and supplementary material

Paper

D. Smirnov, M. Bessmeltsev, J. Solomon
Learning Manifold Patch-Based Representations of Man-Made Shapes
International Conference on Learning Representations (ICLR) 2021, virtual
OpenReview | BibTeX

@inproceedings{smirnov2021patches,
  title={Learning Manifold Patch-Based Representations of Man-Made Shapes},
  author={Smirnov, Dmitriy and Bessmeltsev, Mikhail and Solomon, Justin},
  year={2021},
  booktitle={International Conference on Learning Representations (ICLR)}
}

Poster

PDF [3 MB]

Code, models, and data

GitHub

Acknowledgements

The MIT Geometric Data Processing group acknowledges the generous support of Army Research Office grant W911NF2010168, of Air Force Office of Scientific Research award FA9550-19-1-031, of National Science Foundation grant IIS-1838071, from the CSAIL Systems that Learn program, from the MIT–IBM Watson AI Laboratory, from the Toyota–CSAIL Joint Research Center, from a gift from Adobe Systems, from an MIT.nano Immersion Lab/NCSOFT Gaming Program seed grant, and from the Skoltech–MIT Next Generation Program. This work was also supported by the National Science Foundation Graduate Research Fellowship under Grant No. 1122374. We acknowledge the support of the Natural Sciences and Engineering Research Council of Canada (NSERC) grant RGPIN-2019-05097 ("Creating Virtual Shapes via Intuitive Input") and from the Fonds de recherche du Québec - Nature et technologies (FRQNT) grant 2020-NC-270087.