Mesh Neural Cellular Automata

Abstract

Modeling and synthesizing textures are essential for enhancing the realism of virtual environments. Methods that directly synthesize textures in 3D offer distinct advantages over UV-mapping-based methods, as they can create seamless textures and align more closely with the ways textures form in nature. We propose Mesh Neural Cellular Automata (MeshNCA), a method for directly synthesizing dynamic textures on 3D meshes without requiring any UV maps. MeshNCA is a generalized type of cellular automaton that can operate on a set of cells arranged on a non-grid structure, such as the vertices of a 3D mesh. Although trained only on an Icosphere mesh, MeshNCA shows remarkable generalization and can synthesize textures on any mesh in real time after training. Additionally, it accommodates multi-modal supervision and can be trained using different targets such as images, text prompts, and motion vector fields. Moreover, we conceptualize a way of grafting trained MeshNCA instances, enabling texture interpolation. Our MeshNCA model enables real-time 3D texture synthesis on meshes and allows several user interactions, including texture density/orientation control, a grafting brush, and motion speed/direction control. Finally, we implement the forward pass of our MeshNCA model using the WebGL shading language and showcase our trained models in an online interactive demo, which is accessible on personal computers and smartphones.

Summary of Results

MeshNCA can synthesize textures given guidance from exemplar texture images or text prompts. Additionally, given a target vector field, MeshNCA can synthesize dynamic textures on 3D meshes that follow the target motion. Remarkably, in all of our experiments, MeshNCA generalizes to any mesh at test time after being trained on an Icosphere mesh.
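
As a concrete illustration of the image-guided setting, the sketch below shows an exemplar-based appearance loss built on VGG Gram matrices, a common choice in texture-synthesis work with NCAs. The differentiable rendering step, the layer choice, and the loss form are illustrative assumptions, not the authors' exact recipe.

```python
# A hedged sketch of an exemplar-image appearance loss (VGG Gram matrices);
# the paper's exact loss may differ. `rendered` and `target` are
# (B, 3, H, W) images, assumed ImageNet-normalized.
import torch
import torchvision.models as models

vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)

def gram(feat):
    b, c, h, w = feat.shape
    f = feat.reshape(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def style_loss(rendered, target, layers=(3, 8, 15, 22)):  # relu1_2 .. relu4_3
    loss, x, y = 0.0, rendered, target
    for i, layer in enumerate(vgg):
        x, y = layer(x), layer(y)
        if i in layers:
            loss = loss + ((gram(x) - gram(y)) ** 2).mean()
        if i == max(layers):
            break
    return loss
```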

MeshNCA Properties

Additionally, MeshNCA exhibits many remarkable test-time properties that make the model well suited for real-time and interactive applications. These properties include generalization to unseen meshes, generalization to animated meshes, self-organization, grafting, texture density control, texture orientation control, emergent spontaneous motion, and motion direction control.

Generalization to Unseen Meshes

MeshNCA is trained only on an Icosphere with 40,962 vertices. Remarkably, after training, MeshNCA can generalize to almost any mesh since it relies only on local communication. Here we show generalization to the mug mesh, which has a hole, and the anime mesh, which has very sharp edges.
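
To make the locality argument concrete, here is a minimal sketch (an assumed data layout, not the authors' code) of the only mesh-specific structure the trained model needs at test time: each vertex's one-ring neighborhood, which any triangle mesh provides.

```python
def one_ring_neighbors(faces, num_vertices):
    """Build per-vertex neighbor lists from an iterable of (i, j, k) triangles."""
    neighbors = [set() for _ in range(num_vertices)]
    for a, b, c in faces:
        neighbors[a].update((b, c))
        neighbors[b].update((a, c))
        neighbors[c].update((a, b))
    return [sorted(n) for n in neighbors]

# Example: a single tetrahedron; every vertex sees the other three.
tet_faces = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]
print(one_ring_neighbors(tet_faces, 4))  # [[1, 2, 3], [0, 2, 3], [0, 1, 3], [0, 1, 2]]
```

The same trained update rule can then run on any mesh's graph: each vertex's state is updated from its neighbors' states, exactly as on the Icosphere.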


Generalization to Animated Meshes

The spherical-harmonics-based perception stage in MeshNCA is invariant to mesh translation and scaling. Additionally, we find that MeshNCA is robust to moving vertices and can texture animated meshes seamlessly.
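
A hedged sketch of what such a perception step can look like: each cell aggregates neighbor states weighted by a spherical harmonics basis evaluated on the unit directions toward its neighbors. Because the directions are normalized, uniformly translating or scaling the mesh leaves them unchanged. The degree-1 truncation and the function names are illustrative assumptions.

```python
# A sketch of spherical-harmonics-style perception; the basis is truncated
# to degree 1 for brevity and constants are folded in, so this illustrates
# the idea rather than the paper's exact formulation.
import numpy as np

def perceive(positions, states, neighbors, v):
    """Aggregate the states of vertex v's neighbors, weighted by SH basis
    values of the unit directions from v toward each neighbor."""
    msgs = []
    for u in neighbors[v]:
        d = positions[u] - positions[v]
        d = d / (np.linalg.norm(d) + 1e-8)      # unit direction: invariant to
                                                # translation and uniform scale
        sh = np.array([1.0, d[1], d[2], d[0]])  # real SH basis up to degree 1
        msgs.append(np.outer(sh, states[u]))
    return np.mean(msgs, axis=0)                # (4, state_dim) perception tensor
```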


Self Organization

In MeshNCA, each cell (vertex) communicates with its neighbors and updates itself asynchronously. This type of distributed, local update rule allows the cells to self-organize and be robust against perturbations.
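
The asynchronous update can be sketched as the stochastic "fire rate" mechanism familiar from NCA-family models: every cell computes an update each step, but only a random subset applies it. The `update_rule` placeholder and the fire rate value are illustrative assumptions.

```python
# A minimal sketch of stochastic (asynchronous) NCA updates.
import numpy as np

def nca_step(states, update_rule, fire_rate=0.5, rng=None):
    rng = rng or np.random.default_rng()
    delta = update_rule(states)                 # (V, state_dim) proposed updates
    fire = rng.random(len(states)) < fire_rate  # each cell fires independently
    return states + delta * fire[:, None]

# Toy usage: each cell relaxes toward the mean of its own channels.
states = np.random.rand(100, 16)
states = nca_step(states, lambda s: s.mean(axis=1, keepdims=True) - s)
```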

Grafting

MeshNCA, and NCA models in general, are inspired by biological cells. Similar to biological systems, two compatible MeshNCA instances can be grafted to create a seamless transition between textures.
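
One plausible reading of grafting, sketched below under the assumption that the two trained instances share the same state dimensionality: each vertex mixes the updates proposed by the two models according to a user-painted per-vertex mask, producing a transition zone between the textures.

```python
# A hedged sketch of grafting two trained MeshNCA update rules.
import numpy as np

def grafted_step(states, rule_a, rule_b, mask, fire_rate=0.5, rng=None):
    """mask[v] in [0, 1]: 0 keeps model A's texture, 1 grows model B's."""
    rng = rng or np.random.default_rng()
    delta = (1 - mask)[:, None] * rule_a(states) + mask[:, None] * rule_b(states)
    fire = rng.random(len(states)) < fire_rate
    return states + delta * fire[:, None]
```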

Texture Density Control

MeshNCA is invariant to the scale of the underlying mesh, and the density of the synthesized texture depends on the number of vertices on the mesh. This allows the user to control the texture density by subdividing the mesh.
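
A short sketch of this workflow, assuming the trimesh library is available: each midpoint subdivision roughly quadruples the vertex count, and since the model depends only on local neighborhoods, the same trained model synthesizes a proportionally denser texture on the finer mesh.

```python
# Illustrative density control via mesh subdivision with trimesh.
import trimesh

mesh = trimesh.creation.icosphere(subdivisions=6)  # 40,962 vertices, as in training
denser = mesh.subdivide()                          # one midpoint subdivision level
print(len(mesh.vertices), "->", len(denser.vertices))  # 40962 -> 163842
```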

Figure: controlling the texture density by subdividing the mesh.

Texture Orientation Control

In MeshNCA, cells receive directional information through the spherical-harmonics-based perception. By rotating the spherical harmonics basis around the surface normal of each cell, MeshNCA allows users to re-orient the synthesized texture in real time.
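
The rotation itself can be sketched with Rodrigues' formula, applied to each neighbor direction before the spherical harmonics basis is evaluated. The helper below is illustrative, not the authors' implementation.

```python
# Rotate directional inputs around the surface normal by a user angle theta.
import numpy as np

def rotate_about_normal(d, n, theta):
    """Rotate unit vector d around unit normal n by angle theta (Rodrigues)."""
    return (d * np.cos(theta)
            + np.cross(n, d) * np.sin(theta)
            + n * np.dot(n, d) * (1 - np.cos(theta)))

# Sanity check: rotating x around z by 90 degrees gives y.
print(rotate_about_normal(np.array([1.0, 0, 0]),
                          np.array([0.0, 0, 1]), np.pi / 2))  # ~[0, 1, 0]
```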

Emergent Spontaneous Motion

Although MeshNCA's training signal originates from a static exemplar texture image, the model spontaneously generates stable yet randomly moving textures. This emergent spontaneous motion can also be supervised using our proposed motion loss.

Motion Direction Control

In MeshNCA, cells receive directional information through the spherical-harmonics-based perception. By rotating the spherical harmonics basis around the surface normal of each cell, MeshNCA allows users to control the motion direction in real time.

Fitting PBR Textures

MeshNCA can effectively leverage the shared structural similarity of PBR textures -- i.e., albedo, normal, height, roughness, and ambient occlusion maps -- and simultaneously synthesize multiple texture maps. In the models presented in our demo, each cell has a 16-dimensional state. We assign 3 dimensions to the albedo map, 3 to the surface normal map, 1 to the height map, 1 to the roughness map, and 1 to the ambient occlusion map. Notice that a single MeshNCA model synthesizes all of these texture maps. The Color option in the visualization mode corresponds to the result of applying a physically-based rendering shader to the MeshNCA-synthesized texture maps.
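
The channel layout described above can be sketched as simple slicing of the per-vertex state; the channel ordering here is an assumption, since only the channel counts are stated.

```python
# Illustrative slicing of a 16-channel per-vertex state into PBR maps.
import numpy as np

states = np.zeros((40962, 16))   # (num_vertices, state_dim)
albedo    = states[:, 0:3]       # RGB albedo
normal    = states[:, 3:6]       # surface normal map
height    = states[:, 6:7]       # height map
roughness = states[:, 7:8]       # roughness map
ao        = states[:, 8:9]       # ambient occlusion map
hidden    = states[:, 9:16]      # remaining channels for cell communication
```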

BibTeX