Influence of the frame field on the parametrization
- It seems like the parametrization with hard transition functions only around black pixels is the way to go. However, the result is quite sensitive to the input frame field, so we should spend some effort on improving it.
- I’ve disabled the triangle area weighting in the frame field smoothness term. All triangles now have the same weight regardless of their size; this means the frame field is less constrained in the white areas, where the mesh is usually coarser.
- The frame field regularization term should be set carefully: if the weight is too high, it orthogonalizes the frame field too much (0.1 already seems too high), and we then have issues capturing junctions with sharp angles, such as those in the puppy hair dataset. On the other hand, without the regularization term the frame field tends to collapse to a line field, which makes it unusable due to artefacts in the parametrization.
- Do we really need the frame field everywhere? Why don’t we just compute it over black pixels as in Misha’s paper? Would that improve the parametrization?
- Could we improve the frame field using some kind of iterative refinement?
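To make the smoothness/regularization trade-off concrete, here is a minimal sketch of the kind of energy being tuned. It assumes the frame at each vertex is encoded as the polyvector coefficients (c0, c2) of z⁴ + c2·z² + c0, whose roots ±u, ±v are the two frame directions; an orthogonal cross (v = i·u) has c2 = 0, so the regularizer simply penalizes |c2|². The function and weight names are illustrative, not our actual implementation, and the smoothness term uses uniform edge weights to match the disabled area weighting above.

```python
import numpy as np

def frame_to_coeffs(u, v):
    """Frame directions u, v (unit complex) -> polyvector coefficients (c0, c2)
    of z^4 + c2*z^2 + c0 = (z^2 - u^2)(z^2 - v^2)."""
    return u**2 * v**2, -(u**2 + v**2)

def coeffs_to_frame(c0, c2):
    """Invert: solve w^2 + c2*w + c0 = 0 for w = z^2, then take square roots.
    Directions are recovered up to sign, as expected for a frame field."""
    disc = np.sqrt(c2**2 - 4 * c0 + 0j)
    w1, w2 = (-c2 + disc) / 2, (-c2 - disc) / 2
    return np.sqrt(w1), np.sqrt(w2)

def energy(coeffs, edges, w_regular):
    """Uniform-weight smoothness (no triangle area weighting) plus the
    orthogonality regularizer: w_regular * sum |c2|^2."""
    c0, c2 = coeffs[:, 0], coeffs[:, 1]
    smooth = sum(abs(c0[i] - c0[j])**2 + abs(c2[i] - c2[j])**2
                 for i, j in edges)
    regular = w_regular * np.sum(np.abs(c2)**2)
    return smooth + regular
```

With w_regular large, minimizing this energy drives c2 toward 0 everywhere, i.e. toward perfectly orthogonal crosses, which is exactly why sharp-angle junctions get lost; with w_regular = 0 nothing stops u and v from coinciding (a line field, c2 = -2u²).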
Results for different values of w_regular
- Each row: frame field, uv computed without/with elimination of some integer variables; some rows also show extracted curves (very basic tracing).
- w_regular = 0
- w_regular = 0.01
- w_regular = 0.02
- w_regular = 0.05
- w_regular = 0.1