Previous focus: extraction

  • issue: how do we make sure that the (selected) output edges cover the input curves well?
  • heuristic: for each input edge e_input, greedily choose the closest hex edge e_nearest
  • optimization: for binary labeling, assign a large negative cost to e_nearest (so it is very likely to be selected), while penalizing the number of selected edges (uniform positive cost per edge); see the sketch after this list
  • results: ok-ish on clean inputs (curved corner, bowl, trebol), unusable on oversketched inputs (sphere circles, round roof, cross) since too many edges get pre-selected
  • conclusion: extraction is still an open problem; right now no ideas on how to improve it that would not require a more difficult and slower optimization (optimal transport, stochastic, …)
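
A minimal sketch of the greedy pre-selection and the labeling costs described above; the function name, data layout, and cost values are assumptions, not the actual implementation:

    import numpy as np

    def label_costs(hex_edge_midpoints, input_edge_midpoints,
                    nearest_cost=-100.0, edge_penalty=1.0):
        # Per-edge costs for the binary labeling: a large negative cost on
        # the hex edge nearest to each input edge, a uniform positive cost
        # everywhere else.
        costs = np.full(len(hex_edge_midpoints), edge_penalty)
        for p in input_edge_midpoints:
            # greedy step: each input edge picks its closest hex edge
            # (compared by midpoints here, an assumption)
            nearest = np.argmin(np.linalg.norm(hex_edge_midpoints - p, axis=1))
            costs[nearest] = nearest_cost
        # Minimizing sum(costs[e] * label[e]) selects exactly the pre-selected
        # edges; on oversketched inputs many input edges hit distinct hex
        # edges, which is why too many edges get pre-selected.
        return costs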

Next steps: parametrization

Extraction gets easier with a good parametrization. In order to improve the parametrization, I want to try two things:

  1. modify the snapping
    • instead of point-wise snapping, snap all tets intersected by the input curves
    • snap adjacent tets to the same isoline wherever we know they belong to the same isoline
    • in practice: create groups of tets; tets in a group share the two auxiliary integer variables (i.e., they get snapped to the same isoline). Moreover, don’t forget the transition functions: if two adjacent groups are separated by a cut but should still lie on the same isoline up to that cut, include this constraint in the optimization, i.e., tie the two groups of aux variables via the transition functions (the same ones used for the parametric coords); see the sketch after this list
  2. anisotropic scale
    • similar to what we’ve experimented with in 2d
    • in 3d, the estimation of tangent direction should be much more robust
    • use a fine scale in the tangent direction, a coarse scale in the non-tangent directions (see the metric sketch after this list)
    • this could be one of the technical novelties
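
For item 1, a minimal sketch of the tet grouping via union-find; curve_tets, cut_pairs, and the overall data layout are assumptions:

    class UnionFind:
        def __init__(self, n):
            self.parent = list(range(n))
        def find(self, i):
            while self.parent[i] != i:
                self.parent[i] = self.parent[self.parent[i]]  # path halving
                i = self.parent[i]
            return i
        def union(self, i, j):
            self.parent[self.find(i)] = self.find(j)

    def build_snapping_groups(num_tets, curve_tets, cut_pairs):
        # curve_tets: one list of tet indices per input curve, containing the
        # tets intersected by that curve; each list is merged into one group.
        # cut_pairs: adjacent tet pairs (i, j) separated by a cut; the groups
        # stay distinct, but their aux variables must be tied by the
        # transition function T_ij (same as for the parametric coords).
        uf = UnionFind(num_tets)
        for tets in curve_tets:
            for a, b in zip(tets, tets[1:]):
                uf.union(a, b)
        groups = {}
        for t in range(num_tets):
            groups.setdefault(uf.find(t), []).append(t)
        # one pair of auxiliary integer variables per group; for (i, j) in
        # cut_pairs add the constraint aux[group(j)] = T_ij(aux[group(i)])
        constraints = [(uf.find(i), uf.find(j)) for i, j in cut_pairs]
        return list(groups.values()), constraints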
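
For item 2, a sketch of the anisotropic scale as a per-tet metric tensor built from the estimated tangent direction; h_tangent and h_normal are illustrative target edge lengths, not tuned values:

    import numpy as np

    def anisotropic_metric(tangent, h_tangent=0.5, h_normal=2.0):
        # Fine scale along the tangent, coarse scale in the two orthogonal
        # directions: distances measured in this metric are stretched along
        # the curve, i.e., the target element size is smaller there.
        t = tangent / np.linalg.norm(tangent)
        P = np.outer(t, t)                    # projector onto the tangent
        return P / h_tangent**2 + (np.eye(3) - P) / h_normal**2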

Next steps: data generation and collection

Another open challenge is obtaining the input curve clouds / curve soups. Sources can include:

  1. prior work
  2. synthetic data: take a clean network and artificially add noise/oversketching to it (see the sketch after this list). Clean networks can be taken from e.g. …
  3. hire designers to draw stuff for us. If we do that, we will need to provide very specific guidelines on:
    • what should be drawn (target photo, target 3d model)
    • how it should be drawn (curves only)
    • the challenge is that we want them to draw freely, but at the same time provide data that we can work with
  4. draw stuff ourselves in TiltBrush or Emilie’s tool.
    • freehand drawing
    • constrained drawing (e.g. first draw a bunch of straight lines, then freely draw curves)
    • trace curves on an existing 3d model or an existing curve network imported into TiltBrush
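
A possible sketch for item 2, synthetic oversketching of a clean polyline; the copy count, noise scale, and trimming strategy are illustrative guesses:

    import numpy as np

    def oversketch(polyline, copies=3, noise=0.01, rng=None):
        # polyline: (n, 3) array of points on one clean curve; returns
        # `copies` jittered strokes covering random sub-spans of the curve,
        # mimicking repeated, imprecise strokes over the same intent.
        rng = rng or np.random.default_rng()
        n = len(polyline)
        strokes = []
        for _ in range(copies):
            a = int(rng.integers(0, max(1, n // 3)))      # random start
            b = int(rng.integers(2 * n // 3, n))          # random end
            jitter = rng.normal(scale=noise, size=(b - a, 3))
            strokes.append(polyline[a:b] + jitter)
        return strokes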

Other open questions

  • what type of data is this method strong on? (oversketched, but not too much)
  • frame field convergence, alternative optimization? (David mentioned L-BFGS)

Other ToDos

  1. housekeeping
    • timings
    • readme
    • script to process json in order to collect details for experiments
    • better organization of experiments
  2. import/export
    • export extracted isolines (verts, edges, params); see the export sketch below
    • export extraction weights
    • smarter import: don’t crash if json parameters are not available (solution: wrap each read in a try-catch; see the import sketch below)
  3. visualisation
    • tet mesh vertices, snapped tet mesh vertices, tet mesh vertices snapping groups
    • uvw mesh: cut faces
  4. comparisons
    • binary labeling on edges of a (field-aligned or not) tet mesh?
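
For the isoline export, a minimal sketch of a line-based text format; the layout and function name are assumptions:

    def export_isolines(path, verts, edges, params):
        # verts: list of (x, y, z); edges: list of (i, j) vertex indices;
        # params: one parametric value per vertex.
        with open(path, "w") as f:
            for (x, y, z), u in zip(verts, params):
                f.write(f"v {x} {y} {z} {u}\n")
            for i, j in edges:
                f.write(f"e {i} {j}\n")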
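
And for the smarter import, a sketch of per-parameter fallbacks so a missing entry does not abort the whole read; the parameter names in DEFAULTS are invented placeholders:

    import json

    DEFAULTS = {"scale": 1.0, "snapping": True, "max_iters": 100}  # invented names

    def load_params(path):
        with open(path) as f:
            data = json.load(f)
        params = {}
        for key, default in DEFAULTS.items():
            try:
                # missing or malformed entries fall back to the default
                # instead of crashing the import
                params[key] = type(default)(data[key])
            except (KeyError, TypeError, ValueError):
                params[key] = default
        return params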