Skeletonization baseline
Trebol
- input curves
- original mesh (~5k triangles)
- subdivided mesh, 1 iter
- subdivided mesh, 2 iters
- subdivided mesh, 3 iters
Bowl
- input curves
- original mesh (~5k triangles)
- subdivided mesh, 1 iter
- subdivided mesh, 2 iters
Bumpycube
- input curves
- original mesh (~5k triangles)
- subdivided mesh, 1 iter
- subdivided mesh, 2 iters
- subdivided mesh, 3 iters
Gamepad
- input curves
- original mesh (dense, ~100k triangles)
- subdivided mesh, 1 iter
2020-04-07
Meeting: ADMT
2D vectorization project
- T: now working on SGP re-submission, Sec. 4
- the figure with junctions will replace the current limitation figure with the high-valence junction
- high valence junctions: we can say that with a proper frame field (with automatically-detected or user-specified singularities), even high valence junctions could be processed properly. We can also remark that high valence junctions are quite rare in practice.
3D curves project
- T: results starting to look good
- D: we could compare our method to a baseline = alpha shapes + medial axis. This could work well for simple cases, but should fail for more complex ones.
- A: this is analogous to 2D skeletonization methods
- CGAL: 3D alpha shapes
- CGAL: Skeletonization
Stuff
2020-04-03
2020-03-31
2020-03-30
Meeting with ADJM
- parametrization issues: sphere case
- do the issues occur at cut surfaces?
- are transition functions applied correctly?
- frame field issues
- the optimization I’m using right now is not the best solution
- alternatives: LBFGS optimization, David Palmer’s SDP projection (https://github.com/dpa1mer/arff)
Synthetic generation of oversketched curves
Results on synthetic oversketched input
Current results
Updates
Latest improvements
Mesh generation
Current pipeline:
1. Uniformly resample the input curves.
2. Define a narrow band, i.e. an implicit volume around the curves with a user-specified radius (sketched below).
3. Using CGAL, compute a tet mesh bounded by the implicit volume.
4. Using TetGen, compute a constrained tet mesh containing the input curves, bounded by the surface mesh from Step 3.
Notes:
- Step 3 is necessary since TetGen needs an explicit boundary surface.
- Step 4 is necessary since CGAL cannot compute constrained Delaunay triangulations in 3D.
- Using this pipeline, we make sure that each input edge has a corresponding tet edge. No Steiner points are added on the input edges (nor on the surface mesh).
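For reference, a minimal sketch of the narrow-band implicit function from Step 2 (an assumption of mine, not the project code): the distance to the resampled curve edges minus the band radius, negative inside the band, which is the convention implicit-domain meshers such as CGAL's Mesh_3 expect.
#include <algorithm>
#include <array>
#include <cmath>
#include <limits>
#include <utility>
#include <vector>

using Vec3 = std::array<double, 3>;

double dist_point_segment(const Vec3& p, const Vec3& a, const Vec3& b) {
    Vec3 ab{b[0] - a[0], b[1] - a[1], b[2] - a[2]};
    Vec3 ap{p[0] - a[0], p[1] - a[1], p[2] - a[2]};
    double len2 = ab[0] * ab[0] + ab[1] * ab[1] + ab[2] * ab[2];
    double t = len2 > 0.0
        ? std::clamp((ap[0] * ab[0] + ap[1] * ab[1] + ap[2] * ab[2]) / len2, 0.0, 1.0)
        : 0.0;
    double d2 = 0.0;
    for (int k = 0; k < 3; ++k) {
        double diff = p[k] - (a[k] + t * ab[k]);
        d2 += diff * diff;
    }
    return std::sqrt(d2);
}

// Signed narrow-band function: distance to the resampled curve edges minus
// the user-specified band radius; negative inside the band.
double narrow_band(const Vec3& p,
                   const std::vector<std::pair<Vec3, Vec3>>& curve_edges,
                   double radius) {
    double d = std::numeric_limits<double>::max();
    for (const auto& e : curve_edges)
        d = std::min(d, dist_point_segment(p, e.first, e.second));
    return d - radius;
}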
Examples:
Frame field
I’ve had a problem with the projection onto feasible SH coefficients (the curved corner example). I’ve solved it by replacing the project_original method with project_variant_h, which seems to work better.
project_original (used previously)
project_variant_h (now using)
Parametrization
Snapping constraints are now enforced along curves instead of pointwise. In practice, we determine whether neighboring curve edges should be snapped to the same isoline; if so, they share the auxiliary integers used for soft snapping. This ensures the continuity of isolines along the input curves and prevents jumps.
Another benefit of this approach is that the number of integer variables is significantly lower (thanks to the grouping), so the parametrization is computed much faster (we’re still using CoMISo).
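A minimal sketch of the grouping (the data layout is hypothetical; the actual implementation hooks into CoMISo's variable indices): merge neighboring curve edges that should snap to the same isoline with a union-find, then allocate one pair of auxiliary integers per group instead of one pair per edge.
#include <numeric>
#include <utility>
#include <vector>

struct UnionFind {
    std::vector<int> parent;
    explicit UnionFind(int n) : parent(n) { std::iota(parent.begin(), parent.end(), 0); }
    int find(int i) { return parent[i] == i ? i : parent[i] = find(parent[i]); }
    void unite(int i, int j) { parent[find(i)] = find(j); }
};

// Merge neighboring curve edges that should lie on the same isoline; each
// resulting group then shares a single pair of auxiliary integer variables.
UnionFind group_snapped_edges(int num_edges,
                              const std::vector<std::pair<int, int>>& same_isoline_pairs) {
    UnionFind uf(num_edges);
    for (const auto& p : same_isoline_pairs) uf.unite(p.first, p.second);
    return uf;
}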
ToDo and Questions
Mesh generation
ok for now.
Frame field
- iterative scheme: compute frame field, get the labels, smooth them (?), recompute the frame field with additional constraints on labels (?)
- Q: is biharmonic energy valid with current boundary constraints? What happens at tet mesh boundary? Do we even need biharmonic fields?
Parametrization
- snapping: make sure we don’t snap the same corner multiple times, or even to all three axes (?)
- add constraints between auxiliary integers at cuts
- anisotropic scale: finer scale in the tangent direction, coarser in the two non-tangent directions
- smart strategy for the order of greedy rounding?
(output) Curve network extraction
- instead of greedily choosing the nearest hex edge, use weights for hex edges
- extraction weight: measure alignment of input curves and the frame field
- extraction weight: estimate density of input samples (using KNN?)
Data generation
- Q: what kind of data do we want to apply this method to?
- Q: what are the strengths of our method?
- Q: what data can we generate?
- Q: can we get data from professional designers? (e.g. Fiverr)
Surface generation
- Q: can we use the extra information (frame field, param., hex mesh) to surface the resulting curve network? Can we even do it directly? (what we were thinking of doing originally). Iterative scheme?
Comparisons
- implement baseline comparison using binary extraction from tet mesh edges (instead of hex mesh edges)
Visualisation
- visualise quality of the tet mesh
- visualise quality of the parametrization
- Jacobian: colormap blue->white->red
- tet type: valid=green, degenerate=red, inverted=blue
- visualise quality of the hex mesh
- visualise isolines that we snap to in the parametric space
- filter hex edges by extraction costs
- color interior tet faces differently
- color uvw cut faces in the tet mesh differently, only show them
- color tet mesh vertices/corners according to which auxiliary integer they are snapped to
Import/Export
- import/export isolines to/from a file
Misc - low priority
- add timings
- list details of tests from .json
- export extraction weights?
- smart import - don’t crash if not available
2020-02-14
Snapping, point vs. curve
Meeting notes: AJMT, Feb 10 2020
Previous focus: extraction
- issue: how do we make sure that the (selected) output edges cover the input curves well?
- heuristic: for each input edge e_input, greedily choose the closest hex edge e_nearest
- optimization: for binary labeling, assign a large negative cost to e_nearest (high probability of being selected), while penalizing the number of selected edges (uniform positive cost per edge); see the sketch after this list
- results: ok-ish on clean inputs (curved corner, bowl, trebol), unusable on oversketched inputs (sphere circles, round roof, cross) since too many edges get pre-selected
- conclusion: extraction is still an open problem; right now, no ideas on how to improve it that would not require a more difficult and slower optimization (OT, stochastic, …)
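A minimal sketch of that cost assignment (the container layout and the constants are placeholders of mine, not the actual values):
#include <cstddef>
#include <vector>

// Uniform positive cost per hex edge penalizes the number of selected edges;
// the hex edge nearest to each input edge gets a large negative bonus so it
// is very likely to be selected by the binary labeling.
std::vector<double> edge_costs(std::size_t num_hex_edges,
                               const std::vector<std::size_t>& nearest_per_input_edge,
                               double uniform_cost = 1.0,
                               double bonus = -100.0) {
    std::vector<double> cost(num_hex_edges, uniform_cost);
    for (std::size_t e : nearest_per_input_edge) cost[e] += bonus;
    return cost;
}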
Next steps: parametrization
Extraction gets easier with a good parametrization. In order to improve the parametrization, I want to try two things:
- modify the snapping
- instead of point-wise snapping, snap all tets intersected by the input curves
- snap adjacent tets to the same isoline where we know they should be snapped to the same isoline
- in practice: create groups of tets; tets in a group will share the two auxiliary integer variables (i.e., they will get snapped to the same isoline). Moreover, don’t forget to include transition functions: if two adjacent groups are separated by a cut but should still be on the same isoline up to the cut, make sure to include that constraint in the optimization (i.e., tie the two groups of auxiliary variables using the transition functions, the same ones used for the parametric coordinates)
- anisotropic scale
- similar to what we’ve experimented with in 2d
- in 3d, the estimation of tangent direction should be much more robust
- use fine scale in the tangent direction, coarse scale in the non-tangent directions
- this could be one of the technical novelties
Next steps: data generation and collection
Another open challenge is to get the input curve clouds / curve soups. Sources can include:
- prior work
- ILoveSketch [Bae et al. UIST 2008]
- EverybodyLovesSketch [Bae et al. UIST 2009]
- Analytic Drawing of 3D Scaffolds [Schmidt et al. SA 2009]
- Agile 3D sketching with air scaffolding [Kim et al. CHI 2018]
- VIPSS [Huang et al. SIG 2019]
- synthetic data: take a clean network and artificially add noise/oversketching to it. Clean networks can be taken from e.g.
- Flow Aligned Surfacing of Curve Networks [Pan et al. SIG 2015]
- Surfacing curve networks with normal control [Stanko et al. CAG 2016]
- hire designers to draw stuff for us. If we do that, we will need to provide very specific guidelines on
- what should be drawn (target photo, target 3d model)
- how it should be drawn (curves only)
- the challenge is that we want them to draw freely, but at the same time provide data that we can work with
- draw stuff ourselves in TiltBrush or Emilie’s tool.
- freehand drawing
- constrained drawing (e.g. first draw a bunch of straight lines, then freely draw curves)
- trace curves on existing 3d model or existing curve network imported to TiltBrush
Other open questions
- what type of data can this method be strong with? (oversketched, but not too much)
- frame field convergence, alternative optimization? (David mentioned LBFGS)
Other ToDos
- housework
- timings
- readme
- script to process json in order to collect details for experiments
- better organization of experiments
- import/export
- export extracted isolines (verts, edges, params)
- export extraction weights
- smarter import: don’t crash if json parameters are not available (solution: wrap each read in try-catch; see the sketch after this list)
- visualisation
- tet mesh vertices, snapped tet mesh vertices, tet mesh vertices snapping groups
- uvw mesh: cut faces
- comparisons
- binary labeling on edges of a (field-aligned or not) tet mesh?
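A minimal sketch of the try-catch fallback read for the smarter import, assuming nlohmann::json (an assumption; possibly not the JSON library actually used here):
#include <nlohmann/json.hpp>

// Read a parameter if present, otherwise fall back to a default instead of
// crashing on missing or mistyped json entries.
template <typename T>
T get_or(const nlohmann::json& j, const char* key, T fallback) {
    try {
        return j.at(key).get<T>();
    } catch (const nlohmann::json::exception&) {
        return fallback;  // parameter missing or wrong type: keep default
    }
}

// usage: double radius = get_or(params, "radius", 1.0);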
2020-02-10
2020-02-09
2020-02-03
2020-01-28
Streamline clustering heuristic
2019-12-18
2019-12-16
Meeting notes (16/12/2019, ADJMT)
- T: news – narrow band, extraction inspired by PolyFit
- T: current priority is to improve the extraction cost (Chamfer distance?)
- D: is twisting a problem? T: Probably ok for now
- A: why do we need fitting and coverage terms in PolyFit? Seems like it’s redundant. T also lacks intuition.
- D: can we try PolyFit coverage term in 1D?
- M: merge tangent + fitting terms together
- A: to start, we can penalize the number of selected edges for the coverage term
- J: is the current hex mesh sufficient? T: we can play with parameters and hope for the best (narrow band size etc.)
- D: harmonic or biharmonic frame field? T: Biharmonic might be better (curved corner example)
- next meeting: send mail beginning of January
ToDo
- implement better extraction costs: two-sided distance of the input edges and the output edges
3d drawing datasets
Bowl experiments – extraction
2019-12-11
PolyFit tests
PolyFit project page (Nan & Wonka, ICCV 2017)
Energy terms: Data fitting vs. Point coverage
Left to right, the data fitting weight w_fit decreases and the point coverage weight w_pc increases.
With w_fit = 0.1, w_pc = 0.9 (last screenshot), the algorithm does not select any of the faces, which is why the result is empty.
Current results (soft snapping works)
Curved corner
Double wave
Elliptic
Hyperbolic
Parabolic
Church
Box
Bowl
2019-12-10
2019-12-09
Sphere frame field, finer mesh (gif)
Frame field: more complex input
Meeting: David, Justin, Misha (Nov 21, 2019)
Current status
- “vectorization in 3D”
- given a hex mesh, we now have a simple binary labeling for extracting a clean curve network, see this post
Questions and comments
- David: can we restrict the hex mesh to a narrow band around the input curves? What should be the size of the narrow band? What should be the grid size?
- objective function for extraction? Negative weights?
- locally decide if an edge should have positive or negative weight
- symmetric distance, similar to [Nan and Wonka 2017] or [deGoes et al. 2011] (see below)
- Justin: this seems like a very convoluted way to get a clean curve network. Hex meshing is hard, why not just do tet meshing with alignment constraints?
ToDo
- improve binary labeling: symmetric distance, negative weights, more general constraints
- experiments with real data (TiltBrush)
Next meeting
- Monday, Dec 2, 2019, 2pm/8am
Related papers
[Nan and Wonka, ICCV 2017]
PolyFit
[deGoes et al. 2011]
An Optimal Transport Approach to Robust Reconstruction and Simplification of 2D Shapes
[Ni et al., SGP 2018]
Field-Aligned and Lattice-Guided Tetrahedral Meshing
[Levy and Liu, SIGGRAPH 2010]
Lp Centroidal Voronoi Tessellation and its applications
ILP: first experiments
Results for GD Retreat 2019
2019-09-16
2019-09-13
bigguy: frame field over narrow band vs. full mesh
eagle1: frame field over narrow band vs. full mesh
smallcar1: mask on vs. off
no mask
with mask
bowtie
closeup on the problematic junction
mask?
smallcar2: comiso problems
mecha: various frame fields
narrow band only – wreg = 0.1, scale = 2
full mesh – wreg = 0.1, scale = 2
full mesh – wreg = 0.1, scale = 3
full mesh – wreg = 1.0, scale = 3
grandpa
elephant
full mesh, wreg = 0.1
full mesh, wreg = 1.0
narrow band, wreg = 0.1
narrow band, wreg = 1.0
more stuff
From meeting with David
Custom cut graph
Parametrization: soft snapping
Summary: I’m implementing the snapping term for the 3D parametrization. For each vertex lying on an input curve, we require two of the three parametric coordinates to be close to integers. To that end, I introduce new integer variables i and a set of soft constraints (x - i)^2, one for each coordinate that is required to be close to an integer.
The one parametric direction that is not required to be an integer is the one closest to the curve tangent in the combed frame field.
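Below is a minimal sketch of this term for one constrained vertex, in the assumed form matching the description above. Note that in the real solver the integers i are unknowns handled by the mixed-integer solver, not a plain rounding; std::round only stands in for them here.
#include <array>
#include <cmath>

// Soft snapping energy for one constrained vertex (sketch, not project
// code). Two of the three parametric coordinates are pulled toward
// integers; the coordinate closest to the curve tangent stays free.
double snapping_energy(const std::array<double, 3>& uvw, int free_axis, double w) {
    double e = 0.0;
    for (int c = 0; c < 3; ++c) {
        if (c == free_axis) continue;
        double i = std::round(uvw[c]);  // stand-in for the auxiliary integer variable
        e += w * (uvw[c] - i) * (uvw[c] - i);
    }
    return e;
}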
Problem: After the optimization, all of the auxiliary integer variables i are zero – the constrained vertices collapse onto the isolines x=y=0, x=z=0, y=z=0.
Here is a very simple example. This is the input mesh, and the ground truth surface:
Below is a parametrization computed with soft snapping, but only constraining the z-coordinate to be close to an integer. The auxiliary integer variables are all 0, but in this trivial case it works out, and the blue points are close to the plane z=0. Transition functions are all identities here, so the only integer variables are the auxiliary ones.
Here is the hex mesh extracted from the above parametrization:
If we add the proper constraints (i.e. two integers per vertex), we start to see the degeneracies (again, all aux integer vars are 0).
If we increase the snapping penalty weight, we see that the curve points collapse onto coordinate axes:
2019-07-26
2019-07-25
2019-07-23
Experiments: mesh density
- in general, higher mesh density -> more linearization artifacts (the field tends to get ‘more constant’)
20 input points per curve
simple_parabolic
simple_wave_1
simple_wave_2
simple_elliptic
40 input points per curve
simple_parabolic
simple_wave_2
80 input points per curve
simple_wave_2
misc
Barycentric interpolation: original SH coeffs (before projection)
Experiment: # of iterations
Meeting notes from Friday July 12 2019 (ADJT)
Summary of results
- tests on boundary curves of Coons patches
- focusing on the frame field
- biharmonic smoothness seems to work well in many cases
Next steps
Frame field: weighted smoothness
- FF smoothness is more important where we expect the surface to be, hence distance-based weights
- How to get the distance field?
- Start with experiments with a known ground truth surface, see if the result improves with a ‘perfect’ distance field. If that works, we can start worrying about how to get the distance field from the input curves.
- We will probably need some kind of iterative approach: start with some initial guess for the surface, get the distance field from that, update the frame field and the surface, iterate.
Frame field: biharmonic smoothness
- Verify if we need to update the way the field is optimized if we change the smoothness term (?)
Param: customize
- For our custom parametrization, start by dropping the boundary alignment term (we don’t need that).
- Then update the parametrization energy to match our needs: we want the integer sheets to be close to the input curves.
- David thinks we might need soft transition functions.
Extraction
- For visualisation, start by implementing a simple filter to select a subset of quads that are somehow close to the input curves. Maybe start with a given GT surface?
- Modify the extraction algorithm – drop the boundary sanitization (again, we don’t need that).
Tests
- Start doing tests with multiple smooth patches and sharp features
2019-07-12
Results on our inputs with [Simo-Serra et al. 2018]
2019-04-29
2019-04-28
2019-04-16
Stochastic extraction: current results (NO global constraints)
Puppy hair (ok)
initial graph + curves
Hippo cut
initial graph + curves
λ = 0.2
λ = 0.1 (oversimplified)
λ = 0.05 (junk curves)
λ = 0.08 (oversimplified)
Elephant
initial graph + curves
λ = 0.005
λ = 0.01
λ = 0.02
λ = 0.04
Man (bad initial topology)
Mouse
initial graph
λ = 0.05 (slightly worse)
λ = 0.05 (better)
λ = 0.02 (slightly worse)
λ = 0.02 (better)
λ = 0.01
λ = 0.02 – zoom in
Muten
initial graph
λ = 0.1
λ = 0.05
λ = 0.02
λ = 0.02 – zoom in
non-tangent edges removed, λ = 0.01
Sheriff
initial graph + curves
λ = 0.05
λ = 0.01
λ = 0.001
λ = 0.001 – zoom in
λ = 0.001 – param + simplified (spurious connections)
Puppy
initial graph
initial curves
λ = 0.01
λ = 0.02
λ = 0.05
λ = 0.01 – zoom in
initial with non-tangent edges removed in pre-processing
other challenges
- bleeding
- bad labeling, parametrization degeneracies
- small frame field angles
3D Frame field: examples
3D Frame field: cylinder patch test
Explanation
- This test involves computing a 3D frame field aligned to the boundary curve of a cylindrical patch.
- I’m using both tangent and normal constraints.
- Input curve (4 sides) is normalized to have unit AABB diagonal.
- There are two inputs taken from patches with angular size 90 deg and 180 deg.
- I compare two types of boundary surface: unit cube (centered at 0) and the AABB box of the (normalized) curve with a small padding (set to 0.1 here).
- I also compare two types of meshes, a coarse mesh and a fine mesh (max tet volume = 0.00001)
- The constraints are sampled quite densely along the boundary curve (128 samples along the quarter- / half-circle). This seems to prevent the undesirable twisting of frames along the input curves (example here)
- Frame field is discretized per-vertex of the tet mesh and computed using unconstrained quadratic optimization (smoothness + alignment energy), ignoring the non-linear quadratic constraint. I then do a projection on the feasible set via the gradient descent from [Ray & Sokolov 2015].
- For visualisation, the frame field is sampled on the surface of the cylinder: barycentric interpolation of spherical harmonic coefficients on tetmesh vertices + projection on the feasible set of frames.
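As an illustration of the visualisation step, a minimal sketch of the SH interpolation (plain linear interpolation of the 9 coefficients; the projection onto the feasible set of frames, e.g. via [Ray & Sokolov 2015], is omitted):
#include <array>

using SH9 = std::array<double, 9>;  // spherical harmonic coefficients of a frame

// Barycentric interpolation of per-vertex SH coefficients inside one tet.
SH9 interpolate_sh(const std::array<SH9, 4>& vertex_coeffs,
                   const std::array<double, 4>& bary) {
    SH9 out{};  // zero-initialized
    for (int v = 0; v < 4; ++v)
        for (int k = 0; k < 9; ++k)
            out[k] += bary[v] * vertex_coeffs[v][k];
    return out;  // still needs projection onto the feasible set of frames
}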
Comments
- Some of the frames are obviously wrong: that’s most likely a bug in the projection on the feasible set that occurs when the frame is a rotation of the reference frame around x-, y- or z-axis by pi/4
- Harmonic energy gives a constant field away from circles – not what we want
- Biharmonic is a bit better, but – strangely – the results depend too much on the input mesh. For instance, the result is close to what we want (both inputs, 90 and 180 deg) when using the unit cube as boundary and a coarse mesh, but it no longer works if we take a fine mesh or use the AABB boundary surface.
Legend
- magenta: frame field constraints
- blue: boundary vertices of the tet mesh
- green: interior vertices of the tet mesh
- stats at the top: cylinder params, mesh stats (#verts, #tets), boundary type (cube or aabb), smoothness type (harmonic or biharmonic), optimization weights (smoothness and alignment), computation time in seconds (solve, projection)
Harmonic smoothness
Quarter-circle (90° arc)
Bd surf: unit cube
- coarse mesh (+ dense network)
- fine mesh (+ dense network)
Bd surf: AABB + margin
- coarse mesh (+ dense network)
- fine mesh (+ dense network)
Half-circle (180° arc)
Bd surf: unit cube
- coarse mesh (+ dense network)
- fine mesh (+ dense network)
Bd surf: AABB + margin
- coarse mesh (+ dense network)
- fine mesh (+ dense network)
Biharmonic smoothness
Quarter-circle (90° arc)
Bd surf: unit cube
- coarse mesh (+ dense network)
- fine mesh (+ dense network)
Bd surf: AABB + margin
- coarse mesh (+ dense network)
- fine mesh (+ dense network)
Half-circle (180° arc)
Bd surf: unit cube
- coarse mesh (+ dense network)
- fine mesh (+ dense network)
Bd surf: AABB + margin
- coarse mesh (+ dense network)
- fine mesh (+ dense network)
2019-03-28
2019-03-27
2019-03-21
2019-02-27
Frame field regularization
We have two ideas for how to do the regularization.
- Use approach from the PolyVector paper, Section 4.2 Angle-bounded fields
- Use Misha’s approach with iterative re-weighting
I’ve tried approach 1, fitting to a corrected field, but there are problems:
- The local (correction) step breaks the alignment to Sobel tangents – we don’t want to rotate both directions away from the bisector, but rather only the one that is ‘further’ from the tangent direction (makes sense?)
- For higher fitting weights, the result is over-regularized and not really aligned to the corrected field – seems like a bug? (screenshots) The result is the same if I disable tangent alignment and set the smoothness weight to almost zero.
Approach 2 does not work well yet either.
The annoying thing is that we can have additional problems when interpolating the frame field from vertices to faces:
Influence of the frame field on the parametrization
- It seems like the parametrization with hard transition functions only around black pixels is the way to go. However, the result is quite sensitive to the input frame field. We should spend some effort on improving that.
- I’ve disabled the triangle area weighting in the frame field smoothness term. All triangles now have the same weight regardless of their size; this means the frame field is less constrained in the white areas, where the mesh is usually coarser.
- Frame field regularization term should be set carefully – it orthogonalizes the frame field too much if set too high (seems like 0.1 is already too much). As a result, we have issues capturing junctions with sharp angles such as the ones in the puppy hair dataset. On the other hand, without the regularization term, the frame field tends to collapse to a line field, which makes it unusable due to artefacts in the parametrization.
- Do we really need the frame field everywhere? Why don’t we just compute it over black pixels as in Misha’s paper? Would that improve the parametrization?
- Could we improve the frame field using some kind of iterative refinement?
Results for different values of w_regular
- each row: frame field, uv computed without/with elimination of some integer variables; some rows also show extracted curves (very basic tracing)
- w_regular = 0
- w_regular = 0.01
- w_regular = 0.02
- w_regular = 0.05
- w_regular = 0.1
Current parametrization results
PGP with snapping
- rows: varying scale
- cols: varying snapping weights (0, 1, 5, 10, 100)
Triangle: zoom in
- uv not aligned with the frame field
- How to make the alignment term more important near the strokes? Using distance weights for the alignment term?
Triangle: with / without curl correction
Triangle
Square
Pentagon
Circle
puppy_hair
parallel
Modifications to PGP
PGP extraction, QEx vs. Vorpaline
Post-deadline summary: Vectorization project
Results
Raw todo notes
- visualise transition function weights
- visualise labeling + streamlines
- do lots of (automated) tests with different parameters, inputs
- export pdf+image
- extraction: assignment to curve segments
- improve removal by a LOT
- improve the fitting. Global formulation? (splines?)
- re-assign the pixels when a curve is removed
- improve treatment of Y junctions
- figure out how to enable Y branching in the parametrization (Instant Meshes? PGP?)
- Bezier merging, otherwise way too many curves. Use the information from the assignment to decide which curves should be merged?
- don’t init the sparse triangulation via dense triangulation, use blue noise sampling (BNOT)
- clean up the GUIs, clean up the code (simplify!)
- optimize the code; merge stuff from parametrization and extraction
- more stress tests (sharp angles, sequence of sharp angles, e.g. puppy hair)
- comparison to vectorization papers and StrokeAggregator
- fix all parameters to reasonable default values
- use stroke width wherever possible
- parallelize where possible (streamline tracing?)
- paper: didactic figures (transition functions etc.)
- extraction: insets
- extraction: compute grid vertices in all triangles, not just the ones with a pair of isolines – take all nodes in some radius
- extraction: when grouping grid vertices, make sure the radius is smaller than feature size! (this should be done already, verify)
Fitting
- we could view the parametrization (+extraction) as assignment (once again) of pixels to param. curves. Then use the param. + position to fit a global network of splines.
TetGen tests
PGP tests
- PGP paper
- test in geogram:
Todo
- singularities
- extraction
- (custom) curl removal
- snapping
Mail from Nicolas Ray
It is not very difficult to constrain vertices. The code of Geogram already does that for constraining hard edges, but does not expose the possibility to constrain just a vertex. You can try to directly edit “mesh_PGP_2D.cpp” to add your constraints. It actually locks the extremities of edges considered as constrained, but you can edit the condition to include the vertices you want.
if(constrain_hard_edges) {
    FOR(c, mesh->facet_corners.nb()) {
        index_t cnstr =
            Internal::get_edge_constraints(mesh, c, B);
        index_t v = mesh->facet_corners.vertex(c);
        // Inverse R, because when the axis turns
        // clockwise, coordinates turn anticlockwise.
        index_t Rcc = Internal::inverse_R(R_fv[c]);
        if(cnstr & Internal::CNSTR_U) {
            // Lock the pair of variables associated with u
            // (or with v, if the local frame is rotated).
            if(Rcc == 0 || Rcc == 2) {
                nlLockVariable(4*v);
                nlSetVariable(4*v, 1e4);
                nlLockVariable(4*v+1);
                nlSetVariable(4*v+1, 0.0);
            } else {
                nlLockVariable(4*v+2);
                nlSetVariable(4*v+2, 1e4);
                nlLockVariable(4*v+3);
                nlSetVariable(4*v+3, 0.0);
            }
        }
        if(cnstr & Internal::CNSTR_V) {
            // Symmetric case for v.
            if(Rcc == 0 || Rcc == 2) {
                nlLockVariable(4*v+2);
                nlSetVariable(4*v+2, 1e4);
                nlLockVariable(4*v+3);
                nlSetVariable(4*v+3, 0.0);
            } else {
                nlLockVariable(4*v);
                nlSetVariable(4*v, 1e4);
                nlLockVariable(4*v+1);
                nlSetVariable(4*v+1, 0.0);
            }
        }
    }
}
Hard transition fn
Extraction: with removal
Current extraction results
Current parametrization results
Extraction overview
0. Parametrization
1. Computation of the integer grid map
The first step is the computation of the integer grid map, done by computing the intersections (in the sketch space) of each triangle with parametric integer isolines.
2. Trace segments and grid nodes
We start the tracing by looking at all triangles with estimated tangent directions (u or v). For each triangle t with direction d, we find all isoline segments in direction d inside t, and we add these segments to the list of seeds. Then, while the list of seeds is not empty, we pop a seed, mark the corresponding isoline segment as traced, and continue the tracing by placing more seeds in the adjacent triangles (sketched below). The tracing stops if the seed segment is not inside a triangle, or if the non-integer part of the translational component of the transition function between the triangles is bigger than a threshold.
Afterwards, all grid nodes lying on a traced segment are marked as traced.
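A minimal sketch of this loop (sketch only: the segment ids and the neighbor oracle are hypothetical; the real code walks triangle adjacencies and applies the two stopping tests above):
#include <functional>
#include <queue>
#include <unordered_set>
#include <vector>

using SegmentId = int;  // hypothetical id of an isoline segment

// Flood the isolines from the seed segments; the oracle returns the adjacent
// segments that pass the tests described above (segment lies inside a
// triangle, translational part of the transition function below threshold).
std::unordered_set<SegmentId> trace(
    const std::vector<SegmentId>& seeds,
    const std::function<std::vector<SegmentId>(SegmentId)>& admissible_neighbors) {
    std::unordered_set<SegmentId> traced;
    std::queue<SegmentId> todo;
    for (SegmentId s : seeds) todo.push(s);
    while (!todo.empty()) {
        SegmentId s = todo.front();
        todo.pop();
        if (!traced.insert(s).second) continue;  // already traced
        for (SegmentId n : admissible_neighbors(s)) todo.push(n);
    }
    return traced;
}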
3. Cluster grid nodes
The traced grid nodes are locally organized into clusters. Two grid nodes are in the same cluster if they correspond to the same grid node in the parametric space.
4. Construct the dual graph over traced segments
We construct a dual graph over the traced segments. In this graph, two traced segments are connected by an edge if they were extracted from the same isoline.
We also construct a subgraph of this graph (a partial dual graph) in which two traced segments are connected by an edge if they were extracted from the same isoline segment [i,i+1].
5. Disambiguate curves
[WORK IN PROGRESS]
Each connected component of the partial dual graph defines a base curve (corresponding to some isoline segment [i,i+1]). However, some of these curves might have incorrect topology (e.g. they form a Y, like the one in the screenshot) – these need to be split into multiple curves.
6. Correct curve parametrization
At this stage, we have extracted a bunch of line segments and grid nodes (step 2) and organized them into curves (steps 4, 5). Next, we assign a value t in [0,1] to each line segment in a given curve based on the computed global parametrization (step 0).
In this step, we make sure that this parametrization is consistent for each curve by flipping the value of the parameter (t := 1-t) for inconsistent segments (see the sketch below).
Corrected parametrization:
Flipped segments:
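A minimal sketch of the flip; the chaining test below is a hypothetical stand-in for the actual inconsistency check:
#include <cmath>
#include <cstddef>
#include <vector>

struct Seg { double t0, t1; };  // parameter values at the two endpoints

// Walk the segments of one curve in order; if a segment does not chain with
// its predecessor (shared endpoint has mismatched parameters), flip it via
// t := 1 - t.
void orient_curve(std::vector<Seg>& segs, double eps = 1e-9) {
    for (std::size_t k = 1; k < segs.size(); ++k) {
        if (std::abs(segs[k - 1].t1 - segs[k].t0) > eps) {
            double a = 1.0 - segs[k].t1;
            double b = 1.0 - segs[k].t0;
            segs[k].t0 = a;
            segs[k].t1 = b;
        }
    }
}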
7. Pixel assignment
At this point, the candidate curves are computed. Now, for each pixel, we find and store the set of nearest curves (primary, secondary, etc.)
8. Curve removal
[TODO: implement a scheme for iterative removal]
[TODO: update assigned pixels when a curve is removed]
Having computed the pixel-curve assignment, we proceed to remove curves that have weak support. The main criteria for removing a curve are the following:
- Number of pixels for which the curve is the nearest curve (primary pixels)
- Cost of removing this curve, measured as the aggregate increase in distance when its primary pixels are reassigned to the second nearest curve.
- Parametric coverage of the curve: maximal coverage is [0.0, 1.0]; if it’s smaller, e.g. [0.0, 0.2] or [0.7, 1.0], we know that the curve has an open end and is a good candidate for removal.
- Length of the curve chain to which the curve belongs: short chains are likely just noise.
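One way to combine these criteria is a single removal score; a minimal sketch (the weights are placeholders of mine, not tuned values):
#include <cstddef>

struct CurveStats {
    std::size_t num_primary_pixels;  // pixels whose nearest curve is this one
    double reassignment_cost;        // aggregate distance increase to the 2nd nearest curve
    double coverage;                 // covered fraction of [0,1] in parameter space
    double chain_length;             // length of the chain this curve belongs to
};

// Lower score = weaker support = better candidate for removal.
double removal_score(const CurveStats& c) {
    return 1.0 * c.num_primary_pixels
         + 1.0 * c.reassignment_cost
         + 10.0 * c.coverage
         + 0.1 * c.chain_length;
}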
9. Bezier fitting and simplification
[TODO: implement Bezier fitting]
[TODO: implement Bezier merging]
The above process essentially gives us the topology of the output curve network. In the last step, we fit a network of Bezier curves to this topology.
We can further simplify the result by iteratively merging adjacent Bezier curves.
Tangent direction: labeling
- picking the tangent direction in the frame field: using the root with the largest magnitude
- archi, same parametrization scale (m=100), varying triangulation density
- max area = 20, #V = 23512, #F = 46914, time = 924 sec (15 min 24 s)
- max area = 50, #V = 10838, #F = 21573, time = 268 sec (4 min 28 s)
- max area = 100, #V = 6095, #F = 12093, time = 93 sec (1 min 33 s)
Parametrization with timings
Adaptive triangulation
- archi example: 318k black pixels
- 318k constrained vertices: all black pixels (currently used, way too many!)
- 25k constrained vertices: from dense triangulation, max_area=10
- number of vertices can be easily controlled via the max_area param
- 20k constrained vertices: Poisson disk sampling
- generated with a naive algorithm; there are better options, e.g. BNOT [de Goes et al. 2012] or [Bridson 2007]
Puppy stats
- puppy
- ~29k black pixels
- ~490k trans fn constraints
- ~245k integer vars
- ~583k real vars
Debugging the parametrization (before/after)
Bezier fitting on dual graph
Extraction results
Extraction: different tracing thresholds
- showing different tracing thresholds
- construction of the dual graph still has some problems
- now exploring 1-neighborhood of a triangle
- try to experiment with anisotropy? (cf. car, finer scale)
comparison of different thresholds for tracing
car, overview
car, close-up to junctions
cloud
kitten
polyline
penta
car, finer scale
Current status of the Vectorization project
Following yesterday’s Skype meeting (Adrien, Misha, Tibor), below is a short summary of the current state of the vectorization project.
We still haven’t abandoned the idea of submitting to Eurographics, although the deadline (Tue Oct 9) looks a bit tight. I’ll do as much as I can in the next two weeks, then we can make a final decision.
To recap, here’s the pipeline:
- Estimation of tangent constraints from the sketch
- Computation of the frame field
- Computation of the parametrization
- Extraction of integer isolines and fitting of Bezier curves
Step 4 is the last missing piece. When that’s done, I can focus on improving the individual parts.
Results & comparison
We’re hoping the overall results will be comparable to Misha and Justin’s paper. Additionally, we will have:
- local feature-size control via a scalar field (computed automatically or specified manually)
[screenshots: input, gradient, result]
- hole filling, see 2018-07-25.html
Possible applications
- computing line drawings from bitmap images, similarly to [Kang et al. 2007]: apply edge detector, then our method, use simple images with few curves to demonstrate this
Extraction
UV-labelling
Tangent constraints are now computed using Sobel, and not the structure tensor + bilateral filter as before. This means that the constrained frame field direction cannot be determined by finding the frame root closest to the tangent direction (too much noise).
For now, the solution is to use Sobel for the frame field constraints and the structure tensor for uv-labelling. If we have some time in the future, we could try to solve this with a global approach such as graph cut.
Extraction, unwrapping
Extraction overview
Extraction
Possible issue in extraction
Improved extraction
Extraction: first steps
[MISSING] Comparison: frame/cross fields
Frame field: weights
Frame field computation
Frame field computed using Misha’s code
Parametrization
Parametrization
- parametrization – energy terms defined on the triangle soup:
- E_orient: cross field orientation (per-triangle)
- E_trans: transition functions (per-edge, local Gaussian distance weights)
- E_fit: fitting to integer values pre-computed for each triangle (per-vertex, constrained only)
- E_align: place adjacent constrained vertices on the same grid isoline (per-edge, not yet implemented)
- elephant, varying scale, trans_w=10, fit_w=0.1
Parametrization
Soft transition functions with distance weights
- alignment of constrained vertices to grid isolines via auxiliary integer variables
- global weight for STF: 1, 10, 100, 1000
alignment weight = 0.1
alignment weight = 1
- with distance field
alignment weight = 0.1
alignment weight = 1
- global weight 1, constrained vertices boosted by a factor of 100
Soft transition functions
[MISSING] May 25, 2018
Videocap 2018-05-25 17.10.00: Unwrapping triangles
Videocap 2018-05-25 17.10.01: Closeup
Parametrization energy terms
Soft transition functions
Notes from today’s meeting with David
- We have a working implementation of soft transition functions in libigl, for the moment still using soft alignment constraints. In this formulation, every triangle is considered separately (triangle soup) and (u,v) parameters of adjacent triangles are bound together by per-edge transition functions, introducing two new integer variables for translational DoFs.
- For the soft alignment constraints, we have introduced auxiliary integer variables. To get rid of such a variable in the new setup, we can fix it to a pre-computed integer value. The pre-computation will be done per triangle, by solving the local parameterization problem for this triangle only.
- The per-triangle computation needs to handle the special cases when one, two or even all three vertices of the triangle are constrained (black pixels). I will see if we can enumerate all of the possible cases in order to find a closed-form solution.
- The newly introduced integer variables (translational DoFs) can be used to contain the inter-triangle constraints that we want to impose on corners corresponding to the same vertex in the original mesh.
- If we just consider a single unconstrained triangle (all three vertices == white pixels), the parameterization problem is under-determined. For this reason, we should consider modifying the greedy solver so that the black pixels are processed first (seeds); we then proceed by iteratively adding adjacent pixels based on some scalar distance field (see the sketch after this list).
- The same scalar distance field can be used for weighting the importance of soft transition functions.
- David suggested an experiment with the following weighting scheme for soft transition constraints: high weights for black pixels and their k-nearest neighbours; low (zero) weights for all other vertices. This should provide an interesting comparison with Misha and Justin’s paper.
- For distance field computation, Adrien pointed out connection to line integral convolution, as used e.g. in [Kang et al. 2007] “Coherent line drawing”
- Adrien has some code to compute the distance transform, there is also the bwdist function in Matlab.
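A minimal sketch of that seeds-first ordering (the interface is hypothetical; the distance values would come from the scalar field mentioned above):
#include <algorithm>
#include <numeric>
#include <vector>

// Process constrained (black) pixels first as seeds, then the remaining
// vertices in order of increasing distance-field value.
std::vector<int> processing_order(const std::vector<bool>& is_black,
                                  const std::vector<double>& distance_field) {
    std::vector<int> order(is_black.size());
    std::iota(order.begin(), order.end(), 0);
    std::sort(order.begin(), order.end(), [&](int a, int b) {
        if (is_black[a] != is_black[b]) return is_black[a];  // seeds first
        return distance_field[a] < distance_field[b];        // then by distance
    });
    return order;
}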
As a side note, we no longer need to comb the cross field. It seems like we could use this fact to constrain even the vertices with low confidence from the structure tensor.
Parametrization computed on triangle soup
[MISSING] Param: Comparison of different weights
When computing the constrained parametrization, I’m using two types of soft constraints for black pixels.
- Alignment – distance of a black pixel from an integer isoline. Here we are using a priori knowledge of the direction (u or v) to which the pixel should be snapped.
- Proximity – distance between two black pixels (aligned to the same direction)
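Both can be written as simple quadratic penalties; a minimal sketch of the assumed form:
// Alignment pulls a constrained vertex onto a target integer isoline;
// proximity keeps two aligned black pixels on the same isoline (sketch,
// not the project code).
inline double alignment_cost(double u, int isoline, double w) {
    return w * (u - isoline) * (u - isoline);
}
inline double proximity_cost(double u_a, double u_b, double w) {
    return w * (u_a - u_b) * (u_a - u_b);
}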
Comparison of different weights
Next steps (Skype meeting)
- soft constraints – understand/repair the bug which gives 0s for auxiliary integer variables
- adjacency constraints – implement as soft constraints
- frame fields
- compare with Gurobi
- transition functions as soft constraints
- fit Bezier curves to the sketch via optimal transport: generalization of [DCAD2011], see also video of Pierre’s talk
- Moreover: think about the modification of the greedy solver, specifically how to process adjacency constraints. The constrained points (black pixels with high confidence) could be divided into clusters separated by non-constrained points (white pixels or black pixels with low confidence weights). The clusters could then be processed independently at the same time.
Optimization terminology
Vertices
- constrained: black pixels, high ST confidence
- unconstrained: black pixels, low ST confidence
- free: white pixels
Constraints
- isoline: snap constrained vertices to uv grid isolines
- adjacency: force adjacent constrained vertices to be close in the uv space (prevent “jumping” to other isolines)
Update
- found the bug which gave all 0s for the auxiliary integer variables (isoline term)
- adjacency term implemented as a soft constraint
- the solver now takes a long time to converge (possibly due to a high number of integer variables?)
- even with high weights for the adjacency term, the lines are broken: think about how to repair this
Two-step parametrization
Snapping only one coordinate to grid
Unwrapping experiments
Snapping both u and v to integers
Summary: 2D Vectorization (current state of our algorithm)
- Compute the Delaunay triangulation T of black pixels in the drawing (with set max area and min angle).
- Compute the cross field C over T, and its parametrization P. When computing the parametrization, the constrained vertices are partitioned into two groups – those lying on u-isolines or v-isolines. (This direction is computed using the combed cross field and the estimated constrained tangent directions. All is saved in the matrix Corners.)
- Extract the isolines from the parametrization by grouping together all vertices with constant u or v. (This corresponds to grouping the vertices by grid lines of the parametrization; see the sketch after this list.)
- For each u-isoline, sort points by the u coordinate. For each v-isoline, sort points by the v coordinate.
- Construct connectivity between black pixels using the sorted isolines. The connectivity is composed of the following edges:
- Edges of the sorted isolines
- Connections between isolines (green points). For each endpoint A of a (sorted) isoline i, find point B in isoline j such that A and B correspond to the same vertex V in the triangulation T. These connections correspond
- Intersections (magenta points). For each pair of isolines in orthogonal directions, add their intersection.
Constraining both coordinates to integers
- this seems promising
Extracting topology of the drawing from the parametrization
- idea is to trace curves in the parametrized mesh
- left: extracted topology, right: after Laplacian smoothing
- more examples with some issues
Videocap 2018-03-23 09.47.13: Laplacian smoothing of the extracted graph
- better estimation of orientations
- bilateral filter as in the BendFields paper
How to extract the curve network from the parametrization?
- Remove degenerate triangles from the parametrization, then apply QEx and extract the edges that we want.
- Apply QEx to the shifted parametrization, then somehow extract a "skeleton". The screenshots below are approximately what I mean. Works well for simple input, but for more complicated data it's not so great.
- We could somehow define an implicit function on the parametrization? Then extract what we need (I don't like this idea very much)
- Adrien's suggestion: trace the curve network in the parametrized mesh, then fit Bezier curves.
One thing I observed and I think will be important to consider is where the mesh is cut. Right now, using libigl, the cuts are defined implicitly by combing the field, and therefore I cannot control their placement. It seems to me we should avoid cutting along the contour and always cut in the direction which is orthogonal to the contour.
QEx experiments
I’m experimenting with quad mesh extraction using libQEx. For now I’ve decided to focus on this instead of soft snapping.
The problem I’m having is the following: The triangles at the contour degenerate in the computed parametrization since they are all snapped to the same grid isoline. Using these uvs, libQEx either fails (empty mesh), or outputs a quad mesh with holes. If I shift the uvs by +0.5 libQEx still complains a bit, but it works and there are no holes.
What is the solution to this? Do I need to pre-process the mesh before giving it to libQEx, so that there are no degeneracies in the parametrization?
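The +0.5 workaround itself is trivial; a sketch (assuming uvs stored per vertex):
#include <array>
#include <vector>

// Shift all uvs by +0.5 so that contour vertices no longer sit exactly on
// integer isolines (the workaround described above).
void shift_uvs(std::vector<std::array<double, 2>>& uv, double s = 0.5) {
    for (auto& p : uv) { p[0] += s; p[1] += s; }
}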
Summary, original uvs
###### Mesh Extractor Info ######
#vertices : 1559
#edges : 2963
#faces : 1405
#desired holes : 1
#undesired holes : 2
#isolated vertices removed: 683
Summary, shifted uvs
###### Mesh Extractor Info ######
#vertices : 1630
#edges : 3191
#faces : 1562
#desired holes : 1
#undesired holes : 0
#isolated vertices removed: 3
- left : quad mesh extracted with libQEx using the computed uvs (contours on grid isolines)
- right : quad mesh extracted with libQEx using shifted uvs (contours NOT on grid isolines)
Quad meshes extracted using libQEx
Full vs. adaptive triangulation
BEM octahedral fields in volumes video
Notes
- estimated orientations improved by applying a small Gaussian filter on the structure tensor (σ=1)
- parametrization computed using scaling factors as per-triangle stiffness
- tests with varying soft constraint weights s = 1, 0.1, 0.01, 0.1 and attenuation factors p = 1, 1, 1, 10
- global weight is multiplied by per-vertex weights (from structure tensor) computed as w = (1 - abs(L2) ./ max(abs(L2))).^p
uniform face to vertex conversion
angle-based face to vertex conversion
Global parametrization computed with libigl
Streamline tracing
- corrected streamline tracing
- results with unfiltered orientations (image blurred before applying Sobel, no bilateral filter afterwards)
- parametrization computed with libigl, see tutorial 505
Bug fix
Streamline visualisation
What is this?
Skype with David
[Knoppel et al. 2013]
Tests
- Matlab: implemented [Crane et al. 2010] “Trivial connections” for spherical and disk topologies
- ToDo: [Knoppel et al. 2013] “Globally optimal direction fields”
- Visualisation is slow, unless we use degenerate triangles as described here
- Tracing the vector field is also slow, since there was no pre-computation of the transformations needed for the discrete connection
- Also works for surfaces with boundary (angle defect decreased by π)
Feb 14, 2018
- Matlab: implemented discrete Levi-Civita
Feb 12, 2018
- Find a slot for presenting “Directional fields” to the group : Tuesday 06/03
- EG 2018 DC submission
Feb 09, 2018
- Skype with David
- reading [Vaxman et al. 2016] “Directional field synthesis, design, and processing” (EG STAR)
- implement [Knoppel et al. 2013] “Globally optimal direction fields” (SIGGRAPH)
- think about what tools to use for coding (Matlab, OpenFlipper, CGAL)
- Compiling OpenFlipper
- Mac : problems with GLUT version (using 2.1 instead of 3.2)
- Linux : problems with untested QT version
- Regular Skype meetings : Friday, 10am
Feb 01, 2018
- postdoc start