Meeting: ADMT

2D vectorization project

  • T: now working on SGP re-submission, Sec. 4
  • the figure with junctions will replace the current limitation figure (the one with the high-valence junction)
  • high-valence junctions: we can say that with a proper frame field (with automatically-detected or user-specified singularities), even high-valence junctions could be processed properly. We can also remark that high-valence junctions are quite rare in practice.

3D curves project

  • T: results starting to look good
  • D: we could compare our method to a baseline = alpha shapes + medial axis (a minimal CGAL sketch follows this list). This could work well for simple cases, but should fail for more complex ones.
  • A: this is analogous to 2D skeletonization methods
  • CGAL: 3D alpha shapes
  • CGAL: Skeletonization
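A minimal sketch of the alpha-shape half of such a baseline, assuming the input is a 3D point cloud sampled near the curves; a skeletonization / medial-axis step would then run on the alpha-shape boundary. The types and the optimal-alpha heuristic follow the CGAL Alpha_shapes_3 examples; everything else here is an assumption, not our method.

#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/Delaunay_triangulation_3.h>
#include <CGAL/Alpha_shape_3.h>
#include <CGAL/Alpha_shape_vertex_base_3.h>
#include <CGAL/Alpha_shape_cell_base_3.h>
#include <vector>

typedef CGAL::Exact_predicates_inexact_constructions_kernel K;
typedef CGAL::Alpha_shape_vertex_base_3<K>                  Vb;
typedef CGAL::Alpha_shape_cell_base_3<K>                    Cb;
typedef CGAL::Triangulation_data_structure_3<Vb, Cb>        Tds;
typedef CGAL::Delaunay_triangulation_3<K, Tds>              Delaunay;
typedef CGAL::Alpha_shape_3<Delaunay>                       Alpha_shape_3;

void alpha_shape_baseline(const std::vector<K::Point_3>& points) {
  // Alpha shape in the default REGULARIZED mode; pick the smallest alpha
  // for which the shape has a single solid component.
  Alpha_shape_3 as(points.begin(), points.end());
  Alpha_shape_3::Alpha_iterator opt = as.find_optimal_alpha(1);
  as.set_alpha(*opt);
  // The REGULAR boundary facets of 'as' give a surface on which a
  // skeletonization / medial-axis step (e.g. CGAL's mean curvature flow
  // skeletonization) could then be run.
}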

Stuff

2020-04-03

2019-09-16

Frame field: comparison of different approaches

  • linear regularizer + full mesh

  • linear regularizer + narrow band

  • nonlinear regularizer + full mesh

  • nonlinear regularizer + narrow band

  • mixed regularizer + narrow band

2019-09-13

bigguy: frame field over narrow band vs. full mesh

eagle1: frame field over narrow band vs. full mesh

smallcar1: mask on vs. off

no mask

with mask

bowtie

closeup on the problematic junction

mask?

smallcar2: comiso problems

mecha: various frame fields

narrow band only – wreg = 0.1, scale = 2

full mesh – wreg = 0.1, scale = 2

full mesh – wreg = 0.1, scale = 3

full mesh – wreg = 1.0, scale = 3

grandpa

elephant

full mesh, wreg = 0.1

full mesh, wreg = 1.0

narrow band, wreg = 0.1

narrow band, wreg = 1.0

more stuff

From meeting with David

Custom cut graph

Results on our inputs with [Simo-Serra et al. 2018]

2019-04-29

2019-04-28

2019-04-16

Stochastic extraction: current results (NO global constraints)

Puppy hair (ok)

initial graph + curves

Hippo cut

initial graph + curves

λ = 0.2

λ = 0.1 (oversimplified)

λ = 0.05 (junk curves)

λ = 0.08 (oversimplified)

Elephant

initial graph + curves

λ = 0.005

λ = 0.01

λ = 0.02

λ = 0.04

Man (bad initial topology)

Mouse

initial graph

λ = 0.05 (slightly worse)

λ = 0.05 (better)

λ = 0.02 (slightly worse)

λ = 0.02 (better)

λ = 0.01

λ = 0.02 – zoom in

Muten

initial graph

λ = 0.1

λ = 0.05

λ = 0.02

λ = 0.02 – zoom in

non-tangent edges removed, λ = 0.01

Sheriff

initial graph + curves

λ = 0.05

λ = 0.01

λ = 0.001

λ = 0.001 – zoom in

λ = 0.001 – param + simplified (spurious connections)

Puppy

initial graph

initial curves

λ = 0.01

λ = 0.02

λ = 0.05

λ = 0.01 – zoom in

initial with non-tangent edges removed in pre-processing

other challenges

  • bleeding

  • bad labeling, parametrization degeneracies

  • small frame field angles

2019-03-21

2019-02-27

Frame field regularization

We have two ideas for how to do the regularization.

  1. Use approach from the PolyVector paper, Section 4.2 Angle-bounded fields
  2. Use Misha’s approach with iterative re-weighting

I’ve tried 1., using fitting to a corrected field, but there are problems

  • The local (correction) step breaks the alignment to Sobel tangents – we don’t want to rotate both directions away from the bisector, but rather only the one that is farther from the tangent direction (makes sense?); a sketch of this correction follows the list
  • For higher fitting weights, the result is over-regularized and not really aligned to the corrected field – seems like a bug? (screenshots) The result is the same if I disable tangent alignment and set the smoothness weight to almost zero.
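A minimal sketch of the selective correction idea from the first bullet (not the PolyVector angle-bounded formulation itself): keep the frame direction closer to the Sobel tangent fixed, and rotate only the other one until the pair is separated by a target angle. The angle representation and theta_min are assumptions.

#include <algorithm>
#include <cmath>

// Line directions represented by angles in [0, pi).
static double wrap_line(double a) {
  a = std::fmod(a, M_PI);
  return a < 0.0 ? a + M_PI : a;
}

// Signed difference between two line angles, in (-pi/2, pi/2].
static double line_diff(double a, double b) {
  double d = wrap_line(a - b);
  return d > M_PI / 2.0 ? d - M_PI : d;
}

// Keep the direction closer to the tangent fixed; rotate only the other one
// away from it until the separation reaches theta_min (<= pi/2).
void correct_frame(double& a1, double& a2, double tangent, double theta_min) {
  bool a1_aligned =
      std::abs(line_diff(a1, tangent)) <= std::abs(line_diff(a2, tangent));
  double& kept  = a1_aligned ? a1 : a2;
  double& moved = a1_aligned ? a2 : a1;
  double d = line_diff(moved, kept);            // signed separation
  if (std::abs(d) >= theta_min) return;         // already separated enough
  double side = d >= 0.0 ? 1.0 : -1.0;
  moved = wrap_line(kept + side * theta_min);   // push away on the same side
}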

Approach 2 does not work well yet either.

The annoying thing is that we can have additional problems when interpolating the frame field from vertices to faces:

Influence of the frame field on the parametrization

  • It seems like the parametrization with hard transition functions only around black pixels is the way to go. However, the result is quite sensitive to the input frame field. We should spend some effort on improving that.
  • I’ve disabled the triangle area weighting in the frame field smoothness term. All triangles now have the same weight regardless of their size; this means the frame field is less constrained in the white areas, where the mesh is usually coarser.
  • The frame field regularization term should be set carefully – it orthogonalizes the frame field too much if set too high (it seems like 0.1 is already too much). As a result, we have issues capturing junctions with sharp angles such as the ones in the puppy hair dataset. On the other hand, without the regularization term, the frame field tends to collapse to a line field, which makes it unusable due to artefacts in the parametrization.
  • Do we really need the frame field everywhere? Why don’t we just compute it over black pixels as in Misha’s paper? Would that improve the parametrization?
  • Could we improve the frame field using some kind of iterative refinement?

Results for different values of w_regular

  • each row: frame field, uv computed without/with elimination of some integer variables; some rows also show extracted curves (very basic tracing)

  • w_regular = 0

  • w_regular = 0.01

  • w_regular = 0.02

  • w_regular = 0.05

  • w_regular = 0.1

Current parametrization results

PGP with snapping

  • rows: varying scale
  • cols: varying snapping weights (0, 1, 5, 10, 100)

Triangle: zoom in

  • uv not aligned with the frame field
  • How to make the alignment term more important near the strokes? Using distance weights for the alignment term?

Triangle: with / without curl correction

Triangle

Square

Pentagon

Circle


puppy_hair

parallel

Modifications to PGP

PGP extraction, QEx vs. Vorpaline

Post-deadline summary: Vectorization project

Results

Raw todo notes

  • visualise transition function weights
  • visualise labeling + streamlines
  • do lots of (automated) tests with different parameters, inputs
  • export pdf+image
  • extraction: assignment to curve segments
  • improve removal by a LOT
  • improve the fitting. Global formulation? (splines?)
  • re-assign the pixels when a curve is removed
  • improve treatment of Y junctions
  • figure out how to enable Y branching in the parametrization (Instant Meshes? PGP?)
  • Bezier merging, otherwise way too many curves. Use the information from the assignment to decide which curves should be merged?
  • don’t init the sparse triangulation via dense triangulation, use blue noise sampling (BNOT)
  • clean up the GUIs, clean up the code (simplify!)
  • optimize the code; merge stuff from parametrization and extraction
  • more stress tests (sharp angles, sequence of sharp angles, e.g. puppy hair)
  • comparison to vectorization papers and StrokeAggregator
  • fix all parameters to reasonable default values
  • use stroke width wherever possible
  • parallelize where possible (streamline tracing?)
  • paper: didactic figures (transition functions etc.)
  • extraction: insets
  • extraction: compute grid vertices in all triangles, not just the ones with a pair of isolines – take all nodes in some radius
  • extraction: when grouping grid vertices, make sure the radius is smaller than feature size! (this should be done already, verify)

Fitting

  • we could view the parametrization (+extraction) as assignment (once again) of pixels to param. curves. Then use the param. + position to fit a global network of splines.

PGP tests

Todo

  • singularities
  • extraction
  • (custom) curl removal
  • snapping

Mail from Nicolas Ray

It is not very difficult to constrain vertices. The code of Geogram already does that for constraining hard edges, but does not expose the possibility to constrain just a vertex. You can try to directly edit “mesh_PGP_2D.cpp” to add your constraints. It actually locks the extremities of edges considered as constrained, but you can edit the condition to include the vertices you want.

if(constrain_hard_edges) {
  FOR(c,mesh->facet_corners.nb()) {
    index_t cnstr = Internal::get_edge_constraints(mesh, c, B);
    index_t v = mesh->facet_corners.vertex(c);
    // Inverse R, because when the axis turns
    // clockwise, coordinates turn anticlockwise.
    index_t Rcc = Internal::inverse_R(R_fv[c]);
    if(cnstr & Internal::CNSTR_U) {
      if(Rcc == 0 || Rcc == 2) {
        nlLockVariable(4*v);
        nlSetVariable(4*v,1e4);
        nlLockVariable(4*v+1);
        nlSetVariable(4*v+1,0.0);
      } else {
        nlLockVariable(4*v+2);
        nlSetVariable(4*v+2,1e4);
        nlLockVariable(4*v+3);
        nlSetVariable(4*v+3,0.0);
      }
    }
    if(cnstr & Internal::CNSTR_V) {
      if(Rcc == 0 || Rcc == 2) {
        nlLockVariable(4*v+2);
        nlSetVariable(4*v+2,1e4);
        nlLockVariable(4*v+3);
        nlSetVariable(4*v+3,0.0);
      } else {
        nlLockVariable(4*v);
        nlSetVariable(4*v,1e4);
        nlLockVariable(4*v+1);
        nlSetVariable(4*v+1,0.0);
      }
    }
  }
}

Hard transition fn

Extraction: with removal

Current extraction results

Puppy (WITH short curve removal)

Cloud

Archi

Mecha

Cartoon

Car

Current parametrization results

For the paper

Extraction overview

0. Parametrization

1. Computation of the integer grid map

The first step is the computation of the integer grid map, done by computing the intersections (in the sketch space) of each triangle with parametric integer isolines.
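A minimal sketch of this intersection step for a single triangle, assuming per-corner uv coordinates and sketch-space positions are available; dir selects whether we intersect with u = k or v = k isolines. This illustrates the idea, it is not the actual implementation.

#include <Eigen/Dense>
#include <algorithm>
#include <cmath>
#include <utility>
#include <vector>

typedef std::pair<Eigen::Vector2d, Eigen::Vector2d> Segment;

// Intersections of one triangle with the integer isolines of coordinate
// 'dir' (0 = u, 1 = v). uv[] are the per-corner parameters, p[] the
// per-corner positions in sketch space.
std::vector<Segment> isoline_segments(const Eigen::Vector2d uv[3],
                                      const Eigen::Vector2d p[3], int dir) {
  std::vector<Segment> out;
  double lo = std::min({uv[0][dir], uv[1][dir], uv[2][dir]});
  double hi = std::max({uv[0][dir], uv[1][dir], uv[2][dir]});
  for (int k = (int)std::ceil(lo); k <= (int)std::floor(hi); ++k) {
    std::vector<Eigen::Vector2d> hits;          // isoline/edge intersections
    for (int e = 0; e < 3; ++e) {
      double a = uv[e][dir], b = uv[(e + 1) % 3][dir];
      if ((a - k) * (b - k) > 0.0 || a == b) continue;   // edge not crossed
      double t = (k - a) / (b - a);
      hits.push_back((1.0 - t) * p[e] + t * p[(e + 1) % 3]);
    }
    if (hits.size() >= 2) out.push_back(Segment(hits[0], hits[1]));
  }
  return out;
}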

2. Trace segments and grid nodes

We start the tracing by looking at all triangles with estimated tangent directions (u or v). For each triangle t with direction d, we find all isoline segments in the direction d in the triangle t, and we add these isoline segments to the list of seeds. Then, while the list of seeds is not empty, we pop a seed, mark the corresponding isoline segment as traced, and continue the tracing by placing more seeds in the adjacent triangles. The tracing stops if the seed segment is not inside a triangle, or if the non-integer part of the translational part of the transition function between triangles is larger than a threshold.

Afterwards, all grid nodes lying on a traced segment are marked as traced.

3. Cluster grid nodes

The traced grid nodes are locally organized into clusters. Two grid nodes are in the same cluster if they correspond to the same grid node in the parametric space.
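A minimal sketch of the clustering, under the assumption that each traced grid node already carries its integer grid coordinates expressed in a common chart (e.g. after combing); nodes with the same integer coordinates form one cluster.

#include <map>
#include <utility>
#include <vector>

struct GridNode { int i, j; /* + sketch-space position, triangle, ... */ };

// Group traced grid nodes that map to the same integer grid point (i, j).
std::vector<std::vector<int>> cluster_grid_nodes(const std::vector<GridNode>& nodes) {
  std::map<std::pair<int, int>, std::vector<int>> buckets;
  for (int n = 0; n < (int)nodes.size(); ++n)
    buckets[{nodes[n].i, nodes[n].j}].push_back(n);
  std::vector<std::vector<int>> clusters;
  for (const auto& kv : buckets) clusters.push_back(kv.second);
  return clusters;
}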

4. Construct the dual graph over traced segments

We construct a dual graph over the traced segments. In this graph, two traced segments are connected by an edge if they were extracted from the same isoline.

We also construct a subgraph of this graph (a partial dual graph) in which two traced segments are connected by an edge if they were extracted from the same isoline segment [i,i+1].
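A minimal sketch of both groupings, assuming each traced segment stores its direction, the integer index k of its isoline, and the integer span index i of the isoline segment [i,i+1] it covers (all in a common chart). With these keys, the connected components of the dual graph and of the partial dual graph are obtained by simple bucketing.

#include <map>
#include <tuple>
#include <vector>

struct TracedSegment { int dir; int k; int i; };   // direction, isoline, span [i,i+1]

typedef std::map<std::tuple<int, int>, std::vector<int>>      DualComponents;
typedef std::map<std::tuple<int, int, int>, std::vector<int>> PartialComponents;

// Bucket segments by isoline (full dual graph) and by isoline segment
// [i,i+1] (partial dual graph); each bucket is one connected component.
void build_dual_components(const std::vector<TracedSegment>& segs,
                           DualComponents& full, PartialComponents& partial) {
  for (int s = 0; s < (int)segs.size(); ++s) {
    full[std::make_tuple(segs[s].dir, segs[s].k)].push_back(s);
    partial[std::make_tuple(segs[s].dir, segs[s].k, segs[s].i)].push_back(s);
  }
}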

5. Disambiguate curves

[WORK IN PROGRESS]

Each connected component of the partial dual graph defines a base curve (that corresponds to some isoline segment [i,i+1]). However, some of these curves might have incorrect topology (e.g. they form a Y like the one in the screenshot) – these need to be split into multiple curves.

6. Correct curve parametrization

At this stage, we have extracted a bunch of line segments and grid nodes (step 2) and we have organized them into curves (steps 4, 5). Next, we assign a value t in [0,1] to each line segment in a given curve based on the computed global parametrization (step 0).

In this step, we make sure that this parametrization is consistent for each curve by flipping the value of the parameter (t := 1-t) for inconsistent segments.

Corrected parametrization:

Flipped segments:

7. Pixel assignment

At this point, the candidate curves are computed. Now, for each pixel, we find and store the set of nearest curves (primary, secondary, etc.)
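A minimal sketch of this assignment, assuming the candidate curves are available as polylines and that we keep, per pixel, the indices of its K nearest curves sorted by distance (primary, secondary, ...); K and the brute-force search are assumptions.

#include <Eigen/Dense>
#include <algorithm>
#include <limits>
#include <utility>
#include <vector>

static double point_segment_dist(const Eigen::Vector2d& p,
                                 const Eigen::Vector2d& a,
                                 const Eigen::Vector2d& b) {
  Eigen::Vector2d ab = b - a;
  double t = ab.squaredNorm() > 0.0 ? (p - a).dot(ab) / ab.squaredNorm() : 0.0;
  t = std::min(1.0, std::max(0.0, t));
  return (p - (a + t * ab)).norm();
}

// Indices of the K curves (given as polylines) closest to one pixel,
// sorted by distance: primary, secondary, ...
std::vector<int> nearest_curves(const Eigen::Vector2d& pixel,
                                const std::vector<std::vector<Eigen::Vector2d>>& curves,
                                int K) {
  std::vector<std::pair<double, int>> dist;
  for (int c = 0; c < (int)curves.size(); ++c) {
    double d = std::numeric_limits<double>::max();
    for (std::size_t s = 0; s + 1 < curves[c].size(); ++s)
      d = std::min(d, point_segment_dist(pixel, curves[c][s], curves[c][s + 1]));
    dist.push_back({d, c});
  }
  std::sort(dist.begin(), dist.end());
  std::vector<int> out;
  for (int k = 0; k < K && k < (int)dist.size(); ++k) out.push_back(dist[k].second);
  return out;
}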

8. Curve removal

[TODO : implement a scheme for iterative removal]
[TODO : update assigned pixels when a curve is removed]

Having computed the pixel-curve assignment, we proceed to remove curves that have weak support. The main criteria for removing a curve are the following (a scoring sketch follows the list):

  1. Number of pixels for which the curve is the nearest curve (primary pixels)
  2. Cost of removing this curve, measured as the aggregate increase in distance when its primary pixels are reassigned to their second-nearest curve.
  3. Parametric coverage of the curve: maximal coverage is [0.0, 1.0], if it’s smaller, e.g. [0.0, 0.2] or [0.7, 1.0], we know that the curve has an open end, and is a good candidate for removal.
  4. Length of the curve chain to which the curve belongs: short chains are likely just noise.
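A minimal sketch combining the four criteria into a single removal score; the additive mixing, the weight lambda, and the normalizations are assumptions – the actual scheme may combine them differently (e.g. thresholds or iterative removal).

#include <algorithm>

struct CurveStats {
  int    n_primary_pixels;            // 1. pixels whose nearest curve is this one
  double removal_cost;                // 2. added distance if its pixels move to the 2nd-nearest curve
  double coverage_lo, coverage_hi;    // 3. covered parametric interval within [0,1]
  double chain_length;                // 4. length of the chain the curve belongs to
};

// Higher score = better candidate for removal (assumption: additive mix).
double removal_score(const CurveStats& c, double lambda) {
  double support     = c.n_primary_pixels + lambda * c.removal_cost;
  double coverage    = std::max(0.0, c.coverage_hi - c.coverage_lo);
  double open_end    = 1.0 - coverage;            // open-ended curves score higher
  double short_chain = 1.0 / (1.0 + c.chain_length);
  return open_end + short_chain - support;
}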

9. Bezier fitting and simplification

[TODO : implement Bezier fitting]
[TODO : implement Bezier merging]

The above process essentially gives us the topology of the output curve network. In the last step, we fit a network of Bezier curves to this topology.

We can further simplify the result by iteratively merging adjacent Bezier curves.
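A minimal sketch of the per-curve fitting, assuming one cubic Bezier per curve with endpoints fixed to the first and last samples and chord-length parameters; the two interior control points come from a small linear least-squares solve (Eigen). Merging would then refit a single cubic over two adjacent curves and accept the merge when the fitting error stays below a tolerance.

#include <Eigen/Dense>
#include <vector>

// Least-squares cubic Bezier fit to ordered 2D samples q, with endpoints
// fixed to q.front() / q.back() and chord-length parameters. Returns the
// four control points.
std::vector<Eigen::Vector2d> fit_cubic_bezier(const std::vector<Eigen::Vector2d>& q) {
  const int n = (int)q.size();
  if (n < 2) return {};
  std::vector<double> t(n, 0.0);                       // chord-length parameters
  for (int i = 1; i < n; ++i) t[i] = t[i - 1] + (q[i] - q[i - 1]).norm();
  for (int i = 1; i < n; ++i) t[i] /= t[n - 1];

  Eigen::Vector2d P0 = q.front(), P3 = q.back();
  Eigen::Matrix2d A = Eigen::Matrix2d::Zero();         // normal equations
  Eigen::Matrix2d X = Eigen::Matrix2d::Zero();         // rhs; rows = P1, P2
  for (int i = 0; i < n; ++i) {
    double s = 1.0 - t[i];
    double b0 = s * s * s, b1 = 3 * s * s * t[i];
    double b2 = 3 * s * t[i] * t[i], b3 = t[i] * t[i] * t[i];
    Eigen::Vector2d r = q[i] - b0 * P0 - b3 * P3;      // residual w.r.t. fixed ends
    A(0, 0) += b1 * b1;  A(0, 1) += b1 * b2;
    A(1, 0) += b2 * b1;  A(1, 1) += b2 * b2;
    X.row(0) += b1 * r.transpose();
    X.row(1) += b2 * r.transpose();
  }
  Eigen::Matrix2d P12 = A.ldlt().solve(X);             // rows: P1, P2
  return { P0, P12.row(0).transpose(), P12.row(1).transpose(), P3 };
}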

Tangent direction: labeling

  • picking the tangent direction in the frame field : using the root with the largest magnitude

  • archi, same parametrization scale (m=100), varying triangulation density

  • max area = 20, #V = 23512, #F = 46914, time = 924 sec (15 min 24)

  • max area = 50, #V = 10838, #F = 21573, time = 268 sec (5 min 28)

  • max area = 100, #V = 6095, #F = 12093, time = 93 sec (1 min 33)

Parametrization with timings

Adaptive triangulation

  • archi example: 318k black pixels

  • 318k constrained vertices: all black pixels (currently used, way too many!)

  • 25k constrained vertices: from dense triangulation, max_area=10
  • number of vertices can be easily controlled via the max_area param

Puppy stats

  • puppy
  • ~29k black pixels
  • ~490k trans fn constraints
  • ~245k integer vars
  • ~583k real vars

view extracted segments

Debugging the parametrization (before/after)

Bezier fitting on dual graph

Extraction results

dashed circles

Extraction: different tracing thresholds

  • showing different tracing thresholds
  • construction of the dual graph still has some problems
  • now exploring 1-neighborhood of a triangle
  • try to experiment with anisotropy? (cf. car, finer scale)

comparison of different thresholds for tracing

car, overview

car, close-up to junctions

cloud

kitten

polyline

penta

car, finer scale

Current status of the Vectorization project

Following yesterday’s Skype meeting (Adrien, Misha, Tibor), below is a short summary of the current state of the vectorization project.

We still haven’t abandoned the idea of submitting to Eurographics, although the deadline (Tue Oct 9) looks a bit tight. I’ll do as much as I can in the next two weeks, then we can make a final decision.

To recap, here’s the pipeline:

  1. Estimation of tangent constraints from the sketch
  2. Computation of the frame field
  3. Computation of the parametrization
  4. Extraction of integer isolines and fitting of Bezier curves

Step 4 is the last missing piece. When that’s done, I can focus on improving the individual parts.

Results & comparison

We’re hoping the overall results will be comparable to Misha and Justin’s paper. Additionally, we will have:

  • local feature-size control via a scalar field (computed automatically or specified manually)

[images: input, gradient, result (2018-09-11)]

Possible applications

  • computing line drawings from bitmap images, similarly to [Kang et al. 2007]: apply an edge detector, then our method; use simple images with few curves to demonstrate this

Extraction

[image: IMG_20180906_182147]

UV-labelling

Tangent constraints are now computed using Sobel, and not the structure tensor + bilateral filter as before. This means that the constrained frame field direction cannot be determined by finding the frame root closest to the tangent direction (too much noise).

For now, the solution is to use Sobel for the frame field constraints, and the structure tensor for uv-labelling. If we have some time in the future, we could try to solve this using a global approach such as a graph cut.
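A minimal sketch of the Sobel-based tangent estimate, assuming a grayscale raster with row-major indexing; the tangent is taken orthogonal to the image gradient, and pixels with a negligible gradient are left unconstrained. The image layout and the epsilon are assumptions.

#include <algorithm>
#include <cmath>
#include <vector>

struct Vec2 { double x, y; };

// Tangent estimate at pixel (x, y): direction orthogonal to the Sobel
// gradient of the grayscale image img (row-major, size w x h).
Vec2 sobel_tangent(const std::vector<double>& img, int w, int h, int x, int y) {
  auto I = [&](int i, int j) {                          // clamped access
    i = std::max(0, std::min(w - 1, i));
    j = std::max(0, std::min(h - 1, j));
    return img[j * w + i];
  };
  double gx = (I(x + 1, y - 1) + 2 * I(x + 1, y) + I(x + 1, y + 1))
            - (I(x - 1, y - 1) + 2 * I(x - 1, y) + I(x - 1, y + 1));
  double gy = (I(x - 1, y + 1) + 2 * I(x, y + 1) + I(x + 1, y + 1))
            - (I(x - 1, y - 1) + 2 * I(x, y - 1) + I(x + 1, y - 1));
  double n = std::sqrt(gx * gx + gy * gy);
  if (n < 1e-12) return {0.0, 0.0};                     // flat region: leave unconstrained
  return {-gy / n, gx / n};                             // gradient rotated by 90 degrees
}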

Extraction, unwrapping

Extraction overview

More examples

Greedy Bezier merging

Example: hole filling

Extraction

Possible issue in extraction

  • possible issue in extraction: when circulating around a vertex near a frame field singularity, we might connect to the orthogonal direction
  • this should not happen with a properly chosen direction in the green triangle (currently continuing in both intersected edges)

Improved extraction

Extraction: first steps

car

elephant

car (not trimmed)

detail scalar field (Misha)

[removed the images]

[MISSING] Comparison: frame/cross fields

comparison frame fields and cross fields

Frame field: weights

Frame field computation

  • frame field, harmonic propagation:

  • frame field, computation directly on the mesh (different alignment weights)

  • hi-res:

Frame field computed using Misha’s code

Parametrization

  • fitting w = 0.1
  • trans fn w = 10
  • still no alignment
  • optimization time ranges from 25min (car, coarsest) to 10h 2min (mecha, finest)

Parametrization

  • parametrization – energy terms defined on the triangle soup:
    1. E_orient : cross field orientation (per-triangle)
    2. E_trans : transition functions (per-edge, local Gaussian distance weights)
    3. E_fit : fitting to integer values pre-computed for each triangle (per-vertex, constrained only)
    4. E_align : place adjacent constrained vertices on the same grid isoline (per-edge, not yet implemented)
  • elephant, varying scale, trans_w=10, fit_w=0.1

Parametrization

  • fitting penalty=0.1
  • transition functions penalty=10 (+local Gaussian distance weighting)
  • optimization took 27’11’’

  • varying scale

Soft transition functions with distance weights

  • soft transition functions with distance weights
  • alignment of constrained vertices to grid isolines via auxiliary integer variables
  • global weight for STF: 1, 10, 100, 1000

alignment weight = 0.1

alignment weight = 1

  • with distance field

alignment weight = 0.1

alignment weight = 1

  • global weight 1, constrained vertices boosted by a factor of 100

Soft transition functions

  • soft transition functions (no alignment to constraints) with different weighting schemes
  • the bottom scheme takes a long time to optimize (11’30’’ for the car, 5’00’’ for circles), all the others are instantaneous

car

circles


  • with alignment constraints, took 15’00’’

[MISSING] May 25, 2018

Videocap 2018-05-25 17.10.00: Unwrapping triangles

Videocap 2018-05-25 17.10.01: Closeup

Parametrization energy terms

Soft transition functions

  • soft transition functions, high uniform weights
  • optimization took 9′54″ (circle) and 17′25″ (car)
  • circle: QEx fails to recover the quad mesh

Notes from today’s meeting with David

  1. We have a working implementation of soft transition functions in libigl, for the moment still using soft alignment constraints. In this formulation, every triangle is considered separately (triangle soup) and the (u,v) parameters of adjacent triangles are bound together by per-edge transition functions, introducing two new integer variables for the translational DoFs (a formula sketch follows this list).
  2. For the soft alignment constraints, we have introduced auxiliary integer variables. To get rid of such a variable in the new setup, we can fix it to a pre-computed integer value. The pre-computation will be done per triangle, by solving the local parameterization problem for this triangle only.
  3. The per-triangle computation needs to handle the special cases when one, two or even all three vertices of the triangle are constrained (black pixels). I will see if we can enumerate all of the possible cases in order to find a closed-form solution.
  4. The newly introduced integer variables (translational DoFs) can be used to contain the inter-triangle constraints that we want to impose on corners corresponding to the same vertex in the original mesh.
  5. If we just consider a single triangle, which is unconstrained (all three vertices == white pixels), the parameterization problem is under-determined. For this reason, we should consider modifying the greedy solver so that the black pixels are processed first (seeds); we then proceed by iteratively adding adjacent pixels based on some scalar distance field.
  6. The same scalar distance field can be used for weighting the importance of soft transition functions.
  7. David suggested an experiment with the following weighting scheme for soft transition constraints: high weights for black pixels and their k-nearest neighbours; low (zero) weights for all other vertices. This should provide an interesting comparison with Misha and Justin’s paper.
  8. For distance field computation, Adrien pointed out a connection to line integral convolution, as used e.g. in [Kang et al. 2007] “Coherent line drawing”
  9. Adrien has some code to compute the distance transform, there is also the bwdist function in Matlab.
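As I understand item 1, the per-edge soft transition term is along these lines (a sketch in my notation, not the exact implementation: R is the 90° rotation, k_e the rotational jump across edge e, t_e the two new integer translational DoFs, w_e a per-edge weight):

E_trans = \sum_{e=(i,j)} w_e \, \| R^{k_e} u_i + t_e - u_j \|^2, \qquad k_e \in \{0,1,2,3\}, \; t_e \in \mathbb{Z}^2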

As a side note, we no longer need to comb the cross field. It seems like we could use this fact to constrain even the vertices with low confidence from the structure tensor.

Parametrization computed on triangle soup

  • without transition functions (+ soft alignment constraints)

  • with soft transition functions (+ soft alignment constraints)
  • uniform weights
  • optimization took 5’ 58’’

  • next step: adaptive weights based on distance from the black pixels

[MISSING] Param: Comparison of different weights

When computing the constrained parametrization, I’m using two types of soft constraints for black pixels (sketched after the list).

  1. Alignment – distance of a black pixel from an integer isoline. Here we are using a priori knowledge of the direction (u or v) to which the pixel should be snapped.
  2. Proximity – distance of two black pixels (aligned to the same direction)
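A sketch of the two terms in my notation (assumptions: d_i is the coordinate, u_i or v_i, that pixel i should snap to; k_i is its auxiliary integer variable; the sums run over constrained pixels and over adjacent pairs aligned to the same direction):

E_align = \sum_i w_i (d_i - k_i)^2, \; k_i \in \mathbb{Z} \qquad\qquad E_prox = \sum_{(i,j)} w_{ij} (d_i - d_j)^2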

Comparison of different weights

car

circles

Next steps (Skype meeting)

  1. soft constraints – understand/repair the bug which gives 0s for auxiliary integer variables
  2. adjacency constraints – implement as soft constraints
  3. frame fields
  4. compare with Gurobi
  5. transition functions as soft constraints
  6. fit Bezier curves to the sketch via optimal transport: generalization of [DCAD2011], see also video of Pierre’s talk
  • Moreover: think about the modification of the greedy solver, specifically how to process adjacency constraints. The constrained points (black pixels with high confidence) could be divided into clusters separated by non-constrained points (white pixels or black pixels with low confidence weights). The clusters could then be processed independently at the same time.

Optimization terminology

Vertices

  • constrained: black pixels, high ST confidence
  • unconstrained: black pixels, low ST confidence
  • free: white pixels

Constraints

  • isoline: snap constrained vertices to uv grid isolines
  • adjacency: force adjacent constrained vertices to be close in the uv space (prevent “jumping” to other isolines)

Update

  • found the bug which gave all 0s using auxiliary integer variables (isoline term)
  • adjacency term implemented as a soft constraint
  • the solver now takes a long time to converge (possibly due to a high number of integer variables?)
  • even with high weights for the adjacency term, the lines are broken: think about how to repair this

Two-step parametrization

  1. use hard constraints: snap black pixels to integer iso-lines
  2. use soft constraints: use the integer coordinates computed in the first step as soft constraints

  • car (original input)

  • car (50% downsampled input)

Scale tests

  • added new constraint to prevent edge splitting

Fandisk

Snapping only one coordinate to grid

  • snapping only one coordinate to grid
  • snap both if the constraint direction is not clear (around junctions)
  • fitted Bezier curves, trying to implement C0 constraints

Unwrapping experiments

Snapping both u and v to integers

  • snapping both u and v to integers

  • computed partitions:

  • cubic Bezier curves, computed independently for each region

ToDo

  1. global constraints for Bezier curves (G0 & G1 at intersections)
  2. iteration (update parameters, augment degree if needed)

Varying degree of the Bezier curves

Summary: 2D Vectorization (current state of our algorithm)

  1. Compute the Delaunay triangulation T of black pixels in the drawing (with prescribed max area and min angle).

  2. Compute the cross field C over T, and its parametrization P. When computing the parametrization, the constrained vertices are partitioned into two groups – those lying on u-isolines or v-isolines. (This direction is computed using the combed cross field and the estimated constrained tangent directions. All is saved in the matrix Corners.)

  3. Extract the isolines from the parametrization by grouping together all vertices with constant u or v. (This corresponds to grouping the vertices by grid lines of the parametrization.)

  4. For each u-isoline, sort points by the u coordinate. For each v-isoline, sort points by the v coordinate.

  5. Construct connectivity between black pixels using the sorted isolines (a sketch follows the list). The connectivity is composed of the following edges:

    • Edges of the sorted isolines
    • Connections between isolines (green points). For each endpoint A of a (sorted) isoline i, find point B in isoline j such that A and B correspond to the same vertex V in the triangulation T. These connections correspond to the green points in the figure.
    • Intersections (magenta points). For each pair of isolines in orthogonal directions, add their intersection.
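A minimal sketch of steps 3–5 (isoline edges only; the green/magenta junction points are omitted), assuming per-vertex (u, v) from the parametrization and a tolerance for deciding that a coordinate sits on an integer isoline; which coordinate each isoline is sorted along is my assumption of the intended convention.

#include <algorithm>
#include <cmath>
#include <map>
#include <utility>
#include <vector>

// Bucket vertices whose u (resp. v) coordinate is close to an integer, sort
// each bucket along the other coordinate, and link consecutive vertices.
std::vector<std::pair<int, int>> connect_along_isolines(const std::vector<double>& u,
                                                        const std::vector<double>& v,
                                                        double tol) {
  std::vector<std::pair<int, int>> edges;
  // key = (0 for a u-integer isoline / 1 for a v-integer isoline, integer value)
  std::map<std::pair<int, int>, std::vector<int>> isolines;
  for (int i = 0; i < (int)u.size(); ++i) {
    if (std::abs(u[i] - std::round(u[i])) < tol)
      isolines[{0, (int)std::lround(u[i])}].push_back(i);
    if (std::abs(v[i] - std::round(v[i])) < tol)
      isolines[{1, (int)std::lround(v[i])}].push_back(i);
  }
  for (auto& kv : isolines) {
    std::vector<int>& idx = kv.second;
    const std::vector<double>& along = (kv.first.first == 0) ? v : u;  // varying coordinate
    std::sort(idx.begin(), idx.end(),
              [&](int a, int b) { return along[a] < along[b]; });
    for (std::size_t k = 0; k + 1 < idx.size(); ++k)
      edges.push_back({idx[k], idx[k + 1]});
  }
  return edges;
}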

Constraining both coordinates to integers

  • this seems promising

Extracting topology of the drawing from the parametrization

  • extracting topology of the drawing from the parametrization
  • idea is to trace curves in the parametrized mesh
  • left: extracted topology, right: after Laplacian smoothing

  • more examples with some issues

Videocap 2018-03-23 09.47.13: Laplacian smoothing of the extracted graph


How to extract the curve network from the parametrization?

  1. Remove degenerate triangles from the parametrization, then apply QEx and extract the edges that we want.
  2. Apply QEx to the shifted parametrization, then somehow extract a "skeleton". The screenshots below are approximately what I mean. Works well for simple input, but for more complicated data it's not so great.
  3. We could somehow define an implicit function on the parametrization? Then extract what we need (I don't like this idea very much)
  4. Adrien's suggestion: trace the curve network in the parametrized mesh, then fit Bezier curves.

One thing I observed, and which I think will be important to consider, is where the mesh is cut. Right now, using libigl, the cuts are defined implicitly by combing the field, and therefore I cannot control their placement. It seems to me we should avoid cutting along the contour and always cut in the direction orthogonal to the contour.

QEx experiments

I’m experimenting with quad mesh extraction using libQEx. For now I’ve decided to focus on this instead of soft snapping.

The problem I’m having is the following: The triangles at the contour degenerate in the computed parametrization since they are all snapped to the same grid isoline. Using these uvs, libQEx either fails (empty mesh), or outputs a quad mesh with holes. If I shift the uvs by +0.5 libQEx still complains a bit, but it works and there are no holes.

What is the solution to this? Do I need to pre-process the mesh before giving it to libQEx? So that there are no degeneracies in the parametrization?
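A minimal sketch of the +0.5 workaround, assuming the per-corner parameters are stored in a #corners × 2 matrix before being handed to libQEx; the shift moves the contour off the integer grid isolines so the contour triangles no longer collapse onto a single isoline.

#include <Eigen/Dense>

// Shift all per-corner parameters by +0.5 before extraction. The same
// translation in every chart stays compatible with integer transition
// functions, but moves the contour off the integer grid isolines.
void shift_uvs(Eigen::MatrixXd& UV) {       // UV: #corners x 2
  UV.array() += 0.5;
  // ... then run the usual libQEx extraction on the shifted UVs.
}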


Summary, original uvs

###### Mesh Extractor Info ######
#vertices                 : 1559
#edges                    : 2963
#faces                    : 1405
#desired holes            : 1
#undesired holes          : 2
#isolated vertices removed: 683

Summary, shifted uvs

###### Mesh Extractor Info ######
#vertices                 : 1630
#edges                    : 3191
#faces                    : 1562
#desired holes            : 1
#undesired holes          : 0
#isolated vertices removed: 3
  • left : quad mesh extracted with libQEx using the computed uvs (contours on grid isolines)
  • right : quad mesh extracted with libQEx using shifted uvs (contours NOT on grid isolines)

Quad meshes extracted using libQEx

  • varying edge length

Full vs. adaptive triangulation

Full triangulation

w=0.001, 0.01, 0.1, 1, 10, 100

Adaptive triangulation

w=0.001, 0.01, 0.1, 1, 10, 100

Parametrization : varying edge length (full triangulation)

Parametrization : varying edge length (adaptive triangulation)

BEM octahedral fields in volumes video

  • BEM octahedral fields in volumes video

Notes

  • estimated orientations improved by applying a small Gaussian filter on the structure tensor (σ=1)
  • parametrization computed using scaling factors as per-triangle stiffness
  • tests with varying soft constraint weight s = 1, 0.1, 0.01, 0.1 and attenuation factors p = 1, 1, 1, 10
  • global weight is multiplied by per-vertex weights (from structure tensor) computed as w = (1 - abs(L2) ./ max(abs(L2))).^p

uniform face to vertex conversion

angle-based face to vertex conversion

Global parametrization computed with libigl

  • hard snapping of curves to integer locations

circle

chair

cloud

branch

car

Streamline tracing

  • corrected streamline tracing
  • results with unfiltered orientations (image blurred before applying Sobel, no bilateral filter afterwards)

Bug fix

  • Found a minor bug in structure tensor implementation
  • Cross fields with corrected constrained orientations:

Streamline visualisation

  • my quick matlab implementation

What is this?

  • Experiments with parametrization
  • Complex argument, adaptive triangulation

  • Complex argument, dense triangulation

  • Elephant

Skype with David

  • Corrected the implementation of [Knoppel et al. 2013]
  • Results on the bunny with/without scaling:

  • Images: structure tensor, Gaussian filtering, adaptive DT via Shewchuk’s triangle (min angle = 30 deg)

  • Cloud (with scaling factor)

  • Elephant from [Bessmeltsev & Solomon 2018] (normalized)

[Knoppel et al. 2013]

  • Doing the 2D case of [Knoppel et al. 2013]
  • Does not seem right…

  • With trivial connections:

Tests

  • Matlab: implemented [Crane et al. 2010] “Trivial connections” for spherical and disk topologies
  • ToDo: [Knoppel et al. 2013] “Globally optimal direction fields”
  • Visualisation is slow, unless we use degenerate triangles as described here
  • Tracing the vector field is also slow, since there is no pre-computation of the transformations needed for the discrete connection

  • Also works for surfaces with boundary (angle defect decreased by π)

Feb 14, 2018

  • Matlab: implemented discrete Levi-Civita

Feb 12, 2018

  • Find a slot for presenting “Directional fields” to the group : Tuesday 06/03
  • EG 2018 DC submission

Feb 09, 2018

  • Skype with David
    • reading [Vaxman et al. 2016] “Directional field synthesis, design, and processing” (EG STAR)
    • implement [Knoppel et al. 2013] “Globally optimal direction fields” (SIGGRAPH)
    • think about what tools to use for coding (Matlab, OpenFlipper, CGAL)
  • Compiling OpenFlipper
    • Mac : problems with GLUT version (using 2.1 instead of 3.2)
    • Linux : problems with untested QT version
  • Regular Skype meetings : Friday, 10am

Feb 01, 2018

  • postdoc start