Previously we introduced the idea of liquid filtering and its motivation:

We have a situation where we have to collectively choose among many options. In the presentation this was referred to as

scaling the solution space. It is infeasible to apply voting as is, because voters cannot consider all these options to make a judgement. So what we do is distribute the cognitive load in a way that reflects user delegation. The problem of liquid filtering is the assignment of voters to questions according to delegation choices, in an optimal way.

and we spoke of optimality which we defined as

minimize the number of questions that each voter has to answer while ensuring that every voter votes on each question

and its math equivalent

minimize |**A**| subject to ∀ q ∈ **Q**, v ∈ **V**: (v, q) ∈ **T**

The first step in implementing a liquid filtering algorithm is to convert this optimality criterion into an objective function. For simplicity we will modify our criterion slightly to

given a fixed number of assignments, maximise the effective number of questions answered

which is saying basically the same thing but from the reverse point of view. With *effective* we refer to the fact that although voters may not answer a question directly, they can answer it by virtue of having one of their delegates answer it for them. The effective number of questions answered is thus the sum of questions answered directly or via delegation. Optimality is thus defined as

given some |**A**|, maximize |{(v, q)}| such that (v, q) ∈ **T**, where q ∈ **Q**, v ∈ **V**

We will also assume for simplicity that each voter answers the same number of questions. We referred to this possibility before

even if the sum total of questions is minimized, one does not want to overload specific voters with too many questions

One way to define an objective function could simply be the sum of questions effectively answered by each voter, over all voters:

f(**A**) = Σ t(v, q), where

t(v, q) = 1 if (v, q) ∈ **T**, and 0 otherwise

The maximum value this function can attain corresponds to every voter having effectively answered every question:

max f(**A**) = |**V**| × |**Q**|

Using this quantity we can define a more intuitive function, **coverage**, which corresponds to the fraction of all the question-voter pairs that have been effectively answered:

coverage(**A**) = Σ t(v, q) / (|**V**| × |**Q**|)

This ranges from 0 (no coverage) to 1 (full coverage).
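As a concrete sketch, the coverage function could be computed as follows, assuming voters and questions are simple ids, delegation is a dict mapping each voter to their direct delegates, and the assignment **A** is a set of (voter, question) pairs. All names here are illustrative, not from an actual implementation:

```python
from itertools import product

def transitive_closure(delegates):
    """Compute D+ as a dict: voter -> set of voters reachable via
    delegation. `delegates` maps every voter (non-delegating voters
    get an empty list) to their direct delegates."""
    closure = {}
    for v in delegates:
        seen, stack = set(), list(delegates[v])
        while stack:
            u = stack.pop()
            if u not in seen:
                seen.add(u)
                stack.extend(delegates.get(u, []))
        closure[v] = seen
    return closure

def coverage(assignments, voters, questions, delegates):
    """Fraction of (voter, question) pairs effectively answered,
    either directly or through some transitive delegate."""
    dplus = transitive_closure(delegates)
    answered = {q: {v for v, q2 in assignments if q2 == q} for q in questions}
    covered = sum(
        1
        for v, q in product(voters, questions)
        if v in answered[q] or dplus[v] & answered[q]
    )
    return covered / (len(voters) * len(questions))
```

For example, with three voters where 0 delegates to 1 and 1 delegates to 2, a single assignment of voter 2 to a question yields full coverage, since the other two voters reach it through their delegation chains.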

### What about delegation chain length?

The coverage function we just defined is the simplest case. One variation could address something we said in the previous post

one goal could be to reduce transitive chains lengths such that user delegates are in fact aligned with the user’s preferences

The intent here is to accommodate the fact that long delegation chains may potentially lead to results that are not representative of voters’ choices. This possibility is also considered in viscous democracy[1], and more generally in systems that penalize edge chain lengths with damping, such as PageRank[2]. So we can define a **non-transitive coverage function**

non-transitive coverage(**A**) = Σ t1(v, q) / (|**V**| × |**Q**|), where

t1(v, q) = 1 if (v, q) ∈ **T1**, and 0 otherwise

where we define **T1** as equal to **T** but with a *delegate chain cutoff* of 1. A cutoff of 1 means we consider only direct delegates. Delegates of delegates, and so on, do not count in calculating a voter’s effective response. Thus **T1** is the same as **T** but replacing **D**⁺ with its 1-cutoff equivalent:

**T1** = { (v, q) | ∃ (v’, q) ∈ **A** ∧ ∃ (v, v’) ∈ **D1**⁺ ∧ v, v’ ∈ **V**, q ∈ **Q** }

where **D1**⁺ is the 1-cutoff equivalent of **D**⁺ (recall that **D**⁺ is the transitive closure of **D**, the binary relation encoding delegation)
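A chain cutoff like the one defining **T1** amounts to depth-limited reachability in the delegation graph. A minimal sketch, using the same illustrative representation as before (delegation as a dict of direct delegates):

```python
from collections import deque

def cutoff_closure(delegates, cutoff):
    """D+ restricted to delegation chains of length <= cutoff.
    cutoff=1 keeps only direct delegates; a large enough cutoff
    recovers the full transitive closure."""
    closure = {}
    for v in delegates:
        seen = set()
        frontier = deque((u, 1) for u in delegates[v])
        while frontier:
            u, depth = frontier.popleft()
            if u in seen:
                continue
            seen.add(u)
            if depth < cutoff:
                frontier.extend((w, depth + 1) for w in delegates.get(u, []))
        closure[v] = seen
    return closure
```

Swapping this in for the full transitive closure in the coverage computation yields the non-transitive coverage function when the cutoff is 1.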

Generalizing, we could define a **weighted coverage function** which penalizes delegation lengths with values other than 0 or 1, and uses arbitrary cutoffs, but we won’t go into it in detail now.
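One way such a weighted coverage function might look: each pair contributes damping ** L for a shortest delegation chain of length L (direct answers contribute 1), and chains beyond the cutoff contribute nothing. The damping value and cutoff here are illustrative defaults, not values from the post:

```python
from collections import deque

def weighted_coverage(assignments, voters, questions, delegates,
                      damping=0.85, cutoff=3):
    """Coverage where answers obtained through longer delegation
    chains count for less: weight = damping ** chain_length."""
    # shortest delegation distance from each voter to every voter
    # reachable within the cutoff (distance 0 to themselves)
    dist = {}
    for v in delegates:
        dv = {}
        frontier = deque([(v, 0)])
        while frontier:
            u, d = frontier.popleft()
            if u in dv or d > cutoff:
                continue
            dv[u] = d
            frontier.extend((w, d + 1) for w in delegates.get(u, []))
        dist[v] = dv
    answered = {q: {v2 for v2, q2 in assignments if q2 == q} for q in questions}
    total = 0.0
    for v in voters:
        for q in questions:
            lengths = [dist[v][u] for u in answered[q] if u in dist[v]]
            if lengths:
                total += damping ** min(lengths)
    return total / (len(voters) * len(questions))
```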

### Some experiments

Below are some results we’ve had running simulations on random graphs to get a feel for how the solution space looks. The key parameters that define an instance of a random filtering problem are

**voters**: the number of voters that participate in the decisions

**questions**: the number of questions that must be decided on

**average delegates**: the average number of delegations each voter makes

**questions asked**: the number of questions each voter is presented

**coverage function**: a choice of objective function, such as those just described above
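These parameters can be bundled into a random instance generator. A sketch of how such an experimental setup might look, with a uniform-random delegation graph and integer ids (all names illustrative; for simplicity every voter gets exactly the average number of delegates):

```python
import random

def random_instance(n_voters, n_questions, avg_delegates, questions_asked,
                    seed=0):
    """Generate one random liquid-filtering instance from the
    parameters listed above."""
    rng = random.Random(seed)
    voters = list(range(n_voters))
    questions = list(range(n_questions))
    # uniform-random delegation graph: each voter delegates to
    # avg_delegates distinct other voters
    delegates = {
        v: rng.sample([u for u in voters if u != v],
                      min(avg_delegates, n_voters - 1))
        for v in voters
    }
    # each voter is presented questions_asked questions at random
    assignments = {
        (v, q)
        for v in voters
        for q in rng.sample(questions, min(questions_asked, n_questions))
    }
    return voters, questions, delegates, assignments
```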

**non-transitive coverage**

The following plots were made with **questions asked** fixed to 1

**transitive coverage function**

The following plots were made adding an additional axis for the **questions asked** parameter, varying from 1 to 5. The average delegates was the other horizontal axis, whilst the vertical axis was the coverage function, using a *delegate chain cutoff* of 3. Additionally we show results of simple hill climbing optimizations (with the ‘X’ symbol) on the assignments.
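The hill climbing used in these runs is not spelled out in the post; a minimal sketch of what such an optimization loop might look like, where the mutation scheme and the `score` interface (any coverage-style objective over a set of (voter, question) pairs) are assumptions rather than the actual experiment code:

```python
import random

def hill_climb(assignments, questions, score, steps=1000, seed=0):
    """Greedy local search over assignments: repeatedly move one
    voter to a different question and keep the change whenever the
    objective strictly improves."""
    rng = random.Random(seed)
    current = set(assignments)
    best = score(current)
    for _ in range(steps):
        v, q = rng.choice(sorted(current))       # pick a random assignment
        candidate_pair = (v, rng.choice(questions))  # reassign its question
        if candidate_pair in current:
            continue
        candidate = (current - {(v, q)}) | {candidate_pair}
        s = score(candidate)
        if s > best:
            current, best = candidate, s
    return current, best
```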

Here we show the same runs but using a **weighted coverage function**, again with a cutoff of 3. The coverage function used here penalized delegations according to their chain length.

### Tentative conclusions

Although these are early results in liquid filtering, some tentative conclusions can be drawn.

- participation is most sensitive to average degree and questions asked
- population size has no effect provided it is much larger than questions
- participation becomes saturated at high average degree due to graph topology
- graph topology dominates optimization (at least using hill climbing)

Other observations to make:

- Secure voting precludes the possibility of optimization; in these cases it is not possible to improve assignments beyond a uniform distribution
- Without transitivity, the average degree must be roughly comparable in magnitude to the number of questions in order to achieve high participation

The biggest flaw in these experiments is probably the purely random nature of the delegation graphs. In order to obtain more realistic results, the generator should produce graphs that, although still random, exhibit properties of those emerging in a practical liquid filtering system. For example, degree variance could be constrained in such systems. Another possibility is that these graphs could feature small world effects or power law degree distributions, as has been observed for social networks.