Here is a fun combinatorial puzzle. I’ve probably seen this used in teaching before, but let’s try to work this one out from memory. I would love to hear more solutions/analyses of this problem.

Suppose you have `n` kettles of soup labeled `0` through `n-1`. For our problem we assume that `k` kettles of soup are extremely spicy. We want to figure out which kettles contain spicy soup.


This presents an interesting puzzle when `k` is much smaller than `n`. We are assuming that spicy is a rare event we want to detect. We are also assuming the spicy soups are so spicy that they remain spicy even when combined with other soups. So when we prepare mixtures of soups we experience the union of the spiciness of the included soups.

The question is: if we prepare tasting bowls that are mixtures of samples from the kettles, how many bowls do we have to prepare to reliably identify all of the spicy soup kettles? This is hopefully in the spirit of the “counterfeit gold coin puzzle” as seen in the Columbo detective show (though I end up using a bit more math).

One solution is to use `n` bowls. For each kettle we place a portion alone in a bowl, and then taste all the bowls in order.

The fun is: by designing our sample bowls, can we cut down the number of tastings?

For the case `k=1` there is a clever solution, which we will show for `n=8`. Number the kettles from `0` to `7` in binary: `0:000`, `1:001`, `2:010`, `3:011`, `4:100`, `5:101`, `6:110`, and `7:111`. Build three sample bowls: the first containing soup from kettles `{1, 3, 5, 7}` (the soups with a `1` in the first binary digit), the second containing soup from kettles `{2, 3, 6, 7}` (the soups with a `1` in the second binary digit), and the third containing soup from kettles `{4, 5, 6, 7}` (the soups with a `1` in the third binary digit). Then the identity of the spicy soup is revealed by writing down the digits of which sample bowls are spicy. For example, if no sample bowl is spicy the spicy soup is `0:000`, and if all the samples are spicy the spicy soup is `7:111`.
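To make the decoding concrete, here is a small Python sketch of the `k=1` binary scheme (the function names are my own):

```python
def binary_bowls(n_bits):
    """Bowl b holds a sample from every kettle whose b-th binary digit is 1."""
    n = 2 ** n_bits
    return [{kettle for kettle in range(n) if (kettle >> b) & 1}
            for b in range(n_bits)]

def decode_single(bowl_is_spicy):
    """Read the spicy-bowl indicators back as the binary index of the one spicy kettle."""
    return sum(1 << b for b, spicy in enumerate(bowl_is_spicy) if spicy)

bowls = binary_bowls(3)   # [{1, 3, 5, 7}, {2, 3, 6, 7}, {4, 5, 6, 7}]
spicy_kettle = 5          # 101 in binary
print(decode_single([spicy_kettle in bowl for bowl in bowls]))  # 5
```

Note this only works because exactly one kettle is spicy, a point we return to below.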

So for `k=1`, we see about `log_2(n)` sample bowls are enough. What we have done is: tasted mixtures and then imputed or attributed the spiciness of the mixtures back to the component kettles.


Note the above method breaks down if we don’t have exactly `k` spicy kettles of soup. If there are no spicy kettles, then kettle zero is falsely accused. And if all the kettles except the last are spicy, this incorrectly decodes to only the last kettle being declared spicy.

Our puzzle is: what can we do for general `k`? Solving this is a bit like designing an error-correcting code (except we are working with an arithmetic of `OR`s, instead of the more traditional finite fields).

Let’s try using the techniques of *The Probabilistic Method* to solve for a smaller set of tastings. That is: we are going to use randomized methods to prove the existence of a deterministic structure that solves the problem.

For simplicity, assume `(2 k)` divides `n` evenly (if not, just add a few more non-spicy soups to make this so).

Define a `(2 k)`-partition `P` of `{0, 1, ..., n-1}` as a set of `(2 k)` disjoint sets, each of size `n / (2 k)`, such that `{0, 1, ..., n-1} = union_{T in P} T`.
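One simple way to draw such a partition uniformly at random is to shuffle the kettle labels and chop the result into `2k` equal blocks (a sketch; `random_partition` is my own naming):

```python
import random

def random_partition(n, k, rng=random):
    """Shuffle {0, ..., n-1} and chop it into 2k disjoint blocks of
    size n // (2k).  Assumes (2k) divides n evenly."""
    assert n % (2 * k) == 0
    items = list(range(n))
    rng.shuffle(items)
    size = n // (2 * k)
    return [set(items[j * size:(j + 1) * size]) for j in range(2 * k)]

P = random_partition(12, 2)
print(sorted(len(T) for T in P))  # [3, 3, 3, 3]
```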

Assume `S` is the set of (unknown, or presumed) spicy soup kettles. For `i` not in `S`, a partition `P` cannot certify kettle `i` as being not-spicy when kettle `i`’s sample is in a mixture with a sample from `S`.

Formally: a partition `P` is “ruined for `i` by a set `S`” if `i` is not in `S` and the `T in P` such that `i in T` is such that `T intersect S` is not empty. That is: `i` is confused with elements of `S` for this partition.
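“Ruined” translates directly into a small predicate (a sketch; `is_ruined` is my own naming):

```python
def is_ruined(P, i, S):
    """P is ruined for i by S when i is not in S, yet the cell of P
    containing i also holds some element of S, so no bowl from P can
    certify kettle i as not-spicy."""
    if i in S:
        return False
    cell = next(T for T in P if i in T)
    return bool(cell & S)

print(is_ruined([{0, 1}, {2, 3}], 0, {1}))  # True: kettle 0 shares a bowl with spicy kettle 1
print(is_ruined([{0, 1}, {2, 3}], 2, {1}))  # False: kettle 2's bowl tastes mild
```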

If `P` is a `(2 k)`-partition of `{0, 1, ..., n-1}` drawn uniformly at random from all such partitions then, for a given `i`, the probability of `P` being ruined for a set `S` of size `k` is no more than `1/2`. This is because `P` has `(2 k)` sets, and each of the `k` elements of `S` can ruin at most one set of the partition. Thus no more than `k` sets of the partition are ruined. So at least half the sets of `P` are not ruined, and `i` is equally likely to be in any of the sets (as `P` is chosen independently of `i`).
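We can spot-check the `1/2` bound by simulation (a self-contained sketch with helper definitions inlined; all names are my own):

```python
import random

def random_partition(n, k, rng):
    # shuffle-and-chop draw of a (2k)-partition of {0, ..., n-1}
    items = list(range(n))
    rng.shuffle(items)
    size = n // (2 * k)
    return [set(items[j * size:(j + 1) * size]) for j in range(2 * k)]

def is_ruined(P, i, S):
    # i's cell meets S, so P cannot certify i as not-spicy
    if i in S:
        return False
    cell = next(T for T in P if i in T)
    return bool(cell & S)

rng = random.Random(0)
n, k = 24, 2
i, S = 5, {3, 17}
trials = 20000
ruined = sum(is_ruined(random_partition(n, k, rng), i, S) for _ in range(trials))
print(ruined / trials)  # comfortably below the 1/2 bound
```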

Let `P1, ..., Pc` be a sequence of `c` `(2 k)`-partitions of `{0, 1, ..., n-1}`. Write an indicator random variable `X(P1, ..., Pc; i; S)` which is `1` if `i` is not in `S` and all of the `Pj` are simultaneously ruined for `i` by the set `S` (and `0` otherwise). For `P1, ..., Pc` drawn uniformly and independently at random, the probability of all the `Pj` being simultaneously ruined for a given `i` and a given set `S` of size `k` is no more than `(1/2)^c`. So for a given `i` and `S`, the expected value `E[X(P1, ..., Pc; i; S)]` is no more than `(1/2)^c` (expectation taken over the choice of `P1, ..., Pc` drawn uniformly and independently at random).

We are going to use a standard probabilistic method construction to convert a solution system that has exponentially small per-example failure probability into a specific solution known not to fail for any example. The argument proves there is a set of partitions that performs the test correctly, but it is non-constructive in that it leaves finding such partitions as a later step.

To bound the probability of a `P1, ..., Pc` being simultaneously ruined for any `i`/`S`, we use two main tools:

- The union bound: the probability of some event happening is no more than the sum of the event probabilities.
- The linearity of expectation: the expected value of a sum is the sum of expected values (with no additional conditions needed).

Let’s call a plan a “good plan” if, for any set of `k` spicy soup kettles, we can prove every not-spicy soup is in fact not-spicy (by seeing it as a member of a not-spicy mixture). We want to estimate an upper bound on the probability of a plan being “not good.”

So (taking all probabilities and expectations over the choice of `P1, ..., Pc` drawn uniformly and independently at random) write:

```
P[P1, ..., Pc is not good]
    (expanding definition)
= P[Exists i, S with |S| = k such that
    P1, ..., Pc is ruined for i by S]
    (quantifier elimination)
= P[OR over i, S with |S| = k of
    P1, ..., Pc is ruined for i by S]
    (using indicator)
= E[MAX over i, S with |S| = k of
    X(P1, ..., Pc; i; S)]
    (union bound)
≤ E[SUM over i, S with |S| = k of
    X(P1, ..., Pc; i; S)]
    (linearity of expectation)
= SUM over i, S with |S| = k of
    E[X(P1, ..., Pc; i; S)]
    (by bound from earlier discussion)
≤ SUM over i, S with |S| = k of
    (1/2)^c
    (expand sum)
= n (n choose k) (1/2)^c
    (crude, (n choose k) ≤ n^k)
≤ n^(k+1) (1/2)^c
```

Now if we choose `c ≥ 1 + (k+1) ln(n) / ln(2)` we have:

```
P[P1, ..., Pc is not good] ≤ 1/2
```

For `c = ceiling(1 + (k+1) ln(n) / ln(2))` we see at least half of the uniform random choices of `P1, ..., Pc` are good. So we can find such a `P1, ..., Pc` by generating a few such sequences and testing whether we have a good sample plan or not (which can be done much more quickly than directly checking it is not ruined for every set of size `k`).
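Here is a brute-force sketch of the generate-and-test procedure for a small case (all names are my own; `is_good` enumerates every size-`k` set, so this check is only feasible for small `n`):

```python
import itertools, math, random

def random_partition(n, k, rng):
    # shuffle-and-chop draw of a (2k)-partition of {0, ..., n-1}
    items = list(range(n))
    rng.shuffle(items)
    size = n // (2 * k)
    return [set(items[j * size:(j + 1) * size]) for j in range(2 * k)]

def decode(partitions, tastes_spicy):
    """Declare a kettle spicy unless some bowl containing it tastes mild."""
    n = sum(len(T) for T in partitions[0])
    cleared = set()
    for P in partitions:
        for T in P:
            if not tastes_spicy(T):
                cleared |= T
    return set(range(n)) - cleared

def is_good(partitions, n, k):
    """Brute-force check: the plan must decode every size-k spicy set exactly."""
    return all(
        decode(partitions, lambda T: bool(T & set(S))) == set(S)
        for S in itertools.combinations(range(n), k))

rng = random.Random(2)
n, k = 12, 2
c = math.ceil(1 + (k + 1) * math.log(n) / math.log(2))  # c = 12 here
plan = [random_partition(n, k, rng) for _ in range(c)]
while not is_good(plan, n, k):       # at least half of the random draws succeed
    plan = [random_partition(n, k, rng) for _ in range(c)]
print(c * 2 * k)  # 48 bowls: more than n = 12, so the method pays off only for larger n
```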

These `c = ceiling(1 + (k+1) ln(n) / ln(2))` partitions each specify `(2 k)` sample bowls. So `2 k ceiling(1 + (k+1) ln(n) / ln(2))` sample bowls are enough to identify which `k` of the `n` soups are spicy. This is asymptotically `O(k^2 log(n))`, but we left the size explicit to show there are no hidden large constants. Also, this solution works for any `k' ≤ k`.
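To see where the crossover against direct tasting happens, we can tabulate the explicit bowl count against `n` (a sketch; `bowls_needed` is my own naming):

```python
import math

def bowls_needed(n, k):
    """Total bowls: 2k cells per partition, times c = ceil(1 + (k+1) ln(n)/ln(2))."""
    c = math.ceil(1 + (k + 1) * math.log(n) / math.log(2))
    return 2 * k * c

for n in (100, 200, 1000, 10 ** 6):
    print(n, bowls_needed(n, 3))
# 100 -> 168     (worse than tasting all 100 kettles directly)
# 200 -> 192     (just beats direct tasting)
# 1000 -> 246
# 1000000 -> 486
```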

Fun follow-up activities I would love to hear about include:

- Find variations of this puzzle and solutions in the literature.
- What are the traditional statistical experimental designs for this problem?
- Is there an easy way to adapt an error-correcting code to this problem?
- In this example we are using a very simple decoding rule: only non-spicy soups can be members of non-spicy mixtures. It would be interesting to instead use some ideas from sparsity / compressed sensing and solve for the spicy soups using a linear program (minimizing the `L1` weight of the spicy indicators). With a more powerful decoder, we may be able to use more sophisticated encoding schemes.
- Can we reduce the dependence on `k` for the above batch method (where all tests are designed before checking any results)? We expect to need `O(k log(n))` bits to write down the indices of the spicy soups, so that supplies a lower bound to compare to.
- Confirm that we can use a `k+1` design to detect if there are more than `k` spicy soups (without fully identifying which, or even how many). This is interesting, as without the build-up it isn’t clear you can deterministically establish, in a batch or oblivious experiment design, if there are more than `k` spicy soups with a moderate number of experiments.
- Produce a concrete example solution for `k = 3` and a non-trivial `n`. Note: even for `k = 3` the construction we gave here (chosen for simplicity, not efficiency) isn’t interesting until `n ~ 200`. So for `n < 200`, it makes sense to just taste all the kettles directly.
- Can we de-randomize our batch construction, or use pseudo-random or low-discrepancy methods for this problem?
- What does a good incremental/interactive solution, where we can use earlier results to plan later sample construction, look like? For example, if we build one partition into `2 k` combined samples, this declares at least half of the soups as not-spicy in one step. So the sampling cost has a recursion of the form `cost(n, k) ≤ 2 k + cost(n/2, k)`, which should quickly give us an `O(k log(n))` incremental solution strategy.
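The halving recursion in the last bullet unrolls in a few lines (an idealized sketch: it assumes every round clears at least half the remaining suspects, and the names are my own):

```python
def incremental_cost(n, k):
    """Unroll cost(n, k) <= 2k + cost(n/2, k): each adaptive round of 2k
    bowls clears at least half the remaining suspect kettles; stop when
    tasting the rest individually is cheaper than another round."""
    cost = 0
    while n > 2 * k:
        cost += 2 * k
        n //= 2
    return cost + n

print(incremental_cost(1024, 2))  # 36 bowls, versus 1024 direct tastings
```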


### jmount

Data Scientist and trainer at Win Vector LLC. One of the authors of Practical Data Science with R.

Hoping to recruit some follow-up references here.

Thank you Rob Pratt for the answer!

The statistical term is “group testing”, invented by Robert Dorfman to screen WWII draftees for syphilis. And it looks like the lower bound for “k-disjunct” codes is k^2 log(n) / log(k), so within a log(k) factor of the construction procedure (ref).