There is a very interesting Polymath project going on right now: Polymath3. It is about the polynomial Hirsch conjecture.

As I feel like contributing, I am going to temporarily post my ideas as comments of this post before copying them over to the main blog (in order not to pollute the blog with half-baked ideas).


I have looked at the proof of the upper bound for f^*(d,n) (i.e. Lemma 1 and Corollary 1 of Polymath3) and I had the following idea, which does not quite work but may inspire others:

As usual, we start with a convex sequence F_1, ..., F_t of d-uniform multiset families with support in {1, ..., n}.

Let’s partition the sequence into intervals in the following way:

We first pick an element, say x_1, of the support of F_1 and denote by I_1 the interval [1, j_1], where j_1 is the last index of an F_i containing x_1.

Then we pick a new element x_2 in the support of F_{j_1+1} and denote by I_2 the interval [j_1+1, j_2], where j_2 is the last index of an F_i containing x_2. And so on until the end of the sequence.

By definition, x_1 cannot appear anymore after j_1, since once an element is removed from the support it cannot be added again.

This means that our sequence of intervals can contain at most n intervals, since the picked elements x_j are pairwise distinct.
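A minimal sketch of this greedy decomposition (hypothetical helper names; families are represented as lists of multisets, multisets as sorted tuples), assuming the convexity property that each element's occurrence indices form an interval:

```python
def support(family):
    """Elements appearing in at least one multiset of the family."""
    return {e for ms in family for e in ms}

def greedy_intervals(families):
    """Greedily cut the sequence: pick an element x of the support of the
    first remaining family and close the interval at the last index whose
    family still contains x.  By convexity, x is in the support of every
    family of the interval, so each interval has a common element; the
    picked elements are pairwise distinct, so there are at most n intervals."""
    intervals = []
    start = 0
    while start < len(families):
        x = min(support(families[start]))  # any choice works; min for determinism
        end = max(i for i in range(start, len(families))
                  if x in support(families[i]))
        intervals.append((start, end, x))
        start = end + 1
    return intervals

# toy convex sequence of 2-uniform families over {1, 2, 3}
families_example = [[(1, 1)], [(1, 2)], [(2, 2)], [(2, 3)], [(3, 3)]]
print(greedy_intervals(families_example))  # [(0, 1, 1), (2, 3, 2), (4, 4, 3)]
```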

Also, if we restrict the F_i for i in I_j by excluding x_j, we get a convex sequence of (d-1)-uniform multiset families.

So, as in the original proof, we have |I_j| <= f^*(d-1, n_j), where n_j denotes the size of the support of the restricted sequence of I_j.

Now assume that we were able to prove that sum_j (n_j - 1) <= n - 1; then, together with the fact that we have at most n intervals, and using the induction hypothesis f^*(d-1, n_j) = (d-1)(n_j - 1) + 1, we would obtain

sum_j |I_j| <= (d-1) sum_j (n_j - 1) + n <= (d-1)(n-1) + n = d(n-1) + 1.

So this would give exactly the right bound, but there are two things that do not exactly work:

But the interesting point that this illustrates is that if one can decompose the sequence into at most n intervals with the property that the families of each interval contain at least one common element and that sum_j (n_j - 1) <= n - 1, then we can prove the bound f^*(d,n) <= d(n-1) + 1. How realistic is such a construction?

Another way to formulate the above is the following: if we can always decompose the sequence into m intervals such that m <= n and sum_j (n_j - 1) <= n - 1, then we can prove Nikolai's conjecture (I am using here the notation of Corollary 1 of Polymath3). Note that since the intervals satisfy the condition that the F_i in the intervals after I_j don't contain the common element x_j of I_j, the number of intervals m has to be at most n. So only the sum condition needs to be verified.

It seems that this is the case (on some simple examples I looked at), provided one can construct the sequence of intervals without being forced to start at one end. So in other words, we need to find a decomposition which minimizes sum_j (n_j - 1), and on some examples this seems to be possible and yields the desired properties.

Thinking a bit more about this led me to this example:

{11111, 11112, 11123, 11234, 12345, 23455, 34555, 45555, 55555}

In this case, if you consider all the subsequences whose supports contain one particular element, summing up Nikolai's bound for them (i.e. if you consider that their length is bounded by f^*(d-1,k) for some appropriate k) gives a total length that is always larger than f^*(5,5), no matter how the sequence is decomposed. So this decomposition approach does not seem to allow one to prove Nikolai's conjecture, but only a weaker upper bound.
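To make this concrete, here is a rough computation on this example, assuming Nikolai's bound f^*(d,n) = d(n-1)+1 and using the left-to-right greedy decomposition (other decompositions give similar totals):

```python
# the example sequence: each family is a single 5-uniform multiset over {1..5}
families = [tuple(int(c) for c in s)
            for s in ["11111", "11112", "11123", "11234", "12345",
                      "23455", "34555", "45555", "55555"]]

def nikolai(d, n):
    """Nikolai's conjectured bound f^*(d, n) = d(n-1) + 1."""
    return d * (n - 1) + 1

total, i = 0, 0
while i < len(families):
    x = min(families[i])                 # common element of the next interval
    end = max(j for j, ms in enumerate(families) if x in ms)
    # removing one copy of x from each multiset of the interval leaves a
    # 4-uniform sequence; its support size is the k of the bound f^*(4, k)
    support = set()
    for j in range(i, end + 1):
        rest = list(families[j])
        rest.remove(x)                   # drop one copy of the common element
        support |= set(rest)
    total += nikolai(4, len(support))    # Nikolai's bound for this interval
    i = end + 1

print(total, nikolai(5, 5), len(families))  # 33 21 9
```

The sum of the per-interval bounds (33) already exceeds f^*(5,5) = 21, even though the actual sequence has length only 9, illustrating why the approach only yields a weaker bound here.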

Thanks for participating, Olivier

Here is a different approach to the problem.

I am considering the following equivalent formulation: given a down-set D (which is nothing but the union of the F_i, since one can always add sub-multisets of the existing elements of the F_i to complete this union into a down-set), the convexity and disjointness conditions correspond to the existence of a labeling function l which is such that, for any element x, l maps the set of multisets of D containing x to an interval.
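This reformulation can be checked mechanically; a small sketch (hypothetical names), where a labeling is a list of (multiset, label) pairs and the condition is that, for each element x, the labels of the multisets containing x form an interval:

```python
def is_interval(labels):
    """True iff a set of integer labels is contiguous."""
    return not labels or max(labels) - min(labels) + 1 == len(labels)

def is_convex_labeling(labeled):
    """labeled: list of (multiset, label) pairs.  Checks that for each
    element x the labels of the multisets containing x form an interval."""
    elements = {e for ms, _ in labeled for e in ms}
    return all(is_interval({lab for ms, lab in labeled if x in ms})
               for x in elements)

good = [((1, 1), 1), ((1, 2), 2), ((2, 2), 3), ((2, 3), 4)]
bad = [((1, 1), 1), ((2, 2), 2), ((1, 2), 3)]  # element 1 leaves, then returns
print(is_convex_labeling(good), is_convex_labeling(bad))  # True False
```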

In this formulation, the question is how large the interval of labels can be.

Let’s call this the diameter of D, denoted by diam(D) and defined as the maximum size of the image of l over all valid labelings.

It seems that the maximal elements of D play a crucial role in controlling this diameter. Let’s call M(D) the set of maximal elements of D. A simple case is when M(D) is a singleton, say M(D) = {S}. Indeed, in this case one can easily prove a bound on diam(D) (this corresponds to the case where one of the F_i contains the single maximal multiset, which gives the corresponding bound in the standard formulation).

Similarly, it seems that one can prove an analogous bound when M(D) has two elements.

However, the situation quickly becomes tricky when one considers three maximal elements.

One can further generalize this approach in the following way:

The down-set we start with can be replaced by an arbitrary directed graph G, and the goal is then to find a map from G to some undirected graph H with the following properties: if we consider the image of the set of vertices of G that are reachable from a given vertex, then we want the induced subgraph of H to be connected, and we want H to be of maximum diameter.

In the above formulation (diameter of down-sets), I am wondering whether there is always a way to order the elements of D in a sequence such that each prefix of the sequence forms a down-set, and such that the diameters of these prefix down-sets are monotonically increasing up to some maximum and then monotonically decreasing down to diam(D).

For example for , this sequence does it: (with diameters ). For , this sequence has diameters .

The case is where things are likely becoming interesting since we know that .

Following up on this idea, if one considers a down-set and looks at the effect on its diameter of adding or removing one element (assuming this is done so that the resulting set is still a down-set), we can observe the following: when adding a multiset of size k, it connects to its sub-multisets of size k-1, of which there are at most k. If those multisets of size k-1 were all maximal, this means we introduced the constraint that they all have to be at distance one from the new element, which means we cannot increase the diameter (although we also allow them to spread out a bit if they were within distance 1 of each other).

A related notion that is potentially interesting to study is the following: given a down-set D, one can define the k-diameter of this down-set, denoted diam_k(D), as the diameter restricted to its multisets of size k.

With this definition, we obviously have and we can prove easily that and it seems that although I don’t have a proof for it yet.

It would be interesting to study the function defined as the maximum k-diameter over all down-sets on n elements.

Regarding the proof of , here are a few observations:

Can these be generalized somehow?

In particular, can one decompose a convex sequence of 3-uniform multisets with respect to its 2-shadows?

The starting point of a k-shadow generalization of Corollary 1 of the wiki is to replace supports by k-shadows in Lemma 1.

Note that if we consider, for each of the k-shadows, the indices at which they appear and then disappear in the sequence (they span intervals, so this is well defined), and order them by the interval order (first by when they appear and then by when they disappear), then this forms a convex sequence of k-uniform singleton multiset families, whose length is thus bounded by f^*(k,n), which means that the number of intervals in the partition is at most f^*(k,n).

Then, if we use this generalized Lemma 1 as in the proof of Corollary 1, we have to bound the sum of the sizes of the supports. If we sum the numbers of k-shadows instead of the numbers of 1-shadows (the supports), then, because of the generalized Lemma 1, we know that this sum is suitably upper bounded; but the number of k-shadows is larger than the number of 1-shadows for k > 1.
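To see the last point on a concrete family, one can count 1-shadows (supports) and 2-shadows directly, for instance on the 5-uniform example from earlier in this thread (sub-multisets enumerated via itertools.combinations on sorted tuples):

```python
from itertools import combinations

def k_shadow(ms, k):
    """Set of all k-element sub-multisets of a multiset (a sorted tuple)."""
    return set(combinations(sorted(ms), k))

families = [tuple(int(c) for c in s)
            for s in ["11111", "11112", "11123", "11234", "12345",
                      "23455", "34555", "45555", "55555"]]

totals = {k: sum(len(k_shadow(ms, k)) for ms in families) for k in (1, 2)}
print(totals)  # {1: 25, 2: 38}: the 2-shadow count exceeds the support count
```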

Ideally we would want to show that for we have .