Other proposed projects

From Polymath Wiki

This page is a repository for polymath proposals which are not yet fleshed out enough to merit their own separate proposal posts. Contributions are welcome.

The transition from quantity to quality

The transition from quantity to quality (probably first formulated by the philosopher Hegel) is a well-known philosophical concept. Can we formalize it?

I propose a formalization:

For an ordinary differential equation (assume for simplicity that a given initial value problem has exactly one solution), the transition occurs when the question of whether the solution takes a positive value at a given point x is undecidable.

More intuitively: the value of the solution at a given point x is a quantity; whether it is above zero is a quality. The transition from quantity to quality may be undecidable. (Really? Can it be? If yes, for which kinds of differential equations?)
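As a toy illustration of the proposed formalization (a minimal Python sketch; the equation y' = -2y with y(0) = 1 is a hypothetical example chosen only because its exact solution exp(-2x) is known), the "quantity" is the numerically computed value of the solution at a point x, and the "quality" is its sign:

```python
def solve_ode(f, y0, x_end, steps=100_000):
    """Fixed-step Euler integration of y' = f(x, y) from x = 0 to x_end."""
    h = x_end / steps
    x, y = 0.0, y0
    for _ in range(steps):
        y += h * f(x, y)
        x += h
    return y

# Hypothetical example: y' = -2y, y(0) = 1, whose exact solution is exp(-2x).
value = solve_ode(lambda x, y: -2.0 * y, 1.0, 3.0)  # the "quantity": y(3)
positive = value > 0                                # the "quality": is y(3) > 0?
```

For a well-behaved equation like this the sign is of course computable; the proposal asks for which classes of equations the question "is the quantity positive at x?" becomes undecidable.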

The issues I propose for the polymath project:

  • Further generalize my definition of transition from quantity to quality (e.g. for partial differential equations).
  • Research properties of such transitions.

In principle, further development of mathematics may determine whether Marx's (or is it Hegel's?) idea holds that, e.g., biology is undecidable on the basis of chemistry alone (however, this would be extremely hard to prove and even to formalize). But this shows that my research proposal may have great prospects for further development and applications to the natural sciences.

Inherent math vs human inventions such as chess

We feel that group theory is a beautiful mathematical theory, but that the rules of chess are not mathematics; they are a human invention.

I propose to consider which formally described systems count as mathematics and which are mere human inventions.

One approach would be to assign a rating (probably a real number) to different systems, measuring how "natural" they are. For example, chess would have a low naturality rating and group theory a high one.

We have no good idea how to approach this issue, but the Polymath Project is intended specifically for problems that are difficult for a single person to tackle.

One idea is to consider "how much" (whatever this may mean) problems or situations reduce to a given formalism, and how general that formalism is (again, whatever this may mean).

Beating the trivial subset sum algorithm

This problem was proposed by Ernie Croot, though he does not have the time to run a project based on it.

It is a problem that is easy to state, will probably succumb to elementary methods, and could probably be solved if enough people contributed ideas. Here goes: consider the usual subset sum (or is it knapsack?) problem, where you are given a list of positive integers N_1, …, N_k and a target number T, and you must decide whether there is some subset of N_1, …, N_k that sums to T. The problem is to beat the “trivial algorithm”, which I shall describe presently.

The first thing to realize is that there is a subset summing to T iff there is one summing to S-T, where S = N_1 + … + N_k. Furthermore, subsets of size t summing to T correspond uniquely to subsets of size k-t summing to S-T. In this way, you only need to consider subsets of size at most k/2 (and check whether they sum to T or S-T) to solve the problem. But now you can use the usual “collision technique” to reduce the problem to subsets of size at most k/4, by forming a table of all subsets of at most this size, along with their element sums, until you find a disjoint pair of subsets whose union sums to either T or S-T. The running time of this procedure should be comparable to (k choose k/4) = c^(k+o(k)), for a certain constant c that is easy to work out. This is what I mean by the “trivial algorithm”. Now, the problem is to find an algorithm, any algorithm at all, that runs in time at most d^k, where d < c. To my knowledge no such algorithm is known to exist!
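For concreteness, here is a minimal Python sketch of the simplest form of the collision technique: the classic meet-in-the-middle idea, splitting the list in half rather than using the size-k/4 refinement described above (function names are my own):

```python
from itertools import combinations

def subset_sum(nums, target):
    """Meet-in-the-middle: split the list in half, enumerate every subset
    sum of each half, and look for a pair of half-sums adding to target."""
    half = len(nums) // 2
    left, right = nums[:half], nums[half:]

    def all_sums(part):
        # All sums of subsets of `part`, including the empty subset (sum 0).
        sums = set()
        for r in range(len(part) + 1):
            for combo in combinations(part, r):
                sums.add(sum(combo))
        return sums

    right_sums = all_sums(right)
    return any(target - s in right_sums for s in all_sums(left))
```

This runs in time roughly 2^(k/2) rather than 2^k; the trivial algorithm in the text sharpens this further by exploiting the T versus S-T symmetry.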

Note: Stasys Jukna has worked on lower bounds for dynamic programming algorithms on the Knapsack problem, of which Subset Sum is a special case; see Section 3.2 of the ECCC paper Limitations of Incremental Dynamic Programming. —Vzn 17:50, 28 December 2012 (UTC)

Coupling determinantal processes

Any n-dimensional subspace V of a Euclidean space [math]{\Bbb R}^N[/math] gives rise to a random n-element subset A_V of {1,...,N}, with the probability that [math]A_V = \{i_1,\ldots,i_n\}[/math] being the square of the magnitude of the projection of [math]e_{i_1} \wedge \ldots \wedge e_{i_n}[/math] onto the line [math]\bigwedge^n V[/math]. This is known as the determinantal process associated to V.
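As a concrete illustration, here is a minimal pure-Python sketch (function names are my own) that tabulates the distribution of A_V when V is given by orthonormal basis rows. By the Cauchy-Binet formula, the probability of a subset S is the squared n-by-n minor of the basis matrix on the columns S, and these squares sum to 1:

```python
import itertools

def det(m):
    """Determinant by cofactor expansion (fine for tiny matrices)."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def dpp_distribution(basis):
    """Distribution of the determinantal process A_V, where V is spanned by
    the orthonormal rows of `basis` (an n x N list of lists): the probability
    of the column set S is the squared n x n minor of `basis` on S."""
    n, N = len(basis), len(basis[0])
    return {S: det([[row[j] for j in S] for row in basis]) ** 2
            for S in itertools.combinations(range(N), n)}
```

For example, if V is the span of e_1 and e_2 in R^3, the distribution puts all its mass on {1, 2}, as expected.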

If V is a subspace of W, it is known that one can couple [math]A_V[/math] to [math]A_W[/math] in such a way that the former set is a subset of the latter, but no "natural" way of doing this is known. One problem in this project would be to find such a natural way.

A related problem: if V, W are orthogonal, is it always possible to couple [math]A_V, A_W, A_{V+W}[/math] together in such a way that [math]A_{V + W} = A_V \cup A_W[/math]?

These questions are raised on page 38 of this paper of Lyons, and are also discussed at this blog post.

Stable commutator length in the free group

Let G be the free group on two generators, and let [G,G] be its commutator subgroup. Given any g in [G,G], the commutator length cl(g) is the least number of commutators needed to express g, and the stable commutator length scl(g) is the limit of cl(g^n)/n as n tends to infinity.

It is known that scl(g) >= 1/2 for any non-trivial g. Find a combinatorial proof of this fact.

It is conjectured that { scl(g): g in [G,G] } has an isolated point at 1/2. Prove this.

Reference: scl, Danny Calegari

Quantum cellular automata

The proposal is to tackle these two questions.

Yang-Mills existence and mass gap

Quite recently, Alexander Dynin, a mathematician at Ohio State University with a reputable CV, proposed a solution of the Millennium Prize problem on Yang-Mills existence and mass gap (see here and here). I have discussed this on my blog (see here), but I am a physicist and cannot be sure about the correctness of all the author's mathematical arguments. Of course, I am involved in this line of research as a physicist and, in our area, significant progress seems to have been made (e.g., besides my works, see Alexander (Sasha) Migdal's papers here and the references therein, and my blog entry). So it would be of paramount importance to have such a question addressed by the community of mathematicians at large, much as happened last year with the "P vs NP" question for Deolalikar's paper, in order to fix the avenues for the community of theoretical physicists to pursue. --Marco Frasca 09:30, 30 November 2011 (UTC)

2012 is the 100th anniversary of Landau's problems

I am suggesting a "Just For Grins" attack on Oppermann's Conjecture, which, once proved, would make proofs of Legendre's Conjecture, Brocard's Conjecture, and Andrica's Conjecture slam dunks. Also, there is an outside chance that the twin-primes conjecture could be proved (this requires Brocard's Conjecture).


—User: Rudy Toody

Jun Fukuyama NP vs P/Poly proof

Jun Fukuyama, PhD, announced a proof for NP vs P/Poly in July 2012, but the theoretical computer science and mathematics communities have not engaged with it much so far. The page Jun Fukuyama's P≠NP Paper has more information on his background, the proof, discussion, etc. —Vzn 17:34, 28 December 2012 (UTC)

Outline for an NP vs P/Poly proof

An outline/overview for an NP vs P/Poly proof based on monotone circuits, hypergraphs, factoring, and slice functions was posted to a blog on 12/8/2012. The author does not claim a proof but presents a novel, high-level outline based on many years of research. The author is active online, has 2.7K reputation on the Theoretical Computer Science Stack Exchange, and has posted a peer-reviewed fragment of the outline there, "cartesian bitwise join of bit vectors", and also on MathOverflow, where there is a distant link to research on ZDDs by Knuth. —Vzn 18:11, 28 December 2012 (UTC)

Collatz conjecture via computational analysis

Also by the same author, the page Collatz conjecture experiments contains computational/algorithmic experiments analyzing the Collatz conjecture. In May 2013 some early/preliminary yet remarkable analysis (somewhat related to the techniques described) was obtained that seems to be the basis for an inductive proof. The code is not yet included on that page, and the proof will require a sort of "reverse engineering" of the code's result. The current page has code to analyze the problem via FSM transducers. A new approach that does not require the transducer was devised; it reveals a striking fractal-like, apparently hidden, yet highly regular order/pattern in the traversal of all integers. The new approach is based on a simple graph ordering of the computational verification sequence. —Vzn 17:03, 24 May 2013 (UTC)
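As a point of reference, here is a minimal Python sketch of the basic computational verification underlying such experiments (this is not the transducer or graph-ordering approach itself; function names are my own):

```python
def collatz_steps(n):
    """Number of Collatz steps (n -> n/2 or n -> 3n+1) to reach 1."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

def verify_up_to(limit):
    """Check that every start value below `limit` reaches 1, stopping each
    trajectory as soon as it drops below its start: smaller values were
    already checked, which is the usual inductive shortcut."""
    for start in range(2, limit):
        n = start
        while n >= start:
            n = n // 2 if n % 2 == 0 else 3 * n + 1
    return True
```

For example, the well-known trajectory of 27 takes 111 steps to reach 1.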