Comments on Computational Complexity: "Thanks for better upper bound. Still want better lower bound- On Comm Comp of MAX" by Lance Fortnow (15 comments; feed last updated 2022-12-02)

A small optimization: suppose we relax the requirement that the special block of L bits occurs nowhere in the number. Instead, disambiguate it: every time it is sent, it is followed by 0 if it is actually a normal block occurring in the number, by 10 if it is special and A > B, and by 11 if it is special and A < B.

Let's choose the least frequent among all the L-bit blocks; then in the worst case it occurs n/(L*2^L) times. The total number of bits is n + 4L + n/(L*2^L) + O(1). The trade-off between the second and third summands is a little more favorable than the previous condition on L. In particular, L = lg(n) - lg(lg(n)) really works here. Probably some better functions work too, although none as good as lg(lg(n)).
-- Anatoly Vorobey, 2008-10-21 19:11

This comment didn't go through before (probably I forgot to click the last button), so I'm trying again. If you can show that the guy with the largest number knows roughly (say, within log n bits) which is the first bit that differs, then information theory gives the correct lower bound.
I don't see how to prove this, but my intuition says that in any good protocol he will know this.
-- Anonymous, 2008-10-18 16:44

Not a Romanian. Anon #8's bound comes from a simple manipulation of the algebra. We require that 2^L > ceil(N/L), so

  2^L = e^(L ln 2) > N/L
  (L ln 2) e^(L ln 2) > N ln 2
  L ln 2 > W(N ln 2)
  L > W(N ln 2) / ln 2

TheKro, we don't need L to divide evenly into N; as long as the choice of L is deterministic (the ceiling of the logarithm, for example), we can just pad N with leading zeros to divide things evenly.
-- Anonymous, 2008-10-16 17:18

Anon #8 gives no proof, and hardly an explanation, but his solution sounds very close to the truth, perhaps exactly the truth. Using the special markers A and B is similar to the need to send an end-of-file marker or an escape symbol when compressing.

If I had to guess, then #8 is either a compression person, or Mihai Patrascu.
The arrogant style fits Mihai just fine, as well.
-- Anonymous, 2008-10-16 16:23

TheKro, for the last chunk, which is (n mod L) bits long, just budget a couple of bits from the O(1) term to signal it more efficiently than sending a length-L chunk.
-- Anonymous, 2008-10-15 13:31

Perhaps I'm missing something, but this result only seems to hold if we know that n has a prime factor of O(log n). What if n is prime?
-- Steve, 2008-10-15 04:31

I am always wondering why theoretical computer science never takes physical time into account. At least with quantum computers a direct connection to physical time does exist, and synchronization of the imperfect clocks of classical computers (see Lynch's book) is no longer an issue. So let's extend our model by assuming Alice and Bob both have access to the ticks of physical time. Alice proceeds by sending her most significant bit first in, say, one clock tick, and then waits a maximum timeout of another tick for Bob's response. If there is no response, Alice proceeds with sending her next most significant bit, and so forth. Bob will only respond to Alice's transmission if he detects a mismatch with the respective bit of his own bit string. If he detects a mismatch, Bob takes over and transmits all further bits of his string. In either case, both parties will know the maximum of both bit strings at the end of the protocol by combining the two half-strings in the obvious way.
The number of bits transmitted is at most n+1 (the extra bit is the first, most significantly mismatching bit). The new resource to bound in this example is the physical time spent by the protocol. In the worst case it will take Alice 2n time ticks to transmit n bits: one for transmitting, one for listening. If Alice and Bob shared a full-duplex channel (a pair of opposite unidirectional channels), Alice could transmit one bit per tick and listen on the other channel for Bob's response. In such a model, at most n+1 bits and n+1 time ticks are consumed. Acknowledgements: the idea of using time to communicate by *not* sending something is not new; as far as I am aware, it is due to a 1984 paper of Leslie Lamport.
-- Anonymous, 2008-10-15 03:58

L must be at least W(N ln 2)/ln 2, where W(x) is the Lambert W function, the inverse of f(x) = x e^x. W also satisfies W(x) = ln(x) - ln(W(x)), so L = lg(n) - lg(lg(n) - lg(lg(n))) is pretty close.
-- Anonymous, 2008-10-14 12:35

Anon #6: if you want to try a counting argument, you're going to have to explain why the same argument doesn't also apply to the variation of the problem in which the two players don't care to know what value the max has; they only want to know which one of them has it.
That problem can be solved easily in n+1 bits: the first player sends her number to the second, and the second one sends back one bit saying who has the max.
-- Anonymous, 2008-10-14 00:46

Proving the lower bound: has anybody tried to count the number of possible m-bit communications and compare it to the number of possible inputs?
-- Anonymous, 2008-10-13 17:45

If you had looked at the analysis in the paper, you would have seen that they get n + 2 log n + O(1).
-- Anon 5 from the previous post, 2008-10-13 14:18

Why are you interested in this gap between n and n + O(lg n), for a rather uninteresting problem?
-- Anonymous, 2008-10-13 13:17

I think L = lg(n) - lg(lg(n) - lg(lg(n))) works as a slightly better version of L = lg(n).

To the anonymous above: the parties need some way to signal to each other that they received a block not identical to their own, and that therefore they should stop the exchange and the larger party should transmit the rest. You need some way to make this signal stand out from a regular block of L bits in step 3. If you use even one additional 'control bit' in every transmission, you're already wasting too many bits.
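To put a number on that waste: an editor's sketch, where the function names are mine and the 4L overhead constant for the reserved-block scheme is a stand-in borrowed from the optimization comment earlier in the thread (the paper's own analysis gives n + 2 log n + O(1)).

```python
import math

def naive_cost(n):
    """Append one control bit to every L-bit block: about n/L wasted bits."""
    L = max(1, round(math.log2(n)))
    return n + math.ceil(n / L)

def block_signal_cost(n):
    """Reserved-block signalling: overhead is a few multiples of L = lg n.

    The 4*L constant here is a stand-in for the O(lg n) overhead term.
    """
    L = max(1, round(math.log2(n)))
    return n + 4 * L

for n in (2**10, 2**20):
    print(n, naive_cost(n) - n, block_signal_cost(n) - n)
```

Even at n = 2^20 the per-block control bit costs tens of thousands of extra bits, versus under a hundred for an O(lg n) signalling scheme.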
Stage #2 is a neat trick to avoid that.
-- Anatoly Vorobey, 2008-10-13 13:04

I might be a bit daft, but what is stage #2 for? Can't it be skipped?
-- Anonymous, 2008-10-13 12:54

A few typos: you use A and B as terminating strings at the beginning, but B0 and B1 later. The condition on L is mistyped as 2^L > L/n at the end (it should be 2^L > n/L). Also, L = lg(n) - lg(lg(n)) appears not to work: e.g. for n = 256, L = 8 - 3 = 5, and 2^L is too small.
-- Anatoly Vorobey, 2008-10-13 12:24
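The failure noted in the last comment is easy to check numerically. An editor's sketch (`ok` and `min_L` are hypothetical helper names) that finds the smallest integer L with 2^L >= ceil(n/L) and tests the closed-form guesses from the thread:

```python
import math

def ok(n, L):
    """The block-length condition discussed above: 2^L >= ceil(n / L)."""
    return 2 ** L >= math.ceil(n / L)

def min_L(n):
    """Smallest integer block length L satisfying the condition."""
    L = 1
    while not ok(n, L):
        L += 1
    return L

lg = math.log2
for n in (256, 1024, 2**20):
    g1 = math.ceil(lg(n) - lg(lg(n)))                # L = lg n - lg lg n
    g2 = math.ceil(lg(n) - lg(lg(n) - lg(lg(n))))    # the refined guess
    print(n, min_L(n), (g1, ok(n, g1)), (g2, ok(n, g2)))
```

For n = 256 this reproduces the failure above: L = 5 gives 2^5 = 32 < ceil(256/5) = 52, while the refined guess rounds up to a workable L = 6.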