Sometimes words or phrases are used in MATH and then spread to the REAL WORLD. I have blogged about how the term Prisoner's Dilemma has become a real-world phrase here, and speculated about the terms Venn Diagram, Zeno's Paradox, and n+1 here.
I recently came across a pair of words that are related--- one of them seems to be (like Prisoner's Dilemma) going from MATH to THE REAL WORLD. The other one is very odd in that I've seen it in the REAL WORLD but it SHOULD be in MATH.
Long Tail: A probability distribution has a long tail if there are MANY items that each have a SMALL but NON-ZERO prob of happening. This is a term in probability. However, I have seen it used in the REAL WORLD, as in Amazon has a long-tail strategy, meaning that they will sell LOTS of DIFFERENT things even if the number of people buying some of them is small (like this, which is ranked 9,578,520th--- though I doubt they can be that precise). This article from the Atlantic Monthly points out that ESPN used to have a long-tail strategy (e.g., showing billiards and other sports that are not that popular, but A LOT of them) but then abandoned it for... see next definition. Note that the term Long Tail is used both for a type of prob dist and for a marketing strategy related to it. How common a term is Long Tail? It gets 66,500,000 hits on Google. The first page has the definition above only. The 10th page had about half of the hits with the def above.
Fat Head: A strategy where you concentrate on just a few items. ESPN is doing that by covering just a few sports, but the most-watched ones (too bad, I was hoping they would cover my favorite sport, chess boxing). This SHOULD be a math term for a prob dist with just a few points of high prob. I asked a friend in the ML community and he assures me that NO, it's not a math term--- but it SHOULD be! How common a term is this? It gets 2,300,000 hits on Google. The first page seems NOT to have ANY reference to the definition above.
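A minimal numerical sketch of the contrast (the Zipf-style weights, the item count, and the top-10 cutoff are all my own illustration, not anything from the marketing literature):

```python
# Contrast a "long tail" with a "fat head" over the same 1000 items.
N = 1000

# Long tail: Zipf-like weights, so MANY items each get a SMALL but
# NON-ZERO probability.
weights = [1.0 / (rank + 1) for rank in range(N)]
total = sum(weights)
long_tail = [w / total for w in weights]

# Fat head: nearly all the mass sits on a handful of items.
fat_head = [0.3, 0.3, 0.3] + [0.1 / (N - 3)] * (N - 3)

# Everything past the top 10 still carries real mass in the long-tail
# distribution, but almost none in the fat head.
tail_mass_long = sum(long_tail[10:])
tail_mass_fat = sum(fat_head[10:])
assert tail_mass_long > 0.5
assert tail_mass_fat < 0.11
```

So an Amazon-style strategy lives off the mass in `long_tail[10:]`, while an ESPN-style strategy lives off the first few entries of `fat_head`.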
SO- this COULD be a case where a term used in the REAL WORLD migrates to MATH with essentially the same meaning. This isn't that uncommon (the term Continuity comes to mind) but this time I will have predicted it! Maybe I should do Machine Learning.
Computational Complexity and other fun stuff in math and computer science from Lance Fortnow and Bill Gasarch
Monday, September 30, 2013
Saturday, September 28, 2013
Complexity and FOCS
The Conference on Computational Complexity Call for Papers is out, deadline November 27.
The deadline for early registration and hotel for the FOCS conference in Berkeley is October 4. Student travel support is available.
Thursday, September 26, 2013
Dealing with Death
Mary Jean Harrold, a professor of software engineering at Georgia Tech, passed away last week. Mary Jean was 67 and still quite active before the cancer struck.
Computer science is still a relatively young field and most of even the early computer scientists remain quite alive. So a death in the field, particularly a colleague, makes a mark because it typically happens at a young age. I've lost five co-authors (Avner Magen, Steve Mahaney, Andrej Muchnik, Nick Reingold, Carl Smith) all well before their time. Each death is a stark reminder of what's important in life: what does the next theorem mean when life seems so short?
We're nearing a time in computer science that many of our ranks will die after living to a ripe old age. Those remembrances will be of a life well lived. But there will always be lives cut short. The best we can do is remember them and move on and continue to build on the research tradition they left behind.
Tuesday, September 24, 2013
Crystal Math- What NUMB3RS and BREAKING BAD both get wrong
The TV show Numb3rs had as a premise that a GENIUS mathematician could help solve crimes. Is this true? I rather doubt you need a GENIUS--- though of course some prob, stat, data mining, the math behind forensics, and a few other things help. And it may help to know some number theory if a mathematician who is working on the Riemann Hypothesis has his daughter kidnapped. But I don't think you need someone on the level of Charles Eppes.
The TV show Breaking Bad (see the Honest Trailer for Breaking Bad and/or the Idiots Guide to Breaking Bad if you've seen at least the first 4.5 seasons) has as a premise that a GENIUS chemist can make really good crystal meth (and, as a by-product, it's blue). I know nothing about the crystal meth business; however, a chemist friend of mine (who has never made the stuff) tells me that YES, being a careful chemist is good, and certainly better than a meth-head who is more likely to blow up his lab than to produce any, but a GENIUS chemist would not be any better than a good chemist.
The TV show Elementary (Sherlock Holmes in modern-day New York) and many other shows (Monk, Perception, Psych, The Mentalist, Columbo, and others I am sure) have as a premise that a GENIUS observer could help solve crimes. This may be more true than the above, but there are other tools available today (e.g., DNA).
All of these shows, and others, commit the FALLACY OF EXTRAPOLATION: taking a good idea and extrapolating it to absurdity.
Here is a non-TV example: if blogging is part of my job, and I can deduct job expenses for tax purposes, then I should be able to deduct the cost of the Numb3rs DVDs that I bought because of this post.
Sunday, September 22, 2013
STOC CFP still delayed- but I was asked to pass this along
Since there were comments on the blog about the STOC and CCC CFPs not being out yet, I emailed various people who are in the know. I got email from David Shmoys (STOC 2014 PC chair) telling me:
(1) The STOC call is still delayed.
(2) There is a website about it, here, that is INCORRECT--- the REAL deadline for submission will be Nov 11, 2013 (4PM East Coast time).
(3) Please post this correction on complexity blog.
(So I just did.)
Note that Lance and I are NOT involved with the organization of STOC or CCC. The blog entry is just passing along information.
Wednesday, September 18, 2013
tl;dr
Every now and then new words and phrases come into our lexicon, like the unfortunate "twerking". Here's another: "tl;dr", short for "too long; didn't read". I'm guessing it started as an insult/excuse not to read a long document, blog post or email. Respond "tl;dr" and you've put the blame on the writer for being too loquacious for the tweet-friendly generation.
Now I see tl;dr used as a short synopsis, sometimes by the authors themselves: the new executive summary, if the executive has six seconds to read. Maybe I should tl;dr my class lectures: "Finite Automata: easy to analyze but too weak to do much interesting".
Are we really moving to this brave new world of micro-attention spans? Is this just another reason that newspapers are dying and blogs are passé? When I write email, should I keep it short and be misunderstood, make it long and have it not be read, or add a tl;dr summary and get the worst of both worlds?
Monday, September 16, 2013
Did YOU think the NSA could factor fast?
Before the recent revelations about the NSA (see Lance's post and Scott's post), I would tell my class, when teaching P and NP:
We have very good reasons to think that Factoring is NOT NP-complete. As for P--- much murkier. People have tried to get it into P because of crypto and have not succeeded, hence many people think that Factoring is NOT in P. But there is so much math there that perhaps could be exploited to show it IS in P. Another very odd possibility is that it is KNOWN to be in P, but only by the NSA. Crypto has had this before--- some concepts were known to governments before they were known to the public, even the academic public.
I will need to revise that now. BEFORE the recent revelations there were
the following points of view on factoring:
- The NSA cannot factor any better than what is known in the literature. Maybe a bit better because they use more parallelism.
- The NSA has taken the known algorithms and found the right parameters and has special purpose hardware so can do them better than anyone else, but nothing of interest mathematically. Perhaps some very interesting subtle points of math and hardware. What they have would not get into STOC/FOCS/CRYPTO (though maybe it should- that's another debate). This is the one I believed.
- The NSA has an algorithm that is better than the literature (e.g., exponential in (log n)^{1/5}). But not in P. This would surely get into STOC/FOCS/CRYPTO and win a prize.
- The NSA has factoring in P through some very interesting and new mathematics. If this was public then perhaps a Turing Award. Some serious number theorists do think that Factoring IS in P, so this one is not quite so implausible.
- The NSA has a quantum computer that factors quickly. I do not know of anyone serious who believed this. Of course, this could be a case of the No True Scotsman Paradox--- if someone really believed this I would (perhaps unfairly) declare them non-serious.
- The NSA does not have a better algorithm, but has managed to put trapdoors in stuff so that they and only they can break certain codes. (A covert version of the Clipper Chip.) So they can break codes, but not in a way that is interesting mathematically.
The truth seems to be somewhere between items 1 and 2, closer to 1, and also item 6.
In particular, the NSA does not seem that much ahead of academics.
In the past governments were way ahead of academics in crypto. This no longer seems to be the case (at least in America). I speculate that this is because there is now a large community of people doing research in crypto openly, publishing openly, so the government is no longer the only (or almost the only) game in town. Also, many non-government industries use crypto and some do research in it. This also helps the NSA- they can use results in the open literature, but they can't get that much ahead of it.
Are there other fields where the government is ahead of academics and industry? On a guess stuff with weapons and weapon detection, since not many academics work on that. Maybe sociology since the government has census data and other data that is not available to the public.
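For a sense of what "the known algorithms in the literature" look like at the low end, here is a toy sketch of Pollard's rho method (my own illustrative choice, nothing NSA-grade): far better than trial division, yet still exponential in the bit-length of the input, which is why options 3 and 4 above would be such big news.

```python
import math
import random

def pollard_rho(n):
    """Find a nontrivial factor of a composite number n.

    Expected work is roughly O(n^{1/4}) steps: much better than trial
    division's O(n^{1/2}), but still exponential in log n, so nothing
    here threatens factoring-based cryptography.
    Only call this on composite n; on a prime it would loop forever.
    """
    if n % 2 == 0:
        return 2
    while True:
        c = random.randrange(1, n)
        f = lambda x: (x * x + c) % n  # the "random-looking" iteration
        x = y = random.randrange(2, n)
        d = 1
        while d == 1:
            x = f(x)        # tortoise takes one step
            y = f(f(y))     # hare takes two steps
            d = math.gcd(abs(x - y), n)
        if d != n:          # found a nontrivial factor
            return d        # else retry with a fresh constant c

# 8051 = 83 * 97; pollard_rho(8051) returns one of the prime factors.
factor = pollard_rho(8051)
```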
Thursday, September 12, 2013
Cryptography and the NSA
Back at Northwestern I occasionally taught an undergraduate cryptography class, since I was the local expert in the field (a statement not even remotely true at Georgia Tech). I would cover the Advanced Encryption Standard, an implementation of a one-way key-based permutation. AES has many components, including an S-box that seems like a random shuffle but is actually based on the inverse of a polynomial. One sentence in the textbook of Trappe and Washington made the following point (page 161):
The S-box was constructed in an explicit and simple algebraic way so as to avoid any suspicions of trapdoors built into the algorithm.
Really? Can't we trust the government not to put back doors into our standardized cryptographic algorithms?
After reading last week's New York Times article on the NSA, I realize my naivety. The NYT article doesn't go into how and which protocols the NSA has their hand in but I now understand the concern.
It doesn't look like the NSA has actually broken cryptographic protocols, has a secret quantum-like computer in its basement, or has polynomial-time algorithms for SAT. I could go on for pages, but Scott has done an excellent job talking about the complexity issues involved. They've more likely found ways to access your information before it has been encrypted or after it's been decrypted.
Matthew Green wrote a nice post speculating on what the NSA might be able to do, so nice that it caused some controversy at Johns Hopkins.
The whole Snowden affair gives us a glimpse into the NSA but they hide their capabilities well and we'll never know the full extent of their knowledge.
Monday, September 09, 2013
T/F - No Explanation needed VS T/F-Explanation needed.
One of the comments on my blog post on types of questions for exams suggested:
A True/False math question where they have to prove their answer. A student who picks the wrong answer can figure that out during the proof and then correct their answer. A student who picks the wrong answer and proves it has proven they really don't have a clue.
Actually, I once did an experiment about this! It's only one experiment, so I don't know what to read into it, but I will describe it and speculate.
CMSC 250 is the sophomore Discrete Math course, required for all majors; CS 3 is a co-req. It's a course on how to prove simple things. We DO go over how a FOR ALL statement can be true vacuously (e.g., all of the students over 10 feet tall will get an A+). Around 150 students take the course. In the spring there is an honors section of about 20. I PLANNED the following:
- In Spring of 2008 one of the questions on the final was a set of FIVE statements where the students had to, for each statement, say if it's TRUE or FALSE, NO JUSTIFICATION NECESSARY. One of the statements was: If A is a set of natural numbers such that the powerset of A has 5 elements then A is infinite.
- In Spring of 2010 one of the questions on the final was a set of FIVE statements where the students had to, for each statement, say if it's TRUE or FALSE and, IF TRUE, GIVE A SHORT JUSTIFICATION; IF FALSE, GIVE A COUNTEREXAMPLE.
Note that the statement is TRUE since there are NO such sets A.
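The vacuousness is easy to check mechanically; a small sketch of my own, just to make the point concrete:

```python
from itertools import combinations

def powerset_size(a):
    # Enumerate every subset of a; of course |P(A)| = 2^|A|.
    return sum(1 for r in range(len(a) + 1) for _ in combinations(a, r))

# Powerset sizes of finite sets are exactly the powers of 2, so NO set
# A (finite or infinite) has a powerset with 5 elements.  Hence the
# implication "if |P(A)| = 5 then A is infinite" is vacuously TRUE.
sizes = [powerset_size(range(k)) for k in range(8)]
assert sizes == [2 ** k for k in range(8)]
assert 5 not in sizes
```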
So, how did they do?
- When NOT needing to justify or give a counterexample: of the 150 students in the class, 14 got it right. There was no correlation (or perhaps a very weak one) between those who got it right and those who did well in the course or were in the honors section.
- When they DID need to justify or give a counterexample: of the 152 students in the class, 19 got it right. There was a slightly stronger correlation between those who got it right and those who did well in the course or were in the honors section.
This is a tricky question, which some non-theory faculty members had trouble with when I told them this story. Exam pressure was likely NOT a factor, as my exams have very little time pressure--- by the end of the exam there were only about 30 students left taking it.
Here are the answers I got:
- FALSE- clearly A is finite.
- FALSE- too obvious to say why.
- FALSE- there is no such A
- Variants of the above.
- Incoherent things that may be similar to the above.
Friday, September 06, 2013
Myhill Nerode versus Pumping Lemma
I have seen some recent backlash against using the pumping lemma to show that languages are not regular, and as I am now teaching regular languages I had to choose: should I teach the pumping lemma or Myhill-Nerode to show languages are not regular? Let's review both (statements taken from Wikipedia).
Pumping Lemma: If a language L is regular, then there exists a number p ≥ 1 (the pumping length) such that every string uwv in L with |w| ≥ p can be written in the form uwv = uxyzv with strings x, y and z such that |xy| ≤ p, |y| ≥ 1 and uxyⁱzv is in L for every integer i ≥ 0.
Myhill-Nerode: Given a language L, and a pair of strings x and y, define a distinguishing extension to be a string z such that exactly one of the two strings xz and yz belongs to L. Define a relation R_L on strings by the rule that x R_L y if there is no distinguishing extension for x and y. It is easy to show that R_L is an equivalence relation on strings, and thus it divides the set of all finite strings into equivalence classes.
The Myhill–Nerode theorem states that L is regular if and only if R_L has a finite number of equivalence classes, and moreover that the number of states in the smallest deterministic finite automaton (DFA) recognizing L is equal to the number of equivalence classes of R_L. In particular, this implies that there is a unique minimal DFA with the minimum number of states.
The two basic complaints about the pumping lemma: five quantifiers, and it is not complete--- there are nonregular languages that can be pumped. To the first point: if you think of the pumping lemma as a game, with the adversary choosing p, x, y and z, the quantification is not as confusing as some would think. Myhill-Nerode also has five quantifiers when you spell it out: for all regular L, there exist x_1,...,x_k such that for all y there is an i such that for all z, x_i z is in L iff yz is in L.
As to the second point, the counterexamples are contrived and usually go away with simple closure properties. Consider the one from Wikipedia: taking L ∩ (01(2∪3))* eliminates the strings in the first part of L, and now it is easy to pump.
So I don't buy the arguments for Myhill-Nerode over pumping. Nevertheless I'll teach the pumping lemma and Myhill-Nerode because they are both so cool.
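To see Myhill-Nerode in action on the standard example (the language {0ⁿ1ⁿ} is my choice here, not from the post): for i ≠ j, the extension z = 1^i distinguishes 0^i from 0^j, so R_L has infinitely many equivalence classes and the language is not regular. A quick sketch checking the distinguishing extensions:

```python
def in_L(s):
    # L = { 0^n 1^n : n >= 0 }
    n = len(s) // 2
    return s == "0" * n + "1" * n

# For i != j, z = 1^i is a distinguishing extension for 0^i and 0^j:
# (0^i)(1^i) is in L but (0^j)(1^i) is not.  Infinitely many pairwise
# inequivalent prefixes means infinitely many classes of R_L,
# so by Myhill-Nerode the language is not regular.
for i in range(6):
    for j in range(6):
        if i != j:
            z = "1" * i
            assert in_L("0" * i + z) and not in_L("0" * j + z)
```

The same argument takes one line per language once you have the membership test, which is a big part of Myhill-Nerode's appeal.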
Tuesday, September 03, 2013
Types of questions for exams
QUESTION: Give as many types of exam questions as you can, give examples, and comment on whether each is a good type of question.
My answer below.
When teaching a large course such as sophomore Discrete Math (150-200 students) I tend to get a uniform distribution skewed a bit toward the high side. More precisely: I tend to get roughly 10 students in EVERY 10-point interval: 0-10, 10-20, 20-30, ..., 90-100, with fewer on the low side and more on the high side. The benefit of this is that the students who get (say) less than 40 CANNOT say Well--- everyone did badly. They really are sent a signal to either work harder or drop (I tell them this directly as well). I don't understand profs who give exams where nobody cracks 50/100 (I have heard this is common in Physics). They are wasting half of the grade spectrum.
- A problem that some students can get right even if they never had the course, because they have seen it in some other course. EXAMPLE: in a course on Ramsey Theory, a question that uses the Prob. Method. PRO: The question is still in scope for the course. CON: A bit awkward that someone may have learned the material elsewhere. UPSHOT: This is FINE.
- A problem that some students can get right even if they never had the course, because they are quite clever. EXAMPLE: easy combinatorics or probability in a sophomore Discrete Math course. PRO: The question is still in scope for the course. CON: A bit awkward that someone may have missed class but still got it right. UPSHOT: This is FINE.
- A rigged question--- students saw two examples in class, two examples on the HW and now have to do one themselves. EXAMPLE: proving numbers irrational. PRO: Clearly in scope and fair. PRO: They will surely understand what you are asking for. CON: They may get it right via memory rather than understanding (they may not even know the difference.) UPSHOT: This is FINE though it requires some planning ahead of time.
- A rigged question with a twist--- students saw two examples in class, two examples on the HW, and now have to do one themselves, but it's DIFFERENT in an important way. EXAMPLE: in class and on the HW do many problems like Here is the distribution, here is a random var, what is its expected value, but on the exam give Here is a random var, here is what we want for the expected value, give a distribution that achieves that. PRO: Harder to memorize a template. CON: May be hard to grade as they say odd things. CON: May be confusing to know what you are asking for, even for good students. UPSHOT: This is FINE though it requires some planning ahead of time.
- A problem that requires utter mastery of the material but no creative thought. EXAMPLE: Give the algorithm (that we did in class) showing that membership for CFGs is in P. Write it up so that someone who had never seen it can understand it. PRO: Straightforward yet hard to get via memorization. CON: Might be too time-consuming for an exam. CON: (From experience) no matter how much you say, in bold letters, things like Write it up so that someone who had never seen it can understand it, they will skip steps and write it up badly, and it's hard to tell if THEY really know it. UPSHOT: I do this but only in certain cases.
- A problem that requires them to be creative (this is ill-defined, but it's the opposite of the one above). PRO: If they truly understand the material they can do this. CON: My PRO may be incorrect. UPSHOT: Absolutely fine for HW, which isn't worth much of the grade anyway, and I can enlighten them. I tend to avoid these on exams. Though the line between creativity and standard material is a thin one. (Problem for an exam: how thin, in millimeters?)
- A giveaway question. When I teach Formal Language Theory I have had on the exam (going back to when I was Harry Lewis's TA in 1981): Give an example of a string of length 4 over the alphabet {a,b}. An unintended consequence: if they CAN'T do this it's a really bad sign. I have asked this question many times and I have literally NEVER seen someone get it wrong and pass the course. I have gotten the following wrong answers: ab*, ababa, and a DFA recognizing aaaa (which I was tempted to give credit for but did not). Incidentally, the most common right answer has always been abab. Second is abba. PRO: I have this one early in the exam to calm them down.
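The "CFGs in P" mastery question above presumably refers to the CYK algorithm; here is a sketch of it (the grammar for 0ⁿ1ⁿ at the bottom is my own illustrative choice), which is exactly the kind of thing students struggle to write up so that a stranger could follow it:

```python
def cyk(word, start, unary, binary):
    """CYK membership test for a grammar in Chomsky normal form.

    unary:  list of (A, terminal) rules  A -> terminal
    binary: list of (A, B, C) rules      A -> B C
    Runs in O(|word|^3 * |grammar|) time, witnessing that every
    context-free language is in P.
    """
    n = len(word)
    if n == 0:
        return False  # handle the empty string separately if needed
    # table[span - 1][i] = set of variables deriving word[i : i + span]
    table = [[set() for _ in range(n)] for _ in range(n)]
    for i, ch in enumerate(word):
        table[0][i] = {A for A, c in unary if c == ch}
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            for k in range(1, span):  # split word[i:i+span] after k chars
                for A, B, C in binary:
                    if B in table[k - 1][i] and C in table[span - k - 1][i + k]:
                        table[span - 1][i].add(A)
    return start in table[n - 1][0]

# A CNF grammar for { 0^n 1^n : n >= 1 }:
#   S -> Z T | Z O,   T -> S O,   Z -> 0,   O -> 1
unary = [("Z", "0"), ("O", "1")]
binary = [("S", "Z", "T"), ("S", "Z", "O"), ("T", "S", "O")]
```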