Monday, October 31, 2011
I started this month asking about the nature of randomness and how we generate it for our computers. Let me end the month talking about Intel's clever new digital approach to creating random bits.
Intel chips used to have an analog random bit generator based on noise in the circuits. But improved circuits reduced the noise, limiting the effectiveness of these generators.
In the September IEEE Spectrum, Intel architects Greg Taylor and George Cox give an overview of a digital random bit generator that will sit in future Intel chips. The article is a short and fun read.
The heart of the article describes a simple digital circuit.
Initially both transistors force full voltage at both nodes A and B. Intel had to use special inverters (NOT gates) that can withstand not being able to invert at this point. A clock signal slowly turns off the transistors and the inverters go to work, reducing A and B to an equilibrium of half voltage each.
The magic now happens. This isn't a stable equilibrium so even the slightest noise quickly drives one of the nodes to full voltage and the other to no voltage. Which node goes to one depends on the direction of the noise and that's how you get your random bit.
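To get a feel for why this produces a random bit, here is a toy numerical sketch of my own (the parameters and the linear inverter model are made up, not Intel's actual design): release two idealized cross-coupled inverters at the half-voltage point, inject a whiff of noise, and let the positive feedback pick a winner.

```python
# Toy model, not Intel's circuit: two cross-coupled inverters released at the
# unstable half-voltage equilibrium. Tiny noise decides which node wins, and
# feedback drives the nodes to opposite rails, giving one random bit per reset.
import random

def generate_bit(steps=400, dt=0.05, gain=3.0, noise=1e-6):
    a = b = 0.5                              # nodes A and B start at half voltage
    for _ in range(steps):
        # each inverter drives its node toward the NOT of the other node,
        # modeled as a clipped linear inverter with gain around the midpoint
        target_a = min(1.0, max(0.0, 0.5 + gain * (0.5 - b)))
        target_b = min(1.0, max(0.0, 0.5 + gain * (0.5 - a)))
        a += dt * (target_a - a) + random.gauss(0, noise)
        b += dt * (target_b - b) + random.gauss(0, noise)
    return 1 if a > b else 0                 # report which rail node A landed on

print([generate_bit() for _ in range(20)])   # roughly half ones, half zeros
```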
Taylor and Cox found an idea that they might have sketched on a napkin, and yet it gives an incredibly simple and elegant solution to an important problem. This is why I love computer science.
Tuesday, October 25, 2011
John McCarthy (1927-2011)
First Steve and then Dennis and now we have the death of a third computing pioneer this month. John McCarthy passed away earlier this week at the age of 84.
McCarthy was one of the founders and early promoters of Artificial Intelligence and gave the field its name. He developed Lisp for the same reason Newton invented calculus: he needed a system to do his research, so he created his own. Lisp, built on Church's λ-calculus, was the first popular example of a functional programming language, an entirely different way to think about programming than the more structured languages. McCarthy received the ACM Turing Award in 1971.
McCarthy truly believed a computer could capture human intelligence and his pioneering work may yet help make that happen.
Monday, October 24, 2011
It's Open Access Week
Open Access Week starts today. Interestingly a number of traditional journal publishers, like Springer, are sponsors as they try to figure out how to modify their business model in a changing publication environment.
We'd love all our papers to be as widely available as possible but no journals are truly free. They either need a revenue stream whether that comes from authors, readers, libraries or some other outside source, or require a considerable amount of volunteer effort and coordination beyond just editing and reviewing.
I found out about Open Access Week from the ACM as they are promoting their Author-ize service that lets authors give a link on their home pages that allows their readers to freely download the ACM version of their papers. I consider ACM one of the good publishers, reasonably priced, and they've already allowed us to publish our own papers on their website and use them in other publications. David Rosenthal has other opinions.
There is a pledge Research Without Walls going around to "assist in the peer review process (as a reviewer, board/committee member, chair, editor, etc.) only for conferences, journals, and other publication venues that make all accepted publications available to the public for free via the web." Are you signers willing to forgo serving on a STOC or FOCS PC?
I heard a great (though hard to parse) quote second hand from an economist about the ethics of illegally downloading music.
"If if I had to pay for it I would pay for it then I will pay for it."
The iTunes model addressed this concern by pricing music so cheaply that one would feel better paying for it than not. Academic publishers should learn this lesson and also price downloads of research papers at $0.99.
My biggest fear of the open access movement is that without a strong alternative model it will just lead to even fewer CS papers getting published in journals. Even open access won't give access to a paper never written.
Friday, October 21, 2011
The Cup Holder Principle
The story goes that when Toyota engineers started to design the first cup holders in the 80's, they went to a local 7-11 and got every different cup 7-11 had to make sure their design would work on all the cups in current use. These days the cup designers have to make sure their cups will work in today's cup holders.
This is one of the starkest examples of a technology for A initially being driven by B, and now the technology for A driving the technology for B.
How much does this happen in theoretical computer science? Do we design algorithms to use an already established data structure? Do we modify our definitions of some object to make it a group or a field? Do we create a cryptographic protocol so that some "standard assumption" makes it secure?
Are these good or bad things? Sometimes it is really useful to make square pegs fit into round holes and other times we miss new opportunities.
Wednesday, October 19, 2011
Theorems that are impressive at first but then....
Mission Impossible was my favorite show as a kid. As an adult it would not make my top 20, and I wonder why I liked it so much as a kid. (Actually I do know- at the beginning they are given a well defined problem and they solve it, which is how Math works.)
This inspired this post: Is there some theorem that you were initially impressed with but are now far less impressed? I list things I have heard of for this category. I request that you submit your own examples.
- Every natural number is the sum of 4 squares. This was impressive and still is. Number theorists must use this all the time! Alas, aside from its use in Hilbert's 10th problem, and maybe a few other places, I have never seen it used, and I am now less impressed. However, this may be unwarranted. Some theorems in math are impressive for the achievement, others for their use later. This one IS impressive for its achievement. But, as far as I can tell (and I could be wrong), not for its uses.
- Every group is a group of permutations. Group Theorists must use this all the time! Alas, the proof makes you realize it's more of a tautology. Rarely used by Group Theorists. It is used in some of the proofs of Sylow's theorem. I do not know of any other uses. And this one is not impressive for its achievement.
- The Prime Number Theorem. Since results that are very very close to it can be gotten with much much much less effort, getting the actual constant down to 1 seems like too much sugar for a cent. (For more on PNT and a link to an easy proof of a weaker version see an old post of mine here.) However, this one is an achievement certainly. And it inspired other great mathematics.
- Poincaré's Conjecture says that if something looks, feels, and smells like a sphere, then it's a sphere. Is that really worth $1,000,000? Perhaps Perelman didn't think so either.
Monday, October 17, 2011
Teaching PCPs to Undergrads
The last few times I've taught undergraduate theory I've covered the PCP theorem. It's not complicated if you state it the right way:
PCP Theorem: For any constant α > 7/8, there is a polynomial-time computable function f mapping 3-CNFs to 3-CNFs such that for all formulas φ,
- If φ is satisfiable then f(φ) is satisfiable.
- If φ is not satisfiable then every assignment to f(φ) satisfies at most an α-fraction of the clauses.
I point out to the class that you can satisfy 7/8 of the clauses, in expectation, by just choosing a random assignment.
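A quick sanity check of that 7/8 figure on a made-up formula of my own (assuming each clause has three distinct variables): each clause is satisfied by 7/8 of all assignments, so by linearity a uniformly random assignment satisfies a 7/8 fraction of the clauses in expectation.

```python
# Sanity check on a made-up 3-CNF (each clause uses three distinct variables):
# a uniformly random assignment satisfies a 7/8 fraction of clauses in expectation.
from itertools import product

clauses = [(1, 2, 3), (-1, 2, 4), (1, -3, -4), (-2, -3, 4)]  # literal i = x_i, -i = NOT x_i
n = 4  # number of variables

def satisfied(clause, assignment):
    return any((lit > 0) == assignment[abs(lit) - 1] for lit in clause)

total = sum(
    satisfied(c, assignment)
    for assignment in product([False, True], repeat=n)
    for c in clauses
)
print(total / (2**n * len(clauses)))  # prints 0.875, exactly 7/8
```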
Also I show how the PCP theorem gives some (weak) approximation bounds for Clique. For each variable in each clause of f(φ) create a node of a graph, and connect two nodes as long as they aren't in the same clause and don't represent a variable and its negation. That gets you an 8/7-ε approximation lower bound for Clique. I give showing a constant approximation lower bound for Vertex Cover as a homework assignment.
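Here is a rough sketch, again my own and on a made-up formula, of the graph behind that Clique bound: one node per literal occurrence, with edges between occurrences from different clauses that don't contradict each other. A clique that picks one node per clause corresponds to consistently satisfying those clauses, so the maximum clique size equals the maximum number of simultaneously satisfiable clauses.

```python
# Sketch of the graph built from f(phi) on a made-up 3-CNF: a node for each
# literal occurrence, an edge between occurrences in different clauses that are
# not a variable and its negation. Max clique = max number of clauses that can
# be satisfied simultaneously, which is where the 8/7 - eps gap comes from.
from itertools import combinations

clauses = [(1, 2, 3), (-1, 2, 4), (1, -3, -4)]  # literal i = x_i, -i = NOT x_i

nodes = [(i, lit) for i, clause in enumerate(clauses) for lit in clause]
edges = [
    (u, v)
    for u, v in combinations(nodes, 2)
    if u[0] != v[0]        # not in the same clause
    and u[1] != -v[1]      # not a variable and its negation
]
print(len(nodes), "nodes and", len(edges), "edges")
```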
I don't even try to give an idea of the proof of the PCP theorem. I just say it would take an entire graduate class to cover the proof in its full detail. That's probably a lie: one of our ten-week quarter-long classes is not enough time to prove the very strong version of the PCP theorem stated above.
Thursday, October 13, 2011
Dennis Ritchie (1941-2011)
We lost another computing pioneer of a very different kind. Dennis Ritchie, who developed C and co-developed Unix, passed away last weekend. Ritchie and Ken Thompson received the 1983 Turing Award for their development of Unix.
In the early 80's I programmed extensively in assembly language on the Apple II and IBM 370. In both cases we needed speed a high-level language couldn't get us. C and Unix let us take advantage of high-level constructs and abstraction and yet retain the full power of the underlying machine. Ritchie changed the way we did computing.
If there is a moment that captures both Jobs and Ritchie, it was in 2002 when the Mac moved to the Unix-based OS X, from which Apple's iOS was later derived. As you play with the new iOS 5, you can't help but think of these great pioneers who made it possible.
Wednesday, October 12, 2011
If Bill Tweeted what would he tweet (Steve Jobs Edition)
- A more nuanced view of Steve Jobs: here.
- A less nuanced view of Steve Jobs: here.
- The next Steve Jobs: here
- A very nice NON-Steve Jobs post here.
- Is this an appropriate use of logarithms? From Andrew Sullivan's Blog (the boldface is mine): "In some ways, the emergence of a Republican candidate (Rick Perry) who takes every single aspect of George W. Bush's political persona and adds a logarithm, is a healthy sign. I'd rather have a candidate who is explicitly saying that his politics is based on religion and his political rallies are actually spiritual rallies, than one whose theocratically-driven conservatism is on the downlow."
- A type of Math Anxiety
- Should people learn math?
- The president invokes math: here
- A bad idea for a TV series: The intuitionist defense attorney: Just because you proved that A OR B did the crime, and then you showed NOT(A did it), does not mean that you have proven B did it.
Monday, October 10, 2011
More than East and West
The Obama Campaign is creating a Campaign Analytics Team.
"Love Data, Predictive Analytics, Social Media and Politics? The Analytics team for the Obama Campaign is hiring full-time analytics engineers and scientists at our headquarters in Chicago! To find out more, come meet us at the Stanford campus ..."
To find good computer scientists, the Obama campaign feels the need to look 1850 miles away. A stark reminder that especially in the tech world, people often forget that there is an America between Oakland and Philadelphia. Google, IBM, Microsoft and Yahoo have research labs around the globe but generally ignore middle America. Chicago is often considered at best a place to change planes.
Chicago has amazing arts, music, food, sports and architecture, everything you'd want in a city and considerably cheaper than the coasts. We know we have great universities and a strong intellectual core. How many cities have a battle for the minds between the Chicago Ideas Week starting today and the Chicago Humanities Festival beginning next week?
Chicago isn't as well known for its high tech. Computer science in Chicago is good but not yet as strong as it should be. We do have some very strong CS departments nearby at Illinois, Wisconsin, Michigan and Purdue. Chicago has had its successful startups most notably Groupon. Chicago should be a major high tech hub but we need to sell ourselves better.
Next time you are stranded at O'Hare, take the El to the Loop and check out our great city. Prepare to be amazed.
Thursday, October 06, 2011
Steve Jobs 1955-2011
It's one of those events. You'll always remember where you were when you heard that Steve Jobs passed away. I was at dinner with several visiting computer scientists, holdovers from the CCC council meeting earlier in the day. Our iPhones started buzzing and the news quickly spread. We gave a toast to the great Mr. Jobs.
Steve Jobs put the algorithm in our pocket. Computation moved from big rooms to the desktop to something we carry with us every waking minute. Jobs made it happen not just for us tech heads but for everyone. Jobs did it without an algorithm that analyzes data to see what the public wants. Jobs realized it takes an inner vision, a true artist, to make the power of computation something we all can use.
Wednesday, October 05, 2011
If you find a mistake in someone else's paper you should....
What do you do if you read a paper or book and find mistakes in it?
My first impulse is to say: Email the author. Always be polite and admit (which is true) that you may be misunderstanding something.
A few thoughts:
- If you offer a suggested correction then MAKE SURE IT IS CORRECT. The author may say "Oh, I guess that's a mistake, I better make the correction," having thought that you read it carefully. While that may well be the author's responsibility, be very careful. The first rule of proofreading is DO NO HARM.
- If the paper is a preprint then the author should be VERY GRATEFUL since they will be able to make the correction before it becomes official. But see next point.
- If the paper is already in a journal the author might want to correct the version on their own website. I can picture a day when the version on the author's website or the arXiv is BETTER than the so-called official version. So the author should be grateful here as well.
- For argument's sake, let's say that in Karp's classic paper Reducibility Among Combinatorial Problems, where he proves 21 problems NP-complete, he made a mistake on 0-1 programming. The problem IS NP-complete and the correct proof is in many textbooks (also, anyone reading this blog can probably do it themselves; see the sketch at the end of this post). Is it worth wasting Karp's time with this?
- Most authors will be surprised and delighted that someone read their paper.
- Some authors won't care. Either they left the field or they don't care about the constant they got wrong or they don't want to be bothered. That is their right; however, what to do? You can't publish an erratum for them.
- In High School while studying some Combinatorics I came across the following passage:
"The number of ways to arrange n distinct objects is n × (n-1) × ... × 1. For example, the number of ways to arrange 5 distinct objects is 5!."
I did not understand why they were so excited about the answer. Using an exclamation point seemed over the top. And CLEARLY there were two mistakes!
  - The answer of 5 is WRONG. The answer should be 5 × 4 × 3 × 2 × 1 = 120.
  - There is a spurious period after the exclamation point.
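As promised in the Karp item above, here is a minimal sketch of the standard reduction behind that 0-1 programming claim (my own illustration on a made-up formula, not Karp's original write-up): each 3-SAT clause becomes one 0-1 linear constraint, so the formula is satisfiable iff the 0-1 program is feasible.

```python
# Hypothetical illustration: encode a 3-CNF as a 0-1 integer program.
# A clause like (x1 OR NOT x2 OR x3) becomes x1 + (1 - x2) + x3 >= 1 over
# 0-1 variables, written below as a coefficient dict plus a right-hand side.
clauses = [(1, -2, 3), (-1, 2, -3)]  # made-up formula; literal i = x_i, -i = NOT x_i

def clause_to_constraint(clause):
    """Return (coeffs, rhs) meaning sum(coeffs[v] * x_v) >= rhs."""
    coeffs, rhs = {}, 1
    for lit in clause:
        var = abs(lit)
        if lit > 0:
            coeffs[var] = coeffs.get(var, 0) + 1
        else:                          # NOT x_v contributes (1 - x_v)
            coeffs[var] = coeffs.get(var, 0) - 1
            rhs -= 1
    return coeffs, rhs

for c in clauses:
    print(c, "->", clause_to_constraint(c))
# (1, -2, 3) -> ({1: 1, 2: -1, 3: 1}, 0), i.e. x1 - x2 + x3 >= 0
```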
Monday, October 03, 2011
What is Random?
One can get into great philosophical debates about what randomness is. Information that we can't compress. Information that's unpredictable. Information that we are willing to bet on.
When I define a probabilistic Turing machine I give it a special "coin state" which it enters and from which it magically lands in a special "heads" state or "tails" state uniformly and independently each time. I imagine a computer hooked up to a little box with a coin inside that gets flipped and some sensor or camera determines whether it landed heads or tails.
I have no problems thinking about probabilistic computation just like I have no issues with quantum machines which haven't been built yet or nondeterministic machines which will never exist.
We don't care where those random bits come from as long as they fulfill the right properties. Of course our computers don't have little coin boxes so they generate randomness using pseudorandom generators which don't fulfill all the properties we expect from true randomness. So we developed theories of PRGs and under what assumptions good PRGs exist. Whether we can use them depends on whether we use randomness for searching or hiding.
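As a loose, concrete illustration of that searching-versus-hiding split (my own Python sketch, not anything from the post): a seeded pseudorandom generator is perfectly good fuel for a randomized search, and even makes it reproducible, while bits meant to hide something should come from the operating system's entropy pool.

```python
# Searching vs hiding, loosely illustrated. A PRG (Mersenne Twister) is fine
# for randomized search; secrets, keys and nonces should come from OS entropy.
import random
import secrets

rng = random.Random(42)                 # deterministic given the seed: reproducible search
search_point = [rng.random() for _ in range(3)]
print("pseudorandom search point:", search_point)

key = secrets.token_bytes(16)           # 128 bits drawn from the OS entropy pool
print("secret key:", key.hex())
```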
We can't disprove that BPP = NEXP (everything in nondeterministic exponential time can be solved in probabilistic polynomial time). Were that to hold, true randomness would give us the secrets of the universe and PRGs wouldn't help much. Random bits would be worth their weight in gold, but can we get them? I'd make a fortune selling little coin boxes.