Wednesday, February 29, 2012

The Erdos-de Bruijn theorem

The mathematician Nicolaas Govert de Bruijn passed away on Feb 17, 2012. The number of things named after him is quite large. I will discuss the Erdos-de Bruijn theorem.

Erdos-de Bruijn theorem: An (infinite) graph G is k-colorable iff every finite subgraph is k-colorable.

I will sketch the proof for the case where G is countable. We can assume the vertices are {1,2,3,...}. Let COLi be a k-coloring of G restricted to {1,...,i}. We will use the COLi's to obtain a k-coloring of the entire graph G. We can assume the COLi's use colors {1,...,k}, so it makes sense to speak of the least color with a given property.

We color node 1 with the least color that an infinite number of the COLi's assign to node 1. Then REMOVE all of the COLi's that do not use that color on node 1. (We kill all those that disagree with us, as I told my students!) Note that there are still an infinite number of COLi's left.

We color node 2 with the least color that an infinite number of the COLi's THAT ARE LEFT assign to node 2. Then REMOVE all of the COLi's that do not use that color on node 2. Note that there are still an infinite number of COLi's left.

And so on.

End of Sketch of Proof.
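
To make the elimination process concrete, here is a minimal sketch in Python, run on finite data. (The function name and the representation of the COLi's as dicts are my own; and where the real argument needs infinitely many surviving colorings at every stage, a finite simulation can only keep the survivor list nonempty.)

    def limit_coloring(cols, n):
        """Extract a coloring of vertices 1..n from a family of prefix
        colorings, mimicking the sketch above.  Each coloring in cols
        is a dict mapping vertices 1..i to colors in {1,...,k}."""
        alive = [c for c in cols if len(c) >= n]  # colorings defined on 1..n
        chosen = {}
        for v in range(1, n + 1):
            # In the proof: the least color given to v by INFINITELY MANY
            # surviving colorings.  Finitely, take the least color used by
            # at least one survivor, so some survivors always remain.
            color = min(c[v] for c in alive)
            chosen[v] = color
            # Kill all those that disagree with us!
            alive = [c for c in alive if c[v] == color]
        return chosen
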
  1. This type of argument can be used to prove the following:
    1. If we already have the infinite Ramsey Theorem on N, we can obtain the finite Ramsey Theorem and (with a small trick) the Large Ramsey Theorem.
    2. If we already have the finite Dilworth theorem (any FINITE partial order of width w can be covered with w chains) then we can obtain the infinite version of Dilworth's theorem: any INFINITE partial order of width w can be covered with w chains.
  2. The method is called a compactness argument and is very general. It is related to topological compactness, but I won't go into that here. (If a reader has a short explanation or pointer, please post.)
  3. The method is noneffective in that if you are given a Turing machine that tells you, for all i and j, the value COLi(j), then the proof does not appear to be able to give you a Turing machine for a coloring of G. There are two ways this has been formalized, though I list three since the third one strengthens the second one. (For details see my survey of recursive combinatorics here.)
    1. (Bean, 1976) There is a computable graph (vertex set N, edge set decidable) that is 3-colorable but has no computable finite coloring whatsoever. (He also made the graph planar, which was not needed but is nice.)
    2. (Carstens and Pappinghaus, 1983) For every k ≥ 3 there is a highly computable graph (vertex set N, and the function that, given a vertex, outputs its finite set of neighbors is computable; a toy example appears after this list) that is k-colorable but not computably k-colorable. (NOTE: if a highly computable graph is k-colorable then IT IS computably (2k-1)-colorable.)
    3. (Schmerl, 1980) For every k ≥ 3 there is a highly computable graph (vertex set N, and the function that, given a vertex, outputs its finite set of neighbors is computable) that is k-colorable but not computably (2k-2)-colorable.
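
To make "highly computable" concrete, here is a toy instance (my own example, not from the papers above): the infinite path on N, presented by its computable neighbor function.

    def neighbors(v):
        """Neighbor function of the infinite path 0 - 1 - 2 - 3 - ...
        The graph is highly computable: the vertex set is N and this
        computable function returns the finite neighbor set of any
        vertex.  (This toy graph is of course computably 2-colorable:
        color v by v mod 2.)"""
        return [v + 1] if v == 0 else [v - 1, v + 1]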

Monday, February 27, 2012

Nash and the NSA

By now most of you have heard about Nash's recently released letters to the NSA (press release, letters). Not only did John Nash think about computation and cryptography, many of the ideas in these letters were well ahead of their time when he sent them in 1955.
  • Expressing a cryptographic process as a Boolean function with input bits. 
  • Breaking the cryptographic system as a function of the key length.
  • Exponential in key length as computationally hard and polynomial in key length as computationally easy.
His conjecture is quite striking.
For almost all sufficiently complex types of enciphering, especially where the instruction given by different portions of the key interact complexly with each other in the determination of their ultimate effects on the enciphering, the key computation length increases exponentially with the length of the key, or in other words, with the information content of the key.
The significance of this general conjecture, assuming its truth, is easy to see. It means that it is quite feasible to design ciphers that are effectively unbreakable. As ciphers become more sophisticated the game of cipher breaking by skilled teams, etc. should become a thing of the past.
The nature of this conjecture is such that I cannot prove it, even for a special type of cipher. Nor do I expect it to be proven. 
Nash's conjecture, even for a specific cipher, would imply P ≠ NP, 16 years before Cook defined the problem, and its significance resonates with the Diffie-Hellman article written 21 years later:
Theoretical developments in information theory and computer science show promise of providing provably secure cryptosystems, changing this ancient art into a science.
Given Nash's insights, why did the NSA react so cautiously to these letters?

  • This was not a letter from Nobel Laureate Nash but from Assistant Professor Nash.
  • There is a strong positive correlation between people who claim they are not a crank and those that are.
  • Nash's arguments didn't apply that well to the relatively slow digital computers of the time.
  • Nash didn't give a particularly useful cryptosystem.
  • I don't even believe Nash's conjecture: there are plenty of complex enciphering techniques, such as the Enigma machine, which the NSA did know how to break. Hard-to-break cryptosystems come from more structured ciphers based on algebraic properties, like AES and RSA.

Friday, February 24, 2012

Is 99.8% Secure Secure?

Guest post by Janos Simon

A group of researchers (Arjen Lenstra and collaborators from EPFL Lausanne and James Hughes from Palo Alto) published a study, Ron was wrong, Whit is right, on new vulnerabilities in cryptosystems. The New York Times picked up the story. Although Lenstra et al discuss several cryptosystems, their results are particularly relevant to those based on RSA. The title mirrors their conviction that cryptosystems based on a single random element have fewer key-generation problems than RSA, which uses two random primes.

The technical problem they identify in RSA is the following: The RSA cryptosystem uses a modulus n that is the product of two large "random" primes p and q. Actual keys may not be truly random, and this can cause several problems:

1. Different users may end up with the same n. Since a user knows the factors p, q, she will be able to decrypt the data of any user with the same modulus.

2. If two users share one of the factors (user A's modulus is pq, user B's is pr) they will be able to decrypt each other's data. Given two moduli, one can use the Euclidean algorithm to determine whether they have a common factor, and find it if it exists (a toy illustration appears below).

Note that the second vulnerability is more insidious: in the first, only the user with the matching key can decrypt the messages of its mate, while in the second, anyone can explore the web looking for pairs of keys with a common factor.
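
Here is a minimal sketch of that shared-factor attack (the primes are tiny made-up values; real RSA primes are hundreds of digits long):

    from math import gcd

    p, q, r = 10007, 10009, 10037   # toy primes; p is the shared factor
    n1 = p * q                      # user A's modulus
    n2 = p * r                      # user B's modulus

    g = gcd(n1, n2)   # Euclidean algorithm: fast even for huge moduli
    if g > 1:
        print("shared factor:", g)             # recovers p
        print("cofactors:", n1 // g, n2 // g)  # recovers q and r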

The lack of randomness in key generation may be caused by bad choices for the seed of a random number generator. As an extreme example, devices may be shipped with a standard common seed. In this case all devices would generate the same n. In general, if the collection of seeds is a low entropy set, with high probability insecure keys will be generated.

The EPFL group collected 11.7 million public keys "while avoiding activities that our system administrators may have frowned upon" and essentially found that about 99.8% of the keys were not insecure (in the sense that they did not suffer from the vulnerabilities above).

Is this secure enough?

Note that 0.2 percent of 11.7 million is tens of thousands of bad keys.

To make matters murkier, another group, with researchers from the University of Michigan and UCSD, did a somewhat similar experiment. Their results are not published yet, but one of the authors, Nadia Heninger, blogs about them at Freedom to Tinker. They find a similar proportion of bad keys, but claim that the vulnerability mostly occurs in embedded devices like firewalls and routers, so "important" keys like bank certificates are not affected. Lenstra et al disagree.

Perhaps we should be happy that these vulnerabilities are not due to weak Theory, but to bad implementations of good theoretical ideas....

Thursday, February 23, 2012

The Envelope Please

The conference that shares its name with this blog has announced its accepted papers. The 27th Conference on Computational Complexity will be held in Porto, Portugal, June 26-29. If you go, stop by England on the way and celebrate the 100th anniversary of Turing's birth (June 23) in either Cambridge or Manchester.

Lots of great papers accepted to the conference. For biased reasons I like Limits on Alternation-Trading Proofs for Time-Space Lower Bounds by Sam Buss and Ryan Williams. They give some compelling logical reasons why we've hit the limit of current techniques in proving time-space tradeoffs for Satisfiability. Alon, Shpilka and Umans found connections between Sunflowers and Matrix Multiplication. Finally a plug for my student Josh Grochow's first Complexity paper Matrix Lie Algebra Isomorphism.

Wednesday, February 22, 2012

Presidents Day Poll- what does the youth of America think about....

(In honor of Presidents Day, which was two days ago.)

On Presidents Day last year I had my classes fill out a form saying who they thought was the best, second best, third best, and worst president. I gave them a list and asked them to just mark 1, 2, 3 (for best, second best, third best), or BAD (for worst) on it. I omitted Obama, Bush Jr, and Clinton from the list since they are too recent. So, what does the youth of America think? Or at least the youth taking Honors Discrete Math or Automata theory?

Here is the list of presidents ranked by roughly how well they did. (Some are not included since they did not get any votes, positive or negative.) I (somewhat arbitrarily) gave 3 points for each ONE, 2 points for each TWO, 1 point for each THREE, and -3 points for each BAD. Are these the best weights to use? Is there a way of arguing which weights are best? This is a variant of a standard voting problem, though the standard problem does not include the option of BAD for negative points. I don't think there is an optimal answer. Weights that would NOT be good to use would make the ONEs worth a lot more than the TWOs, since I suspect this gap was not so large in people's minds. Or I could have had THEM give point values between (say) 1 and 100 for the ONE, TWO, THREE and between -1 and -100 for the BAD. Maybe I'll do that next year. (A short code sketch of the scoring appears after the list.)
  1. Abraham Lincoln: 15 ones, 10 twos, 10 threes: 75 points.
  2. Franklin D. Roosevelt: 11 ones, 7 twos, 7 threes, one B: 51 points.
  3. Theodore Roosevelt: 8 ones, 8 twos, 4 threes: 44 points.
  4. George Washington: 7 ones, 4 twos, 8 threes, one B: 34 points.
  5. Thomas Jefferson: 6 ones, 5 twos, 5 threes: 33 points
  6. John F Kennedy: 2 ones, 5 twos, 6 threes, 1 B: 19 points.
  7. Dwight Eisenhower: 1 one, 3 twos, 5 threes: 14 points.
  8. Woodrow Wilson: 1 one, 1 two, 3 threes: 10 points.
  9. James Polk: 1 one, 1 two: 5 points.
  10. Harry S Truman: 2 twos: 4 points.
  11. William Henry Harrison: one 1: 3 points.
  12. John Adams: one 1, one 2, one B: 2 points.
  13. Andrew Jackson: 2 ones, 1 two, 2 B's: 2 points.
  14. Lyndon B. Johnson: one 2, one 3, one B: 0 points.
  15. Zachary Taylor, Rutherford B Hayes, Chester Arthur, Millard Fillmore, Warren Harding, Gerald Ford: 1 B each: -3 points.
  16. Ulysses S. Grant: 1 two, 2 B's: -4 points.
  17. Jimmy Carter: 1 one, 1 two, 4 B's: -5 points. CORRECTION ADDED LATER: SHOULD BE -7. ARITHMETIC MISTAKE. MAKES HIM RANK BELOW BUCHANAN AND TAFT!
  18. James Buchanan, William Taft: 2 B's: -6 points.
  19. Ronald Reagan: 1 one, 3 twos, one 3, 8 B's: -12 points. CORRECTION ADDED LATER: SHOULD BE -14. ARITHMETIC MISTAKE. RELATIVE ORDER UNCHANGED.
  20. Herbert Hoover: 5 B's: -15 points.
  21. George Bush: 6 B's: -18 points.
  22. Richard Nixon: 1 one, 1 two, 10 B's: -25 points.
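
For concreteness, here is the scoring scheme above in code (a minimal sketch; the two ballots shown are made up):

    # Weights from the post: 3 per ONE, 2 per TWO, 1 per THREE, -3 per BAD.
    WEIGHTS = {"ONE": 3, "TWO": 2, "THREE": 1, "BAD": -3}

    # Each ballot lists (best, second best, third best, worst).
    ballots = [
        ("Lincoln", "FDR", "Washington", "Nixon"),
        ("FDR", "Lincoln", "T. Roosevelt", "Hoover"),
    ]

    scores = {}
    for one, two, three, bad in ballots:
        for name, slot in ((one, "ONE"), (two, "TWO"), (three, "THREE"), (bad, "BAD")):
            scores[name] = scores.get(name, 0) + WEIGHTS[slot]

    for name, pts in sorted(scores.items(), key=lambda item: -item[1]):
        print(name, pts)
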
My thoughts
  1. I thought George Washington would do better.
  2. I'm surprised that Theodore Roosevelt did so well. Bart Simpson likes him, though Lisa Simpson prefers FDR (from the episode Bart Stops to Smell the Roosevelts).
  3. The person who ranked Hayes as the worst president of all time either knows much more about the Hayes administration than I do or was just putting things down at random.
  4. The vote for William Henry Harrison was a joke- the guy who voted for WHH is named Henry and liked that his first name was WHH's middle name.
  5. Dwight Eisenhower did better than I thought he would.
  6. Richard Nixon did worse than I thought he would. I thought today's youth didn't know about Watergate. George McGovern (whom Nixon beat in 1972, and who is still alive) recently said that if he had won in 1972 then Nixon's legacy would be much better (going to China, detente with Russia, the EPA, okay on civil rights). Nixon would be considered a left-wing Democrat today.
  7. George Bush did so badly that I think people may have confused him with his son W.
  8. Some of my opinions: (1) I rank George Washington first since the very act of STEPPING DOWN after two terms set the tone for peaceful transitions of power. Note that in young democracies today the most important election is the one where the person in power has to voluntarily step down. (2) For worst prez I wouldn't call someone BAD just because I happen to disagree with their policies. It has to be someone who (in contrast to Washington) did things that undermined our democracy. Two that come to mind are Nixon (Watergate) and John Adams (Alien and Sedition Acts). George W Bush (Patriot Act) might also qualify but it's too early to tell. Other wartime restrictions on freedom (they happened in many wars) might also qualify. The corruption of the Grant and Harding administrations was deplorable, but I don't think it rises to the level of undermining our democracy. There are probably other presidents who qualify for this honor but, not being an expert on presidents, I don't know who they are.
  9. In the first edition of the book Hail to the Chiefs (a humorous though mostly accurate look at the presidents) the author said that Buchanan and Andrew Johnson were not looked upon kindly by historians, which is true. In the second edition she made the point, for many presidents (including those two), that how well you do is VERY MUCH a matter of timing, luck, and history. To paraphrase: Buchanan couldn't stop the Civil War; by that point nobody could. Andrew Johnson had to reunite the country and deal with the South after the Civil War; that's pretty hard too. I agree that there are many things outside a president's control, and hence some blame or praise may be unwarranted.

Monday, February 20, 2012

Aggie for a Day

About 25 years ago I visited a college friend, David Jackson, then a grad student at Texas A&M. He was a Ph.D. student in Food Science doing his doctoral research on starch. He had a tortilla maker in his lab. Made me wonder if I was in the right field. David is now making tortillas in Nebraska.

Last week I made my second trip to College Station, this time to visit the CS department, give a talk and meet lots of great researchers.

Landlocked Texas A&M has one of the world's leading Nautical Archaeology programs. We went to visit and a grad student came out, said "Howdy", and gave us a tour of models of the ships they have been excavating.

Robin Murphy arranged a tour for me at Disaster City (that's us pictured above). Disaster City is one of the largest training grounds for emergency responders, with collapsed buildings, rubble piles, derailed trains and other sites to train people, dogs and robots, the last of which is Robin's specialty. Some of Robin's students were testing out a flying video drone that day. Robin gets involved in disaster areas such as Fukushima. Saving lives with computer science. Makes me wonder if I'm in the right field.

Friday, February 17, 2012

People solve math problems for the prize money! NOT!

Why do people or organizations offer Prize Money for mathematics?
  1. Paul Erdos: He offered money for solving problems that he found interesting. I assume he wanted them solved but also wanted to encourage a line of research beyond the problem itself. He had a (well deserved) reputation as a brilliant mathematician, so if he couldn't solve a problem it was hard. People would sometimes not cash the check and frame it instead. I've heard that with color copiers people now copy the check, frame the copy, and cash the original. Did he insist that the solution appear in a journal or did the solver just need to convince him? I don't know, but I would guess the latter.
  2. Bill Gasarch: He offered $289 for one problem, which, as you know, was recently solved by Steinbach and Posthoff (see here). While Gasarch has nowhere near the reputation of Erdos and his problem was not a deep math problem, it caught on as a matter of luck and timing. The blog helped, and Brian Hayes picking up on it helped. Gasarch wanted to get this problem solved, but did not quite know if it would inspire a line of research. It did (according to the solvers) present a problem of this type just on the edge of what is possible to solve. Gasarch used PayPal. Hence, alas, Steinbach and Posthoff won't be able to frame a check or a copy of one.
  3. Scott Aaronson: His $100,000 offer (see here) for ...demonstration, convincing to me, that scalable quantum computing is impossible in the physical world. This is different from most prize offers in several ways: (1) He gets to decide, not a ``refereed journal''. (SIDE NOTE- here is an idea: a prize that pays out only if the article appears in a non-Elsevier journal.) (2) He does not expect to pay out (but he happily will if someone really convinces him). I believe him on this, though $100,000 is a lot of money. He wants to inspire people to think about these questions. The only analogous thing I can think of is prizes for demonstrations of REAL parapsychology- they don't expect to pay out but would be happy to, since the world is more interesting if parapsychology is true.
  4. Millennium prizes: I believe these one million dollar prizes are the most ever offered for solving particular math problems, by an order of magnitude (if that is not correct, please leave a polite comment correcting me). Clearly the Clay Institute wants to encourage research in these areas. Why so much money? I assume to REALLY put these problems on the map. There is no mathematician of the stature of Hilbert nowadays who could state problems of importance in a way people would listen to. Smale tried (see here) but those problems never got the status of either Hilbert's problems or the Millennium problems.
  5. Gödel Prize: Best paper in theory published in the last 14 years (used to be 7). I wonder- if someone posted a solution to P vs NP on arXiv and it was correct, would they really not get the Gödel Prize? I suppose not. A bit awkward in that if you publish in a period of time when many good papers come out you could be out of luck. Why did they extend the window from 7 years to 14 years? Speculation: people are getting worse at getting papers out into journals so they had to extend it. Enablers? Given once a year.
  6. Turing Award: I am not quite sure if this is for one paper, a body of work around one idea, or a career. It can go to people who never proved a theorem since it's open to all computer scientists. The prize money has gone from $2,000 to $250,000. Given once a year.
  7. Fields Medal: Given for a body of work. About $15,000. High prestige, low dollars. How come the Turing Award was able to increase its money value but the Fields Medal was not? I honestly want to know. Given once every 4 years to a set (group? locus?) of people.
  8. King Faisal prize: I blogged about this here so I'll be brief: High dollars ($400,000), but low prestige. I assume the original intent was to give glory and prestige to Saudi Arabia, which gives out the prize. I don't think it worked. Aside from its origins, it also has the problem of being unfocused, in that there are awards for Science (which is sometimes math) and also for Muslim scholarship and other areas.
  9. Here is a list of other prizes. Some thoughts:
    1. Some are for solving a particular problem, some are for a body of work in a particular area, some are for a body of work and the area can be anything within mathematics.
    2. Some are restricted to a subset of people, some are not. That's a tautology!
    3. People do not solve problems for the money. Most of the prizes are too small for that and those that are large are for really hard problems.
    4. There are many of them, more than I thought. I still doubt I'll win one. The closest I ever came was being linked to on the Wikipedia page for the Gödel Prize (see this Blog Entry about why that happened.)

Wednesday, February 15, 2012

Sloans and More

The Alfred P. Sloan Research Fellows were announced today, including Northwestern's own Nicole Immorlica. Other winners in theoretical computer science include Xi Chen, Nate Foster and Prasad Raghavendra. A shout out to TTIC, which has its second Sloan Fellow in Jinbo Xu.

Computer science did well in the president's budget for FY 2013. CISE head Farnam Jahanian gives the details. Of course now the budget has to get through Congress.

Tomorrow there will be a celebration of twenty years of NITRD (Networking and Information Technology Research and Development), an interagency program that has heavily supported CS research over the past two decades. Quite an impressive list of speakers. There will be a live webcast of the event.

FOCS call for papers is out. Submission deadline is April 4.

Monday, February 13, 2012

Barney the Evil Dinosaur

This is an old story from before I had a blog, but one of my favorites about technology gone bad.

In the late 90's, the undergraduate CS coordinator at the time, Don Crabb, also wrote a column on technology for the Chicago Sun-Times and would get tons of tech stuff to review. One of these items was a robotic Barney the Dinosaur that interacted with the Barney and Friends show on PBS. I took Barney home and tried him out with my then 3-year-old daughter Annie.

He worked as advertised, singing along with the characters on TV. But at one point Annie said "Let's read a book Barney". Barney replied "Let's watch TV". Annie said "OK". Definitely sending the wrong message here.

Later Annie was playing with Barney in the kitchen. Annie, who was toilet training, said to Barney "I have to go wee-wee". Barney said "Let's play a game". Annie said "OK". Soon I had a mess to clean up.

That was the end of Barney.

Friday, February 10, 2012

STOC 2012 accepts are posted

STOC 2012 paper accepts are posted here. Travel support for grad students (which I am involved with) is posted here. On a quick glance:
  1. I tried counting the papers to see how many were in each category. I got dizzy so I stopped. Also, for some papers it's hard to tell the area just from the title. There DO appear to be many papers on complexity.
  2. I hope that when the papers are finished there are pointers to all of them on the website.
  3. There will be four workshops (see here). Under Tutorials it says TBA, so there may be some of those as well. What is the difference between a workshop and a tutorial? I ask non-rhetorically.
  4. When I go to an MAA or AMS conference there are (1) invited papers, (2) contributed papers, (3) a Math Jeopardy game, (4) demos, (5) other things. When I go to STOC or FOCS or CCC or just about any theory conference it's (1) submitted papers that got accepted. There MIGHT be a rump session (CCC and Crypto do this), a workshop or tutorial (STOC and FOCS do this- anyone else?), an invited talk (FCRC has these, sometimes others do), or a poster session (FCRC has had these. Have others?). My objection here is NOT that STOC has the submitted-paper format and that STOC and FOCS and other conferences are too highly valued. (That is another debate, which we've had before.) My objection is that all of the theory conferences have only VERY FEW kinds of activity- talks on papers that were accepted. I would like to see more VARIETY in activities.
  5. The word Quantum only appeared in two titles. Are there other Quantum papers? (I would guess yes.)

Wednesday, February 08, 2012

The 17x17 problem SOLVED! (also 18x18)

THE 17x17 PROBLEM HAS BEEN SOLVED!!!!!

On Nov 30, 2009 I posted here the following challenge:
If someone emails me a 4-coloring of 17x17 with no monochromatic rectangles then I will give you $289.00
Bernd Steinbach (Institute of Computer Science, Freiberg University of Mining and Technology, Freiberg (Saxony), Germany) and Christian Posthoff (retired from the Department of Computing and Information Technology, The University of the West Indies, Trinidad and Tobago, but now in Germany) have found a 4-coloring of 17x17 without monochromatic rectangles!! The coloring is here. I have verified it (actually I asked Daniel Apon and Jim Purtilo to separately verify it, and they have; Semmy Purewal and Brian Hayes later did as well). The methods Steinbach and Posthoff used to obtain the coloring will appear in their paper
Most Complex Four-Colored Rectangle-free Grids - Solution of an Open Multiple-Valued Problem (ISMVL 2012; ISMVL stands for the International Symposium on Multiple-Valued Logic). They will present the paper on May 14, 2012, during session B1 of ISMVL in Victoria, Canada.
Once the paper appears there will be a post (with their help, perhaps guest posted by them) on the techniques they used.
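
Verifying a claimed coloring is easy; here is a minimal sketch (the representation of the grid as a 17x17 list of lists of colors is my own assumption about the format):

    from itertools import combinations

    def has_mono_rectangle(grid):
        """Return True iff some axis-aligned rectangle has all four
        corners the same color, where grid[r][c] is the color of cell
        (r, c).  A 4-coloring of 17x17 answers the challenge iff this
        returns False."""
        for r1, r2 in combinations(range(len(grid)), 2):
            for c1, c2 in combinations(range(len(grid[0])), 2):
                if grid[r1][c1] == grid[r1][c2] == grid[r2][c1] == grid[r2][c2]:
                    return True
        return False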

Some thoughts:
  1. Some very serious people had worked very hard on this. I actually began to think that 17x17 was NOT 4-colorable.
  2. CONGRATULATIONS to Bernd Steinbach and Christian Posthoff!
  3. They also found a 4-coloring of 18x18.
  4. The only grid for which we do not know whether it is 4-colorable is 12x21. This is still open and you are URGED to work on it. Sorry, no cash for this one. Do it for the glory!
  5. If you are going to ISMVL 2012 then find Bernd and Christian and say hello. More important, talk to them about their work. (I won't be there, alas.)
  6. The reason I thought that 17x17 was 4-colorable is that there is a rectangle-free subset of size 74, so I assumed that one of the colors would appear 74 times. WRONG- the max number of times a color appears is 73.
  7. I thought that each color would appear 4 or 5 times in each row and column. WRONG- some appear 3 times in a row or column.
  8. I am DELIGHTED to pay out the $289.00.
  9. I asked them if they did it for the money (which I doubted). No, but the money made them more aware of the problem.
  10. How did they do it? I do not know, but I am looking forward to reading their paper in May when it is available and blogging about it.

Monday, February 06, 2012

Competition

A few people have asked me my opinion of Oded Goldreich's essay On Struggle and Competition in Scientific Fields. Going into Oded's essay I expected to strongly disagree with him: after all, he attacks competition, which is just un-American, and he lays some blame on a small number of scientists "especially those holding administrative positions in the field", which as SIGACT chair puts me in that group.

But as I read the essay, I found myself agreeing with much of what he says. Our conferences have become too much like competitions rather than focusing on distributing knowledge and bringing the community together. I also agree with many of his suggestions, including having conferences with more plenary talks, that program committees should create a "program" rather than just choose the best papers, and that hiring/promotion committees should focus more on the research itself than on the decisions of program committees, awards committees and grant committees.

There is a faulty underlying assumption in Oded's essay: that competition within theoretical computer science is a zero-sum game. Theoretical computer science competes within computer science for faculty slots and grant money. Computer science competes with other sciences, and science competes with other needs.

SIGACT is not in the business of choosing winners and losers within the community; rather, it promotes the field to help increase the number of jobs and grants available to theoretical computer science. As I mentioned last week, awards are an important mechanism that lets us highlight the important research in theory.

Competition for grants, jobs, awards and just the attention of other computer scientists helps make us all better, allowing us to push for more resources for theory. It would be nice to say that we should all just do good self-motivated research, but the reality is that we need those resources if we want theoretical computer science to continue to thrive.

Wednesday, February 01, 2012

Why do we have awards?

You have a month to get in your nominations for the Donald E. Knuth Prize and the SIGACT Distinguished Service Award.

Why do we have these awards and others like the Gödel Prize, the Turing Award, conference best paper awards, ACM Fellows, Nobel prizes and so much more? Are we just creating CV stuffers? Are we giving departments another measure to rank people? Are we trying to encourage good research through competitive awards? Does anyone have the conscious thought, "I wouldn't normally work on this problem but it could win me the Turing award so I'll do it"?

None of the above. We have awards for the publicity. We want to tell the world about the great researchers and the work that they produce. A major award rises above the clutter and lets us say "Les Valiant must be a great computer scientist, he won the last Turing Award" or "Håstad's optimal approximation bounds are a great work in theoretical computer science as you can see from the Gödel Prize." Not a surprise that almost every award comes with a press release.

Are all awards fairly given? Of course not; prize committees are full of humans, often comparing apples and oranges. But that's not the point. Awards let us celebrate what's great in computer science, with ourselves and with the world.

Dusting off my bookshelf I find a book on FORTRAN

The following quote is from the back of a book that I dusted off and took off of my shelf recently:
FORTRAN is one of the oldest high-level languages and remains the premier language for writing code for science and engineering applications. (NOTE- The back of the book uses Fortran but the spell checker I am using insists on FORTRAN. As a fan of capital letters, I don't mind going along.)
Was this book written in
  1. The 1960's
  2. the 1970's
  3. the 1980's
  4. the 1990's
  5. the 2000's (meaning 2000-2009)
  6. the 2010's (meaning 2010-2012)
Some questions and thoughts about FORTRAN:
  1. How much is FORTRAN used? Perhaps more than you might think: it's in packages that have been running smoothly and quickly for a long time. It may not be worth it to swap them out for C or C++ or something else. However, the code may be hard to modify.
  2. How much new code in FORTRAN is being written? I suspect not much. UMCP does not have a course in it. Does your school?
  3. Since people are not learning FORTRAN, will we become like one of those Science Fiction Aliens who says: Our ancestors left us these machines, but over time we only know how to operate them, not how to fix them or build new ones.
So, when was the book written?

The book is Modern Fortran by Clerman and Spector. The book came out in 2012 (I really need to dust my shelves). I doubt that FORTRAN is the premier language for writing code in science and engineering. But how far off the mark is this? As always, I ask non-rhetorically.