The main thesis of Blink is that people can make good predictions quickly and with little information if they have experience or are properly trained. The real challenge is getting people to ignore excess information and focus on the few bits that can accurately determine an outcome. Often people make better decisions when rushed, since they won't have time to focus on extraneous details.
Gladwell does his homework and makes his case best by describing experts who do a good job predicting the longevity of a marriage, the flight of a tennis ball or the popularity of a new food item. I find his one-time examples less convincing since anyone can get lucky and even his best experts make occasional mistakes.
How do these ideas relate to computer science? In many more ways than I can mention in this post. Juntas (or NC0 as we used to call them) are functions that depend on only a constant number of input bits, and we've seen recent work on learning juntas. In general, most of learning theory focuses on finding a short description that predicts well. Transformations like Fourier transforms and wavelets, which allow researchers to focus on a small amount of important information, are useful in many areas like computer vision. The recent areas of sublinear algorithms and property testing can often say something interesting about an input by looking at only a small part of it.
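To make the junta idea concrete, here is a minimal sketch (not one of the sophisticated Fourier-based learning algorithms from the literature) of how few queries it can take to discover which bits matter: sample random inputs, flip each coordinate, and record any coordinate whose flip changes the output. A k-junta only ever responds at its k relevant coordinates. The function `maj3` below is a made-up example for illustration.

```python
import random

def find_relevant_bits(f, n, trials=2000, seed=0):
    """Heuristically find which of the n input bits f depends on.

    For random inputs x, flip each coordinate i and check whether
    f's value changes; only a junta's relevant coordinates can
    ever cause a change. A simple sampling sketch, not a worst-case
    guarantee.
    """
    rng = random.Random(seed)
    relevant = set()
    for _ in range(trials):
        x = [rng.randint(0, 1) for _ in range(n)]
        fx = f(x)
        for i in range(n):
            if i in relevant:
                continue
            y = x[:]          # flip coordinate i
            y[i] ^= 1
            if f(y) != fx:    # output changed, so bit i is relevant
                relevant.add(i)
    return sorted(relevant)

# Example: a 3-junta on 20 bits -- the majority of bits 2, 7 and 13.
def maj3(x):
    return (x[2] + x[7] + x[13]) >= 2

print(find_relevant_bits(maj3, 20))  # -> [2, 7, 13]
```

In the spirit of the post, the procedure never needs to understand the whole 20-bit input; it just isolates the handful of bits that actually determine the answer.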
In my job I also find myself using less information to make decisions that are just as good. I can often get a good feel for a research paper or a recommendation letter by focusing quickly on a few key elements. I can almost always predict the quality of a talk within the first thirty seconds. Even in research, where one has to narrow the list of techniques, I can eliminate large sets of approaches without having to work them out thoroughly.
One needs care not to weed out a good idea too quickly, but if you allow yourself to get bogged down in details, you will spend far more time and often end up making the same or worse choices.