In theoretical computer science, and in computational complexity in particular, we straddle these two worlds. Our goal is to understand the power of efficient computation, so we have complexity classes like NC, P, P/poly, BPP and BQP that try to approximate real-world models of computation. We have classes like NP, #P and PSPACE that capture a large number of interesting problems we would like to solve. We have models like Probabilistically Checkable Proof Systems (PCPs) whose parameters help us understand the limits of approximation. We have combinatorial tools like expanders and extractors with wide applications in many areas of complexity and beyond.
But all these classes, models and tools have very nice theoretical properties as well. We tend to focus more on the theoretical foundations, judging papers more by their theorems and the difficulty of their proofs than by the importance of the underlying problem. In the end we often reduce the motivation in a paper to a single sentence in the introduction, and a theory audience only rarely questions the importance of a model during a talk.
Once we deemphasize the motivation of a model, others, in an attempt to find open problems, look at variations of the model. Often these variations are motivated solely by the importance of the original model, even when they have little relevance to the original motivation for that model. Researchers then consider variations on the variations, deviating quite far from the original model and its motivations.
Other computer scientists often complain, rightly or wrongly, that theoretical computer science and computational complexity have lost touch with real computational issues. We must be careful not to focus too much on results whose only motivation is the difficulty of their proofs.