Sorry, completeness results are algorithms. They are proved by algorithms people using algorithmic techniques. There is a simple-minded view that upper bounds are algorithms and lower bounds are complexity, but that doesn't reflect reality: upper bound results like the PCP theorem or SL = L are complexity. It's not even clear whether a result like "circuit lower bounds imply derandomization" is an upper bound or a lower bound.
Since algorithmicists, like complexity theorists, lack techniques to prove general lower bounds, they instead use reductions to show that problems are hard. This gives them a two-pronged attack on a problem, where the failure to find a reduction might lead to an algorithm, or vice versa. In structural complexity, we see a similar dichotomy between inclusions and relativized separations.
There is no absolute dividing line between algorithms and complexity, but loosely, algorithms deals with specific problems while complexity studies classes of problems defined by some computational model with certain resource bounds. So the definition of PPAD and its relationship to other classes is complexity, but the reduction from PPAD to Nash Equilibrium is algorithmic.