Wednesday, March 27, 2024

The Softening of Computing Jobs: Blip or Trend?

Tech companies are performing exceptionally well, driving the S&P 500 to new heights with their soaring stock prices. However, the tech sector, apart from AI, expects a job decline to persist throughout the year, accompanied by a tougher hiring process. The situation becomes even more challenging for foreign students requiring sponsorship to secure a job after college, or older tech workers without the latest skills.

Despite these challenges, tech jobs remain more abundant than in most other fields, although the era of immediate employment with nearly six-figure salaries straight out of college with any CS degree has ended.

In discussions with industry leaders, I encounter varied perspectives on whether these changes represent a temporary fluctuation or a long-term trend. Let's explore some of the reasons for this softening.

The Elon Effect

Since Elon Musk took over Twitter in the fall of 2022, he has cut about 80% of its employees. The platform had some hiccups but didn't fall apart. You might not like what the platform, now called X, has become, but technically it still functions pretty well. Many other companies started looking at their own workforces and wondering whether they needed all the software developers they'd hired.

Oversupply

We've seen tremendous layoffs among the larger tech companies, paring back the overhiring of the pandemic, and massive growth of computer science programs at universities across the country and the world. We simply have too many job seekers chasing too few tech jobs.


Uncertainty

Companies hold back hiring in the face of uncertainty: uncertainty about elections in the US and abroad, international relations (particularly with China), regulatory and legal landscapes, wars, interest rates, and the economy.

Artificial Intelligence

Almost everyone I talk to thinks (wrongly) that their careers will not dramatically change with AI, except for programmers, where we already see significant productivity improvements with ML tools like GitHub Copilot. So many companies are reassessing how many software developers they need. AI also adds to the uncertainty as the tools continue to improve, but how much and how fast remain difficult to predict.

Blip or Trend?

The supply will balance itself out in the future, though possibly through a drop in the number of CS majors. The world will get more certain, hopefully in a good way. But how AI will affect tech jobs (and every other kind) will take years, if not decades, to play out.

So what advice for those looking for tech jobs? Build skills, earn certificates, perhaps pursue a Masters to wait out the job market, and learn AI, both as a skill in demand and as a way to make yourself more productive. Be aggressive (but not too aggressive), network, enter competitions, build a portfolio. The jobs are not as plentiful, but they are out there.


  1. As the Monty Python skit goes "What has computing ever done for us?" other than further enabling humans to suck the planet dry at a stupendous rate.
    Computation is just one recent facet of the "modern" drive to be industrious.
    Wherever industrious sixteenth century Europeans sailed to discover what they euphemistically called the "new world" they found "natives" had already settled (not always with tranquility) the land for millennia without trampling the precious earth under their feet.

    1. I think you missed the point of that Life of Brian moment.

      All right, but apart from the Instant Communication, Medical Marvels, Space Exploration, Education Revolution, Creativity Tools, Economic Empowerment, Environmental Tools, Entertainment Advances, Making Workers More Efficient and letting me get paid to work from home where I really spend half the day watching cat videos while eating fast food delivered to my door by strangers, what has computing ever done for us?

  2. Product managers at Google are being tasked with estimating how many developers will no longer be needed as their AI agents are deployed.

  3. The executives are oversold on the power of AI.

    AI is useful, but it is nowhere near where people think it is going. Someone who thinks GitHub Copilot can actually program hasn't written real programs.

    It makes some tasks simpler, like writing unit tests.

    There are good studies that show GitHub Copilot doesn't actually increase productivity. And in any case, programming is mainly an entry-level job in software engineering. What more senior people do is more on the problem-solving side than actually writing code.

    You should trust these executives as much as you should have trusted them when they were saying everyone should learn programming, and as much as when they were hiring massively during COVID. Most of them are overpaid, short-term trend-following idiots.

  4. If Google executives were that smart, they would have built something useful over the past decade. They have not; there is not a single thing they have boasted about publicly that has actually turned into reality.

    Remember the Duplex demo from a few years ago, when the Google CEO was claiming it would be able to do things for you? Or when self-driving cars were around the corner 10 years ago? Or the Google Gemini demo, which turned out to be fake?

    Rather than buying the BS from these executives running high on hype, talk to a few real ML researchers.

    Many AI companies are raising huge amounts of money but failing to deliver value. It is almost like the Internet bubble of 2000. There are real things that will happen, but the industry is running pretty high on hype at this point.

    A model that fails over 20% of the time cannot be trusted to do almost anything. And these current models are fundamentally flawed: they fail an unreasonably high fraction of the time on essentially anything that is not the median output of their training datasets.

  5. I've been predicting the coming of another AI winter for a while now.

    But I'm not the only one:

    My prediction of said winter is based on two points: the whole LLM game is a parlor trick, and a really stupid one at that, and the whole "neural net" schtick is based on a lie.

    As someone who actually passed the AI quals (Yale, 1984), I claim the above two points are, in fact, technically correct.

    The underlying processing that LLMs do is statistical pattern matching on strings of undefined tokens. They have no mechanism for relating the strings of undefined tokens they generate to anything in the real world, nor any theory of how to do so. They just don't do that.

    So their outputs are only coincidentally related to things in the real world; i.e., these things are exactly and only parlor tricks, in that they don't do the work of thinking.
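    Boiled down, the pattern matching described above amounts to something like this toy bigram model (a deliberately minimal Python sketch, nothing like a production LLM): the next token is sampled purely from co-occurrence counts over strings, with no representation of what any token means.

    ```python
    from collections import Counter, defaultdict
    import random

    # Toy corpus; to the model, the tokens are opaque strings.
    corpus = "the cat sat on the mat the cat ate the rat".split()

    # Count which token follows which: pure string statistics.
    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    def next_token(prev):
        """Sample the next token from co-occurrence counts alone."""
        counts = follows[prev]
        tokens, weights = zip(*counts.items())
        return random.choices(tokens, weights=weights)[0]

    # "the" is followed by "cat" twice, "mat" and "rat" once each,
    # so "cat" is simply the most probable continuation.
    print(next_token("the"))
    ```

    Scaled up enormously, with learned weights in place of raw counts, it is still a mapping from token strings to token strings; nothing in it grounds the tokens in the world.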

    And neural nets look nothing like mammalian neurons at all.

    NNs are locally connected, regularly spaced arrays of simple devices.

    Real neurons are distantly connected (with vast numbers of connections: hundreds of inputs, thousands of outputs for a typical neuron), randomly placed (but globally structured) sets of complex computational devices that compute (among other things) logic functions of subsets of their inputs.

    It takes an enormous "neural net" to begin to simulate a single neuron.
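    For contrast with the description above, here is essentially the entire computation performed by one unit in a standard artificial neural net (a generic sketch, not any particular framework's code): a weighted sum of its inputs pushed through a fixed nonlinearity.

    ```python
    import math

    def artificial_unit(inputs, weights, bias):
        """One 'neuron' in a typical net: weighted sum plus squashing."""
        z = sum(w * x for w, x in zip(weights, inputs)) + bias
        return 1.0 / (1.0 + math.exp(-z))  # logistic activation

    # A handful of fixed-weight inputs -- compare a biological neuron
    # integrating hundreds of inputs through active dendrites.
    print(artificial_unit([1.0, 0.5], [0.8, -0.3], 0.1))
    ```

    That the unit is this simple is exactly why simulating one real neuron takes a large network of them.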

    Computer Science, as people here shouldn't need reminding, is a real and valid academic and intellectual discipline. AI, when it's being the branch of psychology that uses computation (the mathematical abstraction) as a guide to analysing the operation of neuronal systems, and programming as a means of testing the theories that result from those analyses, is also a real and valid academic and intellectual discipline. (Well, it could be, if anyone did that. Sigh.)

    But when AI goes all hype-city (as it also did in the "expert systems" period), it gets silly.

    So I'd advise students to get a good foundation in basic computer science and look around for minor programs that interest them. But make sure that what you are studying is an actual, real academic discipline.