The Rise of the Machine: Believe in Privacy
And, so, 5% of Meta’s workforce is being made redundant, with the company targeting low perceived performance, in a similar approach to Twitter’s cull of staff. Many of those who have been laid off say that time taken off work was a major factor in the decision. In the case of Twitter/X, though, it came down to coding-related metrics, such as the number of GitHub commits per day or the number of lines of code produced. In the end, Twitter laid off around 80% of its staff [here].
This is perhaps a different world we are entering, one in which “The Machine” is watching us. And who is replacing the redundant staff at Meta? ML engineers, whose role is possibly to reduce staffing further, so that the optimal company runs with few staff and makes the maximum profit for the minimum effort. The mighty dollar means more to some companies than a happy workforce. They may not realise it yet, but the ML engineers will eventually make themselves redundant, too. That’s the whole ethos of computer science: make yourself redundant by automating something!
So, watch out for your next performance review; it could come back to bite you. Make sure you know your KPIs and what is expected of you. I appreciate that workers’ rights in the US are quite different from those in Europe, but this is a new world we are entering, one in which machines can spy on us for every microsecond of our lives. Companies are increasingly using metrics such as lines of code produced per day, time away from work, coding bugs per line of code, and so on. The rise of AI will only accentuate this, with AI agents able to monitor every part of our working lives.
For a teacher, for example, a metric-based approach can be a disaster, especially as it rewards those who aim to game the system. Average marks and pass rates for internally set exams are often poor metrics, as those setting and marking the exams can apply low standards. They thus give no real indication of the quality of teaching or of the standards applied. The true assessment of quality often comes from the students themselves: their feedback on the help and support they were given, and on the quality of their teaching environment.
A researcher who creates one amazing, ground-breaking, high-impact research paper over a period of a few years is possibly doing much better work than one who churns out papers through paper mills just to advance their paper-count metrics. And, when it comes to research income, it is perhaps not the awarding of a grant to a researcher that matters most, but what they have done with the investment. If, at the end of a grant or project, there is little to show, then it has possibly been a waste of time, and a tally of grant income can show little of the real impact of someone’s work.
Like it or not, an AI agent will not be able to properly assess the quality of work, and will instead focus on gathered metrics that can be poorly defined and easily gamed. In the Elon Musk approach, there is a perception that 80% of the work is done by 20% of the staff, and that you must find the 20% who actually do most of the work and fire the rest. This becomes a de-humanised world of metrics and performance assessments.
So, as we move to a world increasingly focused on monitoring performance through AI, believe in privacy as a fundamental right of our society.