Capitalism and the dangers of Machine Learning
After gaining an interest in learning about the economy, I originally planned to write a grand essay on the ways that machine learning undermines the things that make capitalism good, or at least tolerable, for the majority of people. However, I soon realized that I'm in over my head; my ambitions far eclipse my knowledge and experience in this matter. So instead I'll share some thoughts gleaned from the books I've read.
Here's my reading list:
- Technofeudalism - What Killed Capitalism by Yanis Varoufakis
- Another Now by Yanis Varoufakis
- Talking to My Daughter About the Economy by Yanis Varoufakis
- The Trading Game by Gary Stevenson
Before I get to my main point, I would like to say that I find myself increasingly sympathetic to these ideas - not out of a rational conclusion from empirical analysis, but from the personal experience of seeing my buying power erode. This phenomenon should be painful for anyone who has nothing but their labor to sell, which includes myself. I think it's fair to recognize a basic principle of life: that everything changes, everything is mutable, and we create and sustain the order that surrounds us. In the context of the economy: the power of money extends only so far as our belief in it, and so too with the power of banks and markets.
The dual nature of Labor
To recount the ideas in Technofeudalism, which I suspect Yanis got from Marx, labor has two simultaneous values, just as light admits two interpretations: particle and wave. The value of labor that is quantifiable, and most effable, is its exchange value - the commodified part of it that markets attach a price to. Yet the true underlying value of labor is its experiential value, which in the context of intellectual work is the creativity, invention, or thinking that proceeds from labor. This is something money cannot assign a price to, as one cannot purchase the process that produces a thought or an idea. But there is some nebulous connection between this process and 'time', so we get paid by time even though the thing that is valuable about us is our ability to solve problems and create ideas.
There are equivalents of exchange and experiential value for physical labor too. The exchange value remains the same: time. The experiential value is harder to define: it is the tacit experience of the worker in his work, the skills in his employ, and the motivation and willingness to put those skills to an economically valuable end. Truly no one can buy your experience, your experiential labor, unless you will it - though they can certainly influence or cajole you into making your time productive. In the domain of physical work the equation of value with time is defensible, as the economic value of labor can be finely divided into units: the number of goods you assemble per minute. For knowledge work this is no longer the case; good ideas can't be created on a cadence, and insights can't be scheduled on demand. Measuring the value of a software engineer by the number of lines, pull requests, or features they deliver is foolish, to name one example. This explains the abundance of narratives around tech companies: tech workers have to tell a story about why their time is valuable, by proving that they can indeed summon valuable insights on demand and that this impacts the bottom line.
Ideally, businesses want to measure the true value of knowledge work and equate that metric to a worker's time. This would completely commodify labor and kick the experiential mumbo jumbo - the part that money does not control - out of the picture.
The dual nature of Capital
The mirror side of labor is capital. Besides being the thing that helps labor produce value - the tools, the machines, the infrastructure, and so on - capital has a dual side: its ability to command labor. Now, it goes without saying that power has always controlled labor, and as a by-product of markets separating capital from the exclusive purview of political power (the two were married in feudalism), the control of capital gave its owners power over labor.
I don't think it's a coincidence that it turned out this way. If I, as an inventor, created a new technology that does not require much up-front cost and could be easily replicated and adopted by anyone, then the rewards I could reap from it would be small as it rapidly assimilates throughout society. On the other hand, if I invented a useful technology that requires large up-front investment and large operations and expenses to maintain, then the vast majority would be gated from adopting it, and access to it could be controlled by me. Society reaps whatever surplus I choose to let go.
Yet I look around myself. The only thing I could have conceivably produced myself was my dinner. Everything physical around us came out of a factory, some of it of such complex design that you could never have produced it yourself, no matter who you are (e.g. the computer I'm writing this on). Given this, no wonder capital commands such power. The only reason it all goes so swimmingly, the reason we can afford these things to begin with, is competition, the invisible hand of free-market ideology that turns individual greed into collective virtue (and definitely not the sweatshops in China or Vietnam, for example, or the strength of the army-backed dollar).
The role of computers
As Friedrich Georg Juenger writes in The Failure of Technology, "[by] definition, technology is really nothing but a rationalization of the work process." First physical labor was rationalized, as it became formalized into a sequence of steps on an assembly line. Then knowledge was, as it became encoded into bits and formally described through a universal Turing machine. The consequence of such formalization is capital possessing more control over labor, as capital itself is augmented with new forms that change its relationship with labor in a given discipline.

Is Amazon, for example, a regular capitalist corporation? For Varoufakis, no, because it and the other internet giants have stumbled upon a form of capital so potent as to render even other capitalists their vassals. In Technofeudalism it is named "cloud capital".
Cloud capital is ownership of digital land built upon the internet commons. Through convenience, network effects, and low interest rates, the owners of this digital land - also known as a platform - can become monopolists over a category of economic activity and extract rents from their tenants, also known as "users". Hence the name technofeudalism.
The logic of software platforms goes like this: venture capitalists have become enamoured with the idea of the exponential distribution, where a small percentage of firms deliver outstanding returns while the rest fail. Software firms are thus pumped full of cash to grow as fast as possible, dominate a market through a platform, and only afterwards deliver profits. This is only possible because the software can rationalize an aspect of labor in a market that was originally performed by its participants - communication and scheduling for Uber, communication of prices and distribution for Amazon, and so on. The convenience and network effects of these platforms then allow them to stay in power and ward off competitors.
But what is most dangerous about this form of capital, the ability that gives its owners superpowers, is the extraction of value from everyone who uses and interacts with it.
Machine learning formalizes pretty much everything
I'm of course talking about big data.
The premise of machine learning is simple: we can represent a process we care about as a probability distribution, and through optimization fit a mathematical function to replicate that distribution's characteristics. Suppose we want to distinguish between pictures of cats and dogs. Then the process that distinguishes between cats and dogs, that of human acuity, is formalized as a trained neural network. More precisely, the probability distribution associating images of cats and dogs with their label, either "cat" or "dog", is approximated by a neural network, which is trained on such pictures and their labels to approach the true distribution.
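To make that premise concrete, here is a minimal sketch of the fitting process, with logistic regression standing in for a neural network and synthetic 2-D points standing in for cat and dog images - the data, labels, and parameters are all illustrative, not anyone's real system:

```python
import numpy as np

# Approximate a labeling distribution p(label | features) with a parametric
# function fitted by optimization. Synthetic 2-D features stand in for images;
# logistic regression stands in for a deep network.
rng = np.random.default_rng(0)

# Two Gaussian clusters: label 0 ("cat") and label 1 ("dog").
X = np.vstack([rng.normal(-1.0, 1.0, size=(200, 2)),
               rng.normal(+1.0, 1.0, size=(200, 2))])
y = np.concatenate([np.zeros(200), np.ones(200)])

w, b = np.zeros(2), 0.0   # model parameters
lr = 0.1                  # learning rate

def predict_proba(X, w, b):
    # Sigmoid of a linear score: the model's estimate of p(label=1 | x).
    return 1.0 / (1.0 + np.exp(-(X @ w + b)))

# Gradient descent on the cross-entropy loss: each step nudges the model's
# predicted distribution toward the empirical distribution of the labels.
for _ in range(500):
    p = predict_proba(X, w, b)
    w -= lr * (X.T @ (p - y)) / len(y)
    b -= lr * np.mean(p - y)

accuracy = np.mean((predict_proba(X, w, b) > 0.5) == y)
```

The fitted `w` and `b` are the "computational artifact": a frozen stand-in for the act of telling the two classes apart, recovered purely from labeled examples.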
The rationalization that happened here is that 1) we can represent acts of cognition as some probability distribution, 2) the probability distribution is well approximated by a sufficiently large neural network, and 3) the training procedure actually recovers a faithful approximation of the true distribution. All of these are assumptions, and none of them are guarantees. There are of course more technical treatments of what is meant by "well approximated" and "faithful approximation", but to my knowledge the theoretical foundations of deep learning are not even there to say that the technical definitions are met, let alone that they are sufficiently non-reductive of the original, pre-rationalized process.
But machine learning, particularly deep learning, has produced good results empirically, so, very tantalizingly, it shows the potential to rationalize previously tacit and experiential tasks: computer vision for image classification and object detection, natural language processing for machine translation and text generation, recommendation algorithms for preference prediction, and so on. The promise is that with sufficient data and compute, any act of cognitive labor can be distilled down into a neural network - a computational artifact that represents the value of that labor. Therefore, whatever produces that final artifact - the data, the compute, the algorithms - gains immense value, proportional to the value of the artifact itself.
So with the power of machine learning, the data we generate naturally can be tapped like a natural resource, and like the natural resources already being tapped, we are not paid a cent for it. Furthermore, with the data we give away for free, the corporations owning the platforms that collect and sell it can create these computational artifacts - neural networks - that compete with the labor that generated the data and degrade its value.
So color me unsurprised that people do not like AI. I don't like it in its current incarnation precisely because the whole ordeal is an insult. Machine learning the technology is interesting and fun, at least for those who understand it, but what we see now is the consequence of it being used by the corporations we have, in the economy we have, under the institutions we have.
Behind every technology there are forces of centralization and democratization that are independent of the virtues of the technology itself. For machine learning, the democratizing force is a diverse open-source community that trains models and educates people on their capabilities, and an academia that publishes its research for all to read (sometimes behind a paywall). The converse force of centralization is the enterprise AI labs, which are incentivized to make modeling expensive and out of reach for the hobbyist by making models so prohibitively large that they become too expensive for independents to train and test. The actual utility of the models aside, there is an economic reason for the model sizes and the neural scaling laws: research that prioritizes large models benefits those with more capital.
So yes, my conclusion is that machine learning is very dangerous, the way electricity is dangerous for a toddler or crack cocaine is dangerous for an adult. Because we don't understand its limitations well enough to control our fantasies of automating all forms of labor, we end up in the current AI bubble timeline, on top of everything social media has already done. So let's not give this technology to corporations and to people gone too far down the loopy end.