A short apology for the way things are

Very recently I have been reading a group of blogs from what I can only describe as a cadre of liberal academics decidedly outside the mainstream. Here they are:

And some books linked from those blogs:

Reading works like these, especially Illich's, which mount an all-inclusive criticism of what you previously took to be the "way things are", you will either recoil from the blasphemy, summarily dismiss the author as a lunatic, or come away deeply disillusioned with "the way things are". In no specific order: L. M. Sacasas of The Convivial Society makes you look askance at the whole of consumer capitalism as he makes (more) explicit the tacit deal you are striking with the engine of capitalism: trading away some part of what is immensely valuable in living for material goods and objects. This is similar to the idea of "domestication" by modernity that Taleb writes about: we are being domesticated by the tools we use, and our desires and motivations are flattened into the calculus of "value", "return on investment", and productivity. The blog heavily references Illich, whose "Deschooling Society" reads like a manifesto and whose words I cannot do justice by paraphrase or summary. Here is a quote from the first page showing the breadth of his agenda:

"Many students, especially those who are poor, intuitively know what the schools do for them. They school them to confuse process and substance. Once these become blurred, a new logic is assumed: the more treatment there is, the better are the results; or, escalation leads to success. The pupil is thereby "schooled" to confuse teaching with learning, grade advancement with education, a diploma with competence, and fluency with the ability to say something new. His imagination is "schooled" to accept service in place of value. Medical treatment is mistaken for health care, social work for the improvement of community life, police protection for safety, military poise for national security, the rat race for productive work. Health, learning, dignity, independence, and creative endeavor are defined as little more than the performance of the institutions which claim to serve these ends, and their improvement is made to depend on allocating more resources to the management of hospitals, schools, and other agencies in question. In these essays, I will show that the institutionalization of values leads inevitably to physical pollution, social polarization, and psychological impotence: three dimensions in a process of global degradation and modernized misery. I will explain how this process of degradation is accelerated when nonmaterial needs are transformed into demands for commodities; when health, education, personal mobility, welfare, or psychological healing are defined as the result of services or "treatments." I do this because I believe that most of the research now going on about the future tends to advocate further increases in the institutionalization of values and that we must define conditions which would permit precisely the contrary to happen. We need research on the possible use of technology to create institutions which serve personal, creative, and autonomous interaction and the emergence of values which cannot be substantially controlled by technocrats. We need counterfoil research to current futurology."

Meanwhile, the gist of Ben Recht's perspective at argmin.net[^1] is to expose the limits of statistical techniques that have been given an aura of omniscient generality by the heady techno-maximalists of the machine learning field, or something along those lines. This, together with The Gradient article by Arjun Ramani and Zhengdong Wang, is a reality check on the capabilities of current learning algorithms, placing them in a larger social and structural context that capability-based arguments alone cannot answer. And even if the design ethos of these machine learning algorithms were not problematic, which it certainly is for recommendation on social media platforms, "We're getting the social media crisis wrong" by Henry Farrell, "Context Windows" by Kevin Baker, and "In the belly of the MrBeast" by Kevin Munger show the extent to which algorithmization, metricization, and digitization are contributing to, or are perhaps the cause of, our current social problems.

From my perspective, to prevent a descent into the extremes of nihilism or denial, we as developers of technology must recall the stages of problem solving in complex, non-repeatable domains with opaque causality, also known as the real world.

  1. Intuition. We have an intuition or gut feeling that something is wrong, that this is not right. We can vaguely point at possible causes, but a priori we do not know them, and experience shows that pointing at something precise to "solve" squashes a symptom, not the cause.
  2. Identification and refinement of precision. We gradually gather information about the problem within the current framework or system, taking care not to make drastic moves, for those would invalidate our efforts. "Data-informed decision making" is more of a catchphrase, or at best a well-intentioned wish here, because the data we collect is not identically distributed, is not from any normal distribution we can hazard, and is high-dimensional enough to make cherry-picking very easy, supporting whatever position we care to take (see the sketch after this list); forget about using statistical tools. The information we collect is informal, tacit, and experiential, so we must take care to avoid the systematic cognitive biases that analysis with such data incurs.
  3. Proposal of solutions. Once the shape or structure of the problem, or some other synonym for "complex high-dimensional terrain that we cannot observe or explore in full", becomes known, we can propose various solutions. These must causally trace the root of the problem and intervene there, potentially changing the shape, structure, etc. of the problem.
  4. Balance of tradeoffs. We have explored various solutions in the space and have enough experience to infer the tradeoffs between them. One familiar setting in which this stage is reached is the selection of a tech stack for a project: we have enough experience (perhaps) to pick from a finite set of combinations, each with tradeoffs that may make our future miserable or enjoyable. We might never know whether our selection is optimal, but our experience may provide another data point for the community.
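
To make the cherry-picking hazard of stage 2 concrete, here is a minimal sketch in Python. All the numbers and names are illustrative assumptions, not drawn from any real project: both the collected "metrics" and the outcome are pure noise, yet searching across enough dimensions always turns up a metric that appears strongly related to the outcome.

```python
# A minimal sketch of cherry-picking in high dimensions (all values assumed).
# The "metrics" and the outcome are pure noise, yet searching across many
# dimensions reliably produces a flattering correlation.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_features = 30, 500  # few observations, many collected metrics

X = rng.standard_normal((n_samples, n_features))  # metrics: noise
y = rng.standard_normal(n_samples)                # outcome: also noise

# Correlate every metric with the outcome and keep the most flattering one.
r = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(n_features)])
best = np.argmax(np.abs(r))

print(f"best-looking metric: #{best}, r = {r[best]:+.2f}")
# Typically reports |r| around 0.6: "strong evidence" for whatever position
# we care to take, despite there being no signal in the data at all.
```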

It's clear that these writings fall into the second stage, and, for the more audacious writers (Illich), into the third. I've identified a sense of utopianism or dystopianism that can befall readers of articles such as these (readers such as myself), or of those by AI maximalists, effective altruists, and other peddlers of technology such as tech CEOs. It is the radical sense that everything could change, that everything must change, must be demolished by the nature of progress or for the sake of all that is good.

This is a naive, childish way of thinking. Real radical change comes unexpectedly, by the workings of a world whose mechanisms we cannot claim to know. Everything else must submit to the wisdom of problem solving through incremental development for the changes we want to see.

[^1]: Based on the articles I read.