Algorithmic Fatigue, Bullshit Jobs and Data Trusts

Welcome to this week’s Via Waverly, where I share diverse and unexpected finds served to me by Waverly.

Fighting Algorithmic Fatigue

I’m still looking for the right term to capture that queasy feeling that grips me when I’ve spent too much time stuck in an algorithmic stream. Doomscrolling is my favorite for now, but I’d like one that captures the emotion, not the action. This week I stumbled on algorithmic fatigue.

I can’t really find a way to communicate with this app or service to say that’s not what I want, or at least that is not everything I want.

Female, 30 (Shanghai, China)

This is a quote from a series of interviews in this research report by the University of Helsinki, Alice Labs and Reaktor. You might prefer reading this summary.

The whole report is full of interesting findings:

The meticulous, first-hand observations demonstrate that recommender systems and digital assistants repeatedly fail in their promise of providing pleasurable encounters, rather delivering irritating engagements with crude and clumsy machines. […]

Digital technologies are often developed to a ‘one size fits all’ model. Yet, as the experiences with recommender systems and digital assistants suggest, in different contexts, people take up very different stances in relation to technologies. They might want to be passive, or prefer to be actively involved.

Wave: ☕ Design Strategy

Are Bullshit Jobs Bullshit?

I’ve been a fan of David Graeber’s Bullshit Jobs hypothesis ever since I read the book a few years ago. In fact, I believe the reason we don’t see more jobs being displaced by automation is that we’re in some weird “job bubble” where bullshit jobs are created through a very complex and opaque system of incentives.

Yet when I created Waverly, one of my goals was to help me step out of my filter bubble, so when I saw that article pop up in my daily stack, I was not exactly happy (it’s not fun to have one’s beliefs challenged by a triggering title), but I still went ahead and read it:

Graeber made a number of claims that the researchers attempted to corroborate:

Between 20% and 50% of the workforce are working in bullshit jobs. No, only 4.8% of EU workers said they were doing meaningless work.

The number of bullshit jobs has been ‘increasing rapidly in recent years’. Nope. The percentage of bullshit jobs actually fell from 7.8% to 4.8% by 2015.

Graeber argued bullshit jobs clustered in certain occupations, like finance, law, administration, and marketing. The researchers found no evidence that those occupations had more people feeling like their work was meaningless.

OK, so my gut feeling — that the number of bullshit jobs is constantly increasing — doesn’t seem to be corroborated by these researchers’ findings.

Maybe I’m wrong? Maybe we need more research on this? Anyway, interesting data point.

Wave: 👍 Modern Leadership

Lessons from Existing and Failed Data Trusts

Interesting research from Cambridge’s Bennett Institute that contrasts failed data trusts with successful ones. Starting with a famous failure:

Sidewalk Labs’ proposal for the Urban Data Trust in Toronto, Canada was abandoned amid a heated public controversy. Legal scholars and privacy advocates argue the goal of the trust may have been to make the data collected in the city exempt from Canada’s privacy laws.

However, there seems to be some agreement that sharing data is needed to improve our common goals:

European policymakers argue it is important for individuals to accept their role as “data donors” who willingly share information with the trustworthy organisations for collective benefit.

And some examples where data trusts are working:

One example of a data trust that works for a civic purpose is the Silicon Valley Regional Data Trust, which is operated by the University of California in partnership with several district school boards and local social services. The trust is a non-profit cooperative and shares the data only among the organisations that donate data.

Wave: 💡 Value Alignment

Learning about Slipstream

Thanks to this article, I’ve learned a new term for a literary genre: Slipstream. It seems to be very ill-defined, but from what I gather from the article, it’s a genre I’m quite likely to enjoy:

Sterling then goes on to name “slipstream” for a group of books that straddle the fence of mainstream and genre, even acknowledging the term as a parody of the word “mainstream.”

Sterling admitted it’s not clear-cut what slipstream is. Most of the essay brainstorms and then acknowledges arguments against the term. In a nutshell he wrote, “this is a kind of writing which simply makes you feel very strange.”

Slipstream novels don’t fall strictly under science fiction, fantasy, or horror, but may be recommended by those genres’ ardent readers. A mainstream reader may also recommend a slipstream novel, though they might add that it’s a bit on the weirder side.

Wave: 📗 Literature Lover

Faster Synthetic Data

I started my research career in Computer Graphics, which is all about (approximate) physics simulations. Now that I’m more into Machine Learning, I often find myself to be one of the biggest proponents of synthetic training data: going back to first principles to synthesize something that looks like the real thing, and trying to train an ML system on it.

This project goes further and proposes using ML to speed up the generation of synthetic data… to train future ML systems!

This may sound ridiculous. If you already succeeded in training a system to generate your synthetic data, why use it to train a new system?

But it might be brilliant… If you have a fast ML-based data synthesizer, you might be able to use it as a component within a more complex synthesizer, ultimately allowing you to train better downstream AI models.
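To make the idea concrete, here is a minimal, entirely hypothetical sketch (not from the project itself): a slow first-principles simulator is sampled a limited number of times, a cheap surrogate is fit to those samples (piecewise-linear interpolation here, standing in for a neural network), and the fast surrogate is then used to mass-produce synthetic training pairs for a downstream model.

```python
import bisect
import math
import random

def slow_simulator(x):
    """Stand-in for an expensive first-principles physics simulation."""
    return math.sin(x) + 0.1 * x * x

# 1. Pay the full simulation cost only a limited number of times.
random.seed(0)
samples = [(x, slow_simulator(x)) for x in (random.uniform(-2, 2) for _ in range(200))]

# 2. "Train" a cheap surrogate on those runs. Piecewise-linear
#    interpolation plays the role of a learned model here.
def make_surrogate(samples):
    samples = sorted(samples)
    xs = [x for x, _ in samples]
    def surrogate(x):
        # Find the bracketing pair and interpolate (extrapolate at edges).
        i = min(max(bisect.bisect_left(xs, x), 1), len(xs) - 1)
        (x0, y0), (x1, y1) = samples[i - 1], samples[i]
        return y0 + (x - x0) / (x1 - x0) * (y1 - y0)
    return surrogate

surrogate = make_surrogate(samples)

# 3. The fast surrogate can now be called at scale inside a larger
#    synthetic-data pipeline to train downstream models.
synthetic = [(x, surrogate(x)) for x in (random.uniform(-2, 2) for _ in range(10_000))]

# Sanity check: how far does the surrogate drift from the true simulator?
err = max(abs(y - slow_simulator(x)) for x, y in synthetic)
print(f"max surrogate error on synthetic batch: {err:.4f}")
```

The payoff is the usual surrogate-modeling trade: a one-time budget of expensive simulator calls buys an approximator that is orders of magnitude cheaper per sample, at the cost of a bounded approximation error you need to monitor.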

Wave: 🧠 Generalized Machine Intelligence