Waverly Monthly Update — May 2022

The thrill of the search. Building something you believe in, working towards a vision — an ideology, as one of our mentors recently put it — is at once exciting and anxiety-inducing. You’ll hear most startup founders say as much.

Personally, I wouldn’t trade this for anything.

Navigating the choppy waters of the early startup ocean has recently led us to a promising new opportunity. Not a pivot per se, but a way to use our platform and technology to help enterprises solve a problem they all face in a way that builds on their greatest asset: their people.

We’re not quite ready to lift the veil on this. However, if your team shares links on Slack, or if you’re in a leadership position and feel your employees are experts at sensing what’s out there, please reach out to me. I’d love to hear from bold organizations eager to embrace new approaches for understanding the world that matters to them.

What a Waverly Quote Deck looks like when you share it.
You can browse through the selected quotes.

In the meantime, the platform keeps on moving forward by leaps and bounds. We recently released Quote Decks: a way for you to share a beautiful mobile-friendly deck that captures why you found an article interesting. Here’s one I created in a few taps from my iPhone. You can browse it from any device, just click on the image.

Again, let me personally thank you for being a part of the Waverly journey.



Recreational Bug Seeding as a Complement to Code Coverage

Many metrics have been proposed to evaluate the quality of a piece of code: nesting depth, cyclomatic complexity, relational cohesion… although my favorite remains WTFs per Minute.

Testing also plays a big role in software quality and is therefore also being measured. One of the most popular unit testing metrics is code coverage, which evaluates the fraction of lines of code that have been executed at least once during testing.

Code coverage is not a bad metric per se, but reaching 100% is not a guarantee that your code is bug-free. Far from it. So, how could we do better?
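To make that concrete, here's a toy example of my own: a single test can execute every line of a function (100% line coverage) and still leave an obvious failure mode untested.

```python
# This function has a lurking bug: it crashes on an empty list.
def average(xs):
    return sum(xs) / len(xs)

# A single test executes every line, so line coverage reports 100%...
assert average([2, 4]) == 3.0

# ...yet average([]) raises ZeroDivisionError. Full coverage, and the
# empty-list case was never exercised.
```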

I’m sure countless Ph.D. theses have been written on this question, but I’d like to propose another idea in the vein of WTFs per Minute. Something not totally serious but not totally ridiculous either.

I call this approach Recreational Bug Seeding, and it goes like this…

On a regular basis, you invite your software engineers to do a bug seeding session. Something like a hackathon, but where the goal is to introduce bugs.

During a session, software engineers are encouraged to go through the code — not the tests! — and to modify it in any way they want. Anything goes. They can add a character to a regular expression, change the start index of a loop, return early from a function, invert the clauses in an if-else. As long as the change is expected to break things, it’s valid.
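To picture what a seeded bug looks like, here's a hypothetical example in Python (the function and its tests are mine, purely illustrative):

```python
# Original: flag any reading strictly above the limit.
def any_over_limit(readings, limit):
    return any(x > limit for x in readings)

# Seeded bug: > quietly became >=, an off-by-one at the boundary.
def any_over_limit_seeded(readings, limit):
    return any(x >= limit for x in readings)

# A lazy test suite only checks clear-cut cases, so both versions pass:
assert any_over_limit([1, 5, 9], limit=8) is True
assert any_over_limit_seeded([1, 5, 9], limit=8) is True

# Only a test that probes the boundary value kills the seeded bug:
assert any_over_limit([8], limit=8) is False
assert any_over_limit_seeded([8], limit=8) is True  # the mutant slips through
```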

Once a change has been made, the bug seeder runs the suite of unit tests. If they all pass, the dev scores a point and brags about it on Slack.

The total number of seeded bugs gives you an interesting indication of the ingenuity of your software engineers, but — assuming this ingenuity is constant over time — it also gives you a pretty good indication of how thorough your tests are.

If devs don’t put too much effort into thinking of weird things that could go wrong, then it’s going to be fairly easy for bug seeders to score points.

One obvious drawback of that method is that it costs engineering time. However, if done well, it might be fun engineering time — with cakes and all — which could have a positive impact on your company culture. As an engineer, I know I would have liked a bug seeding session every now and then. 😀

Waverly is still too small to do this, but I’d be really curious to know if anyone has tried something similar in a larger company. Please reach out if you have!

Update: Many of you pointed me to an automated version of this idea, mutation testing. Thanks!


User-Centered Design is Killing Innovation

“If I had asked people what they wanted, they would have said faster horses.”

That Henry Ford quote is equally loved and despised by my friends in the startup community. Some see it as a celebration of the mythical designer genius. The next Steve Jobs who’s going to kick the world out of its passéist ways. Others see it as a celebration of authoritarian design in which egotistic designers ignore the people they pretend to help.

In that evergreen debate, user-centered design seems to be the flavor of the month. That approach to design proposes to start with a study of user characteristics, their environment, the tasks they do, and the workflow they adopt. Only once we’ve learned about our users and their needs should we take our pen and start designing the product.

User-centered design sounds like a wise approach. Petulant kids invent spaceships that no one will use while we, the serious designers, take things slowly. We talk to people. We show humility. We are wise and mature.

And we suck the whole fun out of the party.

I was recently reading an article touting the merits of user-centered thinking and how it should be adopted by everyone in the software industry — not just designers. They gave the example of a dev who implemented the export feature users had asked for. Our petulant coder dives into the task, wraps it up and ships it… What a mistake. If only they had talked to users, they would have discovered that they wanted to export their work because the software crashed too frequently! Clearly, our petulant coder did not have a user-centric mindset. Clearly, they started with how instead of why.

Breaking news: people hate crashes. They also hate slow apps. They hate unresponsive UIs. You don’t need user-centricity to solve these problems, you need good monitoring and devs who know their stuff.

User-centric design encourages us to ask why, but in doing so it evades the most important question: how many times should you ask it?

Let’s imagine Henry Ford surveying his users with a user-centric mindset:

So, Mr. Cooper, why do you want new horseshoes?

Because my horse’s feet are hurting, Mr. Ford.

Welcome to an alternate reality where the Ford Corporation is the maker of the great Model T Horseshoe, trusted by farriers all over the world. You’ll object that Mr. Ford is not that stupid…

Why do you want to relieve your horse from its pain, Mr. Cooper?

Because my horse is too slow when its hooves hurt.

Now you’re in the dystopian future where everyone has a genetically engineered super-fast Ford thoroughbred. You can even have it in any color, as long as it’s black. But Henry Ford is not quite done yet:

And Mr. Cooper, why do you want your horse to run faster?

Because I need to be at my brother’s place for the Super Bowl!

Armed with that knowledge, will Mr. Ford invent the car, will he invent the Netflix viewing party, or will he skip a full century and give us the Metaverse?

The more you ask why, the more you end up with big and generic problems. Big problems have a multitude of potential solutions. These solutions are novel and alien to people. Therefore, users cannot help you figure out which one is the best. You have to build it, place it in their hands, see if they like it or not, and, above all, you have to iterate.

Humility is critical for designers. You need it to kill the countless darlings you will have to kill in order to build a product people want. But dressing up this humility in the fancy clothes of user-centered design is too often used to mute creativity, to artificially slow down energetic exploration, and to turn design into a bureaucratic process that may feel comfortable but is definitely not going to solve humanity’s biggest problems.

So, here’s to the petulant kids. Build that thing.


Celebrating the Role of Academics in a Startup Ecosystem

Oh, how I miss Korea!

Yesterday I had the opportunity to present the Montreal AI Ecosystem and its research culture at a conference organized by the Korea Development Institute.

Here’s what I said, in a nutshell: I feel we’ve been doing a pretty good job of nurturing human-to-human relationships across the industrial / startup / academic boundary.

Personally, I’m grateful for our special vibe. I feel I regularly get the opportunity to have fruitful exchanges with academic researchers and graduate students even though I’m not a part of their world. These discussions typically flow in both directions: I’m as excited to learn about their research as they are to listen to me and my startup struggles. Their insights and creativity give me a regular boost.

Historically, we’ve seen academics in a startup ecosystem as purveyors of the initial idea. We need them to invent our deep tech, but then they can take a back seat while entrepreneurs convert their idea into a commercial product.

I don’t like this. I think it’s a reductive view. For me, the real value academics can bring to a startup ecosystem is through their unique blend of creativity and broad expertise. They have the rare ability to approach problems with wild ideas that are nonetheless technologically feasible. This is a skill you need again, and again, and again as you build a startup. You need this ten times more than you need a brilliant initial idea. Pivot is the name of the game, so creative resilience is the key. Academics have plenty of that.

To all of you researchers who help keep that special vibe alive: thank you!

More specifically, thanks to all of you who gave me a personal boost since I started Waverly: Yoshua Bengio, Blake Richards, Marc G. Bellemare, Graham Taylor, Anirudh Goyal, Dr. Sasha Luccioni, Dzmitry Bahdanau, Nicolas Chapados, Nicolas Le Roux, Joelle Pineau, Craig Reynolds, Vicky Kaspi, Kory W. Mathewson, Irina Rish, Eilif Benjamin Muller, Pedro O. Pinheiro, Anqi Xu, Xiang Zhang, Glen Berseth, Michiel van de Panne, Charles Onu, Edith Law, Max Welling, Michael McCool, James O’Brien, Eugene Fiume, Pierre Poulin, Hugo Larochelle.

BTW, I’m not saying it’s unique to Montreal — some of the researchers named above are from across the world — but I feel that spirit is alive and well here.


To the Founders Who Show Up

One of the things you have to do again and again, as a startup founder, is to lay your dream raw, on the table, in front of a group of strong-willed people who will critique it. They may be advisors, investors, potential customers… But you will have to do this constantly.

You’ll have to do this no matter which state the company is in — fresh from a new release that is picking up steam, in a lull as you struggle to bring users back, as you’re undergoing a pivot and are still struggling to find the right words to talk about what you want to do…

You’ll have to do this no matter which emotional state you’re in.

These meetings are often scheduled weeks in advance and each of them could be the opportunity that unlocks the next stage for you.

You have to show up no matter what.

It’s hard. You can do it by building an armor that lets you hold strong when someone decides to take a stab at your dream. Or you can show up as your authentic, vulnerable self. Accepting that pain will be a part of the journey.

As a founder — as a human — I’ve learned that I can only be successful if I show up without artifice. I therefore lay my dream raw, for you to examine and critique. I’m there to listen and learn… and if your advice hurts I will accept the pain and leverage my support network to get back up.

If you sit across from a founder and are called to critique what they present, please be fully honest. Please tell it like you see it. That’s why we seek your advice. But please, also, bring in that human touch and recognize the challenge the person across the table might have to face.

And to the founders out there, kudos for showing up.


A New Type of Sudoku

On Waverly, one of my most idiosyncratic waves is Puzzles by Humans. It allowed me to discover a hidden world of creative puzzlers. Amongst them, I found a growing group fueling what is now known as the golden age of sudoku. These so-called setters are inventing countless sudoku variants that can be mixed and matched to create puzzles which force solvers to come up with original deductions.

Leading the popularization of that new puzzling form is the excellent YouTube channel Cracking the Cryptic. It led me onto the (very difficult to navigate) German puzzle portal Logic Masters, which seems to be the birthplace of every new sudoku variant.

Some of these new types of puzzle rely on arithmetic, which I’m not a fan of because it requires solvers to memorize frequent sums. This feels like memorizing frequent definitions in crossword puzzles, which is not what I enjoy in problem solving.

However, there are some clever new variants that don’t require you to memorize anything, just to put on your logician hat and prove some theorems. I couldn’t sit idly on the sidelines so… I invented a new variant! Behold the…

Ant Sudoku

  • Standard sudoku rules apply.
    Digits can’t repeat on lines, columns or 3×3 regions.
  • An ant starts on each of the shaded cells.
  • Each ant must be able to reach at least one of the circled cells with the same letter.
  • Ants must not be able to reach any other circled cell.
  • An ant can move from a cell to an orthogonally adjacent cell if its digit is less than or equal to the current digit plus 1.
    Ex: An ant on 4 can move to an orthogonally adjacent 5, 3, 2, or 1 but not to a 6, 7, 8 or 9.
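If you prefer code to prose, the movement rule amounts to a reachability check. Here's a small sketch of mine, a breadth-first search computing every cell an ant can reach on a solved 9×9 grid:

```python
from collections import deque

def reachable(grid, start):
    """Return the set of cells an ant starting at `start` can reach.

    `grid` is a solved 9x9 sudoku; an ant at (r, c) may step to an
    orthogonal neighbour whose digit is at most grid[r][c] + 1.
    """
    seen = {start}
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < 9 and 0 <= nc < 9 and (nr, nc) not in seen:
                if grid[nr][nc] <= grid[r][c] + 1:
                    seen.add((nr, nc))
                    queue.append((nr, nc))
    return seen
```

Checking the circle constraints then boils down to computing `reachable` from each shaded cell and comparing the result against the circled cells.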

You can play this puzzle online either on F-Puzzles or Penpa+. If you want you can also download a PDF version and print it.


If you get stuck, here are some hints; just rot13-decode them:

  • Svaq pryyf fbzr bs gur nagf zhfg nofbyhgryl tb guebhtu.
  • Pna gjb cnguf rire gbhpu gurzfryirf?
  • Znxr fher lbh qba’g fgrc ba gur gbc sbhe.


Here’s the solution, don’t hesitate to give me feedback on Facebook or Twitter.


AI and Consciousness

NeurIPS, the most famous conference in AI these days, was born of the intentional collision of neuroscience and AI — a handful of researchers in both fields seeing value in getting inspired by one another.

My recent conversations with AI researchers, most notably with Yoshua Bengio, have thrown me on a collision course with another group of researchers, one often vilified by us, explorers of “hard” science: philosophers.

Indeed, one of Yoshua’s beliefs is that to increase the generality of AI we need to uncover more generic priors: general guidelines we could bake into our untrained models so that, as they start to observe data, they learn a more effective hierarchy of structures. Structures that allow them to apply their knowledge in more situations, or to adjust their model faster when new data is observed, or to be more robust when faced with contradictory data… In short, Yoshua (and many other researchers, myself included) believe that better and more generic priors could help tackle the challenges AI is facing these days.

In an infamous paper, Yoshua called one of these very generic (and elusive) priors The Consciousness Prior. Some researchers were up in arms at the use of such a loaded term, accusing him of the academic equivalent of clickbait.

In my case, however, it just made me aware that I had no clue what consciousness was.

In the last few months, through a random encounter with an excellent popularizer of analytical philosophy, I dove deep into the topic. I gained a better understanding of terms like qualia, consciousness, dualism, illusionism, etc. Words that philosophers use to approach questions that we, hard scientists, don’t even dare to ask.

Beyond my own improved understanding of some non-scientific (yet very important) questions, I discovered a community of thoughtful thinkers who are not as enamoured with useless rhetorical debates as I had imagined.

I discovered, against my own biases, that philosophy offered a very valid approach to improving our understanding of the world.

The following article, which I discovered via Waverly, explores the difficult topic of emotions from a psychological and philosophical angle. Since we often talk about emotions when discussing Artificial General Intelligence, I felt the article might be interesting both to my AI friends and to my (soon to grow?) group of philosopher friends. Maybe we need a PhilIPS conference, creating an intentional collision between these two worlds? (I kinda like that name 😉)


Can AI Do Art? Are You Afraid It Could?

Two years ago I was sitting in a Belgian concert hall listening to the Brussels Philharmonic playing a series of original pieces. The composer? An AI created by Luxembourg company Aiva.

After the concert I mingled with the attendees. Most of the conversations were around this recurring question: Can AI really do art?

Despite the fact that we had just sat silently for more than an hour listening to very agreeable AI-made music, many found themselves passing harsh judgement. Most comments were along these lines: « That’s not art, it’s only a pastiche of the great composers ».

What struck me, though, was not the conversations themselves, but the fact that we were all suddenly unified in our judgemental attitude. As is often the case when we pass judgement on someone else — or something else, in the case of AI — I felt we were collectively projecting our own fears. But fear of what?

I’d say it’s the fear of losing our supremacy over a trait that we strongly associate with our identity as human beings.

This would not be the first identity crisis caused by the relentless march of technology. Another example is illustrated by these words, uttered by world champion Lee Sedol as he lost his match against AlphaGo. “I’m sorry for letting humanity down,” he said, with tears in his voice.

But humans haven’t stopped playing Go since that famous defeat. On the contrary, they converted the AIs into allies in their pursuit to understand the game. Today, thanks to artificial intelligence, new Go openings are constantly being tested and mastered by humans.

Here’s another anecdote. In December 2018, famous cellist Yo-Yo Ma was speaking at the world’s largest conference on Artificial Intelligence. When asked about music and AI, he answered something along these lines: « I don’t care, because whenever I’m listening to music I look for the intentions of the human behind it. »

In his recent critique of « Beethoven X », a project to complete Beethoven’s unfinished tenth symphony, composer Jan Swafford notes something similar: « The ability of a machine to do or outdo something humans do is interesting once, at most. We humans need to see the human doing it. »

Might it be that our fear comes from the fact that we see art as the artifact rather than as the intention of the human creating this artifact?

AI will definitely create music that you’ll find pleasing to listen to as you sit in a waiting room or as you drive your car. But, unless you can connect to the human behind that AI – to their intention, their struggles, their humanity – chances are you’ll soon forget about this music.

So can AI create art? To that I answer: who cares. It will never be able to disconnect me from my fellow humans and from the ways in which they try to communicate their humanity through the artifacts they create. That’s what I choose to call art.


Open Facebook to Researchers!

Amongst all the recent complaints against Facebook, the one I find the most problematic is the way in which internal employees have access to an exceptional experimental framework while researchers from outside the company are barred from it.

If Facebook is anything like Google, then its software engineers really are scientists constantly running counterfactual experiments. They deploy any new feature on a subset of users and measure if the proposed change has an impact when compared to a control group. This is hardcore science. It’s good to see companies embracing scientific practices to such an extent.

What is not so good, however, is that external researchers can’t do anything remotely close to that. Their options are limited to:

  • Using analytic tools that offer them an external view onto Facebook. An example is CrowdTangle, acquired by Facebook in 2016 and recently “reorganized”, leading to the departure of its founder and long-time advocate of more transparency, Brandon Silverman. [1]
  • Crowdsourcing data gathering to an army of willing volunteers using a browser plug-in, and sometimes having to stop because Facebook threatens to sue. [2]

So, not only does Facebook block external researchers from operating on the same footing as its internal engineers, it seems to be going out of its way to make researchers’ lives harder.

There is no denying that Facebook has become a force that shapes society, but we’re mostly blind to the precise way in which it does so.

Do Facebook and its algorithms create filter bubbles? Polarization? Addiction? Infodemics? Doomscrolling? Social anxiety?

Maybe… Probably… I don’t know…

…but it’s precisely the fact that I don’t know and that I could know that is my biggest issue with Facebook.

We need to ask Facebook and all the other society-shaping tech giants to give researchers access to the tools they use internally. This is the very first step towards the transparency we deserve — if not as individuals, at least as a society.

This post was inspired by this recent piece on researchers using CrowdTangle to study local news on Facebook. Especially by the fact that they had such a hard time gathering data and that they couldn’t derive causal relationships from their experiment.


Your Recommender System Is a Horny Teen

There is a story I like to tell about recommender systems… Someone on the YouTube team once told me that they ran an experimental recommender to decide which frame of a video should be used as a thumbnail. The goal being to maximize clicks. After letting that recommender learn from users’ behaviors, it converged to… Porn! Ok, not quite porn, but the recommender learned that the more skin was visible in a thumbnail, the higher the likelihood of a click. Naturally the experiment was scrapped (thank God for human oversight), but it still goes to show that purely metric-driven recommender systems can land you in a very weird place…

That’s what I feel is happening with Amazon’s recommender system picking the ads to run on my Facebook stream. The top picks systematically look like sex toys or, as is the case in the example below, drugs. They are all excellent at triggering my curiosity — and I’m sure their metrics show a very high click-through rate for the average user — but they are pretty bad at convincing me Amazon is a great company…

Example of ads run by the Amazon Recommender System