Know Me, Don’t Profile Me

Picture the scene. You walk into your small neighborhood coffee shop in the morning. The barista smiles at you from above his espresso machine and mouths “Flat white?” You answer with a smile and moments later you’re sipping your favorite drink.

It makes you feel at home. It makes you feel you’re amongst friends. That’s the kind of experience we love.

Now picture another scene. Your car started making weird noises so you drive it to the garage, a place you’ve never been to. The mechanic asks you a series of technical questions. You stumble through the answers in a way that makes it pretty clear you don’t know much about cars. A few hours later the mechanic calls you back with a long list of things that need to be fixed and asks you to approve an expensive bill.

Not such a cool experience, now, is it? Makes you wonder if the mechanic somehow figured out that cars were not your forte and is trying to slip you a couple of unnecessary fixes.

These two experiences illustrate the difference between someone who knows you and someone who’s trying to profile you.

The concepts of knowing you or profiling you are similar in many ways but they differ in one key aspect. Someone trying to know you better engages in candid conversations. They happily let you know what they’ve learned about you. Their behavior makes it obvious that they’re not trying to extract information without you knowing. They allow you to set boundaries and keep some things for yourself.

Someone who tries to profile you does all the opposite. They watch you without you knowing. They infer things about you from your behavior but they won’t let you know what they’ve learned. Profiling happens in the shadows. It doesn’t let you choose what you’re willing to share or not.

There’s a word that captures that key difference between knowing someone and profiling someone: transparency.

The reason transparency is so important is that it allows the emergence of a trusted relationship. The candid conversations you have with your barista instill a level of confidence. They allow you to trust that they want to know you in order to serve you better rather than to serve their own interests. Sure, it’s all about conducting a business transaction, but it’s one where you can be more confident that the goal is not to take advantage of you.

Now let’s transpose the scene to the online world. When you shop on the web, the site you visit tries to present you with a personalized item selection. Do they achieve that personalization by knowing you or by profiling you?

With traditional recommender systems there’s very little transparency as to what the online vendor knows or doesn’t know about you. Their knowledge is built by observing the log of your interactions, not on a history of candid conversations. This makes it hard for trust to emerge.

The personalized online shopping experience looks like a barista who knows your favorite coffee, but it feels like a mechanic trying to take advantage of you.

The feeling of being profiled also exists on our social networks. The order in which the reels are presented on TikTok clearly reflects some knowledge of our preferences, but we’re never offered an opportunity to learn what the system knows about us. We don’t have ways to set boundaries on that knowledge. We don’t have any mechanism through which we could build a trustworthy relationship with our social platforms.

These platforms are not trying to trick us with an inflated garage bill, but they are trying to steal our attention. They are trying to get us to keep watching their content… And when we regret spending too much time on their feeds they give us a poor excuse: “You can shut down the app at any time.”

It doesn’t have to be this way. At Waverly, we’ve built a new type of recommender system. One that is all about transparency. A platform that aims to know you. Slowly, through candid conversations, by letting you set boundaries whenever you want, by forgetting anything you want it to forget. A platform that works for you, with your best interests at heart.


A Coding CEO

Sometimes people ask me: “As a CEO, don’t you have more important things to do than coding?”

And yes, there are a million things to do to get a tech startup like Waverly off the ground.

But if I’ve learned one thing from my previous experiences it’s that, until you reach product-market fit, amongst the million things you need to do, the most important one is…

…the product!

Our product is tech-centric. It lives and dies by its algorithm and by the experience we create for our users. What’s the best thing I can do in that context?

I could go to cocktail parties, talk about the product, do marketing, close partnerships. In fact, I’m doing all of that. It’s important, but it’s not as important as building a great product.

I could direct people, but our small team accomplishes wonders with little guidance. I correct course all the time, but it takes very little effort.

I could hire more people and run more experiments. However, our experimentation engine is bound by the speed at which our users can try new ideas. A bigger team would mean a higher burn with limited benefits — it may even distract us and slow us down.

I decided one of the best things I could do was to code. Yes, I’ll revise this decision as we grow. I love all aspects of the CEO job. But for now I’m a coding CEO and I’m quite proud of my GitHub chart:


Waverly Monthly Update — September 2022

Hi!

I took a short writing break this summer, but I’m happy to be back!

The team was very busy in the last few months. We’ve been polishing Waverly to make it ready for our upcoming Apple AppStore release. We’ve settled on a date for the release and I’ll give you all the details as we get closer.

Above is a screenshot of one new feature from the latest release: you can now see your friends’ faces on cards. Curators — those who mark an article as fit/unfit for a wave — now appear on each card, with the first curator getting the coveted first checkmark.

I know a lot of people following this newsletter are Android users, or dream of connecting to Waverly from their computer. We’re not there yet, as our small team decided to optimize the experience on a single platform first. Still, don’t hesitate to nag me. It’s great to hear your enthusiasm, even if it makes me feel we can’t deliver everything we wish we could.

So, what about running a startup? As previously mentioned, we’re now firmly committed to building the best content delivery platform for professionals. Waverly is already being used by people and organizations who want to track trends, understand their market, see what their competitors are up to, etc. We feel it’s a place where our AI really shines and it allows us to execute towards our north star: an assistant you can trust and control using everyday language.

Our updated website reflects that positioning. Please take a look at it and send us your feedback.

As always, it’s a great pleasure to feel your support. Thanks again for being a part of the Waverly journey, and never hesitate to send me your feedback — I read every email. You’d like to meet in person? I’ll be at C2 and MTL Connect, let’s hang out!




Will Technique Die?

“You can’t have art without resistance in the materials”

The AI hype peaked three years ago, but in a shrewd move, AI was simply waiting around the corner to ambush us when we let our guard down. At least that’s what I can gather from two articles that popped up on my Waverly this morning.

The first, from O’Reilly, explores how AI is being used by programmers to speed up their craft using a tool called Copilot. It asks whether AI will enable some form of “higher level compilation” that will make it unnecessary for programmers to learn how to code.

The second, from Wired, asks artists what they think of people using their names in DALL·E and Midjourney prompts:

“When they’re feeding work from living, working artists who are, you know, struggling as it is, that’s just mean-spirited,”

Yeah. They’re not happy…

From my experience coding recommender systems and doing research in AI, my take is slightly different. We’re inventing new tools that open up new territory. We’ll learn how to craft in this new space. Some techniques will become more useful, some will become less useful. It will suck for some people and be great for others.

The problem with the new technique — the “type a prompt and the AI will do the rest” technique — is that it’s easy to believe there’s no technique at all. That any kid could master that craft after spending 5 minutes on Midjourney or by typing a few words into Copilot.

Except that’s not true. Midjourney artists and Copilot coders soon learn that they need to understand how the AI “perceives” their prompt. They need to co-evolve a language with their new AI tool. Something like that already happened in the past, with Google. We all had to learn how to write good queries. We co-evolved a language with the search engine.

Learning this language — especially when it comes to writing code that will be maintained by others — will necessarily be rooted in the human’s deep understanding of what they are trying to achieve. If you don’t know how to organize software, which pieces to abstract out, which pieces to disentangle, then Copilot won’t help you.

Will we ever get a computer that can program itself entirely? Something you could direct the same way you can direct a top human programmer today? Maybe, but that’s still very much in our AI-hyped future.

What we’ll get a lot of are these new tools, like Copilot and Midjourney, that completely change the techniques one can use to achieve a piece of code… or a piece of art!


What is Indexing and Why is it Useful for Recommender Systems?

This post contains an answer I gave to Kathryn Kyte who wrote this BBC piece about Waverly.

What does indexing an article mean in the context of a recommender system?

The best analogy I can think of is that of a business directory. A big book where you have, in alphabetical order, different services: Bricklayer, Mechanic, Plumber… Under each category, you have a list of businesses that offer this service. A given business can fall under two services if it does multiple things.

An index of articles is very similar. Instead of having an alphabetical list of services, you have an alphabetical list of characteristics that can apply to an article. These characteristics can be topics, people mentioned in an article, writing styles (journalistic, casual, fiction), etc.

If you look under a given entry of the index, say the topic Beneficial AI, you’ll find all the articles that seem to be talking about beneficial AI. At Waverly, our AI reads tens of thousands of articles a day and extracts hundreds of characteristics about these articles. These characteristics, and the articles they point to, constitute the Waverly index.
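A minimal sketch of such an index in Python. Every characteristic and article id below is invented for illustration; this is the directory analogy made concrete, not Waverly’s actual code:

```python
from collections import defaultdict

# A toy inverted index: characteristic -> set of article ids.
index = defaultdict(set)

# Hypothetical articles, each described by a few extracted characteristics.
articles = {
    "a1": {"topics": ["beneficial-ai", "ethics"], "style": "journalistic"},
    "a2": {"topics": ["beneficial-ai"], "style": "casual"},
    "a3": {"topics": ["coffee"], "style": "fiction"},
}

# Indexing: file each article under every characteristic it exhibits.
# An article can appear under several entries, just like a business
# offering multiple services in a directory.
for article_id, meta in articles.items():
    for topic in meta["topics"]:
        index[f"topic:{topic}"].add(article_id)
    index[f"style:{meta['style']}"].add(article_id)

# Looking under one entry returns every article with that characteristic.
print(sorted(index["topic:beneficial-ai"]))  # ['a1', 'a2']
```

Answering a reader’s interests then amounts to looking up the characteristics that matter to them and collecting the articles filed under those entries.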

It’s this index, combined with our ability to extract which characteristics seem important to you given the Waves you’ve written, that allows us to offer the first natural-language-driven recommender system.


Waverly Monthly Update — July 2022

Lots of exciting things to share in the last month…

First, we officially graduated from the Creative Destruction Lab (CDL)! There were 20 startups like ours at the starting line, 8 months ago, and only 4 of us made it all the way to the end. What a journey! We have the whole CDL team and our mentors to thank for some of the big leaps we’ve made recently, including our new go-to-market strategy.

Our new direction is rooted in the feedback we got from our raving users — those who open Waverly multiple times a week and browse more than 500 articles a month. These users all come from the business world. They are consultants, analysts, or content creators who use Waverly to track trends, perform desk research and find inspiration on things that matter to them and their customers.

You’ll therefore see us doubling down on these use cases. We want Waverly to be the best modern platform for professionals who need to track information — about a specific topic, a domain, competitors… As long as you can express what you need to know in plain English, Waverly will be able to help you.

Typical market intelligence tools are complex and require extensive training. Not Waverly. Thanks to our reliance on natural language, Waverly will remain both powerful and easy to use. We’ve designed it from the ground up to be a modern platform that can be used by anyone. It is therefore ideal for spreading important knowledge throughout an organization and for gathering the feedback of your insightful employees.

Stay tuned, and thanks for being a part of the Waverly journey.



The Sentience Question: Opening the AI Black Box

Opening the AI black box is not as hard as you might think.

At least not to get the basic knowledge you need to understand the difference between an AI language generator and a speaking human.

An AI generator looks at the words forming the beginning of a sentence and rolls a die to select the next word. To do this, it uses a table associating a different word with each possible roll.

Yes, AI generators need an unimaginably large number of tables to perform this feat — one for every possible sentence beginning. Fortunately, they have clever tricks to compress these tables and store them in memory.

At their heart, however, that’s how they operate:

  • Step 1. Fetch the table that corresponds to the beginning of the sentence.
  • Step 2. Roll a die to look up a word from that table, append that word to the sentence.
  • Step 3. Go back to step 1.
  • Step 4. Use the generated sentence to convince humans you’re sentient. 😉
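The steps above can be sketched as a toy generator. Everything here (the prefixes, words, and weights) is invented for illustration; real systems compress billions of such tables into neural network weights:

```python
import random

# Toy "table of tables": maps a sentence prefix to candidate next words
# with weights. The entries are purely illustrative.
TABLES = {
    (): [("I", 1.0)],
    ("I",): [("am", 0.7), ("like", 0.3)],
    ("I", "am"): [("sentient", 0.5), ("hungry", 0.5)],
    ("I", "like"): [("coffee", 1.0)],
}

def generate(max_words=3, seed=0):
    rng = random.Random(seed)
    sentence = []
    for _ in range(max_words):
        table = TABLES.get(tuple(sentence))   # Step 1: fetch the table
        if table is None:                     # no table for this prefix: stop
            break
        words, weights = zip(*table)
        word = rng.choices(words, weights=weights)[0]  # Step 2: roll the die
        sentence.append(word)                 # Step 3: append and repeat
    return " ".join(sentence)

print(generate())
```

The point of the sketch is how mechanical the loop is: fetch a table, roll, append. There is no step where the system reflects on what it is saying.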

Yes, it’s simplified. For example, modern AI language generators produce sentences token by token, from fragments of words rather than from whole words. You get the gist of it though, and it’s enough to understand just how different these systems are from your fellow humans.

We get fooled because AI is the very first non-human thing that can generate fluent language. Unconsciously we imagine their black box works the same way as our black box. We imagine that if an AI writes “I’m afraid of dying” it reflected on its existence and experienced a feeling of fear. After all, if a fellow human wrote that sentence, it would very likely be the expression of such an internal experience.

Same goes for a simpler sentence. If a human writes “peanut butter and jelly are delicious together” it’s likely because they tried that famous dish and experienced joy as the tastes mixed on their tongue. We know our computers don’t have a tongue, so if an AI generator wrote that we wouldn’t assume it really experienced the joy of biting into a peanut butter and jelly sandwich.

For humans, language is most often the tip of a complex iceberg that finds its roots in our conscious experiences. For an AI, it’s the result of a series of dice rolls.

This article, written by experts in the field, goes deeper into that “cognitive illusion” of believing that fluent language means conscious experience.

In particular, it shows how this illusion often operates in reverse — in a way that can be much more damaging. Indeed, we often imagine that a human who doesn’t express themselves fluently is having a lesser conscious experience. That is, we imagine they’re less intelligent.

Now that’s a cognitive illusion worth fighting.


Waverly monthly update — June 2022

Where do insights come from? I care about this question more and more every day. I care because that’s what Waverly has grown into: a machine for finding and refining the best insights.

Our most active users rely on Waverly to help them step outside of the beaten track. To seek content that will stimulate their imagination in a way that matters to them, to their work, to their customers.

So, where do insights come from? The seed of insights can come from anywhere. A Slack conversation, a video you stumbled upon, an old browser tab you randomly clicked on. Anything could trigger that little light bulb in your head.

But that’s only the seed, what about the insight itself? It comes from nurturing that seed. Finding that passage from an article that’s particularly relevant to you. Discussing it with your colleagues. Finding other related ideas and connecting them together.

You can now share anything to Waverly!

With Waverly, we want to embrace the fact that the seed of insights can come from anywhere. Our pioneers will soon discover, in the upcoming release, our Share to Waverly feature. A way to send whatever you find — an article, a podcast, a video — into Waverly so that you can nurture it and turn it into a valuable insight.

It’s not like any bookmarking feature: our goal is to give you the full power of Waverly’s recommender system. As a first step, our AI will automatically suggest a Wave to attach the link to. As we develop Waverly, though, we’ll add more and more ways for Waverly to help you and your community nurture these little seeds into insights that matter to you.

Stay tuned, and thanks for being a part of the Waverly journey.



Waverly Monthly Update — May 2022

The thrill of the search. Building something you believe in, working towards a vision (“an ideology,” as one of our mentors recently put it) is at once exciting and anxiety-inducing. You’ll hear most startup founders say as much.

Personally, I wouldn’t trade this for anything.

Navigating the choppy waters of the early startup ocean has recently led us to a promising new opportunity. Not a pivot per se, but a way to use our platform and technology to help enterprises solve a problem they all face in a way that builds on their greatest asset: their people.

We’re not quite ready to lift the veil on this. However, if your team shares links on Slack, or if you’re in a leadership position and feel your employees are experts at sensing what’s out there, please reach out to me. I’d love to hear from bold organizations who are eager to embrace new approaches for understanding the world that matters to them.

What a Waverly Quote Deck looks like when you share it.
You can browse through the selected quotes.

In the meantime, the platform keeps on moving forward by leaps and bounds. We recently released Quote Decks: a way for you to share a beautiful mobile-friendly deck that captures why you found an article interesting. Here’s one I created in a few taps from my iPhone. You can browse it from any device, just click on the image.

Again, let me personally thank you for being a part of the Waverly journey.



Recreational Bug Seeding as a Complement to Code Coverage

Many metrics have been proposed to evaluate the quality of a piece of code: nesting depth, cyclomatic complexity, relational cohesion… although my favorite remains WTFs per Minute.

Testing also plays a big role in software quality and is therefore also being measured. One of the most popular unit testing metrics is code coverage, which evaluates the fraction of lines of code that have been executed at least once during testing.

Code coverage is not a bad metric per se, but reaching 100% is not a guarantee that your code is bug-free. Far from it. So, how could we do better?

I’m sure countless Ph.D. theses have been written on this question, but I’d like to propose another idea in the vein of WTFs per Minute. Something not totally serious but not totally ridiculous either.

I call this approach Recreational Bug Seeding and it goes as such…

On a regular basis, you invite your software engineers to do a bug seeding session. Something like a hackathon, but where the goal is to introduce bugs.

During a session, software engineers are encouraged to go through the code — not the tests! — and to modify it in any way they want. Anything goes. They can add a character to a regular expression, change the start index of a loop, return early from a function, invert the clauses in an if-else. As long as the change should break things, it’s valid.

Once a change has been made, the bug seeder runs the suite of unit tests. If they all pass then the dev scores a point and brags about it on Slack.
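Here’s what a single round might look like, with a hypothetical function invented for illustration. The seeded change, shifting a loop’s start index, slips past a lazy test suite but not a thorough one:

```python
def first_negative_index(values):
    """Return the index of the first negative value, or -1 if none."""
    for i in range(len(values)):
        if values[i] < 0:
            return i
    return -1

# The seeded bug: the loop's start index changed from 0 to 1.
def first_negative_index_seeded(values):
    for i in range(1, len(values)):
        if values[i] < 0:
            return i
    return -1

def thorough_tests_pass(fn):
    # Covers the edge case a lazy suite would miss: a negative in slot 0.
    return fn([-5, 2, 3]) == 0 and fn([2, -5, 3]) == 1 and fn([2, 3]) == -1

print(thorough_tests_pass(first_negative_index))         # True: suite passes
print(thorough_tests_pass(first_negative_index_seeded))  # False: bug caught, no point scored
```

If the test suite only ever checked lists whose first element was positive, the seeded version would pass every test and the seeder would score.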

The total number of seeded bugs gives you an interesting indication of the ingenuity of your software engineers, but — assuming this ingenuity is constant over time — it also gives you a pretty good indication of how thorough your tests are.

If devs don’t put too much effort into thinking of weird things that could go wrong, then it’s going to be fairly easy for bug seeders to score points.

One obvious drawback of that method is that it costs engineering time. However, if done well, it might be fun engineering time — with cakes and all — which could have a positive impact on your company culture. As an engineer, I know I would have liked a bug seeding session every now and then. 😀

Waverly is still too small to do this, but I’d be really curious to know if anyone has tried something similar in a larger company. Please reach out if you have!

Update: Many of you pointed me to an automated version of this idea, mutation testing. Thanks!