
A New Type of Sudoku

On Waverly, one of my most idiosyncratic waves is Puzzles by Humans. It allowed me to discover a hidden world of creative puzzlers. Amongst them, I found a growing group fueling what is now known as the golden age of sudoku. These so-called setters are inventing countless sudoku variants that can be mixed and matched to create puzzles that force solvers to come up with original deductions.

Leading the popularization of this new puzzling form is the excellent YouTube channel Cracking the Cryptic. It led me to the (very difficult to navigate) German puzzle portal Logic Masters, which seems to be the birthplace of every new sudoku variant.

Some of these new types of puzzle rely on arithmetic, which I’m not a fan of because it requires solvers to memorize common sums. This feels like memorizing frequent definitions in crossword puzzles, which is not what I enjoy in problem solving.

However, some clever new variants don’t require you to memorize anything, just to put on your logician hat and prove some theorems. I couldn’t sit idly on the sidelines so… I invented one new such variant! Behold the…

Ant Sudoku

  • Standard sudoku rules apply.
    Digits can’t repeat in rows, columns, or 3×3 regions.
  • An ant starts on each of the shaded cells.
  • Each ant must be able to reach at least one of the circled cells with the same letter.
  • Ants must not be able to reach any other circled cell.
  • An ant can move from a cell to an orthogonally adjacent cell if that cell’s digit is less than or equal to the current cell’s digit plus 1.
    Ex: An ant on a 4 can move to an orthogonally adjacent 5, 3, 2, or 1, but not to a 6, 7, 8, or 9.
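
For the algorithmically inclined, here’s the movement rule as a reachability check. This is a minimal sketch, assuming the solved grid is given as a 9×9 list of lists of digits (the function name and representation are my own, not part of the puzzle):

```python
from collections import deque

def reachable(grid, start):
    """Return the set of cells an ant can reach from `start`.

    grid:  9x9 list of lists of digits 1-9
    start: (row, col) of the ant's shaded cell
    Rule: from a cell with digit d, the ant may step to an orthogonally
    adjacent cell whose digit is at most d + 1.
    """
    seen = {start}
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < 9 and 0 <= nc < 9 and (nr, nc) not in seen:
                if grid[nr][nc] <= grid[r][c] + 1:
                    seen.add((nr, nc))
                    queue.append((nr, nc))
    return seen
```

A candidate solution satisfies the ant constraints when, for each shaded cell, `reachable` contains at least one circled cell with the matching letter and no other circled cell.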

You can play this puzzle online on either F-Puzzles or Penpa+. If you prefer, you can also download a PDF version and print it.

Hints

If you get stuck, here are some hints; just ROT13-decode them:

  • Svaq pryyf fbzr bs gur nagf zhfg nofbyhgryl tb guebhtu.
  • Pna gjb cnguf rire gbhpu gurzfryirf?
  • Znxr fher lbh qba’g fgrc ba gur gbc sbhe.
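
If you’d rather not decode the hints by hand, Python’s standard codecs module handles ROT13 (which is its own inverse):

```python
import codecs

hint = "Svaq pryyf fbzr bs gur nagf zhfg nofbyhgryl tb guebhtu."
print(codecs.decode(hint, "rot13"))  # prints the first hint in plain English
```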

Solution

Here’s the solution; don’t hesitate to give me feedback on Facebook or Twitter.


AI and Consciousness

NeurIPS, the most famous conference in AI these days, was born of the intentional collision of neuroscience and AI — a handful of researchers in both fields seeing value in getting inspired by one another.

My recent conversations with AI researchers, most notably with Yoshua Bengio, have thrown me on a collision course with another group of researchers, one often vilified by us explorers of “hard” science: philosophers.

Indeed, one of Yoshua’s beliefs is that to increase the generality of AI we need to uncover more generic priors: general guidelines we could bake into our untrained models so that, as they start to observe data, they learn a more effective hierarchy of structures. Structures that allow them to apply their knowledge in more situations, to adjust their model faster when new data is observed, or to be more robust when faced with contradictory data… In short, Yoshua, like many other researchers (myself included), believes that better and more generic priors could help tackle the challenges AI is facing these days.

In an infamous paper, Yoshua called one of these very generic (and elusive) priors The Consciousness Prior. Some researchers were up in arms at the use of such a loaded term, accusing him of the academic equivalent of clickbait.

In my case, however, it just made me aware that I had no clue what consciousness was.

In the last few months, through a random encounter with an excellent popularizer of analytical philosophy, I dove deep into the topic. I gained a better understanding of terms like qualia, consciousness, dualism, illusionism, etc. Words that philosophers use to approach questions that we hard scientists don’t even dare to ask.

Beyond my own improved understanding of some non-scientific (yet very important) questions, I discovered a community of thoughtful thinkers who are not as enamoured with useless rhetorical debates as I had imagined.

I discovered, against my own biases, that philosophy offered a very valid approach to improving our understanding of the world.

The following article, which I discovered via Waverly, explores the difficult topic of emotions from a psychological and philosophical angle. Since we often talk about emotions when discussing Artificial General Intelligence, I felt the article might be interesting both to my AI friends and to my (soon to grow?) group of philosopher friends. Maybe we need a PhilIPS conference, creating an intentional collision between these two worlds? (I kinda like that name 😉)


Can AI Do Art? Are You Afraid It Could?

Two years ago I was sitting in a Belgian concert hall listening to the Brussels Philharmonic playing a series of original pieces. The composer? An AI created by Luxembourg company Aiva.

After the concert I mingled with the attendees. Most of the conversations were around this recurring question: Can AI really do art?

Despite the fact that we had just sat silently for more than an hour listening to very agreeable AI-made music, many found themselves passing a harsh judgement. Most comments were along these lines: « That’s not art, it’s only a pastiche of the great composers ».

What struck me, though, was not the conversations themselves, but the fact that we were all suddenly unified in our judgemental attitude. As is often the case when we pass judgement on someone else — or something else, in the case of AI — I felt we were collectively projecting our own fears. But fear of what?

I’d say it’s the fear of losing our supremacy over a trait that we strongly associate with our identity as human beings.

This would not be the first identity crisis caused by the relentless march of technology. Another example is illustrated by these words, uttered by world champion Lee Sedol as he lost his match against AlphaGo: “I’m sorry for letting humanity down,” he said, with tears in his voice.

But humans haven’t stopped playing Go since that famous defeat. On the contrary, they converted the AIs into allies in their pursuit to understand the game. Today, thanks to artificial intelligence, new Go openings are constantly being tested and mastered by humans.

Here’s another anecdote. In December 2018, famous cellist Yo-Yo Ma was speaking at the world’s largest conference on Artificial Intelligence. When asked about music and AI, he answered something along these lines: « I don’t care, because whenever I’m listening to music I look for the intentions of the human behind it. »

In his recent critique of « Beethoven X », a project to complete Beethoven’s unfinished tenth symphony, composer Jan Swafford notes something similar: « The ability of a machine to do or outdo something humans do is interesting once at most. We humans need to see the human doing it. »

Might it be that our fear comes from the fact that we see art as the artifact rather than as the intention of the human creating this artifact?

AI will definitely create music that you’ll find pleasing to listen to as you sit in a waiting room or as you drive your car. But, unless you can connect to the human behind that AI – to their intention, their struggles, their humanity – chances are you’ll soon forget about this music.

So can AI create art? To that I answer: who cares. It will never be able to disconnect me from my fellow humans and from the ways in which they try to communicate their humanity through the artifacts they create. That’s what I choose to call art.


Open Facebook to Researchers!

Amongst all the recent complaints against Facebook, the one I find the most problematic is the way its employees have access to an exceptional experimental framework while researchers from outside the company are barred from it.

If Facebook is anything like Google, then its software engineers really are scientists constantly running counterfactual experiments. They deploy any new feature on a subset of users and measure whether the proposed change has an impact when compared to a control group. This is hardcore science. It’s good to see companies embracing scientific practices to such an extent.
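
To make the setup concrete, here’s a minimal sketch of such a counterfactual comparison: a two-proportion z-test on click-through rates. The numbers and the function are my own illustration, not Facebook’s or Google’s actual tooling:

```python
from math import sqrt
from statistics import NormalDist

def ab_test(clicks_a, n_a, clicks_b, n_b):
    """Two-sided z-test: did variant B change the click rate vs. control A?"""
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    p = (clicks_a + clicks_b) / (n_a + n_b)        # pooled rate under the null
    se = sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))   # std. error of the difference
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p_b - p_a, p_value

# Hypothetical traffic split: 100k users see the control, 100k the new feature.
lift, p = ab_test(clicks_a=4_210, n_a=100_000, clicks_b=4_515, n_b=100_000)
print(f"lift = {lift:+.4f}, p = {p:.4f}")
```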

What is not so good, however, is that external researchers can’t do anything remotely close to that. Their options are limited to:

  • Using analytic tools that offer them an external view onto Facebook. An example is CrowdTangle, acquired by Facebook in 2016 and recently “reorganized”, leading to the departure of its founder and long-time advocate of more transparency, Brandon Silverman. [1]
  • Crowdsourcing data gathering to an army of willing volunteers using a browser plug-in, and sometimes having to stop because Facebook threatens to sue. [2]

So, not only does Facebook block external researchers from operating on the same footing as its internal engineers, it seems to be going out of its way to make researchers’ lives harder.

There is no denying that Facebook has become a force that shapes society, but we’re mostly blind to the precise way in which it does it.

Do Facebook and its algorithms create filter bubbles? Polarization? Addiction? Infodemics? Doomscrolling? Social anxiety?

Maybe… Probably… I don’t know…

…but it’s precisely the fact that I don’t know and that I could know that is my biggest issue with Facebook.

We need to ask Facebook and all the other society-shaping tech giants to give researchers access to the tools they use internally. This is the very first step towards the transparency we deserve — if not as individuals, at least as a society.


This post was inspired by this recent piece on researchers using CrowdTangle to study local news on Facebook, especially by the fact that they had such a hard time gathering data and that they couldn’t derive causal relationships from their experiment.


Your Recommender System Is a Horny Teen

There is a story I like to tell about recommender systems… Someone on the YouTube team once told me that they ran an experimental recommender to decide which frame of a video should be used as a thumbnail, the goal being to maximize clicks. After letting that recommender learn from users’ behaviors, it converged to… Porn! Ok, not quite porn, but the recommender learned that the more skin was visible in a thumbnail, the higher the likelihood of a click. Naturally the experiment was scrapped (thank God for human oversight), but it still goes to show that purely metric-driven recommender systems can land you in a very weird place…
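
For the curious, a system like that is essentially a bandit maximizing click-through rate. Here’s a toy epsilon-greedy sketch (entirely my own illustration, not YouTube’s actual system) that converges to whichever thumbnail gets clicked most, with no notion of whether that thumbnail is appropriate:

```python
import random

def pick_thumbnail(frames, get_click, steps=10_000, eps=0.1):
    """Epsilon-greedy bandit over candidate thumbnail frames.

    frames:    list of candidate frame ids
    get_click: callable(frame) -> bool, one simulated user impression
    """
    shows = {f: 0 for f in frames}
    clicks = {f: 0 for f in frames}
    # Click-through rate so far; unseen frames start optimistic so each gets tried.
    ctr = lambda f: clicks[f] / shows[f] if shows[f] else 1.0
    for _ in range(steps):
        f = random.choice(frames) if random.random() < eps else max(frames, key=ctr)
        shows[f] += 1
        clicks[f] += get_click(f)
    return max(frames, key=ctr)
```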


That’s what I feel is happening with Amazon’s recommender system picking the ads to run on my Facebook stream. The top picks systematically look like sex toys or, as in the example below, drugs. They are all excellent at triggering my curiosity — and I’m sure their metrics show a very high average click-through rate — but they are pretty bad at convincing me Amazon is a great company…

Example of ads run by the Amazon recommender system

The Phrase “Social Network” Is Trapping Us

As we’re integrating more human-to-human interaction into Waverly I’m getting a bit anxious. Will we end up building yet another social network? If not, what are we building?


In a recent discussion with Matthieu Dugal on The Waverly Podcast (in French), he pointed out that, even though more and more people get informed through their social networks, most people use them to nurture their social ties. Social networks are a bit like virtual bars where some of the customers are having a casual drink while others are lecturing them in all seriousness.

Why do we end up with these combined online platforms? A look at how they grow helps answer that question. They typically start as purely social spaces, but evolve over time as they attract a more diverse crowd of content publishers.

Combining an informational and a social space into the same platform makes it hard to know how to react to different pieces of content. The Guardian and The New York Times might increase their readership by publishing on Instagram, but their presence on the platform also contributes to the confusion. It requires mental effort to figure out that we should react differently to a piece of journalism than to the anxiety-loaded message of an anti-vax friend. The former should be processed for the information it contains, whereas the latter is best met with words of compassion.

Followup question: Why do most modern online platforms start as purely social spaces? Maybe because the phrase social network occupies too much space in our collective imagination. Thanks to our limited vocabulary, we believe the only thing we can do collectively, online, is to reinforce our social ties.

Despite the popularity of the phrase social networks, it’s easy to find online spaces where people interact without trying to socialize. Wikipedia, for example, has an army of volunteers who update its pages. Some will become friends, but everyone understands that their primary goal is to build a “written compendium that contains information on all branches of knowledge”. Other examples abound: Stack Overflow, GitHub, Quora…

What should we call these? Collaborative platforms? If so, can we all agree to push on that phrase really hard so that it gains a foothold in our collective mind? I’d really like to see a different kind of online space flourish, and I believe it won’t happen unless we have some words to describe what we want to build.

In the meantime, that’s what I’ll do. I’ll build Waverly as a collaborative platform. I’ll make sure it feels like a space where communities gather around a joint mission — building healthier algorithms for all of us — rather than around their need to nurture social ties.


Why It’s So Hard to Trust the Machines

If a careless driver runs a red light and kills a pedestrian you might read about it in your local newspaper. If a self-driving car does the same, you’ll see it in every international news outlet.

When it comes to trusting machines, people often seem to have a ridiculously high bar. We reject machines even when they are statistically safer than humans at performing a given task. In fact, most experts agree that self-driving cars will need to be significantly safer than human drivers in order to gain social acceptance.

Why is it so hard to trust the machines? Let me venture a hypothesis…

If a person makes a poor decision, we assume that this person is flawed in some way. If a machine makes a poor decision, we assume all similar machines are broken. A human driver running a red light means that this driver is careless. A self-driving car doing the same instantly means that all self-driving cars are potentially broken.

We consider humans as separate beings while we see machines as copies made from the same blueprint. We’re not wrong. If a mobile device suffers from a security flaw, it’s fair to assume that all similar devices suffer from the same flaw.

In fact, this replicated nature of technology gives us the powerful ability to “patch” all the machines at once. We use such patches regularly to make our devices safer. Yet, at the same time, perfect replication might be precisely what makes it impossibly hard to trust the machines. Where an individual human being gives us a natural boundary beyond which we won’t extend our trust (or distrust), replicated machines have no such boundary.

Should we give up replication and purposefully build each machine slightly differently to make it easier for us to trust them? This sounds like a stupid idea, but it may not be. In fact, injecting artificial diversity may be required to help speed up social acceptance of automation.

In general, we know that diversity makes systems more resilient at the cost of slightly reducing their effectiveness. Plant diverse crops and you reduce the potential negative outcome of a disease outbreak.

What I’m proposing today is that, as paradoxical as it may seem, diversity might also help us build systems that are easier to trust at the cost of slightly reducing their safety. Our tendency to maximize safety might be running at odds with our desire to deploy trustworthy systems at scale. Instead of building a fleet of identical replicas that are easy to patch, we might be better off inserting artificial “fracture lines” that make it harder for distrust to spread.

More generally, I’ve always seen diversity as a way for us to stay humble in the face of what we’re creating. Welcoming diverse points of view necessarily means we will not put all our efforts and energy behind the optimal idea. However, it also means that we acknowledge our inability to predict the future. We acknowledge that there are unforeseen events that might throw a wrench into any seemingly optimal idea and that we’re better off pursuing many different paths simultaneously, even if some of these paths appear to be less efficient.

Just how much diversity do we need if we are to build trust between humans and machines? Would having a hundred different self-driving car models be enough? Would each car need its random artificial DNA to ensure it makes decisions slightly differently from other cars? I don’t have the answer… But I do believe it is interesting to look at the lack of diversity in our replicated systems as a hurdle towards building trustworthy machines.

Note: I’ve experimented with a featured image for this post, using a quote from the post in the image. The photo is from Jason Leung on Unsplash.


Solving Science

A topic I often come back to is how, in my opinion, scientists are not trying hard enough to solve the problems in the processes that drive modern science. I find that particularly sad given how some of the people I love and admire the most are scientists.

For me, like for many grad students, it started with a personal emotional crisis following the harsh comments of an anonymous reviewer #2. I was surprised at how a community of people who strived to make the world a better place was full of critics who didn’t seem to care that there was a human on the receiving end of their comments.

As I made my way through the academic ecosystem I started observing latent in-group / out-group dynamics in tightly knit sub-communities. These dynamics made it really hard for a newcomer to propose alternative approaches that would challenge the views of these sub-communities. Again, as a starry-eyed idealistic researcher, I got my fingers slapped, through reviews, in a way that felt very unfair.

My unease with these observations — and how strongly they clashed with my idealistic vision of science — turned me into a vocal advocate of greater experimentation in the academic processes.

At that point in my postdoc I got this advice from a successful prof: “If you keep worrying about the process you’ll never be a good researcher. Focus on the science.” She was right. In fact, my inability to stop caring about the process is partly why I gave up on the academic track…

Yet if researchers give up on the process, who will care?

Right now, it seems to be the funding agencies. The ones that gave us impact factors and h-index and a whole slew of bibliometric methods. They turned scientific funding into a game with well-defined rules… and as a result they turned (some) scientists into players. Even though, deep down, most scientists would rather just be doing good science.

I don’t talk about this too much these days. For one, I’m out of the academic circuit (even though in my heart I very much still feel like a scientist 😊 ). But also because the last thing I want is to be confused with a proponent of anti-intellectualism. It’s quite the opposite: it’s because I love the spirit of science that I care about how it’s done.

What prompted this post was a discussion with Marie Lambert-Chan and Matthieu Dugal. Marie pointed me to this article. I couldn’t read it because of its paywall, but the subtitle makes me hopeful: “For the first time a prestigious funder has explicitly told academics they must not include metrics when applying for grants.”


Humans are not Pixels

Grateful that the world is made of more than pixels…

This week the entire Waverly team convened in Montréal. It was good. Really good. I wanted to share the story.

We started and grew Waverly during the pandemic. Which means we hired pixelated faces and then met these pixels daily in little Zoom squares.

Because it was the pandemic, we decided to hire the best possible people (sticking to a single time zone and country). As a result, our small team now spans from Rimouski to Windsor.

In the past year we built a never-before-seen technology — the world’s first natural-language-based recommendation system — and we managed to package it in a POC mobile app. No small feat. Yet most of us had never been in the same room together.

We fixed that bug.

This week we spent time in our Montréal office coding together. We took walks across Montréal’s delightfully sunny Plateau neighborhood to design the product. We ate arepas on picnic tables, we played board games that made us die laughing, we brewed Aman’s (our Windsor software engineer) special chai from raw ingredients…

We jumped out of the pixel world to be humans together.

This week I rediscovered a bunch of little things I would not have celebrated before but that made me realize the importance of apparently mundane moments. In the words of one of my favourite songs from Bénabar: “Le bonheur ça s’trouve pas en lingots, mais en petite monnaie.” — you won’t find happiness in a gold bar, but in pocket change.

I want to say it again. I’m grateful the world is made of flesh and blood humans, not pixels. I’ll be starting next week full of renewed energy knowing not only that we’re building an important product for the world, but that we’re doing it as a team of humans I deeply care about. A team that reinforces, for me, this undying belief that people are good and that we need more tools to help that goodness shine.

❤️


Healthy Social Media, Secrets of Pascal’s Triangle and Venus’ Tectonics

Welcome to this week’s Via Waverly, where I share diverse and unexpected finds that were served to me by Waverly.

Principles of Healthy Social Media

I stumbled on this research by New Public, an organization that wants to reimagine the Internet as a public space. (Via Fast Company.) They asked power users of major Internet platforms questions like Does the platform encourage people to treat one another humanely?

Based on the answers, the researchers came up with 14 principles for healthy social media. Here are some of my favorites:

  • Inviting everyone to participate
  • Encouraging the humanization of others
  • Building bridges between groups
  • Promoting thoughtful conversation

Wave: 🕸️ Better Web

Secrets hidden in Pascal’s Triangle

You know Pascal’s triangle, right? If I asked you, apart from 1, which number is the most frequent in the triangle and how often it appears, what would you say?

If you’re like me, you’d probably guess something like: “I have no idea which number it is, but it probably appears infinitely many times.” Well, thanks to Terence Tao and Waverly, I learned this week about Singmaster’s conjecture, which says that no number larger than 1 appears more than some fixed number of times. In fact, the current record holder is 3003, and it appears 8 times.
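
You can check the record for yourself with a small brute-force counter. It works because every entry of row n other than the 1s is at least n, so a number x can only show up in rows 0 through x (a sketch of my own, not from the article):

```python
from math import comb

def occurrences(x):
    """Count how many times x > 1 appears in Pascal's triangle."""
    count = 0
    for n in range(x + 1):
        # C(n, k) increases up to k = n // 2, so stop once it passes x.
        for k in range(1, n // 2 + 1):
            c = comb(n, k)
            if c == x:
                count += 1 if k == n - k else 2  # count the mirror C(n, n-k) too
                break
            if c > x:
                break
    return count

print(occurrences(3003))  # 8: C(3003,1), C(78,2), C(15,5), C(14,6) and their mirrors
```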

I always love it when strange numbers like 3003 appear in a conjecture. Makes maths feel like a wonderful and unexplored world.

Wave: 🧮 Math Geekiness

Tectonic Plates on Venus

What’s one thing Earth has that no other planet has? Tectonic plates! At least that’s what we thought until a couple of weeks ago, when scientists found evidence that Venus’s surface moves around.

Wave: 🌋 Geological Mysteries

A Feel-Good Math Story

I immersed myself in this heartfelt tribute from a son to his mathematician father.

For me, the symbols are mathematical madeleines. They remind me of the pads of paper that were scattered around our house, each full of my father’s scribblings—his version of the sandpiper tracks that had delighted him as a child. When I was a child myself, I would watch him on the couch, deep in thought, scratching away with a mechanical pencil. At some point, I thought that I might like to have a life like that.

Dan Rockmore

There’s something about the struggle of intellectuals that moves my heart. I connect with their desire to do the greatest work, the slow realization that they might not get there, and their human condition rising from the depths of their soul and making them fall in love again with the mundane.

Wave: 🧮 Math Geekiness

Rally, A New Privacy-First Platform

Mozilla just introduced Rally, a novel data sharing platform that puts privacy above everything else.

Today, we’re announcing the Mozilla Rally platform. Built for the browser with privacy and transparency at its core, Rally puts users in control of their data and empowers them to contribute their browsing data to crowdfund projects for a better Internet and a better society.

The goal seems to be to enable technology policy research by academics, who often do not have access to the data they need — this data being trapped in the walled gardens of online services. This objective reminds me quite a bit of data trusts, although the article doesn’t mention them.

Wave: ⚖️ Policies for people