
Celebrating the Role of Academics in a Startup Ecosystem

Oh, how I miss Korea!

Yesterday I had the opportunity to present the Montreal AI ecosystem and its research culture at a conference organized by the Korea Development Institute.

Here’s what I said, in a nutshell: I feel we’ve been doing a pretty good job of nurturing human-to-human relationships across the industrial / startup / academic boundary.

Personally, I’m grateful for our special vibe. I feel I regularly get the opportunity to have fruitful exchanges with academic researchers and graduate students even though I’m not a part of their world. These discussions typically flow in both directions: I’m as excited to learn about their research as they are to hear about my startup struggles. Their insights and creativity give me a regular boost.

Historically, we’ve seen academics in a startup ecosystem as purveyors of the initial idea. We need them to invent our deep tech, but then they can take a back seat while entrepreneurs convert their idea into a commercial product.

I don’t like this. I think it’s a reductive view. For me, the real value academics can bring to a startup ecosystem is through their unique blend of creativity and broad expertise. They have the rare ability to approach problems with wild ideas that are nonetheless technologically feasible. This is a skill you need again, and again, and again as you build a startup. You need this ten times more than you need a brilliant initial idea. Pivot is the name of the game, so creative resilience is the key. Academics have plenty of that.

To all of you researchers who help keep that special vibe alive: thank you!

More specifically, thanks to all of you who gave me a personal boost since I started Waverly: Yoshua Bengio, Blake Richards, Marc G. Bellemare, Graham Taylor, Anirudh Goyal, Dr. Sasha Luccioni, Dzmitry Bahdanau, Nicolas Chapados, Nicolas Le Roux, Joelle Pineau, Craig Reynolds, Vicky Kaspi, Kory W. Mathewson, Irina Rish, Eilif Benjamin Muller, Pedro O. Pinheiro, Anqi Xu, Xiang Zhang, Glen Berseth, Michiel van de Panne, Charles Onu, Edith Law, Max Welling, Michael McCool, James O’Brien, Eugene Fiume, Pierre Poulin, Hugo Larochelle.

BTW, I’m not saying it’s unique to Montreal — some of the researchers named above are from across the world — but I feel that spirit is alive and well here.


To the Founders Who Show Up

One of the things you have to do again and again, as a startup founder, is lay your dream raw, on the table, in front of a group of strong-willed people who will critique it. They may be advisors, investors, potential customers… But you will have to do this constantly.

You’ll have to do this no matter what state the company is in — fresh from a new release that is picking up steam, in a lull as you struggle to bring users back, or in the middle of a pivot, still searching for the right words to talk about what you want to do…

You’ll have to do this no matter which emotional state you’re in.

These meetings are often scheduled weeks in advance and each of them could be the opportunity that unlocks the next stage for you.

You have to show up no matter what.

It’s hard. You can do it by building an armor that lets you hold strong when someone decides to take a stab at your dream. Or you can show up as your authentic, vulnerable self, accepting that pain will be a part of the journey.

As a founder — as a human — I’ve learned that I can only be successful if I show up without artifice. I therefore lay my dream raw, for you to examine and critique. I’m there to listen and learn… and if your advice hurts I will accept the pain and leverage my support network to get back up.

If you sit across from a founder and are called to critique what they present, please be fully honest. Please tell it like you see it. That’s why we seek your advice. But please, also, bring in that human touch and recognize the challenge the person across the table might have to face.

And to the founders out there, kudos for showing up.


A New Type of Sudoku

On Waverly, one of my most idiosyncratic waves is Puzzles by Humans. It allowed me to discover a hidden world of creative puzzlers. Amongst them, I found a growing group fueling what is now known as the golden age of sudoku. These so-called setters invent countless sudoku variants that can be mixed and matched to create puzzles which force solvers to come up with original deductions.

Leading the popularization of that new puzzling form is the excellent YouTube channel Cracking the Cryptic. It led me to the (very difficult to navigate) German puzzle portal Logic Masters, which seems to be the birthplace of every new sudoku variant.

Some of these new types of puzzle rely on arithmetic, which I’m not a fan of because it requires solvers to memorize frequent sums. This feels like memorizing frequent definitions in crossword puzzles, which is not what I enjoy in problem solving.

However, there are some clever new variants that don’t require you to memorize anything, just to put on your logician hat and prove some theorems. I couldn’t sit idly on the sidelines, so… I invented a new variant of my own! Behold the…

Ant Sudoku

  • Standard sudoku rules apply:
    digits can’t repeat in rows, columns, or 3×3 regions.
  • An ant starts on each of the shaded cells.
  • Each ant must be able to reach at least one of the circled cells with the same letter.
  • Ants must not be able to reach any other circled cell.
  • An ant can move from a cell to an orthogonally adjacent cell if that cell’s digit is less than or equal to the current digit plus 1 (see the sketch below).
    Ex: An ant on 4 can move to an orthogonally adjacent 5, 3, 2, or 1 but not to a 6, 7, 8, or 9.
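
For the programmatically inclined, here’s a minimal sketch of that movement rule (in Python, assuming a hypothetical 9×9 grid of digits; this isn’t part of the puzzle, just a way to make the rule precise):

```python
from collections import deque

def reachable_cells(grid, start):
    """Return the set of cells an ant starting at `start` can reach.

    grid: 9x9 list of lists of digits; start: (row, col).
    An ant may step to an orthogonally adjacent cell whose digit is
    at most the current cell's digit plus 1.
    """
    seen = {start}
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < 9 and 0 <= nc < 9 and (nr, nc) not in seen:
                if grid[nr][nc] <= grid[r][c] + 1:
                    seen.add((nr, nc))
                    queue.append((nr, nc))
    return seen
```

Checking a candidate solution then amounts to verifying that each ant’s reachable set contains a circled cell with its letter and no other circled cell.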

You can play this puzzle online either on F-Puzzles or Penpa+. If you want you can also download a PDF version and print it.

Hints

If you get stuck, here are some hints; just rot13-decode them (a decoding snippet follows the list):

  • Svaq pryyf fbzr bs gur nagf zhfg nofbyhgryl tb guebhtu.
  • Pna gjb cnguf rire gbhpu gurzfryirf?
  • Znxr fher lbh qba’g fgrc ba gur gbc sbhe.
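
If you’d rather not decode the hints by hand, Python’s standard library can do it; for example, with the second hint:

```python
import codecs

hint = "Pna gjb cnguf rire gbhpu gurzfryirf?"
print(codecs.decode(hint, "rot13"))  # rot13 is its own inverse
```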

Solution

Here’s the solution; don’t hesitate to give me feedback on Facebook or Twitter.


AI and Consciousness

NeurIPS, the most famous conference in AI these days, was born of the intentional collision of neuroscience and AI — a handful of researchers in both fields seeing value in getting inspired by one another.

My recent conversations with AI researchers, most notably with Yoshua Bengio, have thrown me on a collision course with another group of researchers, one often vilified by us, explorers of “hard” science: philosophers.

Indeed, one of Yoshua’s beliefs is that to increase the generality of AI we need to uncover more generic priors: general guidelines we could bake into our untrained models so that, as they start to observe data, they learn a more effective hierarchy of structures. Structures that allow them to apply their knowledge in more situations, or to adjust their model faster when new data is observed, or to be more robust when faced with contradictory data… In short, Yoshua (and many other researchers, myself included) believes that better and more generic priors could help tackle the challenges AI is facing these days.
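
To make the notion of a prior concrete with the simplest example I know (my own illustration, far simpler than the generic priors discussed above): a Gaussian prior over a model’s weights is just an extra term in the loss that shapes learning before any data is seen.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))          # hypothetical inputs
y = X @ np.array([1.0, 0, 0, 0, 0])    # only the first feature matters

def loss(w, prior_strength=0.01):
    data_term = np.mean((X @ w - y) ** 2)        # fit the observations
    prior_term = prior_strength * np.sum(w**2)   # Gaussian prior: prefer small weights
    return data_term + prior_term

print(loss(np.zeros(5)))                  # prior satisfied, data ignored
print(loss(np.array([1.0, 0, 0, 0, 0])))  # data fit, tiny prior penalty
```

Richer priors work through the same mechanism: a built-in preference that biases what the model learns.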

In an infamous paper, Yoshua called one of these very generic (and elusive) priors The Consciousness Prior. Some researchers were up in arms at the use of such a loaded term, accusing him of the academic equivalent of clickbait.

In my case, however, it just made me aware that I had no clue what consciousness was.

In the last few months, through a random encounter with an excellent popularizer of analytical philosophy, I dove deep into the topic. I gained a better understanding of terms like qualia, consciousness, dualism, illusionism, etc. Words that philosophers use to approach questions that we, hard scientists, don’t even dare to ask.

Beyond my own improved understanding of some non-scientific (yet very important) questions, I discovered a community of thoughtful thinkers who are not as enamoured with useless rhetorical debates as I had imagined.

I discovered, against my own biases, that philosophy offered a very valid approach to improving our understanding of the world.

The following article, which I discovered via Waverly, explores the difficult topic of emotions from a psychological and philosophical angle. Since we often talk about emotions when discussing Artificial General Intelligence, I felt the article might be interesting both to my AI friends and to my (soon to grow?) group of philosopher friends. Maybe we need a PhilIPS conference, creating an intentional collision between these two worlds? (I kinda like that name 😉)


Can AI Do Art? Are You Afraid It Could?

Two years ago I was sitting in a Belgian concert hall listening to the Brussels Philharmonic playing a series of original pieces. The composer? An AI created by Luxembourg company Aiva.

After the concert I mingled with the attendees. Most of the conversations were around this recurring question: Can AI really do art?

Despite the fact that we had just sat silently for more than an hour listening to very agreeable AI-made music, many found themselves passing a harsh judgement. Most comments were along these lines: « That’s not art, it’s only a pastiche of the great composers ».

What struck me, though, was not the conversations themselves, but the fact that we were all suddenly unified in our judgemental attitude. As is often the case when we pass judgement on someone else — or something else, in the case of AI — I felt we were collectively projecting our own fears. But fear of what?

I’d say it’s the fear of losing our supremacy over a trait that we strongly associate with our identity as human beings.

This would not be the first identity crisis caused by the relentless march of technology. Another example is illustrated by these words, uttered by world champion Lee Sedol as he lost his match against AlphaGo: “I’m sorry for letting humanity down,” he said, with tears in his voice.

But humans haven’t stopped playing Go since that famous defeat. On the contrary, they turned the AIs into allies in their quest to understand the game. Today, thanks to artificial intelligence, new Go openings are constantly being tested and mastered by humans.

Here’s another anecdote. In December 2018, famous cellist Yo-Yo Ma was speaking at the world’s largest conference on Artificial Intelligence. When asked about music and AI, he answered something along these lines: « I don’t care, because whenever I’m listening to music I look for the intentions of the human behind it. »

In his recent critique of « Beethoven X », a project to complete Beethoven’s unfinished tenth symphony, composer Jan Swafford notes something similar: « The ability of a machine to do or outdo something humans do is interesting once at most. We humans need to see the human doing it. »

Might it be that our fear comes from the fact that we see art as the artifact rather than as the intention of the human creating this artifact?

AI will definitely create music that you’ll find pleasing to listen to as you sit in a waiting room or as you drive your car. But, unless you can connect to the human behind that AI – to their intention, their struggles, their humanity – chances are you’ll soon forget about this music.

So can AI create art? To that I answer: who cares. It will never be able to disconnect me from my fellow humans and from the ways in which they try to communicate their humanity through the artifacts they create. That’s what I choose to call art.


Open Facebook to Researchers!

Amongst all the recent complaints against Facebook, the one I find the most problematic is the way in which internal employees have access to an exceptional experimental framework while researchers from outside the company are barred from it.

If Facebook is anything like Google, then its software engineers really are scientists constantly running counterfactual experiments. They deploy any new feature on a subset of users and measure whether the proposed change has an impact compared to a control group. This is hardcore science. It’s good to see companies embracing scientific practices to such an extent.
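
For readers unfamiliar with the practice, here’s a minimal sketch of the statistics such an experiment boils down to: a two-proportion z-test comparing a treatment group to a control group. The numbers are made up, and this is not any company’s actual tooling.

```python
from math import sqrt
from statistics import NormalDist

# Hypothetical results: clicks out of impressions, control vs. treatment.
control_clicks, control_n = 4_210, 100_000
treatment_clicks, treatment_n = 4_420, 100_000

p1 = control_clicks / control_n
p2 = treatment_clicks / treatment_n
pooled = (control_clicks + treatment_clicks) / (control_n + treatment_n)

# Two-proportion z-test: did the new feature change the click rate?
z = (p2 - p1) / sqrt(pooled * (1 - pooled) * (1 / control_n + 1 / treatment_n))
p_value = 2 * (1 - NormalDist().cdf(abs(z)))
print(f"z = {z:.2f}, p-value = {p_value:.4f}")
```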

What is not so good, however, is that external researchers can’t do anything remotely close to that. Their options are limited to:

  • Using analytic tools that offer them an external view onto Facebook. An example is CrowdTangle, acquired by Facebook in 2016 and recently “reorganized”, leading to the departure of its founder and long-time advocate of more transparency, Brandon Silverman. [1]
  • Crowdsourcing data gathering to an army of willing volunteers using a browser plug-in, and sometimes having to stop because Facebook threatens to sue. [2]

So, not only does Facebook block external researchers from operating on the same footing as its internal engineers, it seems to be going out of its way to make researchers’ lives harder.

There is no denying that Facebook has become a force that shapes society, but we’re mostly blind to the precise way in which it does it.

Do Facebook and its algorithms create filter bubbles? Polarization? Addiction? Infodemics? Doomscrolling? Social anxiety?

Maybe… Probably… I don’t know…

…but it’s precisely the fact that I don’t know and that I could know that is my biggest issue with Facebook.

We need to ask Facebook and all the other society-shaping tech giants to give researchers access to the tools they use internally. This is the very first step towards the transparency we deserve — if not as individuals, at least as a society.


This post was inspired by this recent piece on researchers using CrowdTangle to study local news on Facebook, especially by the fact that they had such a hard time gathering data and that they couldn’t derive causal relationships from their experiment.


Your Recommender System Is a Horny Teen

There is a story I like to tell about recommender systems… Someone on the YouTube team once told me that they ran an experimental recommender to decide which frame of a video should be used as a thumbnail, the goal being to maximize clicks. After letting that recommender learn from users’ behaviors, it converged to… porn! Ok, not quite porn, but the recommender learned that the more skin was visible in a thumbnail, the higher the likelihood of a click. Naturally the experiment was scrapped (thank God for human oversight), but it still goes to show that purely metric-driven recommender systems can land you in a very weird place…
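
To make that failure mode concrete, here’s a toy sketch of a click-driven thumbnail picker. The click probabilities are made up and this is not YouTube’s actual system; the point is that an epsilon-greedy bandit optimizing clicks alone will converge on whichever option draws the most clicks, regardless of what’s in it.

```python
import random

# Hypothetical click probabilities for three candidate thumbnails;
# the "more skin" option clicks best, so a click-only objective finds it.
true_ctr = {"landscape": 0.03, "face": 0.05, "skin": 0.09}
counts = dict.fromkeys(true_ctr, 1)  # one pseudo-trial per arm avoids div-by-zero
clicks = dict.fromkeys(true_ctr, 0)

def pick(epsilon=0.1):
    """Epsilon-greedy: usually exploit the best observed click rate."""
    if random.random() < epsilon:
        return random.choice(list(true_ctr))
    return max(true_ctr, key=lambda k: clicks[k] / counts[k])

for _ in range(100_000):
    thumb = pick()
    counts[thumb] += 1
    clicks[thumb] += random.random() < true_ctr[thumb]  # simulated user click

print(max(counts, key=counts.get))  # almost surely "skin"
```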


That’s what I feel is happening with Amazon’s recommender system picking the ads to run in my Facebook feed. The top picks systematically look like sex toys or, as in the example below, drugs. They are all excellent at triggering my curiosity — and I’m sure their metrics show a very high click-through rate for the average user — but they are pretty bad at convincing me Amazon is a great company…

[Image: sponsored Amazon.ca ad carousel]
Example of ads run by the Amazon Recommender System

The Phrase “Social Network” Is Trapping Us

As we’re integrating more human-to-human interaction into Waverly I’m getting a bit anxious. Will we end up building yet another social network? If not, what are we building?


In a recent discussion with Matthieu Dugal on The Waverly Podcast (en français), he pointed out that, even though more and more people get informed through their social networks, most people use them to nurture their social ties. Social networks are a bit like virtual bars where some of the customers are having a casual drink while others are lecturing them in all seriousness.

Why do we end up with these combined online platforms? A look at how they grow helps answer that question. They typically start as purely social spaces, but evolve over time as they attract a more diverse crowd of content publishers.

Combining an informational and a social space into the same platform makes it hard to know how to react to different pieces of content. The Guardian and The New York Times might increase their readership by publishing on Instagram, but their presence on the platform also contributes to the confusion. It requires mental effort to figure out that we should react differently to a piece of journalism than to the anxiety-loaded message of an anti-vax friend. The former should be processed for the information it contains, whereas the latter is best met with words of compassion.

Follow-up question: Why do most modern online platforms start as purely social spaces? Maybe because the phrase social network occupies too much space in our collective imagination. Thanks to our limited vocabulary, we believe the only thing we can do collectively, online, is to reinforce our social ties.

Despite the popularity of the phrase social network, it’s easy to find online spaces where people interact without trying to socialize. Wikipedia, for example, has an army of volunteers who update its pages. Some will become friends, but everyone understands that their primary goal is to build a “written compendium that contains information on all branches of knowledge”. Other examples abound: Stack Overflow, GitHub, Quora…

What should we call these? Collaborative platforms? If so, can we all agree to push on that phrase really hard so that it gains a foothold in our collective mind? I’d really like to see a different kind of online space flourish, and I believe it won’t happen unless we have the words to describe what we want to build.

In the meantime, that’s what I’ll do. I’ll build Waverly as a collaborative platform. I’ll make sure it feels like a space where communities gather around a joint mission — building healthier algorithms for all of us — rather than around their need to nurture social ties.


Why It’s So Hard to Trust the Machines

If a careless driver runs a red light and kills a pedestrian you might read about it in your local newspaper. If a self-driving car does the same, you’ll see it in every international news outlet.

When it comes to trusting machines, people often seem to have a ridiculously high bar. We reject machines even when they are statistically safer than humans at performing a given task. In fact, most experts agree that self-driving cars will need to be significantly safer than human drivers in order to gain social acceptance.

Why is it so hard to trust the machines? Let me venture a hypothesis…

If a person makes a poor decision, we assume that this person is flawed in some way. If a machine makes a poor decision, we assume all similar machines are broken. A human driver running a red light means that this driver is careless. A self-driving car doing the same instantly means that all self-driving cars are potentially broken.

We consider humans as separate beings while we see machines as copies made from the same blueprint. We’re not wrong. If a mobile device suffers from a security flaw, it’s fair to assume that all similar devices suffer from the same flaw.

In fact, this replicated nature of technology gives us the powerful ability to “patch” all the machines at once. We use such patches regularly to make our devices safer. Yet, at the same time, perfect replication might be precisely what makes it impossibly hard to trust the machines. Where an individual human being gives us a natural boundary beyond which we won’t extend our trust (or distrust), replicated machines have no such boundary.

Should we give up replication and purposefully build each machine slightly differently to make it easier for us to trust them? This sounds like a stupid idea, but it may not be. In fact, injecting artificial diversity may be required to help speed up social acceptance of automation.

In general, we know that diversity makes systems more resilient at the cost of slightly reducing their effectiveness. Plant diverse crops and you reduce the potential negative outcome of a disease outbreak.

What I’m proposing today is that, as paradoxical as it may seem, diversity might also help us build systems that are easier to trust, at the cost of slightly reducing their safety. Our tendency to maximize safety might be at odds with our desire to deploy trustworthy systems at scale. Instead of building a fleet of identical replicas that are easy to patch, we might be better off inserting artificial “fracture lines” that make it harder for distrust to spread.

More generally, I’ve always seen diversity as a way for us to stay humble in the face of what we’re creating. Welcoming diverse points of view necessarily means we will not put all our efforts and energy behind the optimal idea. However, it also means that we acknowledge our inability to predict the future. We acknowledge that there are unforeseen events that might throw a wrench into any seemingly optimal idea and that we’re better off pursuing many different paths simultaneously, even if some of these paths appear to be less efficient.

Just how much diversity do we need if we are to build trust between humans and machines? Would having a hundred different self-driving car models be enough? Would each car need its own random artificial DNA to ensure it makes decisions slightly differently from other cars? I don’t have the answer… But I do believe it is interesting to look at the lack of diversity in our replicated systems as a hurdle towards building trustworthy machines.
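
As a thought experiment, here’s what the simplest version of that “artificial DNA” might look like: derive a small, deterministic perturbation of a shared decision model from each unit’s serial number. Everything here is hypothetical (the function, the weights, the scale); it’s a sketch of the idea, not a real system.

```python
import hashlib
import random

def diversified_weights(shared_weights, serial_number, scale=0.01):
    """Perturb a shared model's weights with a small, deterministic,
    unit-specific jitter derived from the unit's serial number.

    Every unit stays close to the shared, well-tested model, but no
    two units make exactly the same decisions at the margin.
    """
    digest = hashlib.sha256(serial_number.encode()).digest()
    rng = random.Random(int.from_bytes(digest[:8], "big"))
    return [w + rng.gauss(0, scale) for w in shared_weights]

# Two cars from the same fleet end up with slightly different models.
base_weights = [0.7, -1.2, 0.05]
print(diversified_weights(base_weights, "CAR-0001"))
print(diversified_weights(base_weights, "CAR-0002"))
```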

Note: I’ve experimented with a featured image for this post, using a quote from the post in the image. The photo is from Jason Leung on Unsplash.


Solving Science

A topic I often come back to is how, in my opinion, scientists are not trying hard enough to solve the problems in the processes that drive modern science. I find that particularly sad given how some of the people I love and admire the most are scientists.

For me, like for many grad students, it started with a personal emotional crisis following the harsh comments of an anonymous reviewer #2. I was surprised at how a community of people who strived to make the world a better place was full of critics who didn’t seem to care that there was a human on the receiving end of their comments.

As I made my way through the academic ecosystem I started observing latent in-group / out-group dynamics in tightly knit sub-communities. These dynamics made it really hard for a newcomer to propose alternative approaches that would challenge the views of these sub-communities. Again, as a starry-eyed idealistic researcher, I got my fingers slapped, through reviews, in a way that felt very unfair.

My unease with these observations — and how strongly they clashed with my idealistic vision of science — turned me into a vocal advocate of greater experimentation in the academic processes.

At that point in my postdoc I got this advice from a successful prof: “If you keep worrying about the process you’ll never be a good researcher. Focus on the science.” She was right. In fact, my inability to stop caring about the process is partly why I gave up on the academic track…

Yet if researchers give up on the process, who will care?

Right now, it seems to be the funding agencies. The ones that gave us impact factors and h-index and a whole slew of bibliometric methods. They turned scientific funding into a game with well-defined rules… and as a result they turned (some) scientists into players. Even though, deep down, most scientists would rather just be doing good science.

I don’t talk about this too much these days. For one, I’m out of the academic circuit (even though in my heart I very much still feel like a scientist 😊). But also because the last thing I want is to be confused with a proponent of anti-intellectualism. It’s quite the opposite: it’s because I love the spirit of science that I care about how it’s done.

What prompted this post was a discussion with Marie Lambert-Chan and Matthieu Dugal. Marie pointed me to this article. I couldn’t read it because of its paywall, but the subtitle makes me hopeful: “For the first time, a prestigious funder has explicitly told academics they must not include metrics when applying for grants.”