Speaking turns in your videocalls

Context

I’ve been co-hosting a speaking group about polyamory in Paris for the past few years. We used to organize the debate by writing down speech requests on a piece of paper and giving out the turns in order. The groups averaged between 60 and 100 people.

However, you have probably noticed that there’s a tiny pandemic going on. Those meetups thus had to stop without notice, leaving a big void in our social activities. Quickly enough, some of the co-hosts migrated online, to a videoconferencing system that I will not name but which starts with a Z… I don’t know if you have ever attended a videocall with 50 people, but with the latency and the background noise it’s impossible to just have an informal chat. So we tried to recreate the structure we had in our groups, since it worked well for us: speech requests, written down on a list, distributed in order.

We soon had to face a few issues: asking for a turn in this piece of software is not as easy as it sounds. At first we had to watch everybody’s thumbnails to check if somebody was raising their hand, but on large videocalls there can be several pages of thumbnails. Then we told attendees that they could ask for a turn in the text chat, which just added one more place for us to look 😉 Then the « raise hand » button was added, which works well when you can find it, so not everybody uses it. Once a request has been noticed, we write it down in a Google doc shared between hosts, then we announce the turn and remove the line from the doc. And on top of all that, we keep an eye on our speaking time counter to avoid monologues.
It works, but it’s quite a bit of work: it requires several pairs of eyes watching the screens, the text chat, and so on.

Making life simpler

I thought there was an opportunity to simplify all that with software. The goal would be to let participants request a speaking turn, which would automatically be added to a list that hosts could see and reorder, and removed when the turn is over. Oh, and it should also manage speech duration, so that we don’t have to handle the counter on the side. And since our counter produces statistics by gender, we want to keep that feature in the new app.

Alright, we know what we need to do, we just have to do it. I started writing it at the end of 2020, and after a false start I’m happy to say that we’ve been using it for three meetings now, and it works rather well 🙂

The result

The app is at https://speakinglist.net. You can go create an event, and to simulate participants you can for example open a private browser window or test with other co-hosts. There’s also a how-to page that you can look at.

The admin interface that you end up on when creating an event has been designed to be displayed on a computer, while the participant interface has been designed to be used on a phone. This way participants can have the videocall on their computer and, in their hand, a button to request speaking turns. Of course, the participant interface can also be displayed on a computer; it works identically.

Statistics!

The fact that participants declare their social categories (gender and race for now, optional and selectable by organizers) allows for interesting statistics on how speaking time is distributed across the power structure. It also lets organizers reorder the waiting list to make sure it’s not always the same group of people who speak (hello white males, we see you). By objectively measuring speaking time, we can realize there’s an issue, which is the first step to solving it. For example, in our groups we have found that even though there are fewer men present, they usually speak more often and longer than women and non-binary folks in proportion to their numbers. I have also noticed that when we tell them at the beginning of the event that we measure this and that we’d like them to make an effort, the difference shrinks. As we say in engineering, « you can’t optimize what you can’t measure ».
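To make the idea concrete, here is a small illustrative sketch of the comparison behind these statistics: each group’s share of speaking time versus its share of attendees. It is not the app’s actual code, and the numbers are made up for the example.

```python
# Illustrative only: compare each group's share of speaking time with its
# share of attendees. The numbers below are invented for the example.
speaking_seconds = {"men": 1800, "women": 2100, "non-binary": 300}
attendees = {"men": 10, "women": 18, "non-binary": 2}

total_time = sum(speaking_seconds.values())
total_people = sum(attendees.values())

for group in speaking_seconds:
    time_share = speaking_seconds[group] / total_time
    people_share = attendees[group] / total_people
    # A ratio above 1 means the group speaks more than its headcount would suggest.
    print(f"{group}: {time_share:.0%} of speaking time for "
          f"{people_share:.0%} of attendees (ratio {time_share / people_share:.2f})")
```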

It’s Geeky time

For me, writing a program for a non-profit organization is always an opportunity to learn new tech and software libraries that I don’t know yet. Here, the main challenge is real-time. When we give a turn to someone, it has to be instantly reflected in their interface. For that we prefer pushing over polling, which is heavier on resources and, more importantly, much slower.

In the world of the web, this means WebSockets, which I had never gotten to play with before. Yay! 🙂 I took this opportunity to learn a backend framework that natively handles asynchronous operations. I started with FastAPI, which I had been wanting to test for a while, and we’ll see later why I didn’t keep it.

I don’t get that many opportunities to test new tech, so I try to cram as many interesting things as I can into these projects. I’ve also tried GraphQL as the API protocol and I must say I’m very happy with it. On top of that, GraphQL already includes a push mechanism, called subscriptions, which is a very good fit here. However, FastAPI uses a version of Starlette that does not handle GraphQL subscriptions, so I dropped FastAPI and used Starlette directly.
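To give a rough idea of what push looks like on the server side, here is a bare-bones Starlette WebSocket sketch. It is not the app’s actual code (the real thing speaks GraphQL subscriptions over the socket), and the endpoint and payload names are made up for the example.

```python
# A minimal Starlette WebSocket endpoint that pushes a (hard-coded) waiting
# list to the client. In the real flow, we would wait for "list changed"
# events and push each new state instead of sending once and closing.
from starlette.applications import Starlette
from starlette.routing import WebSocketRoute
from starlette.websockets import WebSocket

async def list_updates(websocket: WebSocket):
    await websocket.accept()
    await websocket.send_json({"waiting_list": ["participant-1", "participant-2"]})
    await websocket.close()

app = Starlette(routes=[WebSocketRoute("/updates", list_updates)])
```

Run it with Uvicorn, and any client connected to /updates receives the new state as soon as the server sends it, without having to poll.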

Async is beautiful when it works, but not everything is ready for native async just yet in the Python world. I wanted to use the SQLAlchemy ORM, because the database library that Starlette offers doesn’t do ORM, and I do like having an ORM. Unfortunately, SQLAlchemy does not handle async yet (it should arrive in the next version, 1.4). So I used it in normal blocking mode, figuring that database queries are pretty fast since the database is on the same server. It should all work fiiiiine.

It didn’t. Well, it did until there were more than 5 simultaneous connections, at which point SQLAlchemy’s connection pool was exhausted and the next request blocked until a connection was freed. And in async mode that never happens: the blocking call also blocks the event loop, so no other request can finish and release its connection. When it’s blocked, it’s blocked 🙂 To sum it up, on the first real-world use, the program fell over like a big sea lion drunk on Bavarian beer. Should I have stress-tested the app? Absolutely.

I more or less worked around it by raising the connection pool to 50 and crossing my fingers that there are never more than 50 simultaneous database requests. I stress-tested it, and it works fine with several hundred simultaneous users. If we ever do meetups that big, I think we’ll have other problems first. But yeah, I’ll be happier when SQLAlchemy handles async natively. Until 1.4 is released, this will have to do.
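For reference, the workaround boils down to passing a larger pool size to SQLAlchemy’s engine. A minimal sketch, assuming a local PostgreSQL database; the connection string and the overflow/timeout values are examples, not necessarily what the app uses:

```python
# Enlarge SQLAlchemy's connection pool so that blocking queries issued from
# async handlers are less likely to exhaust it (the default pool_size is 5).
from sqlalchemy import create_engine

engine = create_engine(
    "postgresql://user:password@localhost/speakinglist",  # hypothetical DSN
    pool_size=50,      # the workaround described above
    max_overflow=10,   # allow a few extra connections under load
    pool_timeout=30,   # seconds to wait for a free connection before failing
)
```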

On the frontend side, it’s all classics: React with Customize-CRA, Material-UI and Apollo for GraphQL, all written in TypeScript. I must say that I am very pleasantly surprised by Apollo: it allowed me to avoid Redux entirely!

The app is obviously under a Free Software license, the AGPL. The documentation isn’t very detailed, but the stack is basically composed of an SQL server, the async Python web server Uvicorn running the app, Nginx in front of it all, and Redis in the backend to handle message queuing. The source code is here, and I provide a few configuration examples for deployment.

To conclude

It now works rather well. Not everybody uses the app during events, but we can manually add reluctant participants and handle their requests like the others in the waiting list. The group seems happy about it, I’m happy about it, and I think it’s mature enough to be used by others.

Feel free to tell me what you think of it, here in the comments or by email. If you want to help, I think that the app could use a graphic designer’s skills for a new logo, colors, fonts, etc. It’s currently translated into French and English, but I’m sure the English translation could really be improved. You can also add other languages if you feel like it. I accept contributions in the form of code too, of course, if you can make sense of mine 😉

I am looking for feedback, please tell me if you use it, with which group sizes, and what you think of it. I wish you good videocalls!

Reviews are hard

It’s a vast subject, but one thing is certain: reviewing other people’s code is hard, because good mentoring requires technical and non-technical skills (such as patience).

I would like to dive directly into a specific detail of code reviews. It’s an iterative process: the author submits code for review, the reviewer makes suggestions, the author amends or pushes more code, the reviewer makes different or further suggestions, and so on.

In Git, « more code » takes the form of one or more commits appended to the Pull Request (or Merge Request if you use GitLab; for simplicity I’ll just use « Pull Request » in this piece). And « amended code » means overwriting existing commits and force-pushing, which makes the old commits disappear.

As a reviewer, I find this very annoying, because the first thing I look for in an update is whether my suggestions have been implemented or not, and how. That’s why authors are sometimes encouraged to push new commits to their Pull Requests and never overwrite existing ones. It makes the reviewer’s job way easier, because the UI can just show the new commit and they’ll know what’s changed.

But this policy has drawbacks. When the Pull Request is merged (by fast-forward or not), it can leave awkward commits in the history, like « implement suggestions », « fix according to review », « review fixes again », and so on. And merging the PR by squashing it isn’t always appropriate: sometimes I do want to keep several commits, because they address different parts of the problem.

How can we solve this? Well, it would be nice if I could see the difference between the current state and the last state I reviewed, regardless of whether the author has amended their commits or not. And for that, I need a local copy of the commits I last reviewed. Fortunately, that’s one of the things Git is very good at: you just create a local branch pointing at the PR’s branch, and when the code changes you create another one. Then you can diff those branches and see what changed.
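As a rough sketch of that manual workflow (roughly what git-pr-branch automates), here is what it could look like in Python. It assumes a GitHub remote, which exposes every pull request under refs/pull/&lt;number&gt;/head; the PR number and branch names are just examples.

```python
# Snapshot the current head of a pull request into a local branch, so it
# survives a later force-push and can be diffed against newer snapshots.
import subprocess

def snapshot_pr(number: int, version: int) -> str:
    branch = f"review/{number}-v{version}"
    # On GitHub remotes, pull/<number>/head points at the PR's current head.
    subprocess.run(
        ["git", "fetch", "origin", f"pull/{number}/head:{branch}"],
        check=True,
    )
    return branch

# First review pass, then the author force-pushes, then a second pass:
# snapshot_pr(1234, 1)
# snapshot_pr(1234, 2)
# subprocess.run(["git", "diff", "review/1234-v1", "review/1234-v2"], check=True)
```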

OK, sounds simple enough. I have an itch, let me scratch it.

I present to you git-pr-branch! A « small » utility that creates branches from Pull Requests and does a few things with them. You’ll be able to automatically create the PR-based branches I just described. You’ll also be able to display a nice listing of the branches, their associated PR, the PR status (open or closed), and the PR URL to clickety-click. And since this can leave you with quite a lot of branches, there’s also a sub-command to clean all that up and delete the branches whose PR is closed.

Ironically, it’s hosted on GitLab but at the moment it only works with GitHub and Pagure. I’ll add GitLab support if I end up working more with GitLab (something tells me that’s likely to happen in the near future), but you can also send me a patch if you want it sooner.

While writing this side project, I discovered the fantastic Python library attrs. It’s really awesome, and I encourage you to try it out. (As always, my side projects are a good opportunity to try out new libraries or frameworks that I discover 😉)
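If you haven’t seen attrs before, here is a tiny illustrative example (not taken from git-pr-branch itself): you declare the fields, and attrs generates the __init__, __repr__ and comparison methods for you.

```python
import attr

@attr.s(auto_attribs=True, frozen=True)
class PullRequest:
    number: int
    branch: str
    closed: bool = False

pr = PullRequest(number=42, branch="review/42-v1")
print(pr)  # PullRequest(number=42, branch='review/42-v1', closed=False)
print(pr == PullRequest(42, "review/42-v1"))  # True, thanks to the generated __eq__
```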

The Python packages are on PyPI, and for the lazy Fedora and Mageia users out there I’ve made a COPR repository that you can enable on Fedora 32 and Mageia 7. Once installed, just run git-pr-branch (or git pr-branch) to discover the available commands.

Feel free to tell me what you think of the tool. Do you like the idea? Are you going to use it in your reviewer workflow? Did I bother writing code again when there is an obvious and better tool to do it? Let me know! 🙂

[EDIT] I’ve made a COPR repo, added the link.
[EDIT2] It now works with Pagure too, added the reference.

My experience of Flock 2017

Flock, the annual Fedora contributors’ conference, is now over. It took place on Cape Cod this year (near Boston, MA), and it was great once again.

It started with a keynote by our project leader Matt, who insisted on Fedora’s place in the diffusion of innovation. We are targeting the innovators and the early adopters, right up to the « chasm » (or « tipping point ») before early-majority adoption. This means two things:

  • on the one hand, we must not be so bleeding edge that we would only reach the innovators;
  • on the other hand, we must keep innovating constantly, otherwise we’re no longer relevant to our target audience.

As a consequence, we must not be afraid to break things sometimes, if that’s serving the purpose of innovation.

A lot of the talks and workshops were about two aspects of the distribution that are under heavy development right now:

  • modularity: the possibility of having different layers of the distribution moving at different speeds, for example an almost static base system with a frequently updated web stack on top of it.
  • continuous integration: the possibility of automatically running distro-wide tests as soon as a change is introduced in a package, to detect breakage early rather than in an alpha or beta phase.

Seeing where the distribution is going is always interesting, not only in itself but also because it reveals where my contributions would be most useful.

As always, Flock is an opportunity for me to meet and talk to the people I work with all year long, to share opinions and have hallway conversations about where our different projects are going (I had very interesting discussions with Jeremy Cline about fedmsg, for example), and to learn about the new tools that all the cool kids are using, which may make my workflow easier and more productive.

It’s also a great opportunity to help friends on things I can do, and to share knowledge. This year was the first one when I didn’t give a talk about HyperKitty, I guess that means it’s now mainstream 🙂

Instead, I gave a workshop on Fedora Hubs, our collaboration center for the Fedora community. If you don’t know what Fedora Hubs is, I suggest you check out Mizmo’s blog post and Hubs’ project page. The purpose of the workshop was to teach attendees how to write a basic but useful widget for Fedora Hubs. I wrote the whole workshop as an online tutorial, for several reasons:

  • People can go through it at their own pace
  • My time is freed up to walk among the trainees, answer their questions and help them directly
  • Attendees can go back to it after Flock if they need to or if they haven’t completed it in time
  • It can be re-used outside of Flock (for example, by you right now 😉 )

I believe it’s a better way to teach people (see the Khan Academy founder’s TED talk): the teacher’s time is better spent answering questions and having direct interactions with attendees rather than doing non-interactive things like lecturing.

There were about 10 people in the workshop, and 4 of them completed the tutorial in time, which is pretty good considering the conditions (other talks and workshops going on at the same time, bandwidth problems, etc.)

Also, I’m getting more and more interested in the teaching and mentoring aspect of software engineering. I like doing it, and I get good feedback when I do. That’s clearly a path for me to explore, although it’s still a bit stressful (but that’s usually a good sign: it means I’m taking it seriously). I don’t want to switch to it entirely, but having some more of it on my plate would be nice, I think. The Outreachy program is very appealing to me; it would align perfectly with my other social commitments. I remember there’s also an NGO that offers software training for refugees in Paris, and I’ll investigate that too.


The workshop on Fedora Hubs at Flock 2017 will be awesome

TL;DR: come to the Hubs workshop at Flock! 🙂

This is a shameless plug, I admit.

In a couple of weeks, a fair number of people from the Fedora community will gather near Boston for the annual Flock conference. We’ll be able to catch up and work together face-to-face, which does not happen so often in Free Software.

For some months now I’ve been working on the Fedora Hubs project, a web interface to make communication and collaboration easier for Fedora contributors. It really has the potential to change the game for a lot of us who still find some internal processes a bit tedious, and to greatly help new contributors.

The Fedora Hubs page is a personalized user or group page composed of many widgets, each of which can inform you, remind you of something, or help you tackle some part of your life as a contributor to the Fedora project. And it updates in real time.

I’ll be giving a workshop on Wednesday the 30th at 2:00 PM to introduce developers to Hubs widgets. In half an hour, I’ll show you how to make a basic widget that will already be directly useful to you if you’re a packager. Then you’ll be able to join us in the following hackfest and contribute to Hubs. Maybe you have a great idea for a widget that would simplify your workflow; if so, that will be the perfect time to design and/or write it.

You need to know Python and be familiar with basic web technologies: HTML and CSS, requests and responses, etc. No JavaScript knowledge is needed at this point, but if you want to make a complex widget you’ll probably need to write some JS (jQuery or React). The Hubs team will be around to help and guide you.

The script of the workshop is here: https://docs.pagure.org/fedora-hubs-widget-workshop/. Feel free to test it out and tell me if something goes wrong in your environment. You can also play with our devel Hubs instance, which will probably give you some ideas for the hackfest.

Remember folks: Hubs is a great tool, it will (hopefully) be central to contributors’ workflows throughout the Fedora project, and it’s the perfect time to design and write the widgets that will be useful for everyone. I hope to see you there! 🙂

React.js is pretty cool

These days I’ve been working on Fedora Hubs; it’s a Python (Flask) application with a React.js frontend. I know Python quite well by now, but this is the first time I’ve dabbled in React.js. I must say I’m pretty impressed. It solves a lot of the issues I’ve had with dynamic web development over the last few years. And it manages to make writing JavaScript almost enjoyable, which is no small feat! 😉

I’m still wrestling with Webpack and ES6, but I’ll get there eventually. React is really a great way to build UIs. Plus some people are writing the Bootstrap components in React, so this is very promising.