What I learned this month: Github Actions and pre-commit

This is yet again another attempt to reboot the dev part of this blog. I’m not successful, but at least I’m persistent 😉


I’m kicking off a new series of posts. Every month, I plan on writing about the new stuff that I’ve discovered in the broader field of software development. It’s an attempt to share the knowledge that I may have gained during that time, and also to show the world that you can be a somewhat experienced software developer and still be discovering new stuff every month.

I have observed that I tend to discover the latest hype right around the time it stops being cool. If you’re like me, we can make up for this latency by sharing ideas. Also, it takes quite a bit of time to play with software that isn’t yet ready for anything besides its original intended use case. So, let’s not waste our time.

I don’t plan on writing well-researched articles, because aiming too high is a sure way for me to fail at building a habit, and that’s one of the main goals here. I’ll probably be a bit terse, with only the main links and my impressions. I hope you’ll understand. (and if you don’t, I’m not making you read this 😉 )

The two tools that I’ve selected for this month’s article are Github Actions and pre-commit.

Github Actions

D’oh, you might say. It’s been around for a while, but I hadn’t had a chance to use it yet, because in Fedora we run our CI on a Jenkins instance that the good CentOS folks provide us with.

But for some projects, it does not make much sense to run our unit tests on specific Fedora versions, and it’s always good to have an alternative. I’ve set up GitHub Actions to run CI on a couple of projects that I maintain, such as fedora-messaging and flask-healthz, to test-drive it. The Python SIG in Fedora also provides a GitHub Action to run tox on a Fedora container, which is nice.

Another use: since Dependabot has been eaten by GitHub, they have removed the feature to auto-merge dependency updates according to their semver classification. And I did not enjoy manually approving patch updates to my dependencies, since all I basically did was make sure CI passed. That’s a bot’s job. So I’ve set up a GitHub Action that either merges patch and minor updates (but not backwards-incompatible major versions), or just approves them and waits for Mergify to do the actual merge.
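For the curious, a workflow along these lines can be built on the dependabot/fetch-metadata action, which classifies the update by semver level. This is a hypothetical sketch, not my exact workflow:

```yaml
# Hypothetical sketch: auto-approve and auto-merge Dependabot PRs,
# skipping backwards-incompatible major version bumps.
name: Dependabot auto-merge
on: pull_request

permissions:
  contents: write
  pull-requests: write

jobs:
  automerge:
    runs-on: ubuntu-latest
    if: github.actor == 'dependabot[bot]'
    steps:
      - id: metadata
        uses: dependabot/fetch-metadata@v1
      - name: Approve and auto-merge patch and minor updates
        if: steps.metadata.outputs.update-type != 'version-update:semver-major'
        env:
          PR_URL: ${{ github.event.pull_request.html_url }}
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          gh pr review --approve "$PR_URL"
          gh pr merge --auto --merge "$PR_URL"
```

To only approve and let Mergify do the merge, you would drop the `gh pr merge` line.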

It’s nice, it’s fast, I like it. Maybe I’ll end up keeping our CentOS CI Jenkins instance only for integration tests.


pre-commit

I discovered pre-commit recently and I think it has potential. It’s a well-known fact of software development that the sooner you catch a bug, the less that bug costs you. Find a bug during development (or even while typing the code, thanks to linters embedded in your editor) and you’ll be much better off than if you found it after production deployment.

The point of pre-commit is to run checks before the code is committed to Git. That means fast checks such as linters and formatters, and probably not your entire unit test suite, but your mileage may vary. As for me, I usually run black, isort and flake8 from Visual Studio Code, so I catch issues even sooner. However, I’m not the only one working on my projects, and for those who don’t use advanced text editors, it is a nice safety net to run the checks before committing.
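For reference, a minimal .pre-commit-config.yaml wiring up those three tools could look like this (the pinned revisions are illustrative; `pre-commit autoupdate` will fetch current ones):

```yaml
# Minimal example configuration for the hooks mentioned above.
repos:
  - repo: https://github.com/psf/black
    rev: 20.8b1
    hooks:
      - id: black
  - repo: https://github.com/PyCQA/isort
    rev: 5.7.0
    hooks:
      - id: isort
  - repo: https://gitlab.com/pycqa/flake8
    rev: 3.8.4
    hooks:
      - id: flake8
```

Running `pre-commit install` once in the clone sets up the Git hook for every future commit.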

I’ve converted a couple projects to pre-commit, we’ll see how it goes.


That’s all for this first edition. Hopefully I’ll see you next month with more findings. In the meantime, happy hacking! 🙂

Speaking turns in your videocalls


I’ve been co-hosting a speaking group about polyamory in Paris for the past few years, in which we used to organize the debate by writing down speech requests on a piece of paper and distributing the turns in order. The groups averaged between 60 and 100 people.

However, you have probably noticed that there’s a tiny pandemic going on. Those meetups thus had to stop without notice, leaving a big void in our social activities. Quickly enough, some of the co-hosts migrated online, to a videoconferencing system that I will not name but which starts with a Z… I don’t know if you have ever attended a videocall with 50 people, but with the latency and the background noise it’s impossible to just have an informal chat. So we tried to recreate the structure we had in our groups, since it worked well for us: speech requests, written down on a list, distributed in order.

We soon had to face a few issues: asking for a turn in this piece of software is not as easy as it sounds. At first we had to watch everybody’s thumbnails to see if somebody was raising their hand, but on large videocalls there can be several pages of thumbnails. Then we told attendees that they could ask for a turn in the text chat, which just added one more place for us to watch 😉 Then the « raise hand » button was added, which works well when you can find it, so not everybody uses it. Once a request has been noticed, we write it down in a Google Doc shared between hosts, then we announce the turn and remove the line from the doc. And on top of that, we check our speaking-time counter to avoid monologues.
It works, but it’s quite a bit of work; it requires many pairs of eyes watching the screens and the text chat, etc.

Making life simpler

I thought there was an opportunity to simplify all that with software. The goal would be to let participants request a speaking turn, which would automatically be added to a list that hosts can see and reorder, and removed when the turn is over. Oh, and also to manage speech duration, so that we don’t have to handle the counter on the side. And since our counter produces gender-related statistics, we want to keep that feature in the new app.

Alright, we know what we need to do; we just have to do it. I started writing it at the end of 2020, and after a false start I’m happy to say that we’ve been using it for 3 meetings and that it works rather well 🙂

The result

The app is at https://speakinglist.net. You can go create an event, and to simulate participants you can for example open a private browser window or test with other co-hosts. There’s also a how-to page that you can look at.

The admin interface that you’ll end up on when creating an event has been designed to be displayed on a computer, while the participant interface has been designed to be used on a phone. This way participants can have the videocall on their computer and, in their hand, a button to request speaking turns. Of course, it is also possible to display the participant interface on a computer; it works identically.


The fact that participants declare their social categories (gender and race for now, optional and selectable by organizers) allows for the generation of interesting statistics on the distribution of speech between the two sides of the power structure. It also lets organizers reorder the waiting list to make sure it’s not always the same group of people who speak (hello white males, we see you). By objectively measuring speaking time, we can realize there’s an issue, which is the first step to solving it. For example, in our groups we have found that even though there are fewer men present, they usually speak more often and longer than women and non-binary folks, in proportion to their numbers. I have also noticed that when we tell them at the beginning of the event that we measure this and that we’d like them to make an effort, the difference is smaller. As we say in engineering, « you can’t optimize what you can’t measure ».
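A toy sketch of the kind of normalization involved: total speaking time and turn count per category, divided by how many people of that category are present. This is made-up sample data, not the app’s actual code:

```python
from collections import defaultdict

# Made-up sample data: (gender category, seconds spoken) per turn,
# and how many people of each category are present.
turns = [
    ("man", 150), ("man", 120), ("woman", 90), ("woman", 80), ("nonbinary", 60),
]
headcount = {"man": 2, "woman": 3, "nonbinary": 1}

# Accumulate total speaking time and number of turns per category.
time_by = defaultdict(int)
count_by = defaultdict(int)
for gender, seconds in turns:
    time_by[gender] += seconds
    count_by[gender] += 1

# Normalize by headcount: this is what reveals the imbalance even
# when one category simply has more people in the room.
stats = {
    gender: {
        "seconds_per_person": time_by[gender] / present,
        "turns_per_person": count_by[gender] / present,
    }
    for gender, present in headcount.items()
}
```

With this sample, men average 135 seconds per person versus about 57 for women, even though fewer men spoke in total time than everyone else combined.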

It’s Geeky time

For me, writing a program for a non-profit organization is always an opportunity to learn new tech and software libraries that I don’t know yet. Here, the main challenge is real-time: when we give the floor to someone, it has to be reflected instantly in their interface. For that, we prefer a push model over polling, which is both heavier on resources and, more importantly, much slower.
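As a toy illustration of that preference (plain asyncio, not the app’s actual code): with a push model, the participant reacts the instant the event arrives, instead of discovering it at the next poll interval.

```python
import asyncio

async def host(queue):
    # The host deliberates, then pushes the event.
    await asyncio.sleep(0.1)
    await queue.put("your turn!")

async def participant(queue):
    # Woken up the moment the event is pushed: no polling delay.
    return await queue.get()

async def main():
    queue = asyncio.Queue()
    _, event = await asyncio.gather(host(queue), participant(queue))
    return event

print(asyncio.run(main()))  # -> your turn!
```

A polling participant would instead sleep in a loop and check the queue periodically, adding up to a full interval of latency on every turn.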

In the world of the web, this means WebSockets, which I had never gotten to play with. Yay! 🙂 I took this opportunity to learn a backend framework that natively handles asynchronous operations. I started with FastAPI, which I’d been wanting to test for a while, and we’ll see later why I didn’t keep it.

I don’t have so many opportunities to test new tech, so I try to cram as many interesting things into a project as I can. I also tried GraphQL as the API protocol, and I must say I’m very happy with it. On top of that, GraphQL already includes a push mechanism, called subscriptions, which is a very good fit for me. However, FastAPI uses a version of Starlette that does not handle GraphQL subscriptions, so I dropped FastAPI to use Starlette directly.

Async is beautiful when it works, but not everything in the Python world is ready for native async just yet. I wanted to use the SQLAlchemy ORM, because the database library that Starlette offers does not do ORM, and I do like it. Unfortunately, SQLAlchemy does not handle async yet (it should arrive in the next version, 1.4). So I used it in normal blocking mode, figuring that database connections are pretty fast since they are on the same server. It would all work fiiiiine.

It didn’t. Well, it did until there were more than 5 simultaneous connections, at which point we exhausted SQLAlchemy’s connection pool, which then blocked the request until a connection was freed. And in async mode, that never happens: when it’s blocked, it’s blocked 🙂 To sum it up, on the first real-world use, the program fell over like a big sea lion drunk on Bavarian beer. Should I have stress-tested the app? Absolutely.

I more or less worked around it by raising the connection pool to 50 and crossing my fingers that there are never more than 50 simultaneous database requests. I stress-tested it, and it works fine with several hundred simultaneous users. If we ever do meetups that big, I think we’ll have other problems first. But yeah, I’ll be happier when SQLAlchemy handles async natively. Until 1.4 is released, this will have to do.
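The workaround amounts to a couple of engine parameters. This is a hypothetical sketch (the in-memory SQLite URL stands in for the app’s real database server), not the app’s actual configuration:

```python
from sqlalchemy import create_engine
from sqlalchemy.pool import QueuePool

# Enlarge the blocking connection pool so concurrent async requests
# stop piling up behind the default 5 slots.
engine = create_engine(
    "sqlite://",       # stand-in URL; the real app points at its SQL server
    poolclass=QueuePool,
    pool_size=50,      # the default of 5 was exhausted under real load
    max_overflow=10,   # a little headroom before requests start to wait
)
```

This doesn’t remove the fundamental mismatch between a blocking pool and an async event loop, it just pushes the cliff far enough away.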

On the frontend side, it’s all classics: React with Customize-CRA, Material-UI and Apollo for GraphQL, all written in TypeScript. I must say that I am very pleasantly surprised by Apollo; it allowed me to avoid Redux entirely!

The app is obviously under a Free Software license, the AGPL. The deployment documentation isn’t very detailed, but the stack is basically an SQL server, the Python web application server Uvicorn (for async), Nginx in front of it all, and Redis in the backend to handle message queuing. The source code is here, and I provide a few configuration examples for deployment.

To conclude

It now works rather well. Not everybody uses the app during events, but we can manually add reluctant participants and handle their requests like the others on the waiting list. The group seems happy with it, I’m happy with it, and I think it’s mature enough to be used by others.

Feel free to tell me what you think of it, here in the comments or by email. If you want to help, I think that the app could use a graphic designer’s skills for a new logo, colors, fonts, etc. It’s currently translated into French and English, but I’m sure the English translation could really be improved. You can also add other languages if you feel like it. I accept contributions in the form of code too, of course, if you can make sense of mine 😉

I am looking for feedback, please tell me if you use it, with which group sizes, and what you think of it. I wish you good videocalls!


Reviews are hard

It’s a vast subject, but one thing is certain: reviewing other people’s code is hard, because good mentoring requires technical and non-technical skills (such as patience).

I would like to dive directly into a specific detail of code reviews. It’s an iterative process: the author submits code for review, the reviewer makes suggestions, the author amends or pushes more code, the reviewer makes different or more suggestions, and so on.

In Git, « more code » takes the form of one or more commits appended to the Pull Request (or Merge Request if you use GitLab; for simplicity I’ll just use « Pull Request » in this piece). And « amended code » means overwriting existing commits and force-pushing, which makes the old commits disappear.

As a reviewer, this can be very annoying, because what I first look for in an update is whether my suggestions have been implemented or not, and how. That’s why authors are sometimes encouraged to push new commits to their Pull Requests and never overwrite existing ones: it makes the reviewer’s job much easier, because the UI can just show the new commits and they’ll know what changed.

But this policy has drawbacks. When the Pull Request is merged (by fast-forward or not), it can leave awkward commits in the history, like « implement suggestions », « fix according to review », « review fixes again », etc. And merging the PR by squashing it isn’t always relevant; sometimes I do want to keep several commits, because they address different parts of the problem.

How can we solve this? Well, it would be nice if I could see the difference between the current state and the last time I reviewed, regardless of whether the author has amended their commits or not. And for that, I need a local copy of those commits. Fortunately, that’s one of the things Git is very good at: you just make a local branch that tracks the PR’s branch, and when the code changes you make another one. Then you can diff those branches and see what changed.
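Here is a toy demonstration of that idea, with a throwaway local repository standing in for the forge. GitHub really does expose every PR under refs/pull/&lt;number&gt;/head; the PR number 123 and the branch names below are made up:

```shell
set -e
dir=$(mktemp -d) && cd "$dir"

# The "forge" side: a repo with a PR ref pointing at the submitted commit.
git init -q origin-repo && cd origin-repo
git config user.email you@example.com && git config user.name "You"
echo "v1" > feature.txt && git add . && git commit -qm "PR: first version"
git update-ref refs/pull/123/head HEAD

# The reviewer side: snapshot the PR branch at review time.
cd .. && git clone -q origin-repo review && cd review
git fetch -q origin refs/pull/123/head
git branch review/123/v1 FETCH_HEAD

# The author force-pushes an amended commit; the old one disappears upstream...
cd ../origin-repo
echo "v2" > feature.txt
git commit -qa --amend -m "PR: amended version"
git update-ref refs/pull/123/head HEAD

# ...but the reviewer still has the old snapshot, and can diff the rounds.
cd ../review
git fetch -q origin refs/pull/123/head
git branch review/123/v2 FETCH_HEAD
git diff review/123/v1 review/123/v2
```

The final diff shows exactly what the amended commit changed, even though the original commit no longer exists on the forge.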

OK, sounds simple enough. I have an itch, let me scratch it.

I present to you git-pr-branch! A « small » utility that creates branches from Pull Requests and does a few things with them. You’ll be able to automatically create the PR-based branches I just described. You’ll also be able to display a nice listing of the branches, their associated PR, the PR status (open or closed), and the PR URL, for clickety-clicking. And since this can end up producing quite a lot of branches, there’s also a sub-command to clean all that up and delete branches whose PR is closed.

Ironically, it’s hosted on GitLab, but at the moment it only works with GitHub and Pagure. I’ll add GitLab support if I end up working more with GitLab (something tells me that’s likely to happen in the near future), but you can also send me a patch if you want it sooner.

While writing this side project, I discovered the fantastic python library attrs. It’s really awesome, I encourage you to try it out. (as always, my side projects are a good opportunity to try out new libraries or frameworks that I discover 😉 )

The Python packages are on PyPI, and for the lazy Fedora and Mageia users out there, I’ve made a COPR repository that you can enable on Fedora 32 and Mageia 7. Once installed, just run git-pr-branch (or git pr-branch) to discover the available commands.

Feel free to tell me what you think of the tool. Do you like the idea? Are you going to use it in your reviewer workflow? Did I bother writing code again when there is an obvious and better tool to do it? Let me know! 🙂

[EDIT] I’ve made a COPR repo, added the link.
[EDIT2] It now works with Pagure too, added the reference.

Men talk too much

Under this somewhat provocative title, I’d like to tell you about a very simple application for measuring speaking time. Context:

  • I’m a software developer, I write programs.
  • I’m part of the organizing team of the Cafés Poly in Paris, which hosts discussion and debate evenings about polyamory.

During the cafés polys, we had the impression that men spoke significantly more than women and took the floor more often. This is consistent with feminist observations and analyses, so we weren’t very surprised, but we wanted to be able to measure it in order to make decisions about the issue.

I had heard earlier about the Are Men Talking Too Much page, which is a simple double speaking-time counter (with two buttons). It could have suited us, but we wanted some extra information:

  • how are the speaking turns distributed? If one woman speaks for a long time, she may mask the fact that men take the floor more often;
  • we also wanted to monitor the organizers’ speaking time, both to avoid talking too much ourselves and to avoid reproducing, within the organizing team, the problem of men speaking more;
  • we wanted to limit speaking time to 2 minutes 30 so that everybody gets a chance to speak, but that application’s counters are cumulative only (you can’t see the duration of the current turn).

So I started by trying to take the existing source code and add the missing features, but the way time is counted there made it impossible. In the end, I developed my own application, which you can find at https://www.mentalktoomuch.info/. It has the following features:

  • Measurement of speaking time per gender category (which the other one already did)
  • Display of the duration of the current turn and of the speaking-time limit (2:30 by default, but it’s configurable)
  • Categorization by organizer status
  • Display of statistics including speaking turns and the number of speakers
  • Export of the raw data so you can cook up your own statistics in a spreadsheet
  • Offline operation: you can use it in the basement of a bar with no network if you loaded the page beforehand, or if you’ve « added it to the home screen » on your phone (it’s a progressive web app).

We’ve been measuring speaking times at the cafés polys with this app for about a year and a half, and we do indeed observe that men speak longer on average and take the floor more often, even though they are not present in larger numbers. Since September 2019, we have been publishing the statistics on the event page, for the sake of transparency and to raise awareness of the phenomenon.

The software is of course free (AGPLv3), so you can take a look at the code, adapt it to your needs, use it as inspiration to build something else, etc. Feel free to report any bugs you might find.

Nothing about the app is specific to the cafés polys, so it can be used elsewhere. If you decide to use it in another context, I’m always interested to know, so I can keep your use case in mind when I add features (if I ever do 😉 ). So don’t hesitate to write to me, although it’s in no way an obligation of course. I hope you’ll find it useful! 🙂

Link to the app: https://www.mentalktoomuch.info/

My experience of Flock 2017

Flock, the annual Fedora contributor’s conference, is now over. It took place in Cape Cod this year (near Boston, MA), and it was great once again.

It started with a keynote by our project leader Matt, who insisted on Fedora’s place in the diffusion of innovation. We are targeting the innovators and the early adopters, right up to the « chasm » (or « tipping point ») before early-majority adoption. This means two things:

  • on one side, we must not be so bleeding edge that we would only reach the innovators
  • on the other side, we must keep innovating constantly, otherwise we’re not relevant to our targeted people anymore.

As a consequence, we must not be afraid to break things sometimes, if that’s serving the purpose of innovation.

A lot of the talks and workshops were about two aspects of the distribution that are under heavy development right now:

  • modularity: the possibility of having different layers of the distribution moving at different speeds, for example an almost static base system with a frequently updated web stack on top of it.
  • continuous integration: the possibility of automatically running distro-wide tests as soon as a change is introduced in a package, to detect breakage early rather than in an alpha or beta phase.

Seeing where the distribution is going is always interesting, not only in itself but also because it reveals where my contributions would be most useful.

As always, Flock is an opportunity for me to meet and talk to the people I work with all year long, to share opinions and have hallway talks on where our different projects are going (I had very interesting discussions with Jeremy Cline about fedmsg, for example), and to learn the new tools that all the cool kids are using and may make my workflow easier and more productive.

It’s also a great opportunity to help friends with things I can do, and to share knowledge. This year was the first time I didn’t give a talk about HyperKitty; I guess that means it’s now mainstream 🙂

Instead, I gave a workshop on Fedora Hubs, our collaboration center for the Fedora community. If you don’t know what Fedora Hubs is, I suggest you check out Mizmo’s blogpost and Hubs’ project page. The purpose of the workshop was to teach attendees how to write a basic but useful widget in Fedora Hubs. I wrote all the workshop as an online tutorial, for multiple reasons:

  • People can go through it at their own pace
  • My time is freed up to walk between the trainees, answer their questions and help them directly
  • Attendees can go back to it after Flock if they need to or if they haven’t completed it in time
  • It can be re-used outside of Flock (for example, by you right now 😉 )

I believe it’s a better way to teach people (see the Khan Academy founder’s TED talk): the teacher’s time is better spent answering questions and interacting directly with attendees than doing non-interactive things like lecturing.

There were about 10 people in the workshop, and 4 of them completed the tutorial in time, which is pretty good considering the conditions (other talks and workshops going on at the same time, bandwidth problems, etc.)

Also, I’m getting more and more interested in the teaching / mentoring aspect of software engineering. I like doing it, and I get good feedback when I do. That’s clearly a path for me to explore, although it’s still a bit stressful (but that’s usually a good sign; it means I’m taking it seriously). I don’t want to switch to that entirely, but having some more of it on my plate would be nice, I think. The Outreachy program is very appealing to me; it would align perfectly with my other social commitments. I remember there’s also an NGO that offers software training for refugees in Paris, I’ll investigate that too.


The workshop on Fedora Hubs at Flock 2017 will be awesome

TL;DR: come to the Hubs workshop at Flock! 🙂

This is a shameless plug, I admit.

In a couple weeks, a fair number of people from the Fedora community will gather near Boston for the annual Flock conference. We’ll be able to update each other and work together face-to-face, which does not happen so often in Free Software.

For some months I’ve been working on the Fedora Hubs project, a web interface to make communication and collaboration easier for Fedora contributors. It really has the potential to change the game for a lot of us who still find some internal processes a bit tedious, and to greatly help new contributors.

The Fedora Hubs page is a personalized user or group page composed of many widgets, each of which can inform you, remind you or help you tackle any part of your contributor’s life in the Fedora project. And it updates in real time.

I’ll be giving a workshop on Wednesday 30th at 2:00PM to introduce developers to Hubs widgets. In half an hour, I’ll show you how to make a basic widget that will be already directly useful to you if you’re a packager. Then you’ll be able to join us in the following hackfest and contribute to Hubs. Maybe you have a great idea of a widget that would simplify your workflow. If so, that will be the perfect time to design and/or write it.

You need to know Python, and be familiar with basic web infrastructure technologies: HTML and CSS, requests and responses, etc. No Javascript knowledge needed at that point, but if you want to make a complex widget you’ll probably need to know how to write some JS (jQuery or React). The Hubs team will be around to help and guide you.

The script of the workshop is here: https://docs.pagure.org/fedora-hubs-widget-workshop/. Feel free to test it out and tell me if something goes wrong in your environment. You can also play with our devel Hubs instance, that will probably give you some ideas for the hackfest.

Remember folks: Hubs is a great tool, it will (hopefully) be central to contributors’ workflows throughout the Fedora project, and it’s the perfect time to design and write the widgets that will be useful for everyone. I hope to see you there! 🙂

React.js is pretty cool

These days I’ve been working on Fedora Hubs; it’s a Python (Flask) application with a React.js frontend. I know Python quite well now, but it’s the first time I’ve dabbled in the React.js framework. I must say I’m pretty impressed. It solves a lot of the issues I’ve had with dynamic web development these last few years. And it manages to make writing JavaScript almost enjoyable, which is no small feat! 😉

I’m still wrestling with Webpack and ES6, but I’ll get there eventually. React is really a great way to build UIs. Plus, some people are rewriting the Bootstrap components in React, so this is very promising.