
5 Great Phabricator features that inspired GitLab


Innovation often happens because competition sparks new ideas. We unpack how Phabricator inspired GitLab to add new features.

Turning back time a bit, what exactly is Phabricator? Phabricator is a suite of web-based applications that lets developers collaborate through code review, a repository browser, change monitoring, bug tracking, and a wiki. On May 29, 2021, Phacility, the maintainer and sponsor of Phabricator, announced its end-of-life and stopped maintaining it.

GitLab co-founder and CEO Sid Sijbrandij gave credit to Phabricator on HackerNews:

Phabricator was an inspiration to me when starting GitLab. It is shutting down now. Many of its features were years ahead of its time and there was a lot of humor in the product. As a tribute to it shall we add Clowncopterize as a way to merge? This would be an opt-in option introduced in GitLab 14.0.

It got me curious: What are these inspirations Sid is referring to? Let's dive into GitLab's history together and see what we can learn.

Tip: Features in the GitLab documentation often have a Version History box. You can use the issue URLs to dive deeper into feature proposals, discussions, etc.

Review workflows

A typical engineering workflow is as follows: the engineering manager assigns a new issue as a task to a developer. The developer works in their preferred IDE – locally in VS Code or in the Gitpod cloud environment. Changes happen in a new feature branch in Git, which gets pushed to the remote Git server for collaboration.

The Git branch is not ready yet and stays hidden in a potentially long list of branches. To keep better track of their feature branches, developers would copy-paste the branch name or URL into the related issue - which I did 10 years ago. The concept of a "diff linked to a task for review" in Phabricator, or likewise a "Git branch with commits linked to a merge request" in GitLab, had not been invented yet.

Phabricator inspired GitLab to create a default workflow for reviews. The Phabricator workflow makes the review more prominent and squashes all changes into a single commit after the review is approved. There are upsides and downsides to automatically squashing commits: squashing can mean losing information from the review history, which can create more discussion later. Depending on the application architecture, the frequency of changes, and debugging requirements, this can be a good thing or a bad thing. GitLab lets you choose whether to squash commits before merging an MR and/or specify default project settings for squashing commits.
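Outside the UI, the effect of squash-on-merge is roughly what a manual squash merge does in plain Git; here is a minimal sketch in a throwaway repository (branch and file names are hypothetical, not GitLab's internals):

```shell
# A throwaway demo repo; branch and file names are hypothetical.
demo=$(mktemp -d) && cd "$demo"
git init -q -b main
git config user.email you@example.com && git config user.name "You"
echo base > app.txt && git add app.txt && git commit -qm "initial commit"

git checkout -qb feature/my-change
echo one >> app.txt && git commit -qam "step one"
echo two >> app.txt && git commit -qam "step two"

# Squash-merge: both feature commits land on main as a single new commit.
git checkout -q main
git merge --squash -q feature/my-change
git commit -qm "Add my change (squashed)"
git log --oneline   # two commits total: the initial one plus the squashed one
```

The intermediate "step one" / "step two" commits are exactly the review history that is lost when squashing.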

Phabricator treated an MR (or what it calls a "diff task") as the single source of truth for tracking changes and the review history. We felt this was a great idea and replicated the "diff task" process in GitLab MRs. One of the major upsides to GitLab's version is that the collaboration and discussion that happened in issues and epics are still available even after the change is merged.

Draft MR (or "diff tasks")

Often, when an MR is created in GitLab, the branch requires additional work before it is ready to be merged. Phabricator introduced a formal "Draft" / "Not Yet Ready for Review" state for "diff tasks" in 2013, which helped keep track of work in this state. GitLab added WIP MRs in 2016, which we renamed to draft merge requests in 2020. While WIP may make sense to some people, acronyms can exclude newcomers; we found "Draft" is more recognizable. To avoid confusion, GitLab deprecated WIP and moved forward with draft merge requests.

Keep history in MRs for future debugging

The commit history in GitLab is enriched with links to the MR and the corresponding Git review history. In case of a production emergency, having everything documented allows for faster research and debugging.

GitLab stores the MR short reference, in the form <namespace>/<project>!1234, in the merge commit message. Check the history of a demo project for the Kubernetes agent to see how the merge commit is rendered.

GitLab commit history includes a link to the MR.

This raw information is stored in the Git repository, whereas the MR itself stays in GitLab's database backend. You can verify this by cloning a repository and inspecting the history with this command:

$ git log

MR metadata included in the output of the git log command.
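Since the MR reference lives in the merge commit message, you can also filter for it directly. A small sketch in a throwaway repository (in a clone of a real GitLab project, only the final command is needed; the namespace and MR number here are made up):

```shell
# Throwaway demo; in a clone of a real GitLab project you only need the last command.
demo=$(mktemp -d) && cd "$demo"
git init -q -b main
git config user.email you@example.com && git config user.name "You"
echo a > f.txt && git add f.txt && git commit -qm "initial commit"
git checkout -qb feat && echo b >> f.txt && git commit -qam "change"
git checkout -q main
# GitLab writes a line like "See merge request <namespace>/<project>!<iid>" into merge commits.
git merge --no-ff -q feat -m "Merge branch 'feat'

See merge request demo/project!1"

# List only merge commits; their messages carry the MR reference:
git log --merges --grep='See merge request'
```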

Code coverage in MRs

Code coverage reports provide insight into how many lines of source code are covered by unit tests. Reaching 100% test coverage is a developer myth; visualizing increases and decreases can help monitor a trend in code quality. Phabricator implemented support for various languages by running unit test engines and parsing their output, for example for Golang.

With many different languages and report output formats, integrating code coverage reports into GitLab MRs was challenging. GitLab launched the first iteration of code coverage reports in 2016, which generated the reports with CI/CD jobs and used GitLab pages to publish the HTML reports.

In this first iteration, the test coverage is parsed from the CI/CD job output with a regular expression, specified either in the project settings or with the coverage keyword inside the CI/CD job configuration. The result is shown in the job view inside the MR widget and as a coverage badge for the project. See the test coverage history by navigating to Analytics > Repository.
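As a sketch of that first iteration, the `coverage` keyword in `.gitlab-ci.yml` takes a regular expression that GitLab matches against the job log (the job below follows GitLab's documented Go example; your tool's output line, and therefore the regex, will differ):

```yaml
test:
  stage: test
  script:
    - go test -cover ./...   # prints a line like "coverage: 81.2% of statements"
  # GitLab matches this regex against the job log and extracts the percentage.
  coverage: '/coverage: \d+\.\d+% of statements/'
```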

The test coverage badge in a GitLab project.

JUnit XML test reports were introduced as a common format specification and added as an MR widget in 2018. Test report processing runs in the background, using CI/CD artifacts to upload the XML reports from the runner to the server, where the MR/pipeline view visualizes the reports in a tab.
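The report upload is driven from the job definition; a minimal sketch using pytest (any tool that emits JUnit XML works, and the report file name is arbitrary):

```yaml
test:
  stage: test
  script:
    - pytest --junitxml=report.xml
  artifacts:
    when: always          # upload the report even when tests fail
    reports:
      junit: report.xml   # GitLab parses this into the MR/pipeline test tab
```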

The generic JUnit integration also helped with customization requests: unit tests, updated CLI commands, or changed coverage report outputs to parse. GitLab provides CI/CD template examples.

The missing piece for GitLab was inline code coverage remarks inside MR diffs. It took about five years for Sid's initial proposal to be implemented: inline code coverage remarks were released in GitLab 13.5 in 2020.

How inline code coverage works in GitLab (Rust example).

Check out this MR to practice verifying the test coverage. Make sure to select the inline diff view.

Automated workflows and integrated CI

Phabricator provides Herald as an automated task runner and rule engine that listens for changes. Herald can also enforce protected branches and approval rules, supporting a strong permission model in development workflows. There are more examples in this HackerNews post from 2016 and, somehow, I feel like an explorer seeing many great GitLab features in similar ways. 🦊

GitLab CI/CD pipeline schedules remind me of the task runner, as do webhooks and the REST API being driven from CI/CD jobs. Pipeline schedules are also a great way to periodically regenerate caches and rebuild container images for cloud native deployments.
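A scheduled pipeline can be told apart from a regular one via the predefined `CI_PIPELINE_SOURCE` variable; here is a sketch of a cache-warming job that only runs on schedules (the script name is illustrative, not a real GitLab file):

```yaml
rebuild-cache:
  script:
    - ./scripts/warm-cache.sh   # hypothetical cache-regeneration script
  rules:
    - if: '$CI_PIPELINE_SOURCE == "schedule"'
```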

Harbormaster is Phabricator's integration for CI. It's not built from multiple tools in the DevOps stack, but is instead fully integrated in the product.

The first version of GitLab CI was created in November 2012. In 2015, a GitLab team member came up with the idea of combining SCM with CI, and the all-in-one DevOps platform was born. Built-in CI/CD inspired more features and fostered a better way to innovate together. The new pipeline editor is just one example of a streamlined way to configure CI/CD pipelines in GitLab.

Let's throw back to 2017 and watch how we demonstrated taking an idea to production in GitLab, using GKE:

Work boards for issue management

Work needs to be organized. Phabricator led the way with a board that allowed users to filter tasks and provided a more detailed view into planning and project management.

Inside Phabricator work boards.

GitLab users will recognize the similar look between Phabricator's work boards and GitLab issue boards. In GitLab 14.1, we built on existing epic tracking and labeling to create Epic boards to keep teams organized and measure progress.

In Phabricator, users can drag and drop between columns, which automatically changes the work status for a particular task. This feature inspired the boards in GitLab to automatically change the labels in a defined workflow by dragging and dropping between columns. Users can go a level deeper with scoped labels to switch between workflow states:

  • workflow::design
  • workflow::planning breakdown
  • workflow::ready for development
  • workflow::in dev
  • workflow::verification

The GitLab engineering handbook documents the different workflows.

Take a look at the Epic boards in GitLab.

Put it all together

In Phabricator, a diff task (an MR, in GitLab terms) in the "review" state is linked to another task specifying the requirements. The UX needs to be clear so the relationships between the diffs can be accessed and understood. Unless necessary, the user shouldn't have to navigate manually. The context of the change review defines possible links to labels, states, dependent issues, diff tasks (MRs), and more.

GitLab links related issues. If an issue is mentioned in an MR, or vice versa, GitLab automatically links them. The user also has the option to have the issue close automatically once a change is merged. Read a blog post from 2016 to learn more about how issues and MRs can relate to each other in GitLab.

Linked issues and related MRs in GitLab.

UX work is challenging, and we continue to iterate to improve workflows in GitLab. For example, in GitLab 13.8, we reduced the number of clicks it takes to download a CI/CD job artifact from the MR.

Did we miss a feature Phabricator inspired?

While writing this blog post, my research revealed more gems. For example, I found a proposal to add visual graphs for issue dependencies in the HN thread.

Which features from Phabricator are missing in GitLab? Let us know in the comments, create a new feature proposal or start your contribution journey in a new MR right away!

Cover image by Johannes Plenio on Unsplash

A nice gesture. To be fair, there is a community supported fork called phorge (of course).

Nobody gives a hoot about groupthink


People want what’s bad for them; managers want what’s bad for the organisation.

Many in software development like to pretend that management is an elite kind of human being. The kind who does their job every day with a laser focus on rational organisational improvement. People who come in every day and work with an inhuman disregard for their career and standing. After all, the root cause of all software flaws lies in the code and coders, right?

Unfortunately, the buck always stops at management. And, most of the time, a person can only care so much about the organisation as a whole.

In the immortal words of Deming:

Nobody gives a hoot about profits.

A recurring problem specific to software development is that many of the popular features that people keep asking for have a detrimental effect on the quality of their actual work.

Namely: managers and stakeholders frequently prioritise fashion and ‘being current’ over objective improvements in software quality or user experience. Because those are the priorities of those who buy software.

Two relatively common ‘fashions’ today are real-time collaboration and shared data repositories of one kind or another.

Both increase productivity in the naive sense. We work more; everybody is more active; the group feels more cohesive.

The downside is that they also both tend to reduce the quality of the work and increase busywork.

Real-time collaboration

One of the oldest observed phenomena in psychology is groupthink and other forms of group influence on behaviour and decision-making.

A common implementation style for real-time collaboration is the Google Docs model:

  • The list of those present in the document is visible to all collaborators.
  • Everybody’s current activity is visible to all.
  • Everybody’s notes are visible to all, with attribution.
  • Each person’s contribution is generally identifiable.

The consequences of this design should be obvious. The group’s opinion will converge on that of the highest authority present.

As soon as an authority of any kind makes their opinion known, the group will shift in that direction. Even the most rational will tweak their responses after that. After all, who wants to risk going up against an authority? Interns will hesitate to comment. All objections will be a little bit more qualified or toned down.

Generally speaking, if you are writing a document and want to get the most out of a group’s feedback, each contributor should be able to form their opinions independently and give their responses without fear of social or community repercussions.

It’s one of the basic precepts of the book The Wisdom of Crowds, but it applies to any context where you want to leverage the expertise of a heterogeneous group.

This style of collaboration also increases busywork: everything is moving and changing everywhere all the time. Comments fly by. You change what you’re writing half a dozen times in response to what appears. You write a series of notes, noticing later that six other people said the same thing on another page and that somebody added three pages to that same effect towards the end.

Real-time is exciting because it’s busy, much busier than any other form of online collaboration.

But stakeholders never ask for collaboration features based on the dynamic aggregation of independent contributions. Cause that isn’t the fashion. Google’s crap is the fashion. So, we get real-time collaboration and busywork everywhere.

Shared data repositories

Another Google-inspired user experience atrocity that has become commonplace is the shared drive.

Did you know that once-upon-a-time Information Architecture was considered a specialised field of study? Did you know that organising information is considered such a complicated endeavour that there is a massive field dedicated to it?

Organising information so that it’s easy for a group of people to find the documents they need is very hard.

Not that you’d know it from how most companies work. Almost everybody builds their internal library of documents as an improvised layer of garbage and weeds. Like a junkyard, converted to a garden, that overgrew with weeds, and was converted back to a junkyard. If you’re extremely unlucky, one of your employees will be obsessive-compulsive enough to restructure everything. They’ll clean the place up and reorganise it in a manner that makes sense to them. They’ll transform an inscrutable multi-layered archaeological site into a maze. Your shared drive is now a labyrinth made of trash and random horticultural curiosities.

Let’s fix it with a wiki! (Yay. Yet another layer of opinions and obfuscation.)

You could hire somebody with the expertise to organise everything but then you’d miss out on the constant busywork and sense of camaraderie created by a hellscape of abandoned Google Drive folders.

The alternative is to solve it the same way we did with email: shared data, individual organisation.

You don’t need to know how your colleagues organise their email. You only need to know that they get it and respond. The same applies to most work documents. In Personal Information Management (PIM) this is often called “the user-subjective approach”.

From The Science of Managing Our Digital Stuff by Bergman and Whittaker:

Because information consumers differ from each other in multiple ways, the information professional is restricted to exploiting only public (i.e. user-independent) attributes when organizing information. PIM systems, in contrast, are unique in that the person who stores the information and decides on its organization is the same as the person who later retrieves it. (p. 182)


An early PIM study demonstrated the critical role of subjective attributes, inspiring the development of the user-subjective approach. Kwasnick (1991) analyzed the descriptions of eight faculty members who were asked to describe how they organize their personal documents. She found that a minority (30 percent) of the attributes were document-related (e.g. author, form, topic, title). In contrast the majority (70 percent) related to the interactions between the user and the information in the document, in particular how the user perceived and acted upon that information (e.g., situational attributes, disposition, time, cognitive state). Thus, users base their natural organization more on subjective attributes than on general public ones.

If you ever wondered why so many people use their email to organise all of their work, this is why. It’s the only user-subjective information management tool at their disposal.

Most companies would benefit from standardising on a user-subjective information management system but they don’t. Nobody asks for this kind of system so what we get are Google Drives, Dropboxes, shared Notion projects, or weed-like shared Roam spaces.

Meanwhile, individuals cope by overloading their email or by clinging onto software like Zotero or nvAlt that looked ancient even when it was new. Some will just manage copies of everything in their private Dropboxes. Others will cling to DEVONThink for dear life.

Nobody expects their employer to ask for or provide a tool that genuinely works. Because that never happens. To return to the Deming quote at the start:

Nobody gives a hoot about profits.

Slack will never replace (my) email.

"People want what’s bad for them; managers want what’s bad for the organisation."

"If you ever wondered why so many people use their email to organise all of their work, this is why. It’s the only user-subjective information management tool at their disposal."

webshit weekly


An annotated digest of the top "Hacker" "News" posts for the last week of October, 2020.

I reverse engineered McDonalds’ internal API
October 22, 2020 (comments)
An investigative journalist unveils the truth. Hackernews incorrects one another on fast food technology, then speculates about how to add more computers to improve the situation.

YouTube-dl has received a DMCA takedown from RIAA
October 23, 2020 (comments)
The RIAA causes outrage and fury worldwide by listing Icona Pop in the same set as Justin Timberlake and Taylor Swift. Hackernews wrestles with their value judgments; their firm stance as bootlickers for megacorporations has finally crashed headlong into their equally firm belief that programmers should never be held to any legal or moral standards. What results is a wide-ranging display of profound confusion, as Hackernews realizes they don't have clear definitions of literally any of the words involved in internet video, copyright law, the American legal process, or website hosting.

I am an Uighur who faced China’s concentration camps
October 24, 2020 (comments)
The Chinese government continues its war against literally everyone. Hackernews suggests withholding a small amount of money as a suitable punishment for genocide, but other Hackernews sternly insist that the only correct response is withholding a larger amount of money. Facing up to the fact that the Chinese government is unrepentantly evil at a massive scale proves to be too difficult for Hackernews, so they return to their accustomed base state by whatabouting other countries instead. At some point, for some reason, Hackernews starts arguing about Trump, because although America is apparently no better than the Chinese government, it's still evidently expected that America will have to fix it. The spectre of such a horrific intervention, which would almost certainly lead to war at an unspeakable level of ferocity, could simply be avoided if the Chinese people would depose and imprison every official of the Chinese Communist Party.

I am seriously considering going back to desktop computers
October 25, 2020 (comments)
Some rando is under the impression that there is a material difference in the engineering quality of laptop and desktop computers. Hackernews isn't, but they mostly fall into the same stupid false dichotomy. Hundreds of comments are mashed into keyboards debating the specific temperatures and clock frequencies of processors on various computers. Nobody seems to realize that you're allowed to use both, even though a sizeable percentage of them already do.

How journalists use youtube-dl
October 26, 2020 (comments)
A lobbyist tries to respin a popular pornography-archiving tool as the bedrock of human freedom. Hackernews chimes in to report how important the porn tool is to police, which is the first time in my life I have even considered supporting an RIAA action. Hackernews makes a long list of reasons they might want to download a video from the internet, all of which boil down to "because I want to watch it" or "because I might eventually want to watch it." There is nothing interesting about this discussion, so there are only a few hundred comments, but the article defends their favorite pornography archiver, so there are over sixteen hundred votes for the story.

Google's new logos are bad
October 27, 2020 (comments)
A trash blog bikesheds some favicons. The article is so utterly devoid of insight or interest that I would be angry about the electricity wasted in displaying it. Since that power was renewably generated via solar panels, I must conclude that the dipshits who wrote, edited, and published this worthless drivel owe a refund to the Sun. Hackernews, however, is deeply moved by this piece, and is outraged that their telephone buttons are different colors than they were before. Some of the more devoted Google aficionados attempt to construct fanfiction to imbue these meaningless changes with deep import.

I Violated a Code of Conduct
October 28, 2020 (comments)
Some assholes bully a nerd over Zoom. Hackernews begins foaming at the mouth about codes of conduct, as usual, and immediately seizes this example of a bad one, poorly enforced, to dunk on the entire concept of being held accountable by anyone for any purpose ever.

My Resignation from the Intercept
October 29, 2020 (comments)
Glenn Greenwald wigs completely the fuck out because some coworkers didn't like his ten-thousand-word thinkpiece about Hunter Biden chatlogs. Hackernews regards this as the death of journalism. They write fifteen hundred comments, almost all of which contain a very simple and easily-fixed reason that journalism has died. The rest are recommendations regarding which podcasts are the best ones to uncritically consume at face value.

From McDonald's to Google
October 30, 2020 (comments)
A computer nerd had a bad job, but now has a better job, and posts a story to that effect on "Hacker" "News". One Hackernews immediately demands answers regarding a perceived gap in this narrative résumé, so the computer nerd arrives in the comments to defend it. Later on, another subset of Hackernews get together to whine about companies' attempts to broaden their hiring demographics, since this is apparently some kind of threat to Hackernews.

Sean Connery has died
October 31, 2020 (comments)
A celebrity has died. Hackernews makes a list of everything the celebrity ever did. No technology is discussed.

I'm kinda sad the story about our move to GitLab, which hit #1 this past week for a bit, didn't make it.

I was so hoping.

Obnam2 - a new backup system


This may be the stupidest thing I will ever have done, but I intend to have fun while doing it.

I’m writing another implementation of a backup system. It is called Obnam (“obligatory name”), just like the previous one that I retired three years ago.

The shape of the new system is roughly as follows:

  • Client/server, with HTTPS (not SFTP like Obnam1). A smart server stores chunks of data but doesn’t look into them; the client has all the interesting logic (encryption, compression, de-duplication, etc.).
  • Written in Rust (not Python like Obnam1).

Long term I’m aiming at something like this:

  • Easy to install: available as a Debian package in an APT repository. (I’d appreciate help with other forms of packages.)
  • Easy to configure: only need to configure things that are inherently specific to a client, when sensible defaults are impossible.
  • Easy to run: making a backup is a single command line that’s always the same.
  • Detects corruption: if a file in the repository is modified or deleted, the software notices it automatically.
  • Repository is encrypted: all data stored in the repository is encrypted with a key known only to the client.
  • Fast backups and restores: when a client and server both have sufficient CPU, RAM, and disk bandwidth, the software makes a backup or restores a backup over a gigabit Ethernet using at least 50% of the network bandwidth.
  • Snapshots: Each backup is an independent snapshot: it can be deleted without affecting any other snapshot.
  • Deduplication: Identical chunks of data are stored only once in the backup repository.
  • Compressed: Data stored in the backup repository is compressed.
  • Large numbers of live data files: The system must handle at least ten million files of live data. (Preferably much more, but I want some concrete number to start with.)
  • Live data in the terabyte range: The system must handle a terabyte of live data. (Again, preferably more.)
  • Many clients: The system must handle a thousand total clients and one hundred clients using the server concurrently, on one physical server.
  • Shared repository: The system should allow people who don’t trust each other to share a repository without fearing that their own data leaks, or even its existence leaks, to anyone.
  • Shared backups: People who do trust each other should be able to share backed up data in the repository.

I am primarily writing this for myself, in my free time, but it’d be nice if it was useful to others, or they’d like to contribute.

I’ve written a simplistic prototype: the backup program reads data from stdin, breaks it into chunks, and uploads the chunks to the server unless they’re already there; the corresponding restore program downloads the chunks and writes them to stdout.

What little code there is, is on

If you’re interested in helping, or using, the new Obnam, please get in touch. Email is OK, although GitLab issues or merge requests are preferred. However, please be patient: this is a side project, and I may take a while to respond.

Lars is back on his bullshit^W backup system again :)

China blocks Wikimedia Foundation’s accreditation to World Intellectual Property Organization


China yesterday blocked the Wikimedia Foundation’s application for observer status at the World Intellectual Property Organization (WIPO), the United Nations (UN) organization that develops international treaties on copyright, IP, trademarks, patents and related issues. As a result of the block, the Foundation’s application for observer status has been suspended and will be reconsidered at a future WIPO meeting in 2021.

China was the only country to raise objections to the accreditation of the Wikimedia Foundation as an official observer. Their last-minute objections claimed Wikimedia’s application was incomplete, and suggested that the Wikimedia Foundation was carrying out political activities via the volunteer-led Wikimedia Taiwan chapter. The United Kingdom and the United States voiced support for the Foundation’s application.

WIPO’s work, which shapes international laws and policies that affect the sharing of free knowledge, impacts Wikipedia’s ability to provide hundreds of millions of people with information in their own languages. The Wikimedia Foundation’s absence from these meetings further separates those people from global events that shape their access to knowledge.

“The Wikimedia Foundation operates Wikipedia, one of the most popular sources of information for people around the world. Our organization can provide insights into global issues surrounding intellectual property, copyright law, and treaties addressed by WIPO that ensure access to free knowledge and information,” said Amanda Keton, General Counsel of the Wikimedia Foundation. “The objection by the Chinese delegation limits Wikimedia’s ability to engage with WIPO and interferes with the Foundation’s mission to strengthen access to free knowledge everywhere. We urge WIPO members, including China, to withdraw their objection and approve our application.”

A wide range of international and non-profit organizations as well as private companies are official observers of WIPO proceedings and debates. These outside groups offer technical expertise, on-the-ground experience, and diversity of opinions to help WIPO with its global mandate.

“The Wikimedia Foundation calls on the member states of WIPO to reconsider our application for observer status and encourages other UN member states to voice their support for civil society inclusion and international cooperation,” said Keton.

The Wikimedia Foundation provides the essential infrastructure for free knowledge and advocates for a world in which every single human being can freely share in the sum of all knowledge.


From Gerrit to Gitlab: join the discussion


By Tyler Cipriani, Manager, Editing

There is a lot of Wikimedia code canonically hosted by the Wikimedia Gerrit install. Gerrit is a web-based git repository collaboration tool that allows users to submit, comment on, update, and merge code into its hosted repositories. 

Gerrit’s workflow and user experience are unique when compared to other popular code review systems like GitHub, Bitbucket, and GitLab. Gerrit’s method of integration is focused on continuous integration of stacked patchsets that may be rearranged and merged independently. In Gerrit there is no concept of feature branches where all work on a feature is completed before it’s merged to a mainline branch—the only branch developers need to worry about is the mainline branch. The consequence of this is that each commit is a distinct unit of change that may be merged with the mainline branch at any time. 

The primary unit of change for GitHub and other review systems is the pull request. Thanks to the proliferation of GitHub, pull requests (synonymous with “merge requests”) have become the de facto standard for integration. The type of continuous integration used by Gerrit can allow for more rapid iteration by closely aligned teams but might be hostile to new contributors.

Following an announcement in 2011, in early 2012 Wikimedia moved from Subversion to Git and chose Gerrit as the code review platform. The following summer a consultation resulted in affirming that Wikimedia development was staying on Gerrit “for the time being”. Since 2012, new Open Source tools for git-based code review have continued to evolve. Integrated self-service continuous integration, easy repository creation and browsing, and pull requests are used for development in large Open Source projects and help define user expectations about what a code review should do.

Gerrit’s user interface has improved — particularly with the upgrade from version 2 to version 3 — but Gerrit is still lacking some of the friendly features of many of the modern code review tools like easy feature branch creation, first-class self-service continuous integration, and first-class repository navigation. Meanwhile, the best parts of Gerrit’s code review system — draft comments, approvals, and explicit approvers — have made their way into other review systems. Gerrit’s unique patchset workflow has a lot of advantages over the pull request model, but, maybe, that alone is not a compelling enough reason to avoid alternatives.

Enter GitLab

Earlier this year, as part of the evaluation of continuous integration tooling, the Wikimedia Foundation’s Release Engineering team reviewed GitLab’s MIT-licensed community edition (CE) offering and found that it met many of the needs for our continuous integration system—things like support for self-service pre- and post-merge testing, a useful ACL system for reviewers, multiple CI executors supporting physical hosts and Kubernetes clusters, support for our existing git repositories, and more.

GitLab has been adopted by comparable Open Source entities like Debian, KDE, Inkscape, Fedora, and the GNOME project.

GitLab is a modern code review system that seems capable of handling our advanced CI workflows. A move to GitLab could provide our contributors with a friendly and open code review experience that respects the principles of freedom and open source.


As shepherds of the code review system, the Release Engineering team has reached the stage of evaluation where we need to gather feedback on the proposal to move from Gerrit to GitLab. The Wikimedia Gerrit install is used in diverse ways by over 2,500 projects. To reach an equitable decision about whether or not GitLab is the future of our code hosting, we need the feedback of the technical community.

On 2 September 2020, we announced the beginning of the GitLab consultation period. We invite all technical contributors with opinions about code review to speak their mind on the consultation talk page.

From now until the end of September 2020 a working group composed of individuals from across our technical communities will be collecting and responding on the consultation talk page. Following this consultation period, the working group will review the feedback it has received, and it will produce a summary, recommendation, and supporting deliverables.

It’s difficult to make decisions collaboratively, but those decisions are stronger for their efforts. Please take the time to add a topic or add to the discussion — our decision can only be as strong as our participation.

About this post

Featured image credit: Vulpes vulpes Mallnitz 01, Uoaei1, CC BY-SA 4.0
