
webshit weekly


An annotated digest of the top "Hacker" "News" posts for the last week of October, 2020.

I reverse engineered McDonalds’ internal API
October 22, 2020 (comments)
An investigative journalist unveils the truth. Hackernews incorrects one another on fast food technology, then speculates about how to add more computers to improve the situation.

YouTube-dl has received a DMCA takedown from RIAA
October 23, 2020 (comments)
The RIAA causes outrage and fury worldwide by listing Icona Pop in the same set as Justin Timberlake and Taylor Swift. Hackernews wrestles with their value judgments; their firm stance as bootlickers for megacorporations has finally crashed headlong into their equally firm belief that programmers should never be held to any legal or moral standards. What results is a wide-ranging display of profound confusion, as Hackernews realizes they don't have clear definitions of literally any of the words involved in internet video, copyright law, the American legal process, or website hosting.

I am an Uighur who faced China’s concentration camps
October 24, 2020 (comments)
The Chinese government continues its war against literally everyone. Hackernews suggests withholding a small amount of money as a suitable punishment for genocide, but other Hackernews sternly insist that the only correct response is withholding a larger amount of money. Facing up to the fact that the Chinese government is unrepentantly evil at a massive scale proves to be too difficult for Hackernews, so they return to their accustomed base state by whatabouting other countries instead. At some point, for some reason, Hackernews starts arguing about Trump, because although America is apparently no better than the Chinese government, it's still evidently expected that America will have to fix it. The spectre of such a horrific intervention, which would almost certainly lead to war at an unspeakable level of ferocity, could simply be avoided if the Chinese people would depose and imprison every official of the Chinese Communist Party.

I am seriously considering going back to desktop computers
October 25, 2020 (comments)
Some rando is under the impression that there is a material difference in the engineering quality of laptop and desktop computers. Hackernews isn't, but they mostly fall into the same stupid false dichotomy. Hundreds of comments are mashed into keyboards debating the specific temperatures and clock frequencies of processors on various computers. Nobody seems to realize that you're allowed to use both, even though a sizeable percentage of them already do.

How journalists use youtube-dl
October 26, 2020 (comments)
A lobbyist tries to respin a popular pornography-archiving tool as the bedrock of human freedom. Hackernews chimes in to report how important the porn tool is to police, which is the first time in my life I have even considered supporting an RIAA action. Hackernews makes a long list of reasons they might want to download a video from the internet, all of which boil down to "because I want to watch it" or "because I might eventually want to watch it." There is nothing interesting about this discussion, so there are only a few hundred comments, but the article defends their favorite pornography archiver, so there are over sixteen hundred votes for the story.

Google's new logos are bad
October 27, 2020 (comments)
A trash blog bikesheds some favicons. The article is so utterly devoid of insight or interest that I would be angry about the electricity wasted in displaying it. Since that power was renewably generated via solar panels, I must conclude that the dipshits who wrote, edited, and published this worthless drivel owe a refund to the Sun. Hackernews, however, is deeply moved by this piece, and is outraged that their telephone buttons are different colors than they were before. Some of the more devoted Google aficionados attempt to construct fanfiction to imbue these meaningless changes with deep import.

I Violated a Code of Conduct
October 28, 2020 (comments)
Some assholes bully a nerd over Zoom. Hackernews begins foaming at the mouth about codes of conduct, as usual, and immediately seizes on this example of a bad one poorly enforced to dunk on the entire concept of being held accountable by anyone for any purpose ever.

My Resignation from the Intercept
October 29, 2020 (comments)
Glenn Greenwald wigs completely the fuck out because some coworkers didn't like his ten-thousand-word thinkpiece about Hunter Biden chatlogs. Hackernews regards this as the death of journalism. They write fifteen hundred comments, almost all of which contain a very simple and easily-fixed reason that journalism has died. The rest are recommendations regarding which podcasts are the best ones to uncritically consume at face value.

From McDonald's to Google
October 30, 2020 (comments)
A computer nerd had a bad job, but now has a better job, and posts a story to that effect on "Hacker" "News". One Hackernews immediately demands answers regarding a perceived gap in this narrative résumé, so the computer nerd arrives in the comments to defend it. Later on, another subset of Hackernews get together to whine about companies' attempts to broaden their hiring demographics, since this is apparently some kind of threat to Hackernews.

Sean Connery has died
October 31, 2020 (comments)
A celebrity has died. Hackernews makes a list of everything the celebrity ever did. No technology is discussed.


Obnam2 - a new backup system


This may be the stupidest thing I will ever have done, but I intend to have fun while doing it.

I’m writing another implementation of a backup system. It is called Obnam (“obligatory name”), just like the previous one that I retired three years ago.

The shape of the new system is roughly as follows:

  • Client/server, with HTTPS (not SFTP like Obnam1). A smart server stores chunks of data but doesn’t look into them; the client has all the interesting logic (encryption, compression, de-duplication, etc).
  • Written in Rust (not Python like Obnam1).

Long term I’m aiming at something like this:

  • Easy to install: available as a Debian package in an APT repository. (I’d appreciate help with other forms of packages.)
  • Easy to configure: only need to configure things that are inherently specific to a client, when sensible defaults are impossible.
  • Easy to run: making a backup is a single command line that’s always the same.
  • Detects corruption: if a file in the repository is modified or deleted, the software notices it automatically.
  • Repository is encrypted: all data stored in the repository is encrypted with a key known only to the client.
  • Fast backups and restores: when a client and server both have sufficient CPU, RAM, and disk bandwidth, the software makes a backup or restores a backup over a gigabit Ethernet using at least 50% of the network bandwidth.
  • Snapshots: Each backup is an independent snapshot: it can be deleted without affecting any other snapshot.
  • Deduplication: Identical chunks of data are stored only once in the backup repository.
  • Compressed: Data stored in the backup repository is compressed.
  • Large numbers of live data files: The system must handle at least ten million files of live data. (Preferably much more, but I want some concrete number to start with.)
  • Live data in the terabyte range: The system must handle a terabyte of live data. (Again, preferably more.)
  • Many clients: The system must handle a thousand total clients and one hundred clients using the server concurrently, on one physical server.
  • Shared repository: The system should allow people who don’t trust each other to share a repository without fearing that their own data leaks, or even its existence leaks, to anyone.
  • Shared backups: People who do trust each other should be able to share backed up data in the repository.

I am primarily writing this for myself, in my free time, but it’d be nice if it were useful to others, or if they’d like to contribute.

I’ve written a simplistic prototype: the backup program reads data from stdin, breaks it into chunks, and uploads the chunks to the server unless they’re already there; the corresponding restore program downloads the chunks and writes them to stdout.
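The prototype's core loop — chunk, hash, upload-if-new, reassemble on restore — can be sketched in Rust roughly like this. This is an illustrative toy, not the actual Obnam code: the in-memory store stands in for the HTTPS server, the chunk size is absurdly small, and a real client would use a cryptographic hash rather than the standard library's hasher.

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashMap;
use std::hash::{Hash, Hasher};

// Tiny chunk size for illustration; a real client would use something much larger.
const CHUNK_SIZE: usize = 4;

/// Toy in-memory stand-in for the chunk server: it stores chunks by id
/// and never looks inside them.
struct ChunkStore {
    chunks: HashMap<u64, Vec<u8>>,
    uploads: usize, // counts chunks actually transferred, to show de-duplication
}

impl ChunkStore {
    fn new() -> Self {
        ChunkStore { chunks: HashMap::new(), uploads: 0 }
    }

    /// "Upload" a chunk unless the store already has it; return its id.
    fn put(&mut self, data: &[u8]) -> u64 {
        let mut h = DefaultHasher::new();
        data.hash(&mut h);
        let id = h.finish(); // a real system would use a cryptographic hash here
        if !self.chunks.contains_key(&id) {
            self.chunks.insert(id, data.to_vec());
            self.uploads += 1;
        }
        id
    }

    fn get(&self, id: u64) -> Option<&Vec<u8>> {
        self.chunks.get(&id)
    }
}

/// Backup: split the input into chunks and upload each one.
fn backup(store: &mut ChunkStore, data: &[u8]) -> Vec<u64> {
    data.chunks(CHUNK_SIZE).map(|c| store.put(c)).collect()
}

/// Restore: fetch the chunks by id and reassemble the original data.
fn restore(store: &ChunkStore, ids: &[u64]) -> Vec<u8> {
    ids.iter()
        .flat_map(|id| store.get(*id).expect("missing chunk").clone())
        .collect()
}
```

A backup is then just a list of chunk ids, which is what makes de-duplication cheap and lets each snapshot be deleted independently of the others.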

What little code there is, is on

If you’re interested in helping, or using, the new Obnam, please get in touch. Email is OK, although GitLab issues or merge requests are preferred. However, please be patient: this is a side project, and I may take a while to respond.


China blocks Wikimedia Foundation’s accreditation to World Intellectual Property Organization


China yesterday blocked the Wikimedia Foundation’s application for observer status at the World Intellectual Property Organization (WIPO), the United Nations (UN) organization that develops international treaties on copyright, IP, trademarks, patents and related issues. As a result of the block, the Foundation’s application for observer status has been suspended and will be reconsidered at a future WIPO meeting in 2021.

China was the only country to raise objections to the accreditation of the Wikimedia Foundation as an official observer. Their last-minute objections claimed Wikimedia’s application was incomplete, and suggested that the Wikimedia Foundation was carrying out political activities via the volunteer-led Wikimedia Taiwan chapter. The United Kingdom and the United States voiced support for the Foundation’s application.

WIPO’s work, which shapes international laws and policies that affect the sharing of free knowledge, impacts Wikipedia’s ability to provide hundreds of millions of people with information in their own languages. The Wikimedia Foundation’s absence from these meetings further separates those people from global events that shape their access to knowledge.

“The Wikimedia Foundation operates Wikipedia, one of the most popular sources of information for people around the world. Our organization can provide insights into global issues surrounding intellectual property, copyright law, and treaties addressed by WIPO that ensure access to free knowledge and information,” said Amanda Keton, General Counsel of the Wikimedia Foundation. “The objection by the Chinese delegation limits Wikimedia’s ability to engage with WIPO and interferes with the Foundation’s mission to strengthen access to free knowledge everywhere. We urge WIPO members, including China, to withdraw their objection and approve our application.”

A wide range of international and non-profit organizations as well as private companies are official observers of WIPO proceedings and debates. These outside groups offer technical expertise, on-the-ground experience, and diversity of opinions to help WIPO with its global mandate.

“The Wikimedia Foundation calls on the member states of WIPO to reconsider our application for observer status and encourages other UN member states to voice their support for civil society inclusion and international cooperation,” said Keton.

The Wikimedia Foundation provides the essential infrastructure for free knowledge and advocates for a world in which every single human being can freely share in the sum of all knowledge.


From Gerrit to Gitlab: join the discussion


By Tyler Cipriani, Manager, Editing

There is a lot of Wikimedia code canonically hosted by the Wikimedia Gerrit install. Gerrit is a web-based git repository collaboration tool that allows users to submit, comment on, update, and merge code into its hosted repositories. 

Gerrit’s workflow and user experience are unique when compared to other popular code review systems like GitHub, Bitbucket, and GitLab. Gerrit’s method of integration is focused on continuous integration of stacked patchsets that may be rearranged and merged independently. In Gerrit there is no concept of feature branches where all work on a feature is completed before it’s merged to a mainline branch—the only branch developers need to worry about is the mainline branch. The consequence of this is that each commit is a distinct unit of change that may be merged with the mainline branch at any time. 

The primary unit of change for GitHub and other review systems is the pull request. Thanks to the proliferation of GitHub, pull requests (synonymous with “merge requests”) have become the de facto standard for integration. The type of continuous integration used by Gerrit can allow for more rapid iteration by closely aligned teams but might be hostile to new contributors.

Following an announcement in 2011, in early 2012 Wikimedia moved from Subversion to Git and chose Gerrit as the code review platform. The following summer, a consultation affirmed that Wikimedia development would stay on Gerrit “for the time being”. Since 2012, new Open Source tools for git-based code review have continued to evolve. Integrated self-service continuous integration, easy repository creation and browsing, and pull requests are used for development in large Open Source projects and help define user expectations about what a code review tool should do.

Gerrit’s user interface has improved — particularly with the upgrade from version 2 to version 3 — but Gerrit is still lacking some of the friendly features of many of the modern code review tools like easy feature branch creation, first-class self-service continuous integration, and first-class repository navigation. Meanwhile, the best parts of Gerrit’s code review system — draft comments, approvals, and explicit approvers — have made their way into other review systems. Gerrit’s unique patchset workflow has a lot of advantages over the pull request model, but, maybe, that alone is not a compelling enough reason to avoid alternatives.

Enter GitLab

Earlier this year, as part of the evaluation of continuous integration tooling, the Wikimedia Foundation’s Release Engineering team reviewed GitLab’s MIT-licensed community edition (CE) offering and found that it met many of the needs for our continuous integration system—things like support for self-service pre- and post-merge testing, a useful ACL system for reviewers, multiple CI executors supporting physical hosts and Kubernetes clusters, support for our existing git repositories, and more.

GitLab has been adopted by comparable Open Source entities like Debian, KDE, Inkscape, Fedora, and the GNOME project.

GitLab is a modern code review system that seems capable of handling our advanced CI workflows. A move to GitLab could provide our contributors with a friendly and open code review experience that respects the principles of freedom and open source.


As shepherds of the code review system, the Release Engineering team reached the stage of evaluations where we need to gather feedback on the proposal to move from Gerrit to GitLab. The Wikimedia Gerrit install is used in diverse ways by over 2,500 projects. To reach an equitable decision about whether or not GitLab is the future for our code hosting, we need the feedback of the technical community.

On 2 September 2020, we announced the beginning of the GitLab consultation period. We invite all technical contributors with opinions about code review to speak their mind on the consultation talk page.

From now until the end of September 2020, a working group composed of individuals from across our technical communities will be collecting feedback and responding on the consultation talk page. Following this consultation period, the working group will review the feedback it has received, and it will produce a summary, recommendation, and supporting deliverables.

It’s difficult to make decisions collaboratively, but those decisions are stronger for the effort. Please take the time to add a topic or add to the discussion — our decision can only be as strong as our participation.

About this post

Featured image credit: Vulpes vulpes Mallnitz 01, Uoaei1, CC BY-SA 4.0


Thursday, May 14, 2020 - the world computer: a marginally coherent bathtub rant



I was pondering Amazon just now, as I sat in the bathtub sweating profusely and reading an installment of The Murderbot Diaries on an old e-ink Kindle in a sandwich baggy.

I started thinking about how I bought a DRM-free edition of the book somewhere besides Amazon and jumped through several hoops to get it in a readable format on the Kindle (a device given to me by a former employer so I could participate in a book club for reading the blend of self-help, technical propaganda, and management porn that the class of people who go through startup incubators pretty much swim in).

And then I thought: For fucksake, the sheer futility of this kind of exercise, when we as people who read books all more or less live inside the machinery constructed by Amazon. I mean, sure, I have a copy of a book that I can stash for later and read on some other gadget, which has some practical value. But if you think of it as some minor act of resistance to the bullshit status quo… I mean, it feels good, I indulge in this kind of theatrics all the time, but fundamentally Amazon still owns publishing and for fractally similar reasons total assholes still control most of the code on pretty much every device on the planet.

From one reasonable but doomed point of view, the Kindle is a special-purpose computer I own. But that elides a whole lot of its essential nature, doesn’t it? What the Kindle really is: A fragment of Amazon’s computer that happens to be physically located in my house, interfaced with both my credit card balance and my brain.

And then I thought: We’re over the threshold. It’s not so much that there are a lot of computers. 20 years ago there were a lot of computers. Now it’s more like there’s one massive computer and we’re all inside it. We’ve collapsed into the state where cyberspace isn’t just a meaningful concept; it’s very nearly coterminous with human existence.

The same thought from a different angle: I was reading a thread about this pretty interesting piece of desktop software, and someone said:

This does look intriguing, but I can’t help but be disinterested in it because it doesn’t look like you can share and collaborate over the Internet.

And I thought: Right. This is where we are. Abstractions like “a kind of file that this software can read” have become implementation details for the technical class. Even for the technical class, what doesn’t open onto the network is essentially dead. And in an age and architecture when scale and corporate platform availability (Android, iOS, Facebook) are prerequisites for meaningful participation, “the network” means what’s wholly owned. The network’s the computer, the computer is the megacorporation.

But that understates the case. The meta-megacorporation is the network is the computer. Amazon doesn’t own the whole machine, or Microsoft, or Apple, or Facebook, or Google, or the governments of [the United States, China, Russia, …]. Vast territories are delineated within the network, but their boundaries are permeable and ill-defined. It’s impossible to cleanly disentangle client hardware from operating systems from databases from protocols from supply chains from datacenters. Just as it’s impossible to disentangle computation from the flow of money, the flow of goods, the flow of surveillance, the software-riddled cognitive state of populations. Scale permeates everything, even scale.

So: There’s a computer and most of us live there now.



Some Economic Responses to the Coronavirus Recession


In response to the spread of COVID-19, the economy appears to be in recession. Lawmakers are scrambling to respond to the recession with many ideas floating around about how best to do so. Below, I outline a few responses that I think would be wise.

Welfare Expansion

In a normal recession, the proper goal of a policy response is pretty simple: increase output and employment by directly or indirectly increasing aggregate demand. Many proposed responses to this recession have fallen into this typical model. But, strictly speaking, this model does not really make sense for our current situation.

Insofar as we are attempting to limit the spread of a virus, we should not be trying to increase employment and output. The only work that should be done is work that can be done in isolation or work that is absolutely necessary. All other work and its associated output should cease.

This is an unfortunate feature of the current recession, but it also simplifies the policy debate considerably. Since we are not interested in job creation, we do not need to have the usual disagreements about how best to do that. Since we do not want very many people working, we do not need to debate about whether the low employment level is because employers aren’t hiring or because the unemployed aren’t seeking work.

Instead, the only question we need to answer for individuals is how to keep them financially solvent during the disruption. In a well-constructed society with a good welfare state, the system would already be set up to handle these kinds of disruptions. The shocks people are going to face in the coming months — mainly unemployment and leave for sickness and caregiving — are shocks that already hit people all the time in our society. We should already have good benefits for those shocks. But we don’t. So we’ll have to build them on the fly, probably on a temporary basis.

The first thing we should do is expand unemployment benefits. The benefit duration for ordinary unemployment benefits should be increased to one year and the benefit amount should be increased to 100 percent of prior earnings up to $8,333 per month. In addition to the changes to ordinary unemployment benefits, a new basic unemployment benefit should be established that is equal to at least the one-person poverty line, which is $1,063 per month in the 48 contiguous states. Any unemployed, non-elderly adult who is not eligible for ordinary unemployment benefits would be eligible for the basic unemployment benefit.
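For concreteness, the proposed unemployment benefit formula above can be written out as a small function. The function name and types are mine; the dollar figures ($8,333/month cap on ordinary benefits, $1,063/month basic benefit) are the post's.

```rust
/// Proposed monthly unemployment benefit: 100% of prior monthly earnings,
/// capped at $8,333, for those eligible for ordinary benefits; a flat
/// $1,063 basic benefit (the one-person poverty line) for everyone else.
fn monthly_unemployment_benefit(prior_earnings: Option<f64>) -> f64 {
    match prior_earnings {
        Some(earnings) => earnings.min(8333.0), // ordinary benefit, capped
        None => 1063.0,                         // basic benefit floor
    }
}
```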

The second thing we should do is create a sickness allowance for those who need to take leave from a job because they are afflicted by the virus. The allowance duration would be one month and the benefit amount would be 100 percent of prior earnings up to $8,333 per month. Employers can pay the allowance themselves and be reimbursed by the government. If they do not, then the employee can get the benefit directly from the government.

The third thing we should do is create a family leave benefit for those who need to take leave to care for family members due to various disruptions such as school closures. The benefit duration would be three months but would otherwise be structured like the sickness allowance.

These benefits would directly address the kinds of disruptions people will face that will cause them to become financially insolvent. Insofar as this may not be enough and people might fall through the cracks in other ways, it would also be wise to send cash out indiscriminately, such as through a $1,000 per month universal cash benefit.

Bailouts for Equity

Many companies will require bailouts to stay afloat due to massive contractions in revenue, e.g. airlines and hotels. These companies should be bailed out with cash from the government but only in exchange for new stock issued by the companies. This is what we did with General Motors. It is what any other investor would require of these companies. Structuring the bailout as cash for equity ensures that, once the recession has passed and the companies bounce back, the public gets the benefit of their investment. It would also establish substantial public ownership that we should hold on to permanently.

Social Wealth Fund for Stocks

The stock market is collapsing as the prices of the shares of all companies are declining dramatically. Investors are selling off their stock at increasingly lower prices and putting their money into cash and treasury bonds. This has driven the interest rates on treasury bonds to their lowest levels ever seen.

The federal government should respond to this situation by selling trillions of dollars of new treasury bonds and then plowing the cash from those sales into stock purchases. Similarly, the Federal Reserve should expand its balance sheet by creating new money and buying corporate stock with it. The stock purchased through these two mechanisms should then be placed into a social wealth fund. By buying these stocks at rock-bottom prices with money borrowed (or created) at rock-bottom interest rates, the federal government will help stabilize financial markets while also ensuring that the public reaps the windfall when the stock market climbs back up after the recession is over.
