Rust and SecureDrop

I'll be attending RustConf in Montreal this week, so I wanted to take a moment to describe how we're using Rust in SecureDrop.

SecureDrop has always been primarily a Python project, accompanied by bash scripts and HTML/CSS/JS for the web interface. Aaron Swartz's first commit to SecureDrop was in June 2011, predating Rust 0.1 by just six months.

Fast-forward a decade: the Rust toolchain was added to SecureDrop builds in June 2021 because it was needed for the Python cryptography package.

Less than a year later we were seriously discussing writing and shipping our own Rust code. This took the form of two proposals that are now public:

  1. Support Rust as a first-class language
  2. Replace pretty_bad_protocol with Sequoia for PGP operations

The first proposal discusses the advantages and disadvantages of Rust versus Python, and then outlines three different scenarios for when it's appropriate to use Rust and how to evaluate it. The second proposal is a concrete plan that attempts to put the first proposal into action.

Both proposals were accepted by the team, and SecureDrop-with-a-Rust-bridge-to-Sequoia shipped in November 2023, which I've previously written about.

Earlier this year we shipped our second Rust project, a rewrite of SecureDrop Workstation's proxy component. (We'll write a separate blog post about this...eventually.)

I would personally describe the people who work on SecureDrop as caring deeply about security and correctness, and recognizing that Rust is good at both. While there's a reasonable amount of caution about adopting new and potentially unproven technologies, given the churn required if they don't work out, I think everyone would agree that Rust has passed that threshold.

The undecided question is what the system architecture, including implementation language, will be for the next-generation SecureDrop server. I think Rust would be a good choice, but it still needs to be discussed and agreed upon.

At RustConf you should be able to find me wearing a SecureDrop/Freedom of the Press Foundation shirt or something Wikipedia-related. I'm happy to talk about SecureDrop and share our experience; I'm especially interested in learning about training teams on Rust and, well, anything else I can pick up.


gerrit-grr 5.0

I recently released 5.0.0 of my grr tool that makes working with Gerrit easier. It is an alternative to git-review; I personally think grr is more straightforward to use, but I also haven't used git-review since 2014 when I first got frustrated enough to create grr.

I had let it bitrot, since I wasn't doing as much code review and it seemed like Wikimedia was going to move away from Gerrit, but both of those things have changed now!

There are no user-facing changes; it was just a lot of internal refactoring that required breaking API changes, and then removing that API entirely because it was blocking further refactoring. Rewriting grr in Rust was one of my first Rust projects, and the code really showed it. It's now cleaned up, and I also dropped all the dependencies that link to C code (OpenSSL/libgit2) to make builds easier and more portable.

Fundamentally, grr is a small wrapper around Gerrit's native functionality. I don't think it's widely known that submitting a patch to Gerrit is as straightforward as git push origin HEAD:refs/for/<branch>. And if you want to set a custom topic, you can append %topic=<...>.

So that's exactly what grr executes, just much shorter to type. To submit a patch, just run: grr. (If you need to push to a non-primary branch, use grr <branch>.) And to pull down a patch for review, run grr fetch <id>. There are more commands documented in the README.

Going back to the %topic= example, grr provides a --topic <...> option that turns into exactly that. If you want to immediately set a Code-Review +2 vote when uploading (e.g. to automatically trigger CI to merge), you can pass --code-review=+2 (which turns into %l=Code-Review+2).
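To make the mapping concrete, here's a rough Python sketch of the refspec construction (illustrative only; grr itself is written in Rust, and this isn't its actual code):

# Build a Gerrit push refspec: options go after a single '%', comma-separated,
# e.g. HEAD:refs/for/main%topic=my-topic,l=Code-Review+2
import subprocess

def gerrit_push(branch="main", topic=None, labels=()):
    ref = f"HEAD:refs/for/{branch}"
    options = ([f"topic={topic}"] if topic else []) + [f"l={label}" for label in labels]
    if options:
        ref += "%" + ",".join(options)
    subprocess.run(["git", "push", "origin", ref], check=True)

# Roughly what `grr --topic my-topic --code-review=+2` runs:
gerrit_push(branch="main", topic="my-topic", labels=["Code-Review+2"])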

To install, run cargo install --locked gerrit-grr. Pre-built binaries and an OCI image are also available.


How many Rs are in the word strawberry?

There's a viral Mastodon post that demonstrates a supposed deficiency in large language models (LLMs), specifically ChatGPT, because it apparently can't count the number of letters in the English word "strawberry".

> How many R's are in the word strawberry

ChatGPT: There are two "R's" in the word "strawberry."

> Are you sure?

ChatGPT: Yes, I'm sure. There are two "R's" in the word "strawberry."

[...]

> Count the Rs in the word straw and the word berry separately.

ChatGPT: Sure. Let's break it down:

* The word "straw" has 1 "R."
* The word "berry" has 2 "R's."

So, together they have 3 "R's." But in the combined word "strawberry," there are still only 2 "R's."

At first glance, that's a big yikes. (I was able to reproduce the initial result using ChatGPT 4o.)

But I get annoyed by this kind of post for two main reasons: 1) if you ask slightly differently, ChatGPT can easily get this question right, and 2) it's a missed opportunity to explain how LLMs work, specifically with regard to tokenization.

Let's try again

Here's the prompt I used with ChatGPT 4o:

> Write and then run a Python program to determine how many R's are in the word strawberry

ChatGPT: The word "strawberry" contains 3 occurrences of the letter 'R'. 

It doesn't seem to be widely known that ChatGPT has a feature to execute Python code in a sandbox and use the result in its answers.

The Python code it generated was:

# Python program to count the number of 'R's in the word 'strawberry'

word = "strawberry"
count_Rs = word.lower().count('r')  # Convert the word to lowercase and count 'r'
count_Rs

The code is pretty simple, and I think we'd all agree that this is a place where Python code can quite accurately answer this question. Having the LLM generate the code and execute it nicely sidesteps the underlying issue.

Of course, the problem is you have to prompt it to use Python in the first place, knowing that this is a subject where an LLM without code execution would fail. Which brings me to...

Tokenization

When you provide a prompt to an LLM, it first breaks the text down into tokens. OpenAI provides a visualization tool for this; you can see that using the GPT-4 tokenizer, the string "strawberry" is broken down into "str", "aw" and "berry", which are mapped to the numerical values [496, 675, 15717]. In other words, the LLM doesn't have access to individual characters; it just sees tokens.
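You can reproduce this locally with OpenAI's tiktoken Python library (a minimal sketch; pip install tiktoken first):

# Show how the GPT-4 tokenizer splits a string into tokens
import tiktoken

enc = tiktoken.encoding_for_model("gpt-4")
tokens = enc.encode("strawberry")
print(tokens)                             # [496, 675, 15717]
print([enc.decode([t]) for t in tokens])  # ['str', 'aw', 'berry']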

In this case, you can try to bypass the tokenizer by using spaces, so each letter is now its own word and tokenized separately:

> How many r characters are in the letters s t r a w b e r r y

ChatGPT: The word "strawberry" contains 3 instances of the letter "r."

Success, though I've had mixed success with this approach in the past.

Closing thoughts

There are plenty of problems, both technical and ethical, with LLMs, and I don't want to be (or even appear to be) an apologist for them. But I think it's important that we all understand how LLMs work if we're going to actually tackle and address those problems.


Building a less terrible URL shortener

The demise of goo.gl is a good opportunity to write about how we built a less terrible URL shortener for Wikimedia projects: w.wiki. (I actually started writing this blog post in 2016 and never got back to it, oops.)

URL shorteners are generally a bad idea for a few main reasons:

  1. They obfuscate the actual link destination, making it harder to figure out where a link will take you.
  2. If they disappear or are shut down, the link is broken, even if the destination is fully functional.
  3. They often collect extra tracking/analytics information.

But there are also legitimate reasons to want to shorten a URL, including use in printed media, where it's easier for people to type a shorter URL, or in circumstances with restrictive character limits, like tweets and IRC topics. The latter often affects non-ASCII languages even more when limits are measured in bytes instead of Unicode characters.
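A quick illustration of the bytes-versus-characters gap (the string is just an example):

# A short Japanese string is 7 characters but triple that in UTF-8 bytes
title = "ウィキペディア"
print(len(title))                  # 7 characters
print(len(title.encode("utf-8")))  # 21 bytes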

At the end of the day, there was still considerable demand for a URL shortener, so we figured we could provide one that was, well, less terrible. Following an RfC, we adopted Tim's proposal, along with a plan to avoid the aforementioned flaws:

  1. Limit shortening to Wikimedia-controlled domains, so you have a general sense of where you'd end up. (Other generic URL shorteners are banned on Wikimedia sites because they bypass our domain-based spam blocking.)
  2. Proactively provide dumps as a guarantee that if the service ever disappeared, people could still map URLs to their targets. You can find them on dumps.wikimedia.org and they're mirrored to the Internet Archive.
  3. Intentionally avoid any extra tracking and metrics collection. It is still included in Wikimedia's general webrequest logs, but there is no dedicated, extra tracking for short URLs besides what every request gets.

Anyone can create short URLs for any approved domain via a special page or the API, subject to some rate limits and anti-abuse mechanisms.
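Via the API, it's a single POST to the UrlShortener extension's shortenurl module; a sketch (the response shape here is from memory, so double-check it against the live API docs):

# Create a short URL via the MediaWiki API on Meta-Wiki
import requests

resp = requests.post(
    "https://meta.wikimedia.org/w/api.php",
    data={
        "action": "shortenurl",
        "url": "https://en.wikipedia.org/wiki/Special:RecentChanges",
        "format": "json",
    },
)
print(resp.json()["shortenurl"]["shorturl"])  # e.g. https://w.wiki/...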

All of this is open source and usable by any MediaWiki wiki by installing the UrlShortener extension. (Since this launched, additional functionality was added to use multiple character sets and generate QR codes.)

The dumps are nice for other purposes too; I use them to provide basic statistics on how many URLs have been shortened.

I still tend to have a mildly negative opinion about people using our URL shortener, but hey, it could be worse, at least they're not using goo.gl.


Side quest: creating a "main" tool

I like Simon Willison's framing of using large language models (aka LLMs, aka "AI") to enable side quests of things you wouldn't normally do.

> Could I have done this without LLM assistance? Yes, but not nearly as quickly. And this was not a task on my critical path for the day—it was a sidequest at best and honestly more of a distraction.

So, yesterday's side quest: writing a tool that checks out the default branch of a Git repository, regardless of what it's named.

Context: most of my work these days happens on GitHub, which involves creating PRs off the main branch, which means I'm frequently going back to it, via git checkout main and then usually a git pull to fast-forward the branch.

But just to make things a little more interesting, the SecureDrop server Git repository's main branch is named develop, which entirely screws with muscle memory and autocomplete. Not to mention all the older projects that still use a master branch.

For a while now I've wanted a tool that just checks out the main branch, regardless of what it's actually named, and optionally pulls it and stashes pending changes.

I asked Claude 3.5 Sonnet for exactly that:

I want a Rust program named "main" that primarily checks out the main branch of a Git repository (or master if it's called that).

I want to invoke it as:

  • main - just checkout the main branch
  • main stash - stash changes, then checkout main, then pop the stash
  • main pull - checkout main and then git pull
  • main stash pull or main pull stash - stash changes, checkout main, then pull, then pop the stash

It was mostly there, except it hardcoded the main and master branches instead of looking up the default branch via Git. I asked:

Is there a smarter way to determine the main branch? What if it's called something other than main or master?

And it adjusted to checking git symbolic-ref refs/remotes/origin/HEAD, which I didn't know about.
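The core trick, sketched here in Python rather than the tool's actual Rust:

# Ask git which branch origin/HEAD points at, then check it out.
# (If origin/HEAD isn't set, `git remote set-head origin --auto` fixes that.)
import subprocess

ref = subprocess.run(
    ["git", "symbolic-ref", "refs/remotes/origin/HEAD"],
    capture_output=True, text=True, check=True,
).stdout.strip()                      # e.g. "refs/remotes/origin/main"
branch = ref.removeprefix("refs/remotes/origin/")
subprocess.run(["git", "checkout", branch], check=True)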

I cleaned up the argument handling a little bit, added --version and published it on Salsa.

It took me about 5-10 minutes for this whole process, which according to xkcd is an efficiency win (it saves about 1 second, but I do it ~5 times a day) over 5 years.
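The back-of-the-envelope math, if you're curious:

# 1 second saved, ~5 times a day, over 5 years
saved = 1 * 5 * 365 * 5
print(saved, "seconds")  # 9125 seconds, roughly 2.5 hours, for ~10 minutes of work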

It probably would've taken me 2-3x as long without using an LLM, but honestly, I'm not sure I would've ever overcome the laziness to write something so small.

Anyways, I haven't really gotten around to writing about my experiences with and feelings about LLMs yet, so here's literally the smallest piece of work to kick that off.


Making it easier for Toolforge tools to surface replag

A number of tools hosted on Toolforge rely on the replicated MediaWiki databases, dubbed "Wiki Replicas".

Every so often these servers have replication lag, which affects the data returned as well as the performance of the queries. And when this happens, users get confused and start reporting bugs that aren't solvable.

This actually used to be way worse during the Toolserver era (sometimes replag would be on the scale of months!), and users were well educated about the potential problems. Most tools would display a banner if there was lag, and there were even bots that would update an on-wiki template every hour.

A lot of these practices have been lost since the move to Toolforge, because replag has been basically zero the whole time. Now that more database maintenance is happening (yay), replag is occurring slightly more often.

So to make it easier for tool authors to display replag status to users with a minimal amount of effort, I've developed a new tool: replag-embed.toolforge.org

It provides an iframe that automatically displays a small banner if there's more than 30 seconds of lag, and nothing otherwise.

As an example, as I write this, the current replag for commons.wikimedia.org looks like:

The replica database (s4) is currently lagged by 1762.9987 seconds (00:29:22), you may see outdated results or slowness. See the replag tool for more details.

Of course, you can use CSS to style it differently if you'd like.

I've integrated this into my Wiki streaks tool, where the banner appears/disappears depending on what wiki you select and whether it's lagged. The actual code required to do this was pretty simple.

replag-embed is written in Rust, of course (source code), and leverages in-memory caching to quickly serve responses.

Currently I'd consider this tool to be beta quality - I think it is promising and ready for other people to give it a try, but I know there are probably some kinks that need to be worked out.

The Phabricator task tracking this work is T321640; comments there would be appreciated if you try it out.