As part of Boba Quest 2025 🧋, I'm trying and reviewing a new boba shop each week.
This week I tried Bobba Hubba for the first time. It's across the street from the Jackson Heights–Roosevelt Avenue station,
so it's reachable by the 7, E, F, M, and R trains, plus the Q70 SBS if you're itching for boba right after landing at LGA (or right before you leave).
For some reason Google Maps
has the address as 4032 75th St in Flushing, but according to The New York Times's neighborhood map, it's
on the border of Jackson Heights and Elmhurst.
I didn't realize until I walked in, but it also doubles as a gaming cafe! And they had a few empty claw machines too.
Anyways, let's get to the boba! Bobba Hubba had a sign advertising their top 4 drinks, so I ordered from the top of the list: a brown sugar milk tea. I wasn't asked about my sugar or ice preferences, so it
was the default.
- Boba: 2/4 the boba was flavored well but too chewy.
- Tea: 4/4 I think it's really hard to mess up brown sugar milk tea, especially since I'm not one to complain if something is too sweet.
- Bonus: 0/1 maybe it was just late and dark outside, but the ambiance wasn't somewhere I'd want to stick around. I didn't see anything about loyalty programs.
Total: 6/10. If I were in the area and craving boba, I wouldn't mind popping in again, but I wouldn't go out of my way to stop by.
P.S. There's now a dedicated RSS feed for all Boba Quest entries. I'll aim to regularly post them on Wednesdays.
As part of Boba Quest 2025 🧋, I'm trying and reviewing a new boba shop each week.
To kick off Boba Quest 2025 🧋, I'm reviewing I'Milky — my go-to in Long Island City (Queens). It's at
22-43 Jackson Ave., right next to Trader Joe's and reachable by the 7, E, M, and G trains via the Court Square station. It's incredibly small;
there's just barely enough room for 6-7 people to order and wait, but not hang around.
I ordered a jasmine green milk tea with boba, 80% sugar and 30% ice (I would normally get more ice, but it was literally below freezing outside).
- Boba: 3/4 the boba is solid: sweet, with a good consistency. I was impressed that it stayed near-ideal despite my waiting 20 minutes to drink it.
- Tea: 4/4 the tea is really excellent; it's basically my benchmark for what jasmine milk tea should be.
- Bonus: 0/1 it's a pretty tiny place; there's no formal reward system, but if you buy a $50 gift card, you get 1 drink free (there's also a 10% student discount Mon-Fri).
Total: 7/10.
For completeness, I'll mention that they have both dairy-free (fruit teas) and caffeine-free (milks) options. While not the best boba I've ever had in NYC, I'Milky is quite good and rather consistent. Assuming they run
an Easter promotion similar to last year's, I'll review them again in April.
I'm a big fan of boba aka bubble tea aka pearl milk tea aka tapioca. Aside from it tasting good, I enjoy all the different varieties and unique combinations you can get depending on where you go.
Unsurprisingly, New York City has a pretty rich boba scene, especially in areas with a higher Asian population. There are at least five boba shops within a five-minute walk of my apartment; it's wonderful.
Over the past three years I've halfheartedly explored a bunch of new boba shops, writing informal reviews in a note on my phone, but never properly kept track.
So this year, I'm officially kicking off Boba Quest 2025 🧋: trying and reviewing a different boba shop each week, totaling 50 new places over the year. I've published boba reviews before,
but I'm going to do it a bit differently this time.
Boba shops will be graded on a 10-point scale:
- 4 points for the boba, both texture and flavor
- 4 points for the tea's flavor
- 1 bonus point for having something extra, like a privacy-preserving stamp card system or nice ambiance
The tenth and final point is for consistency: if a place earns a 9/10 on the first visit, I'll go back, and if it's fantastic again, I'll bump the score to a perfect 10/10. While it won't be part of the rating system, I'll
also try to call out when places provide accommodations like lactose-free milk.
I prefer milk teas, usually going for a jasmine green milk tea. I'll probably order something like that, but may choose to order the shop's specialty, if it advertises one.
One important point is that I am not explicitly trying to find the best boba in NYC (though I expect I will); I want to have accurate and usable ratings for a large number of places. Crucially, this also means trying bad places, which
I will do.
If you have recommendations for places to try, let me know! I'll prioritize those, then mom & pop shops, and then the various chains. I'll also aim for some geographic diversity, as I've yet to have boba in the Bronx or
Staten Island. The first review will drop tomorrow; stay tuned.
I'll be attending RustConf in Montreal this week, so I wanted to take a moment to describe how we're using Rust in SecureDrop.
SecureDrop has historically been a primarily Python project, accompanied by bash scripts and HTML/CSS/JS for the web interface. Aaron Swartz's first commit to SecureDrop was in June 2011, predating Rust 0.1 by just six months.
Fast-forward a decade, the Rust toolchain was added to SecureDrop builds in June 2021 because it was needed for the Python cryptography package.
Less than a year later we were seriously discussing writing and shipping our own Rust code. This took
the form of two proposals that are now public:
- Support Rust as a first-class language
- Replace pretty_bad_protocol with Sequoia for PGP operations
The first proposal discusses the advantages and disadvantages of Rust versus Python, and then outlines three different scenarios for when it's appropriate to use Rust and how to evaluate it. The second proposal is a concrete plan that attempts to put the first proposal into action.
Both proposals were accepted by the team and SecureDrop-with-a-Rust-bridge-to-Sequoia shipped in November 2023, which I've previously written about.
Earlier this year we shipped our second Rust project, a rewrite of SecureDrop Workstation's proxy component. (We'll write a separate blog post about this...eventually.)
I would describe the people who work on SecureDrop as caring deeply about security and correctness, and as recognizing that Rust is good at both. While there's a reasonable amount of caution about adopting new and potentially unproven technologies, given the churn required if they don't work out, I think everyone would agree that Rust has passed that threshold.
The undecided question is what the system architecture, including implementation language, will be for the next-generation SecureDrop server. I think Rust would be a good choice, but it still needs to be discussed and agreed upon.
At RustConf you should be able to find me wearing a SecureDrop/Freedom of the Press Foundation shirt or something Wikipedia-related. I'm happy to talk about SecureDrop and share our experience; I'm especially interested in learning about training teams on Rust and, well, anything else I can pick up.
I recently released 5.0.0 of my `grr` tool that makes working with Gerrit easier. It is an alternative to `git-review`; I personally think `grr` is more straightforward to use, but I also haven't used `git-review` since 2014 when I first got frustrated enough to create `grr`.
I had let it bitrot, since I wasn't doing as much code review and it seemed like Wikimedia was going to move away from Gerrit, but both of those things have changed now!
There are no user-facing changes; it was just a lot of internal refactoring that required breaking API changes, and then removing that API entirely because it was blocking further refactoring. Rewriting `grr` in Rust was one of my first Rust projects, and the code really showed it. It's now cleaned up, and I also dropped all the dependencies that linked to C code (OpenSSL/libgit2) to make builds easier and more portable.
Fundamentally, `grr` is a small wrapper around Gerrit's native functionality. I don't think it's widely known that submitting a patch to Gerrit is as straightforward as `git push origin HEAD:refs/for/<branch>`. And if you want to set a custom topic, you can append `%topic=<...>`.
So that's exactly what `grr` executes, just much shorter. To submit a patch, just run `grr`. (If you need to push to a non-primary branch, use `grr <branch>`.) And to pull down a patch for review, run `grr fetch <id>`. There are some more commands documented in the README.
Going back to the `%topic=` example, `grr` provides a `--topic <...>` option that turns into that. If you want to immediately set a Code-Review +2 vote when uploading (e.g. to automatically trigger CI to merge), you can provide `--code-review=+2` (turns into `%l=Code-Review+2`).
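To make the mapping concrete, here's a rough sketch of the `git push` invocations these commands boil down to. (Assumptions on my part: the remote is named `origin` and the repository's primary branch is `main`; the exact refspec `grr` constructs may differ slightly.)

```sh
# Approximate git equivalents of grr invocations:
grr                     # git push origin HEAD:refs/for/main
grr --topic mytopic     # git push origin HEAD:refs/for/main%topic=mytopic
grr --code-review=+2    # git push origin HEAD:refs/for/main%l=Code-Review+2
```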
To install, run `cargo install --locked gerrit-grr`. Pre-built binaries and an OCI image are also available.
There's a viral Mastodon post
that demonstrates a supposed deficiency in large language models (LLMs), specifically ChatGPT,
because it apparently can't count the number of letters in the English word
"strawberry".
> How many R's are in the word strawberry
ChatGPT: There are two "R's" in the word "strawberry."
> Are you sure?
ChatGPT: Yes, I'm sure. There are two "R's" in the word "strawberry."
[...]
> Count the Rs in the word straw and the word berry separately.
ChatGPT: Sure. Let's break it down:
* The word "straw" has 1 "R."
* The word "berry" has 2 "R's."
So, together they have 3 "R's." But in the combined word "strawberry," there are still only 2 "R's."
At first glance, that's a big yikes. (I was able to reproduce the initial
result using ChatGPT 4o.)
But I get annoyed by this kind of post for
two main reasons: 1) if you ask slightly differently, ChatGPT can easily get
this question right, and 2) it's a missed opportunity to explain how LLMs work,
specifically with regard to tokenization.
Here's the prompt I used with ChatGPT 4o:
> Write and then run a Python program to determine how many R's are in the word strawberry
ChatGPT: The word "strawberry" contains 3 occurrences of the letter 'R'.
It's not widely known that ChatGPT has a feature to execute Python code
in a sandbox and use the result in its answers.
The Python code it generated was:
```python
# Python program to count the number of 'R's in the word 'strawberry'
word = "strawberry"
count_Rs = word.lower().count('r')  # Convert the word to lowercase and count 'r'
count_Rs
```
The code is pretty simple, and I think we'd all agree that this is
a place where Python code can quite accurately answer this question. Having
the LLM generate the code and execute it nicely sidesteps the underlying issue.
Of course, the problem is you have to prompt it to use Python in the first
place, knowing that this is a subject where an LLM without code execution
would fail. Which brings me to...
When you provide a prompt to an LLM, it first breaks the text down into tokens.
OpenAI provides a visualization tool
for this; you can see that using the GPT-4 tokenizer, the string "strawberry"
is broken down into "str", "aw" and "berry", which get transformed to
numerical values of `[496, 675, 15717]`. In other words, the LLM doesn't
have access to individual characters; it just sees the tokens.
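If you want to poke at this yourself, here's a minimal sketch using OpenAI's open-source tiktoken library (an addition on my part, not something from the original post; assumes `pip install tiktoken`):

```python
import tiktoken

# Load the tokenizer used by GPT-4 (the cl100k_base encoding).
enc = tiktoken.encoding_for_model("gpt-4")

tokens = enc.encode("strawberry")
print(tokens)  # [496, 675, 15717]

# Decode each token individually to see the chunks the model actually sees.
print([enc.decode([t]) for t in tokens])  # ['str', 'aw', 'berry']
```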
In this case, you can try to bypass the tokenizer by using spaces, so each
letter is now its own word and tokenized separately:
> How many r characters are in the letters s t r a w b e r r y
ChatGPT: The word "strawberry" contains 3 instances of the letter "r."
It worked, though I've had mixed success with this approach in the past.
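As a quick check of what's happening, the same tiktoken sketch can encode the spaced-out string (I'm not asserting the exact token IDs here, just that the letters split apart):

```python
import tiktoken

enc = tiktoken.encoding_for_model("gpt-4")

tokens = enc.encode("s t r a w b e r r y")

# Each decoded piece is now (roughly) a single letter, so a model
# operating on these tokens can actually "see" the individual R's.
print([enc.decode([t]) for t in tokens])
```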
There are plenty of problems with LLMs, both technical and ethical, and
I don't want to be (or even appear to be) an apologist for them. But I think
it's important that we all understand how LLMs work in order to actually
tackle and address those problems.