What is the Crisis?

The following is my answer to a question asked of board candidates in the current ASBS election. I'd like to be able to discuss what I posted without making it look like I'm canvassing or something. You can read what other candidates said on Meta-Wiki.

Q. One candidate stated: "WMF has been going through, for many months now, an important crisis with huge stakes." I think many would agree with this overall statement; however, different people would describe the "crisis" differently. A Trustee's perception of what the problems are will play a huge role in how they approach the position. Can you please share your understanding of the events of recent months, and what Board-level problems they expose? Pete F (talk) 17:55, 1 March 2016 (UTC)

There is a significant amount of mistrust and fear within the organization and the movement as a whole. The staff do not trust the board. The community does not trust the board. The community does not trust the staff. The staff are afraid of the community. The board does not even trust itself!

I believe I have a unique perspective on this compared to the other candidates, having been a WMF staffer during the “crisis”, so I'll go into some more detail. I do not believe it is worth rehashing the specific events that have happened recently, so I'll try to speak more generally.

The staff do not trust the board. Starting with the November all-staff meeting, an overwhelming number of staff have lost their trust in the board. When legitimate concerns were presented to the board (many of which are still not, and probably never will be, public), the board did not act, forcing staff to bring the matter up publicly after first trying hard to resolve it internally. From what I understand, part of the problem was that one of the board's main sources of information was the executive director, who was not a good conduit for passing information along. Jimmy and Alice visiting San Francisco to talk with staff is a good start to rebuilding trust; now both sides need to follow through by creating the discussed “ombudsperson” role that reports directly to the board.

The community does not trust the board. The other questions on this page about transparency and minutes lacking detail are simply symptoms of a lack of trust (not that they aren't important, but I'll address them in those questions). Rebuilding trust here will take a while, mostly because the board operates in a different manner than our wikis do. I believe that over the years I have earned the trust of many Wikimedians, whether in my volunteer role or as staff, and hope to bring some of that trust to the board.

The community does not trust the staff. I talked about this in my candidate statement – the community is tired of bad software being thrust upon them, and then fighting to get rid of it. It's not uncommon for me to be reading a village pump or technical page where people will sincerely state that they believe the WMF is actively harming the projects, rather than helping them. I believe that the best way to rebuild trust in this area is to get more people involved in product and technical development. Whether it be individual community members or affiliates participating in developing our software, I think the end result will be significantly better.

The staff are afraid of the community. Some (definitely not all!) community members are openly hostile to staff members, questioning their competency for their jobs, etc. Part of it is due to the lack of trust and past history; other community members are just frustrated and looking for someone to blame, and whatever went wrong technically falls under the WMF's responsibility. The end result is that staff will now draft proposals and plans in private Google Docs instead of public wikis, or discuss things via private email instead of public mailing lists. We claim that our communities are our biggest asset, but the WMF needs to start acting like it means it.

The board does not trust itself. This one seems self-evident based on the removal of Doc James from the board. I do not have much visibility on the current board dynamics, and don't have much to offer in what to work on.

So where does the board come into all of this? The WMF is currently in a partial state of dysfunction. It is the board's responsibility to make sure the WMF is able to function properly. The board needs to be proactive in rebuilding bridges and rebuilding trust.

All that said, a phrase that's come up recently that I strongly agree with is “change happens at the speed of trust”. There is a huge lack of trust right now, which means that change will happen slowly. That's okay. Trying to push change through too quickly will only cause more mistrust, which we desperately need to avoid.



The future of Echo

Echo (aka "Notifications") is the MediaWiki extension that provides a notifications framework for other features to use, as well as some "core" notification types. It's had a tumultuous history ever since it was deployed to Wikimedia wikis in 2013. To figure out where Echo should go, I think we have to look at its history first.

The initial deployment was pretty rough. The OBOD (Orange Bar of Death) was gone, but there was no replacement. Standard APIs like hasmsg=1 didn't work either. The development team (WMF's "E2" team) iterated on those, improved the API integration and added a newer and less obtrusive new messages indicator (while also designing my personal favorite, nerdalert.js). Then the time came for the deployment to be expanded to all Wikimedia wikis, at which point nearly the entire development team switched over to a different project (Flow), leaving one engineer to supervise the rollout across all projects. Umm, what???

So we ended up with bugs like "Mention notifications don't work if the sender's signature contains localized namespaces" taking nearly 5 full months to fix. That sucked.

Anyways, Echo was mostly dormant until mid-2014, when the "Core features" (previously E2) team made changes to Echo to support having Flow notifications go in a separate pane in the flyout. Except Flow was barely used at this point, so no one noticed outside of mediawiki.org really. (I was technically on the core features team at this point, but mostly doing Flow API things IIRC). But after a month or two most of the development moved back to Flow.

And then finally, after the great engineering re-org of 2015, the Collaboration team (formerly Core features, but you already guessed that ;-)) also started looking at Echo seriously, and started to fix some of the issues that had piled up over the years, including splitting the alerts and messages flyouts properly, giving user talk messages more prominence, and eventually embarking upon cross-wiki notifications.

Despite barely getting any attention from developers (until now really), Echo remains the most popular and really only successful product to come out of the E2/Core features/Collaboration team. Why?

The most useful feature of Echo is definitely the mention notifications, which allow you to "ping" other users. So instead of people having to watchlist giant pages and look through history to see if anyone responded to the one thread they want to follow, they can wait for the notification that someone pinged them while responding. The once widely used {{talkback}} template is now deprecated in favor of these notifications. And for most users, this functionality is good enough. Watchlists really haven't seen any major changes in the past few years (again, until the past month, but that's another story), so something else that can do the job was welcomed by users.

So now we're in the state where we have two overlapping, but not exactly the same features: Notifications and Watchlists. Gryllida has written up an RfC titled "Need to merge Notifications and Watchlist or lack thereof", discussing some of the similarities and differences. Over the next few months I'd like to flesh out the RfC a bit more and work on a solid proposal.

I also wrote an RfC yesterday titled "Notifications in core", which discusses merging parts of the Echo extension into MediaWiki core. I think this is crucial for improving the usability of notifications from both a user and developer perspective, as well as improving the architecture by requiring fewer hacks. And it can probably be done before the reconciliation of notifications and watchlists.

So, that's where I think Echo should go in the next year or two. I probably won't have time to actually work on this, so we'll see!


Why I am still here.

This is a follow-up to my probably-depressing blog post on Friday, entitled "Why am I still here?" I think I figured out the answer: it's a lot of different things!

So now that you're here, let's talk about something happy: cross-wiki notifications!

In 2013, notifications were introduced, collecting events that users wanted to know about into a flyout. The initial response was that it looked like Facebook (it really did), along with anger (well, I'm not sure what the right emotion was) that the "Orange Bar of Death" was gone. But the longer-term impact has been a significant change to how on-wiki discussions work. One of the new features was that users can get "pinged" when someone links their username in a discussion comment. These days usage of templates like {{talkback}} is extremely rare, having been entirely replaced by pinging.

So, cross-wiki notifications. This has been a long-requested feature, and I had actually written an RfC about implementing it long before I was on the team tasked with implementing it (well, technically I was on the E2 team at that time, but history is confusing). Of course, I would be remiss not to mention that most editors already have a cross-wiki notification system through their email inbox. Regardless, some people are so excited they didn't even read the announcement. ;-)

However, there were some limitations in the architecture that made expanding it for cross-wiki usage difficult. The most significant was that the formatting system was designed for specific types of notifications, hard to extend, and most importantly, difficult for developers to understand. We probably could have hacked our way around it, but the extensibility was going to be a serious problem. And if we were going to touch some code, we should leave it better than we found it. :)

So, what to replace it with? I came up with a presentation model system that every notification type would implement. This made the code much easier to follow than the old data-driven style of formatters. We also built an API (sorry, logged-in users only) around it, so the frontend code can handle all of the display, and not have to parse out HTML to figure out the notification icon (yes, it used to do that :().
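If you're curious what that looks like, here's a toy Python sketch of the general idea; the real code is PHP inside the Echo extension, and the class and method names here are purely illustrative:

    # Each notification type answers simple questions like "what icon?" and
    # "what header text?", instead of the old data-driven formatter config.
    class NotificationPresentationModel:
        def __init__(self, event):
            self.event = event

        def get_icon_type(self):
            raise NotImplementedError

        def get_header_message(self):
            raise NotImplementedError

    class MentionPresentationModel(NotificationPresentationModel):
        def get_icon_type(self):
            return "mention"

        def get_header_message(self):
            return "%s mentioned you on %s" % (
                self.event["agent"], self.event["title"])

The point is that a developer adding a new notification type only has to fill in those methods, and the surrounding formatting code (and the API) takes care of the rest.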

So now...going cross-wiki. We effectively had two approaches we could take. The first was to create a central database table that all notifications went into, and then have each wiki read from it. This presented some logistical problems, like checking revdel status, page existence, and local message overrides. The other approach was to make API requests to other wikis, and then merge the output into the flyout. We ended up going with the second one (cross-wiki API requests) after seeing crosswatch in action, which used the same strategy for its implementation of cross-wiki notifications.

We created a database tracking table which contains the number of unread alerts and messages you have on each wiki, so we know which wikis to query notifications from. We also realized we could make these requests client-side (using CORS) for now, and move them server-side later.
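To make that concrete, here's a rough Python sketch of the merging approach. The wiki list, response field names, and timestamp layout are assumptions for illustration, and the real thing needs an authenticated session, since the notifications API is logged-in only:

    import requests

    # Hypothetical set of wikis with unread notifications, as the tracking
    # table described above would tell us.
    WIKIS = {
        "enwiki": "https://en.wikipedia.org/w/api.php",
        "testwiki": "https://test.wikipedia.org/w/api.php",
    }

    def fetch_notifications(session, api_url):
        # meta=notifications is the real API module; the exact response
        # structure is from memory, so treat the field names as a sketch.
        resp = session.get(api_url, params={
            "action": "query",
            "meta": "notifications",
            "format": "json",
        })
        return resp.json().get("query", {}).get("notifications", {}).get("list", [])

    def merged_flyout(session):
        items = []
        for wiki, api_url in WIKIS.items():
            for notif in fetch_notifications(session, api_url):
                notif["wiki"] = wiki
                items.append(notif)
        # Newest first, like the flyout; assumes each item carries a unix
        # timestamp under notif["timestamp"]["utcunix"].
        return sorted(items, key=lambda n: n.get("timestamp", {}).get("utcunix", 0),
                      reverse=True)

In production we do this from the browser (hence CORS), but the shape of the problem is the same: fan out one request per wiki with unread items, then sort everything into a single list.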

And that's pretty much it. There were also some significant frontend changes, like converting it all to use OOjs UI, but I didn't really work on those, and can't speak much about them.

So, go to Special:BetaFeatures on your favorite wiki, check "enhanced notifications", and let us know what you think! And if it's not on your wiki yet, nag someone in #wikimedia-collaboration on freenode!

Update: The wikis where it is available are listed on MediaWiki.org, and it should hit all other wikis by the end of March.


Why am I still here?

Honestly, I'm not sure. I'm not working at the WMF for the money. And it's not because my job is fun - it stopped being fun a few months ago. I'm mostly still here because I don't really have anything else to do right now. I've been doing this for nearly 3 years now...and even though I have other plans lined up in a few months, I'm kind of a lame duck right now.

I'm embarrassed at everything that's gone on. I don't want to tell my friends that the WMF is all screwed up and not going to get better. I don't want to have to explain how after I praised how all of my work is public and transparent, that our organization no longer is.

The whole Arnnon situation never really sank in for me. It was mostly surreal, with me thinking that there was no way they'd actually let him stay on the board. All the right people spoke up, signed the petition, and then he was removed. The process worked! Not at all. The excuse of not knowing about it because they didn't use "google.com" is so ludicrous that I don't even know what to say about it.

It greatly frustrates and upsets me that my friends are in danger of having to leave the country if they lose their jobs, and can't speak up because of that fear.

Honestly, I don't understand why the current leadership hasn't left yet. Why would you want to work at a place where 93% of your employees don't believe you're doing a good job, and others have called you a liar (with proof to back it up) to your face, in front of the entire staff? I don't know everything that's going on right now, but we're sick and desperately need to move on.

I love, and will always love Wikimedia, but I can't say the same about the current state of the Wikimedia Foundation. I've been around for nearly nine years now (nearly half my life), and it feels like that world is slowly crumbling away and I'm powerless to stop it.

And that's all my incoherent rambling for tonight.

And now for something less depressing and totally awesome: I mostly finished the first version of the WikimediaPageViewInfo extension; you can see a demo at http://bf-wmpageview.wmflabs.org/wiki/Taylor_Swift?action=info. It's currently going through a security review, and Addshore and I are going to work on getting it deployed!


Future plans

I've decided to go back to school starting in April for the spring quarter. I'm tentatively going to be studying journalism, because that seems interesting to me right now. We'll see how long that lasts. ;-)

At the same time I'm also going to be switching to a part-time contract with the Wikimedia Foundation, transferring to the Parsing team, and working on the implementation of shadow namespaces.

So...yay change! But that's enough change for now.


linux.conf.au - Day 2

Day 2! I hopped around quite a bit, so I'll just talk about the talks I particularly enjoyed. First was a talk about site reliability at Pinterest. The main things I found interesting were how they have also created their own deployment tool, and how they've made it easy for the deployer to roll back their code ranges. After that was a fun talk by Dan about his experiences with FOIA requests in New Zealand.

After lunch was another site reliability talk, this one by Dropbox, which discussed a lot about how they made their paging much smarter, so people only got paged if they actually needed to be paged.

I finished up the afternoon with two functional programming talks, the first talking about how Swift can be used to do functional programming, and the second was about how Facebook uses Haskell in production, and dispelling some of the common myths that surround Haskell. I would definitely recommend watching the recording of the latter.

There was a keysigning party in the evening, as well as the professional delegates networking session, which was a great place to meet new people.

Onto day 3!


linux.conf.au - Day 1

Today was day 1 of the linux.conf.au conference in Geelong. The first two days are organized as "MiniConfs", where rooms have a general topic and presentations are scheduled by the MiniConf organizer instead of the LCA programme committee. In the morning I primarily attended the documentation/technical writing MiniConf. I was slightly surprised by how many of the presenters were from Red Hat, but I'm not surprised that they write a lot of documentation. Of the morning talks, I liked the "On working from home" one, and some of the tips that different remotees shared. I particularly enjoyed the one where someone set a 3-hour expiry on their ssh keyring so they would be reminded to eat lunch while trying to log into a server.

In the afternoon I attended "Assorted Security Topics in Open Cloud: Overview of Advanced Threats, 2015’s Significant Vulnerabilities and Lessons, and Advancements in OpenStack Trusted Computing and Hadoop Encryption", and I have to say that I didn't really follow most of the talk; I don't think I was the right audience for it. After that I went back to the documentation MiniConf and listened to the amusing "My beautiful jacket" talk.

After afternoon tea (a totally awesome concept by the way), I went over to the bio MiniConf for "Building and deploying the Genomics Virtual Laboratory on the cloud(s)", not really sure what to expect. It turned out to be a pretty good technical talk, and some of the technology they had built looked really interesting, including their cloudbridge library. And we finished with one of the best-named talks, "Sequencing your poo with a USB stick". As someone who doesn't really enjoy biology or understand most of it aside from what I learned in high school, I thoroughly enjoyed this presentation. The presenter explained most of the concepts, and was an extremely engaging speaker.

The Ingress BoF was in the evening, so it was nice getting to meet some other players and have dinner with them, even if there wasn't much Ingressing. ;)

Ready for day 2!


Cross-wiki notifications on test wikis

As was teased in this week's tech news, cross-wiki notifications are now available as a BetaFeature on testwiki and test2wiki. Simply go to your preferences, enable "Enhanced notifications", trigger a notification for yourself on test2.wikipedia.org (e.g. log out and edit your talk page), and open up your notifications flyout!

The next steps are going to be populating the backend on all wikis, and then enabling it as a BetaFeature on more wikis (T124234).

Please try it out! If you find any bugs, please file a task in the newly-renamed Notifications Phabricator project.



2016 Wikimedia Developer Summit recap

A lot happened at the Wikimedia Developer Summit over the past week. I had a fantastic time and enjoyed getting to meet up with everyone again. Here's a quick recap:

  • Learned about the status of the Differential migration; I feel more reassured about the workflow now, just not sure when it's going to happen.
  • Attended a very productive meeting with the MediaWiki Stakeholders' Group; Mark (hexmode) has written up a good summary of the meeting. I'm optimistic about the future.
  • Held an impromptu session about shadow namespaces, which left me with lots of questions to answer. I haven't had a chance to summarize the notes yet, will do so later this week.
  • Had an exciting main room discussion about supporting non-Wikimedia installs of MediaWiki and our other software, which continued out into the hallways. I think we need to continue with more research and talking with hosting providers about how MediaWiki is actually used. For a while now I've been concerned with whether we're able to get our users to actually upgrade. WikiApiary says that 1.16.x is nearly as widely used as 1.25.x (the current legacy release).
  • Had an early morning session about beta rollouts, usage of BetaFeatures, and communication channels.
  • Attended the "software engineering" session about dependency injection and then SOA. I mainly just listened in this one.
  • Went to a session by community liaisons about interacting with communities and stuff. Also mainly just listened.
  • Finally, had a really productive session led by bawolff about code review, and how we can improve the situation.

I hacked on quite a few different projects, more on that later :)

And, new laptop stickers ^.^


Packaging MediaWiki for Debian

Over the past few months I've been working on an updated Debian package for MediaWiki. After working with Luke, Faidon, Moritz, and a few others, we now have a mostly working package! It's being developed in Gerrit (the mediawiki/debian repository), so it hopefully will get more visibility, and we can keep it in sync with current development.

If you're interested in testing, I've uploaded it to people.wikimedia.org for now.


2016 Wikimedia Developer Summit

I'll be at the 2016 Wikimedia Developer Summit (known in previous years as the Architecture Summit and the MediaWiki Developer Summit) over the next week. I mainly want to talk to/nag people about:

  • Configuration
  • Extension management
  • Librarization
  • Composer (okay, not really that much)
  • Wiki farms!
  • "Global" and cross-wiki tools/tooling
  • Why JSON is better than YAML (really)

I've also proposed a session about shadow namespaces, which I'll probably discuss informally with people or in an unconference slot.

And in case you missed it, I got a new laptop sticker ^.^


Life happens

It's been a while since I last blogged, so here's a quick summary of what's been going on (totally out of order):

Also, new laptop stickers ^.^


Upgrading to Fedora 23

I upgraded to Fedora 23 last night, and it went pretty smoothly.

I started with sudo dnf system-upgrade download --releasever=23, which began with a fun download of 2.5G of package upgrades (still better than Xcode!). I ran into the documented upgrade issue with the vagrant package, but the workaround of --allowerasing fixed it.

All the copr and rpmfusion packages I was using were ready for Fedora 23, and nearly all worked right away (more on that in a bit).

Then came the scary sudo dnf system-upgrade reboot. I entered my encryption password, and it started spewing out the list of packages it was going to upgrade. Aaand then, about a thousand packages in, it went screwy. After freaking out for a few seconds and tweeting about it (priorities!), I hit esc to go into the GUI mode, and esc again for text mode, and everything was fine. Yay!

Everything else went fine, and it booted up into Fedora 23 properly. The first thing that went wrong was that it did not automatically connect to my wifi network, and couldn't see any networks at all. Turning the wifi off and on fixed it. I had seen this issue once on Fedora 22, so I don't think it's a new regression. Since VirtualBox broke last time, I tested it next...and it was broken. The kernel module wasn't built. I noticed that the kernel-debug and kernel-debug-devel packages had not been upgraded, so I ran sudo dnf update --best --allowerasing to force their upgrade. After a reboot and sudo akmods, VirtualBox started up! (Related, there's been some progress on getting MediaWiki-Vagrant to work with libvirt so I can drop VirtualBox.)

After that, I tested out Chromium and PyCharm since they are from a copr repository, and both worked fine. The only major thing left I haven't tested is the Google Hangouts plugin, but I try and avoid using it so that might take a while.

Overall, this upgrade went pretty nicely, and felt relatively non-disruptive. So far the most jarring difference is the new wallpaper. :-)


Let's Encrypt first impressions

Today I spent two hours setting up an SSL certificate with Let's Encrypt for the wikiconferenceusa.org website.

It. Was. Easy. It was relatively straightforward, and I felt comfortable with all the steps I went through.

First, I cloned the git repository, and ran the letsencrypt-auto script, which installed the necessary dependencies and started setting up our account and fetching the SSL certificates. At this point it complained that we had a service running on port 80 (varnish) and that we had to stop it temporarily for the process to continue. That wasn't really ideal as it would have caused downtime. After asking in #letsencrypt on freenode, I was pointed to the --webroot-path option, which worked, and required no downtime!

At that point, the certificates were saved in /etc/letsencrypt/ and ready for usage. Since we already had a different certificate for wikimediadc.org, we had to set up SNI, which also was pretty straightforward. Except I made a typo and spent 30-45 minutes randomly debugging until I noticed it, and then everything worked!

In conclusion, it was really easy. I've signed up legoktm.com for their beta; hopefully it is approved soon, so you'll be able to read this over HTTPS :-)


Introducing mwmon

Occasionally some of the MediaWiki wikis I help maintain would go down, usually due to heavy traffic or a DoS of some kind. Sometimes Apache would be overloaded, or MySQL would be getting hammered (I'm looking at you, DPL).

When this was happening to the WikiConference USA wiki around conference time, I wrote a quick Python script that would text me whenever it went down.

I've now generalized that script into a tool named mwmon, which is more easily configurable, supports an arbitrary number of wikis, and now features a basic web frontend.

For each wiki, it checks that the home page, Special:BlankPage, and the API are responding. Ideally the home page check will test the cache, BlankPage will hit MediaWiki directly, and the API is used to get the version that is installed.

Notifications are delivered over email, which I have configured to use AT&T's email to text gateway (@txt.att.net), so it'll go to my phone.
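Here's a rough Python sketch of the kind of checks and alerting mwmon does; the URLs, paths, and phone number are placeholders rather than the actual configuration:

    import requests
    import smtplib
    from email.mime.text import MIMEText

    WIKI = "https://wikiconferenceusa.org"   # placeholder base URL
    ALERT_TO = "5551234567@txt.att.net"      # AT&T email-to-text gateway

    def is_up(url, timeout=10):
        try:
            return requests.get(url, timeout=timeout).status_code == 200
        except requests.RequestException:
            return False

    def run_checks():
        failures = []
        # The home page check should be served by the cache, Special:BlankPage
        # hits MediaWiki directly, and the API also reports the installed version.
        if not is_up(WIKI + "/"):
            failures.append("home page")
        if not is_up(WIKI + "/wiki/Special:BlankPage"):
            failures.append("Special:BlankPage")
        if not is_up(WIKI + "/w/api.php?action=query&meta=siteinfo&format=json"):
            failures.append("API")
        return failures

    def alert(failures):
        msg = MIMEText("Down: " + ", ".join(failures))
        msg["Subject"] = "mwmon alert"
        msg["From"] = "mwmon@example.org"
        msg["To"] = ALERT_TO
        with smtplib.SMTP("localhost") as smtp:
            smtp.send_message(msg)

    if __name__ == "__main__":
        down = run_checks()
        if down:
            alert(down)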


Attention K-Mart Shoppers

This is not my typical music post. Yesterday I had the pleasure of visiting the Internet Archive (more on that in a future post), and they mentioned a collection called "Attention K-Mart Shoppers", which is just tapes they played over and over at K-Mart. And for some reason, people enjoy listening to it!

I spent my morning today listening to it. It was really soothing and relaxing. The ads for random TV shows and store announcements were amusing, but didn't really interrupt what I was doing.

Definitely worth listening to.


Self-hosted git

As part of using only free software, I also started thinking about the various non-free services I am dependent upon. One of those I had already started working on replacing was Github. Github is currently the canonical source location for a lot of my various projects, including some that aren't even on my laptops.

So, alternatives. First I considered a hosted service that runs free software, but those don't really exist any more. Gitorious has shut down, and it turns out that GitLab has gone open core.

Alright then, self-hosted git it is. I tried out and evaluated two projects: cgit and gogs.

cgit is a web viewer for git repositories written in C. I like the UI, having used it while browsing Fedora and Linux kernel repositories. The basic setup was pretty simple: I downloaded and unzipped it, set up some Apache CGI rules, and bam, it was running. I imported 2 git repositories, and they showed up right away. Then I started trying to enable some other features like syntax highlighting, and that's where it stopped being easy to work with. I tried both Pygments and a Perl highlighter; neither worked. Around this point I got bored and gave up.

gogs is a full-blown clone of Github's features written in Go. The UI is extremely similar to Github, so it was very easy to figure out. Setup was a little tricky: I had to create a "git" user for it to run as, and then fiddle with setting up an Apache proxy rule so /git proxies to localhost:3000 (I originally started out in a sub-path instead of a full sub-domain). After that, I was able to import a few Github repos directly, and clone them. Yay! It also has a mirror feature that can synchronize with an external repository every hour. I found a gogs-migrate tool that claimed to set up mirrors of all your Github repos in a gogs installation, but I couldn't get it to work. I ended up writing my own Python version called gogs-mirror. And for bonus points, I submitted an upstream pull request to improve an interface message.
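For the curious, here's roughly what gogs-mirror boils down to; the endpoints and field names are from memory, and the token/UID are placeholders, so treat it as a sketch rather than the actual tool:

    import requests

    GITHUB_USER = "legoktm"
    GOGS_API = "https://git.legoktm.com/api/v1"
    GOGS_TOKEN = "..."   # a Gogs application token (placeholder)
    GOGS_UID = 1         # numeric id of the Gogs user that owns the mirrors

    def github_repos(user):
        # Unauthenticated and unpaginated for brevity; the real tool needs to
        # handle pagination and skip forks.
        resp = requests.get("https://api.github.com/users/%s/repos" % user)
        resp.raise_for_status()
        return [r for r in resp.json() if not r["fork"]]

    def mirror(repo):
        # Gogs' migrate API creates a new repository; with mirror enabled it
        # re-syncs from the clone URL periodically.
        resp = requests.post(GOGS_API + "/repos/migrate",
                             params={"token": GOGS_TOKEN},
                             json={
                                 "clone_addr": repo["clone_url"],
                                 "uid": GOGS_UID,
                                 "repo_name": repo["name"],
                                 "mirror": True,
                             })
        resp.raise_for_status()

    for repo in github_repos(GITHUB_USER):
        mirror(repo)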

Currently I have gogs running at git.legoktm.com. All of my non-forked Github repos are mirrored there, and it also is the canonical source of gogs-mirror. The next step will be to switch the mirroring, so that the canonical source lives on git.legoktm.com, and Github is a mirror. I'll also want to update links to those repositories on places like PyPI, various wikis, etc. More to come!


I joined the FSF today

Wow, two blog posts in one day. Okay, I was close and just didn't finish in time.

As the title alludes, I became an FSF member today... er, yesterday. Aaand then I proceeded to buy their Super Sticker Mega Multi Pack. ^.^

I thought this would also be a good time to audit my usage of non-free software...at least what's installed on my computer:

  • Dropbox - mainly used to automatically back up photos from my phone to 3 different computers. I can probably replace it with Syncthing.
  • Google Hangouts - I had to install some plugin rpm for it to work. Since I need it for work, I don't really have an option here.
  • PHPStorm - I kept hoping that JetBrains would release an open source version like they did for PyCharm...but they haven't. Our MediaWiki open source license was also never renewed so I've been using the free EAPs, which I'm definitely not comfortable with. I tried out GNOME builder for a Go project, and didn't really like it. I might try out Atom at some point.

I also have my old MacBook, which at this point is mostly used for playing music or watching videos (all using free software!). I'm getting tired of Apple's updates and all the "OMG log into iCloud" nagging, so I'll probably replace it with some flavor of GNU/Linux at some point (see what I did there! ;)).



CalGames 2015: An adventure into go, networking, and robots

This past weekend I volunteered at the 2015 CalGames tournament, which played the 2015 FIRST Robotics Competition game, Recycle Rush. This year CalGames had a few new things going on, and one of them was using a new field management system (FMS), called Cheesy Arena. Cheesy Arena was written by Team 254 for their own offseason event and is branded as a "field management system that just works".

It's also written in go. I didn't know go. I still don't know go that well. Lee and I were in charge of making two major modifications to the system:

  • Use the 2015 round robin elimination system instead of the traditional bracket
  • Create an audience display to display the eliminations rankings

In retrospect, we should have lobbied during the rule change process to use the bracket system, since the code was already written for that. Neither of us was involved in the conversation that early on, though, so we never had that opportunity. And while it would have definitely made my life easier, I'm very fortunate it didn't happen, because I got the opportunity to play with and learn some go.

So, the actual changes. Integrating into Cheesy Arena was actually pretty simple: the function to generate the next rounds in the bracket was called after every match was saved, so we replaced that function with our own that generated the schedule. To figure out what matches needed to be generated, we counted how many elimination matches were already scheduled, and how many had been played. If 0 had been scheduled, we generated the quarterfinals schedule. If 8 had been scheduled and all 8 played, we sorted all the alliances by their cumulative score over the two matches, and advanced the top four. Our code was designed to be as non-invasive as possible, so we had to reverse-engineer which alliance each team was on, since the mapping was only stored in the other direction (which teams are on each alliance). We also did not implement any of the tie-breaking rules, and didn't really know what would happen if we encountered a tie. Lee and I quickly wrote some code to handle that while at the event Friday afternoon, but we never tested or deployed it. Luckily we never needed it :-).
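For illustration, here's a Python sketch of that scheduling logic; the real code was Go inside Cheesy Arena, and the pairing helper here is made up, since the actual 2015 rules dictated the match order:

    def pair_up(alliances):
        # Illustrative pairings only; the real rules defined the exact order.
        half = len(alliances) // 2
        return list(zip(alliances[:half], reversed(alliances[half:])))

    def next_elim_matches(scheduled, played, alliances, scores):
        """Called after every match is saved.

        scheduled/played: counts of elimination matches so far
        scores: alliance -> cumulative score over its two quarterfinal matches
        """
        if scheduled == 0:
            # Nothing scheduled yet: generate the quarterfinal schedule.
            return pair_up(alliances)
        if scheduled == 8 and played == 8:
            # Quarterfinals done: advance the top four by cumulative score.
            top_four = sorted(alliances, key=lambda a: scores[a], reverse=True)[:4]
            return pair_up(top_four)
        return []  # still waiting on results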

With the backend of match schedule generation taken care of, we still needed a frontend to display the rankings. The audience displays were implemented with an HTML template, CSS, JavaScript animations and transitions, and websockets to communicate with the backend. Since my JavaScript is much stronger than my go, I decided to have the backend export a list of all the teams, their scores, and how many matches they had played over websockets, and do the sorting and table generation in JavaScript. I mostly grabbed code from the scoring display, and fiddled with the CSS so it looked how I wanted it to, and was rather pleased with the result (screenshot). This was finished at about 11pm Thursday evening.

During setup of the event Friday morning, I got to learn about how the networking was configured. Cheesy Arena was running on a small Ubuntu Trusty machine, and was connected to a switch that all the driver stations were plugged into. There was another switch that provided internet access for pushing data to The Blue Alliance. All the laptops used for match control and scoring were connected to Cheesy Arena over ethernet cables; it was only accessible over WiFi for the FTA tablets that showed which robots had connected to the FMS. All of the critical networking equipment and the Cheesy Arena server were on a dedicated UPS, in case someone unplugged the field while matches were running (it has happened in the past).

And...the event was mostly uneventful! All of the problems with robots not connecting properly turned out to be some kind of issue with the robot or an improperly configured driver station. Mostly it was people FORGETTING TO TURN ON THEIR ROBOTS. Really, how hard is it??

When it came time for eliminations, the backend code worked flawlessly, and scheduled all the matches properly. We did run into a really silly bug with the rankings display, though. When writing it, I was under the assumption that we would be using reasonably recent versions of Chromium, so I used the newly introduced String.prototype.startsWith function, because it made the code easier to read since it is the same as Python. Unfortunately the program that was managing the overlays and video stream was using an embedded Chromium 37, which didn't contain this function, so it would throw JavaScript errors. I quickly switched to checking String.prototype.indexOf(...) === 0 instead. Once that was fixed, everything worked perfectly.

In the end, we only had one field fault in the very last finals match :(, which was due to an overload on the network, resulting in a "christmas tree" effect with robots flickering from green to red and back as they kept losing connections.

Overall, I had a lot of fun learning go. My experience was that it is very, very hard to write crashy code in go. Whether it was trivial typos or glaring type errors, go build would refuse to complete if I had an issue in my code. I was only able to get go to crash once in all of my development, and that was due to a database-level error that I suppressed. I'm not a huge fan of the go error handling model, but it very heavily enforces the "explicit is better than implicit" philosophy I love from Python.

I used "we" a lot above, none of this would have been possible without help from Alex M., Clayton, Mr. Mitchell, Lee, Novia, wctaiwan, Yulli, and all the other volunteers. I hope to be back next year!

Also, new laptop sticker ^.^


PHPCS now voting on MediaWiki core

I announced earlier today that PHP CodeSniffer is now a voting job against all submitted MediaWiki core patches. This is the result of a lot of hard work by a large number of people.

Work on PHPCS compliance usually comes in bursts, most recently I was motivated after a closing PHP tag (?>) made it into our codebase, which easily could have been a huge issue.

PHPCS detects most code style issues as per our coding conventions using an enhanced PHP parser, "sniffs" written by upstream, and our own set of custom sniffs. The goal is to provide faster feedback to users about routine errors, instead of requiring a human to point them out.

There's still work to be done, though; we made it voting by disabling some sniffs that were failing. Some of those are going to take a lot of work to enable (like maximum line length), but it's a huge accomplishment to get a large portion of it voting.

We also released MediaWiki-CodeSniffer 0.4.0 today, which is the versioned ruleset and custom sniffs. It has experimental support for automatically fixing errors that PHPCS spots, which will make it even easier to fix style issues.


GSoC 2015

This year for Google Summer of Code, I co-mentored a project with YuviPanda to build a cross-wiki watchlist that uses OAuth and runs on Tool Labs. Sitic did a fantastic job on the project, and the final result, crosswatch, is amazing.

I use it quite frequently, and aside from the cross-wiki integration, I think the killer feature is the inline diffs that you can see by clicking on an entry. I used to use Popups to kind of do that, but being able to see the diff just as MediaWiki would have shown it is really nice.

Aside from it being immensely useful, I think crosswatch is going to set the stage for future improvements to the watchlist and related features. I'm already independently implementing two of its many features in MediaWiki: ORES integration and cross-wiki notifications. I don't think it will be long until other feature requests from crosswatch are filed and implemented.

Also, new laptop sticker ^.^




Switching to Fedora

I recently got a new ThinkPad T440S through work, and decided to install Fedora on it. This was a few weeks ago, so I started out with Fedora 20 (now running F21).

Overall, it's nearly perfect. There are a few issues so far, but all of those seem to be hardware related. Mainly:

  • wifi randomly drops causing the entire OS to freeze
  • touchpad gets annoying while typing (I ended up disabling it)

Other than that, I absolutely love it. I'm always amazed at how fast the boot is compared to my MacBook, the SSD was totally worth it.



Quassel

I've recently switched from using ZNC+Textual over to Quassel. It has the same idea as a bouncer, with a "core" that runs on your server, and a "client" that you run locally. I've been using ZNC for over 2 years now, so this was a pretty big switch for me. I did it for a few reasons:

  • Textual didn't really handle crappy networks well, constantly disconnecting
  • Textual 5 was a paid upgrade, something I wasn't too enthusiastic about after already paying for it
  • Textual was having issues loading past scrollback, requiring force quits and manual cache clearing
  • ZNC would fall over and die when freenode netsplit.

In addition, I'm getting a second laptop soon and want to be able to use both for IRC and keep everything in sync. Quassel seems to make that feasible, as the core manages scrollback rather than individual clients being responsible.

Overall, I'm pretty happy with Quassel so far. It handles terrible network connections pretty well, and makes it extremely convenient to go back days in scrollback. The UI, on the other hand, leaves a lot to be desired...Textual's was far superior. My main issue so far has been that until a message you typed has been sent, it doesn't show up in the scrollback. There are also some issues with the channel selector picking the wrong channel.

I'm slightly concerned about how much space Quassel will use for logs. It's already using 33MB after only a week of usage, and I tend to be in some highly active channels.

For now I've left ZNC running as a backup, but I'll shut it down in another week. I still have an irssi session running in a screen on a server I'm logged into with mosh, which is handy for being on both sides of freenode netsplits :P


Pelican theming

Pelican theming isn't actually that difficult. It's based on jinja2 templates, and there were enough themes already out there to pick one and just start forking it.

I'm not too happy that they want themes installed into your system python path (or a virtualenv in my case), but since you can symlink them, it's not too bad.

I found one I liked called pelican-sober, which I ended up forking to customize some parts of it.
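For reference, the theme switch is just a line or two in pelicanconf.py (the path here is a placeholder for wherever the forked theme is checked out):

    # pelicanconf.py
    THEME = "/home/legoktm/themes/pelican-sober"  # or a symlink into the virtualenv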



First

Hello, this is my first blog post. Yippee!