Tom Almeida

Tom Almeida's website

Esports will probably never "go mainstream"

This is a topic that has sat on my heart for a very long time.

When I was in my first year of university (2016), Activision Blizzard released the game Overwatch. As someone who plays a lot of games, mostly CounterStrike, and who enjoyed playing StarCraft II in high school, I immediately hopped on board and played a lot of Overwatch. When the Overwatch League was announced at BlizzCon in November 2016, I was very hyped. Finally, esports would have a league that could go mainstream. Teams would be based out of cities, giving local fans something to rally behind. There was a plan to create second- and third-tier leagues to make sure there was a space for new talent to develop and be discovered, and finally there would be a way to ensure that all the players had enough money to go full time. And it seemed like a real possibility.

Both traditional sport and esport organisations clearly bought into the hype as well, with the Kraft Group and Stan Kroenke, as well as Cloud9 and Envy, purchasing slots in the league for $20 million USD. Top broadcast talent moved into Overwatch too, as MonteCristo and DoA (previously best known for League of Legends commentary) and Semmler (from CounterStrike) jumped over to the Overwatch League. The start of the first season in 2018 seemed to be a success as well, with enough viewers to rival CounterStrike's Majors and plenty of exciting games to watch. But everything seemed to come crashing down over the next few years.

The commissioner of the league, Nate Nanzer, left Blizzard early into season 2, leaving the league without its (arguably) most public-facing figure. A large number of other Overwatch League staff followed, and a meta (short for "meta-game") known as GOATS proved so dull that player and viewer numbers dwindled. There have been a number of reports that teams are all "operating in the red", and much of the broadcast talent that went to Overwatch left at the start of season 3.

So what went so wrong?


I went back to playing CounterStrike in mid-2018, and have barely looked back since. There was a brief period where I played both Overwatch and CounterStrike, but after the GOATS meta came into full force I lost almost all interest in playing. I have, on occasion, tried to watch the Overwatch League since then, but have found it increasingly difficult as new characters and mechanics were introduced. Sigma, Ashe, Baptiste and Echo were all introduced as characters in the short months after I stopped playing, and each time I was bewildered when I saw them. How did their abilities work, what made them unique, how did they interplay with other characters? The components of the game that I could only properly understand by playing were slowly falling away with each patch, and every time I tried to watch I'd find it difficult to tell what was going on.

I've found that I have much the same problem with League of Legends and DOTA, neither of which I play. They both have a massive roster of characters (especially relative to Overwatch), all with at least four unique abilities and interplay with other characters that cannot really be understood without personal experience of playing. In this sense, I can't be amazed by some element of skill that I can't even grasp. Was this particular combination of abilities really hard to pull off in tandem? I have no clue; I don't play this game. Was the interplay between these two characters such that the team that won really should have lost on paper? I have no clue; I don't play this game. Was this particular tactic a mastermind choice that completely ruined any chance the other team had to win? I have no clue; I don't play this game.

Traditional sports don't really have this problem - in comparison, it's easy to tell if something is difficult. Watching an AFL player jump over four other extremely muscular players to snatch a ball out of the air is impressive purely because I am human and I know that I'd have no chance. Watching soccer players curl shots into the top corner of a goal is impressive purely because it looks amazing and is a clearly difficult task. Impressive saves by goalkeepers boggle the mind with their quick reactions and fast movement, and I would (I think) be able to tell that even if I had never played a game of soccer in my life. I wouldn't be able to discuss tactics much without either playing myself or watching an inordinate number of games (much like esports), but the barrier to enjoying and watching apparently spectacular plays is much, much lower.

In addition, the ruleset by which interactions occur between players is much simpler in traditional sports. I think that the simplicity of a ruleset can be modelled by how easy it is to describe the objectives of the game. Most traditional sports are, at their core, relatively simple: do some objective more times than someone else. Soccer - put the ball into the opponent's goal more times than the other team without the ball touching your arms; basketball - put the ball into the opponent's hoop more times than the other team; NFL - run the ball into the opponent's area more times than they do to you. These simple end objectives may have some complicating factors - different point values depending on where you shoot from in basketball, or the offside rule in soccer - but even these are simple and intuitive. Esports, on the other hand, are another beast entirely - try explaining how to win a game of League of Legends to someone without any context. The lanes and towers and inhibitors before even getting to the base add a layer of complexity which is only made worse by the Baron and Dragon (especially now that there are different Dragon types). This is further compounded by the skill system, the levels of the characters, the abilities and the concept of a "build". I can intuitively tell that destroying a tower in one of the lanes gets you closer to winning League; the reason behind Baron or any of the Dragons is a different matter entirely.

Even the (arguably) simplest esport, CounterStrike, suffers this problem to some extent. It's much more difficult to explain the roles of the Terrorists and Counter-Terrorists and the "phases" of the game (pre-plant and post-plant) than it is to explain the basic premise of soccer, even though it is inherently impressive to watch someone land three one-deags, and easy to tell why winning a round with pistols against rifles is impressive. Whilst it is easy to see the skill in CounterStrike, understanding the goals of the game is much more difficult than in traditional sports.

This is a problem that is inherent in video games. It is difficult to make massively replayable games without some form of guaranteed variation, which in video games comes from additional complexity in the goals of the game. Would it be nearly as fun to play League of Legends if there were no items and a much smaller roster of characters? For me, at least, the answer is no (although fans of Heroes of the Storm might argue that the answer is yes). Would it be as fun to play Overwatch if there were no abilities and no characters? I certainly don't think so; the differences and interplay between the characters are part of the core reason why the game is fun. The complexities of character interplay and build selection are part of the reason why these games are so successful, even if it means that to a casual audience any high-level play is unintelligible.

And it's to that casual audience that esports has to appeal if there's any chance of esports "going mainstream". There will always be more people that don't play a game than those that do play it, and it's that audience that needs to be captured. How many people do you know that watch some sport and follow their favourite team and yet hardly play that sport? I'm a rabid Tottenham Hotspur fan, and I haven't played soccer for years, and there are many others like me. Of my friends who watch lots of sports, hardly any of them play the actual sport; instead, watching a sport is a pastime to enjoy with friends, whilst at a bar, or over dinner. It's a break from real life, and can be followed from afar instead of requiring continual re-learning as new content is introduced.

To this end, I say that esports will never "go mainstream". The things that make the games fun are contrary to the things that make the games easy to watch casually, and if the game is easy to watch casually but not fun to play, no one will play it anyway.

Permalink

It's hard to sync more than one machine

I've long seen it reported that sales of desktop machines are decreasing (although workstations are holding steady or growing), whilst sales of laptops, tablets (although almost entirely the iPad) and phones have shot through the roof. Most of the reporting I've seen claims that this is likely due to people not requiring more powerful machines for work or daily tasks - indeed most professions don't require a powerful computer for most work, and with the ubiquity of high-performance build servers, even some engineers and programmers don't require one.

I think that this is probably part of the reason why, but there's also a bit more to it.

It feels like almost everybody has some form of cloud-based storage these days. From former standards like Dropbox or Google Drive to the pre-installed iCloud or OneDrive, almost every person that I know uses some form of cloud storage - I even have three of them! My university provides OneDrive, which I use for storing university-related documents; I have a Google account, which I use for storing lots of personal documents; and I also use Google Photos for storing photos from my phone (and good luck trying to tell me it's the same as Google Drive).

All of this feeds into one of the major gripes I have with any primarily cloud-based solution, such as the storage solutions above - synchronization is really, really hard. I've long since run out of fingers to count the number of times that I've tried to work locally on a synced document, only to later be told that I'd caused a "conflict" because I'd also edited the document on another machine whilst I was somewhere else. Countless hours have been spent manually figuring out which of the two or three versions was actually the right one and combining them in various ways, only to realise days later that I'd chosen the wrong document to keep.

My quest (and probably everyone else's as well) to solve this issue has been long and storied. I originally started out with a Dropbox account and a dream, before moving to Google Drive after I found out that it had a few extra gigabytes of free storage space. After moving to Linux (and thus losing access to a native Google Drive client), I began using Syncthing to synchronize all my devices. Unfortunately, none of these ever quite felt right whilst I was using both a desktop computer and a laptop, as I'd typically find that the synchronization time was long enough that (if I'd done enough work recently or left a computer unsynchronized for a few weeks) I could make myself a cup of tea or coffee with time to spare.

This eventually pushed me to my current setup, which is to use git for files that I plan on keeping locally on any machines (with separate private repositories in GitHub for each area of my life) in addition to a Google Drive account that is for strictly online content. Even this, I've found hard to maintain.

git is fantastic at version control and at making sure that multiple people can work on different parts of the same project at the same time, but its strength is certainly not the synchronization of documents. Of my own volition, life using git has been made even more difficult by the fact that a large number of my private repositories automatically deploy somewhere, such as this blog, or my main site (even my Masters' research and resume are repositories that are automatically built and deployed). This means that I typically want changes to be somewhat atomic, such that anybody else who sees them would see a coherent document at any point on the commit tree. This habit has carried over even into repositories that are entirely private (such as my repository for university), resulting in there usually being uncommitted changes at the end of the day, which I refuse to push simply because I don't want to have to force-push later.

This means that it is hard for me to work across my multiple machines at the same time, and I can almost always guarantee that one of them is out of sync, especially as the number of repositories I have grows. The major result of this difficulty is that I've spent less and less time on all of my machines, and more and more of it on a single machine that I can do all my work on - my laptop. After all, even if my desktop computer runs much faster, has a high-refresh-rate monitor attached and a significantly better keyboard, it's far easier for me to just sit at a desk with my laptop and do work instead of spending the time making sure that my desktop is entirely up to date - even if I am using git and Google Drive. Further, the problem of synchronization is entirely masked if I only use one machine, as opposed to the Herculean effort I'd need to put in otherwise.

Now this leads us back to my statement at the start of this entry that there's more to the "death" of the desktop than powerful hardware no longer being required - why have I picked my laptop to do all my work on, especially considering my desktop is technically superior in every way? The major reason for me is mobility. Perth is largely out of the COVID-19 pandemic (we haven't had a community infection since around May), and university has started up again, which means that I spend a significant proportion of my time at the university campus taking labs and workshops. I simply can't take my entire desktop setup with me (as much as I would love to), so I instead spend the majority of my day working on my laptop, and once I get home at the end of the day I don't want to spend additional effort making sure that my desktop is up to date just so I can do some work on it - and I imagine there's a significant proportion of people in a similar position. People are often expected to continue doing some form of work out of the office (although I can't necessarily comment on the likelihood of this outside the programming world, where in my experience you're only really called in when something goes horribly wrong), and as such their single device needs to be portable - and thus a laptop, tablet or phone.

So, why is the desktop dying? Because of requirement of mobility and the difficulty of syncing more than one machine.

(This entry also basically lays out the reasoning for why I think that "cloud storage" such as Google Drive is actually closer to an automated backup system than a synchronization system)

Permalink

Getting in contact without social media

Now that I've restarted (and somewhat revamped - I redid some of the templating) this blog, I'm left pondering a few things about how I set things up and what they actually mean for my own usage.

If you run the very excellent uMatrix, or perhaps NoScript, to prevent JavaScript or media from running without your knowledge, you will have noticed that I have (at the time of writing) no JavaScript running at all. There's a fair bit of CSS (mostly fonts and symbols), but no JavaScript anywhere.

Another thing you may (if you're particularly nosy) have noticed is that this blog is hosted on GitHub Pages. This is great in that I can very easily make changes without worrying too much about deployment (I just write a markdown file and call it a day), and in that everything is in source control, but it is also not great in two very specific ways.


The first of these is that I have no clue how many people will/would/are visiting the blog (or indeed my main site), with very few options for finding out. One option would be the old trick used to determine whether someone has opened an email -- include an image or CSS file that is actually a webhook and just increments a counter. This would actually be achievable using IFTTT and its "Webhooks" connection, and is possibly a project for another day. Another option would be to introduce something like Google Analytics (shiver) to track people as they visit. I'm loath to subject people to more tracking by my own volition, so arguably that's out of the question.

The second problem I have is that there's no method for interactivity - by which I mean comments. How can I get feedback on topics on a site to which only I can add anything and which has no contact details? Blogs like Chris Siebenmann's or Chris Wellons' have easy methods for feedback, and regularly include responses to comments on previous topics as new entries. An easy way to contact me would also mean that if I got anything wrong or had trouble understanding some topic, it would be easy for any reader to immediately correct me. This problem is a little more complex than the previous one, in that I have methods of solving it but do not wish to use them, which is further compounded by the fact that there is no way to get in contact to give me additional suggestions that I might use instead.

One potential solution is to include an email address somewhere for people to send mail to. My issue with this solution is twofold. First, I don't know which provider I should put any new mail on, or indeed what the email address should be. It would be possible to set up a Gmail (or equivalent) account, but I already have what feels like far too many Google accounts. I could potentially pay for mail hosting from a service provider and use this domain to receive email, but that runs into the issue of needing to pay, which the frugal university student in me cannot allow. Second, I like staying on top of all my email. I've seen many people with thousands of unread emails, and even the thought of such a scenario occurring fills me with dread. As this blog (hopefully) becomes more popular, the amount of engagement would grow equally, and my paltry attempts to stay on top of all my emails and reply to all those that need replying to would become much more difficult.

Another potential solution to this problem would be to direct people to something like Twitter or Facebook, of which Twitter is arguably the better choice for quick feedback. I do indeed have a Twitter account that has sat dormant for as many days as I have had it, and I firmly intend to keep it that way. I've seen Twitter and other social media accounts take over people's lives to such a degree that I'm not enthused about dipping my toes in more than I need to (even though this self-imposed limitation probably means that this blog is highly unlikely to ever grow). Similarly, whilst I do have a Facebook account (which could probably be found with some rudimentary searching), I don't see messages from people that I'm not friends with, and I don't add people as friends on Facebook (or LinkedIn for that matter) that I haven't met in some capacity. All of this means that to me, at the very least, there's very little opportunity for interaction on any social media platform.

Perhaps there are other solutions that I haven't thought of to these problems, but unfortunately, at least for the time being there is no way of getting any of those solutions to me. Ah well, a problem for future me to deal with at some point.

Permalink

Starting this blog again

It's been some time since I've actually updated my blog. Despite trying several times to write more content, the last post I published was from mid-2018, and was written as a 30-minute commentary on some code that I wrote, so clearly it hasn't been working too well for me.

But things will (hopefully) be changing today. I'm going to stop trying to write fully-formed content and ideas on my "blog" (which, to be honest, no-one actually reads - or at least I don't have any stats on it), and instead start trying to write daily updates on things that I'm doing. My main site will be the place for fully formed ideas.

By necessity, this means that my blog will not be entirely tech-focussed as I had intended many years ago, and will instead simply be whatever I feel like talking about. This may mean that this blog turns into (basically) the equivalent of a Twitter account for me, with random daily thoughts and ideas popping in and out, or it may become a platform for me to practice a more casual style of writing after having written many formal papers throughout university.

Regardless, please do wish me luck as I try to embark on this journey. I look forward to seeing where (if anywhere) this goes!

Permalink

Polymorphic function mapping in C++

In 2016, Jonathan Blow published a video demo of his upcoming (but still unreleased) programming language Jai. In it, he demonstrates how the polymorphic solver in the Jai compiler is much better than the C++ equivalent, using as his proof an attempt to pass a polymorphic function to another polymorphic function without explicitly stating the type, while conceding that it might be possible in C++17.

While he is certainly correct that the Jai polymorphic solver is (from the video at least) a lot more powerful than C++'s, as well as significantly better looking (no one likes looking at templates, let's be honest), he's incorrect that you can't implicitly pass a polymorphic function to another polymorphic function, so long as you use a C++11-compatible compiler.

The secret lies with the decltype specifier introduced in C++11. decltype (and auto, also introduced in C++11) allows us to get the type of an expression at compile time without having to specify what the type actually is. This is great for polymorphic functions, as we suddenly no longer have to specify any types at all, save those we pass in.

So using the same example as Jonathan used, a transformative map, we can actually avoid using any specific type specifiers in our program. I'm going to use std::vector<> instead of an array, but the principle is the same.

Let's start by having a polymorphic function to print out our arrays, and our polymorphic incrementer. These will form the backbone which we use to solve our types.

template <typename T>
void print_array(std::vector<T> arr) {
    std::cout << "{ ";
    for (auto t : arr)
        std::cout << t << " ";
    std::cout << "}" << std::endl;
}

template <typename T>
auto incr(T x) -> decltype(x+1) {
    return x + 1;
}

You will notice that while we declare the return type of our incrementer to be auto, we also declare a trailing return type with decltype. This is very similar to what Jonathan attempted, but he tried replacing the auto with just a decltype. In C++11, if you want a deduced return type, you must declare the return type to be auto and have a trailing decltype statement (this was relaxed in C++14 to no longer require the trailing decltype, though having it still helps with template deduction on occasion).

template <typename T, typename F, typename R = typename std::result_of<F&&(T&&)>::type>
auto map(std::vector<T> arr, F&& f) -> decltype(std::vector<R>()) {
    std::vector<R> out(arr.size());
    for (long i = 0; i < arr.size(); ++i) {
            out[i] = f(arr[i]);
    }
    return out;
}

Now we can see our map function. Most likely the first thing that you'd notice is the std::result_of<> in the middle of our template declaration. std::result_of<> is another compile-time type resolver. It takes a function and its arguments and evaluates what its return type is. We use F&&(T&&) inside std::result_of<> because there are a few quirks that we would hit using F(T), such as discarding cv-qualifiers and automatically adjusting arrays and function types to pointers. If you're using C++17 and above, std::result_of<> is deprecated (and removed entirely in C++20) due to those quirks, so you can instead use std::invoke_result<F, T>::type, which looks one hell of a lot better.

Finally we come to our test function.

int main() {
    std::vector<long> array = {1, 2, 3, 4, 5};
    print_array(array);
    print_array(map(array, incr));
}

And... it fails to compile. The reason why is that the compiler doesn't know the way in which incr will be used inside of the map function, so it can't infer the function type that we are actually passing. Once again, we can solve this by using a decltype to avoid having to use any nasty types:

int main() {
    std::vector<long> array = {1, 2, 3, 4, 5};
    print_array(array);
    print_array(map(array, incr<decltype(array)::value_type>));
}

If that's too ugly for you, you could declare a macro that does it instead:

#define poly_map(v, f) map(v, f<decltype(v)::value_type>)
int main() {
    std::vector<long> array = {1, 2, 3, 4, 5};
    print_array(array);
    print_array(poly_map(array, incr));
}

But this compiles and works!

> clang++ -std=c++11 template_poly.cpp
> ./a.out
{ 1 2 3 4 5 }
{ 2 3 4 5 6 }

What's more, this actually works with anything that could be considered a function.

It works with normal functions:

long long_incr(long x) { return x+1; }
int main() {
    std::vector<long> array = {1, 2, 3, 4, 5};
    print_array(array);
    print_array(map(array, long_incr));
}
> clang++ -std=c++11 template_poly.cpp
> ./a.out
{ 1 2 3 4 5 }
{ 2 3 4 5 6 }

It works with lambdas:

int main() {
    std::vector<long> array = {1, 2, 3, 4, 5};
    print_array(array);
    print_array(map(array, [](long x) { return x+1; }));
}
> clang++ -std=c++11 template_poly.cpp
> ./a.out
{ 1 2 3 4 5 }
{ 2 3 4 5 6 }

In C++14 you can even replace the long in the lambda declaration with auto, or wrap the polymorphic function so you don't need the poly_map wrap!

int main() {
    std::vector<long> array = {1, 2, 3, 4, 5};
    print_array(array);
    print_array(map(array, [](auto x) { return x+1; }));
}
> clang++ -std=c++14 template_poly.cpp
> ./a.out
{ 1 2 3 4 5 }
{ 2 3 4 5 6 }

It even works with member functions bound with std::bind!

struct Incr {
    long incr(long x) { return x+1; }
};
int main() {
    std::vector<long> array = {1, 2, 3, 4, 5};
    Incr i;
    print_array(array);
    print_array(map(array, std::bind(&Incr::incr, &i, std::placeholders::_1)));
}
> clang++ -std=c++11 template_poly.cpp
> ./a.out
{ 1 2 3 4 5 }
{ 2 3 4 5 6 }

So that's C++11. What does the C++14 or C++17 code look like?

// C++14
template <typename T, typename F, typename R = typename std::result_of<F&&(T&&)>::type>
auto map(std::vector<T> arr, F&& f) {
    std::vector<R> out(arr.size());
    for (long i = 0; i < arr.size(); ++i) {
            out[i] = f(arr[i]);
    }
    return out;
}
// C++17
template <typename T, typename F, typename R = typename std::invoke_result<F, T>::type>
auto map(std::vector<T> arr, F&& f) {
    std::vector<R> out(arr.size());
    for (long i = 0; i < arr.size(); ++i) {
            out[i] = f(arr[i]);
    }
    return out;
}

template <typename T>
auto incr(T x) { return x + 1; }

// and also
auto incr = [](auto x) { return x + 1; };

We've lost our trailing return types and we can now use auto for lambda parameters!

There's also a fun quirk on g++ that allows for auto to be used as a function parameter, even though you're not supposed to until C++20.

auto incr(auto x) { return x + 1; }

You can find the full code of everything used here on my GitHub.

Never underestimate the power of a good template!

Permalink