Tuesday, November 13, 2018

Comprehensive Password Security

We are doing passwords all wrong.  Requiring excessively complex passwords that are impossible to remember is absurd, especially given how much easier it is to ensure password security through other means.  Requiring numbers or special characters is far less effective than requiring a few additional characters.  Users are far more likely to write down hard-to-remember passwords in places that are easy to find.  And when users forget passwords frequently and have to reset them, the reset process becomes routine, which makes it easier for someone else to fraudulently reset a password.  In general, our current guidelines for password security are a disaster.

Fair warning: This post is going to be math heavy.  I am not going to try to explain how all of the math works, but I will try to explain what the numbers mean, to make the comparisons more meaningful.

To start with, you should really read this comic: https://xkcd.com/936/  According to some sources, some of the math in there is wrong, but one thing is definitely right: The second password, composed of four common English words, is more secure than the first one, which uses a bunch of numbers and special characters that make it harder to remember.

The calculations used in the above comic measure password security in terms of entropy.  Entropy is the base-2 logarithm of the number of possible passwords of that length using the character classes involved.  Common character classes used in passwords include letters (these can be separated into uppercase and lowercase classes), numbers, and "special characters", meaning anything that can be typed on a keyboard that is not in the other two classes.  Space tends to be ignored, but it could be grouped into special characters.  To convert an entropy value into the actual number of possible combinations, you raise 2 to the power of the entropy.  So, if the entropy of a password is 28 bits, the number of possible combinations is 2^28, or about 268 million.  On average, a brute force attack should figure out a password after trying around half of the possible combinations, in this case 134 million, which a modern computer can do fairly quickly.  We won't bother looking at actual numbers of combinations from here on out, but keep in mind that adding one bit of entropy doubles the difficulty of cracking a password.
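
To make that concrete, here is the conversion in a few lines of TypeScript (just a worked version of the arithmetic above, nothing from any real cracking tool):

```typescript
// Convert entropy in bits to the size of the password space,
// and estimate average brute force work as half of that space.
const entropyBits = 28;
const combinations = Math.pow(2, entropyBits); // 2^28 = 268,435,456
const averageGuesses = combinations / 2;       // ~134 million

console.log(combinations.toLocaleString());   // "268,435,456"
console.log(averageGuesses.toLocaleString()); // "134,217,728"
```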

Most accounts require a password at least 8 characters long, with letters, numbers, and special characters.  Assuming the password is a randomly selected combination of all characters that appear on a typical keyboard, that is 95 possible characters, which gives about 6.6 bits of entropy per character, or about 52.6 bits total for 8 characters.  So let's use 52 bits as a minimum acceptable security level, since that is approximately what most web sites require.
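
The per-character figure is just a base-2 logarithm of the alphabet size; here is the same calculation as a small sketch:

```typescript
// Entropy of a random password: length * log2(alphabetSize).
function entropy(alphabetSize: number, length: number): number {
  return length * Math.log2(alphabetSize);
}

console.log(Math.log2(95).toFixed(2)); // "6.57" bits per keyboard character
console.log(entropy(95, 8).toFixed(1)); // "52.6" bits for an 8 character password
```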

The xkcd comic suggests that a password of four common words, in all lower case, has an entropy of 44 bits.  When known words are used in a password, we cannot just calculate entropy per character.  Each word must be treated as a whole unit, because a word has less entropy than the same number of random characters.  If a cracking algorithm tests common words before falling back to brute force (most do nowadays), it will break such a password much faster than a pure brute force search would.  English has around 3,000 common words, which works out to about 11.5 bits of entropy per word (since 2^11.5 is roughly 2,900).  Four words thus have about 46 bits of entropy (which makes xkcd's estimate right on, if the author rounded down to 11 bits per word).  The average English word is about 5 characters long, so a four word password is around 20 characters long.  That might seem like a lot, but memorizing four common words is a lot easier than memorizing eight random characters.  An entropy of 46 bits, however, is well under our 52 bit minimum (a 6 bit difference means the 46 bit password is 64 times easier to crack).  Four common English words is not enough for good security.

The Second Edition of the Oxford English Dictionary contains around 180,000 words that are still in use (as well as 47,000 obsolete words, which I will leave out for convenience, but which might actually be good to use, since a dictionary attack is unlikely to include them).  That is around 17.5 bits of entropy per word.  A password composed of four words randomly selected from those 180k current words would have a total entropy of about 70 bits.  That's 18 more bits (2^18 = 262k times more secure) than our baseline of 52.  It will probably be a little harder to memorize than the common words, but it should still be pretty easy, and it is still only around 20 characters.  Without capital letters, numbers, or special characters, this password is five orders of magnitude more secure than the baseline.
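
The same arithmetic works per word instead of per character.  Here it is for both word lists discussed above (the 3,000 and 180,000 word counts are the rough estimates used in this post):

```typescript
// Entropy of a passphrase: words * log2(wordListSize).
function passphraseEntropy(wordListSize: number, words: number): number {
  return words * Math.log2(wordListSize);
}

console.log(passphraseEntropy(3000, 4).toFixed(1));   // "46.2" - common words only
console.log(passphraseEntropy(180000, 4).toFixed(1)); // "69.8" - full dictionary
```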

We can do better though.  There are a few reasons words are so good for passwords, and they all come down to the fact that words are easier to remember than random character strings.  First, pronounceable words are easier to remember, and second, the meanings of words make them easier to remember.  These happen in different parts of the brain.  The pronunciation of a word is stored in areas associated with decoding heard language, as well as areas associated with speaking.  Meaning is stored in regions associated with abstract thought.  These form connections that make the password much easier to remember.  If we make up pronounceable words and then give them meanings, though, we eliminate the possibility of a dictionary attack, forcing the attacker back to brute force, where we add up the entropy of each character.  Now security depends on the length of the password again, and we avoid the issue where a long word has no more entropy than a short one.

Before we do this, we should look at the entropy for various sets of character classes.  The letter class (uppercase and lowercase) has 52 possibilities, for about 5.7 bits of entropy per character.  Letters and numbers gives 62 possibilities, or about 5.95 bits.  Letters, numbers, and special characters (everything on a modern keyboard) gives 95 possibilities, or about 6.6 bits.  Notice that adding numbers only increases the entropy by about 0.25 bits per character (a 19% increase in possibilities).  Adding special characters on top of that only increases entropy by about 0.6 bits per character (a 53% increase in possibilities).  For an 8 character password, adding numbers increases entropy by a total of about 2 bits, and adding special characters increases it by a total of about 5 bits.  Adding a single character (5.7 bits) increases overall security by more than either of these.  Adding both numbers and special characters to an 8 character password increases overall entropy by about 7 bits, which is better than adding a single character but far worse than adding two.  In short, adding length to a password makes a far bigger difference than requiring more character classes.  Requiring numbers and special characters makes only a very small difference.  The primary value of such requirements is making it harder for users to pick terribly insecure passwords, by discouraging the use of single common words.  The key to secure passwords is length, not character classes.  For an additional character class to add even 1 bit of entropy per character, it has to double the total number of possible characters.  Adding length, on the other hand, increases the number of combinations exponentially.
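
Here is the whole comparison in one place, so the numbers above are easy to check:

```typescript
// Compare adding character classes against adding plain length.
const bits = (alphabet: number, length: number) => length * Math.log2(alphabet);

console.log(bits(52, 8).toFixed(1));  // "45.6" - letters only
console.log(bits(62, 8).toFixed(1));  // "47.6" - add numbers: +2.0 bits
console.log(bits(95, 8).toFixed(1));  // "52.6" - add specials too: +7.0 bits total
console.log(bits(52, 9).toFixed(1));  // "51.3" - or just add one more letter: +5.7 bits
console.log(bits(52, 10).toFixed(1)); // "57.0" - two more letters beat everything above
```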

Now, let's say you make a password of four made up words, all lowercase, without spaces, where the length comes out to 20 characters.  Because the words are made up, a dictionary attack won't work, so the attacker has to use brute force.  Each character adds about 4.7 bits of entropy (all lowercase gives 26 possibilities per character), so this password has a total of about 94 bits of entropy.  That is extremely secure compared to the baseline of 52.  Because these words are pronounceable, they are easy to remember, and if you give them meanings, they become easier still.  Because there are so many characters, throwing in a number will only increase the entropy by around 4 bits (16 times harder to break, which is honestly not as impressive as it sounds), giving you about 98 bits.  Throwing in a special character or two adds around 14 bits (a little more impressive, but still small next to the 94 bits we started with).  Together, you get a total of about 112 bits, at the cost of a couple of characters that make the password harder to memorize.  (Honestly, I am still questioning the value of adding the numbers and special characters, because that 18 bit gain is less than a 20% increase in entropy, at the cost of being harder to memorize.  Adding just 4 more lowercase characters would have a bigger effect.)  The biggest gain comes from capitalizing the first character of each word, which doubles the number of possibilities per character, adding 1 bit per character.  That gives you a total entropy of about 114 bits, without adding any numbers or special characters.
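
And the core numbers for the made up word approach, computed the same way:

```typescript
// Brute force entropy of a 20 character made up word password.
const bits = (alphabet: number, length: number) => length * Math.log2(alphabet);

console.log(bits(26, 20).toFixed(0)); // "94"  - all lowercase
console.log(bits(52, 20).toFixed(0)); // "114" - first letter of each word capitalized
```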

Good password security is not about using numbers and special characters in your passwords.  It is purely about length.  If you avoid combinations of characters that are coherent in any common language (i.e., don't use words that exist in common languages), you can still create easily memorable passwords that are long enough to be extremely secure.  Just 20 lowercase characters has an entropy of 94 bits, which is 42 bits stronger, or 2^42 = ~4.4 trillion times more secure, than the minimum security required by most sites.  The only problem you may run into is that some sites, using poor security practices themselves, limit password length to 16 characters (75.2 bits for all lowercase made up words, and 91.2 bits if you capitalize the first letter of each word).  Even with this limitation, made up words are massively superior to short passwords that are too random to memorize.  In fact, with only lowercase letters, you need just 12 characters of made up words to match the strength of an 8 character password using lowercase, uppercase, numbers, and special characters, and with mixed case, you only need 10 characters to beat it.
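
The equivalence claims in that last sentence are easy to verify with the same formula:

```typescript
// How short can a made up word password be and still beat the usual baseline?
const bits = (alphabet: number, length: number) => length * Math.log2(alphabet);
const baseline = bits(95, 8); // ~52.6 bits: 8 characters, full keyboard

console.log(bits(26, 12) >= baseline); // true - 12 lowercase characters (~56.4 bits)
console.log(bits(52, 10) >= baseline); // true - 10 mixed case characters (~57.0 bits)
```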

The takeaway here is that the best passwords are made of multiple made up words that are pronounceable and have meanings.  Passwords like this, with at least 16 characters, will be significantly stronger than the weakest password most sites allow, and on top of that they will even be easy to memorize, and all of this is true even if you use only lowercase English letters.

Thursday, October 11, 2018

Minecraft Mod Economics

I have been playing Minecraft with mods off and on for a few years now.  Instead of installing huge mod packs, I generally prefer to play only a few at a time, and I typically play just one major mod with a handful of support mods.  When I play with multiple major mods, I generally find myself focusing on only one of them but rarely more.  I recently analyzed this behavior, and I found that the reason is economics.

Vanilla Minecraft has a pretty good economic balance.  I have a few complaints, but they are mostly the consequences of my play style, not any clear flaw in the game.  I generally prefer to focus on exploring, gathering, and building.  I don't spend a lot of time on combat, and I never go out of my way to find and fight bosses.  I prefer to make mob farms to get drops rather than fight them on even footing.  This means I don't get a lot of Ender pearls, I have never obtained a Nether Star, and I have not even gone to the End, let alone defeated the dragon.  This works well though, because Ender pearls are mostly used for finding the End, which I have little desire to do, Nether Stars are good for building beacons, which mostly boost combat stats which I don't care about, and all the Ender Dragon provides is a trophy and experience (which can be obtained from a good mob farm).  In short, the economics of vanilla Minecraft work fairly well, even for different play styles.  Perhaps the only real complaint I have had is that rails cost a lot of iron and gold for any significant distance, and getting the materials is incredibly time consuming.  As a result, I don't use rails much anymore, which is a shame, but the game is still fun even without them.  For the most part though, the economy of the game is well balanced, allowing resources to be obtained as they become needed without costing more time than they are worth.

Mods, on the other hand, often fail at economic balance.  The mods least susceptible to poor economics are those that add their own resources to the game and use built-in resources minimally.  These succeed because it is easy to balance a mod around new resources that are not used for anything outside of it.  Most mods don't do this though, and even many very popular mods fail at economics.  Recently, I have played using BuildCraft, MystCraft, Immersive Engineering, Thaumcraft, Twilight Forest, and Botania.  These are all fairly popular mods.  They are all very well made.  But most of them have at least a few economic balance issues, and some have pretty serious ones.  The biggest problem is using built-in resources in ways that cause competition with the vanilla content or with other mods.  For example, Botania and Thaumcraft both have a fair amount of content that uses Ender pearls, gold, and diamonds.  Most major mods have some economic flaw like this.

Botania really fails hard with terrasteel, where one bar of terrasteel costs an iron bar, an Ender pearl, and a diamond.  The iron is not a terrible problem, especially given that the equipment made from terrasteel replaces your iron or diamond equipment.  The Ender pearl is a very high cost though, given that Endermen are generally rare and notoriously hard to farm.  The diamonds are a less dramatic cost, because the terrasteel is taking the place of diamond or lower tier equipment, but now we are competing with vanilla on two equipment materials that are fairly limited.  This makes getting terrasteel very frustrating, because first you are spending a lot of time mining when you really want to be doing other things, and second, the mining itself is consuming some of the materials you are trying to collect.  It's true, the terrasteel tools have some pretty awesome time saving features, but by the time you finally have all of the materials to make them, you have already spent so much time gathering that the time saved does not feel like it will ever make up for it.  It also does not help that after spending so many hours mining, you just don't feel like mining ever again, making the nice tools a lot less motivating to use.

Ender pearls are also a problem when it comes to making the elf portal.  This portal basically opens up the second half of the content in Botania.  Unfortunately, you have to craft a set of equipment that is fairly heavy in Ender pearl cost to make the portal work.  I spent tens of hours during one game trying to find Endermen to kill for Ender pearls.  I had the portal completely constructed, and I only needed a couple more Ender pearls to power and open it.  After days of spending a few hours a night outside looking for Endermen to kill, I gave up.  I got one spawn, and it did not even drop a pearl.  It was beyond frustrating, and now when I even think about opening a portal, it feels like an insurmountable task.

Botania also has a few items that cost a lot of gold.  If you want to automate anything in Botania, you need hovering hourglasses as redstone timers.  Unfortunately, each one costs four gold bars, which is quite a lot.  Yes, you can make gold and iron farms, and once you get to the End, Ender pearls are really easy to come by, but all of these things take a lot of time and planning.  I'll come back to this later, but part of the issue is that I have done it all so many times already that it feels incredibly tedious when all I want to do is experience the content of the mod.

Where Botania does really well is in the resource it adds.  Botania adds mana to the game.  Early on, it is hard to generate mana in large quantities, but by the time you need it in large quantities, it is easy to generate.  Mana production relies mostly on renewable resources.  Some methods generate mana by consuming non-renewable resources, but most have renewable options.  For example, mana can be generated using food or charcoal as fuel.  Automating food production is easy even in vanilla Minecraft, such that there does not need to be any competition between vanilla and the mod.  Botania makes automating food production even easier, with the hovering hourglass taking over the role of a larger and more elaborate redstone clock (though at a high cost in gold).
Automating charcoal production is much more difficult before the elf portal is opened, but charcoal is generally not terribly difficult to manually produce in significant quantities.  In general, Botania's economic balance when it comes to its own resources is quite good, but it does a poor job of economic balance when it comes to its use of vanilla resources.

Thaumcraft has similar issues to Botania, though perhaps not as severe.  Part of the issue with Thaumcraft is timing related though.  Early Thaumcraft has really poor balance when it comes to iron and gold.  Thaumcraft especially competes very strongly with vanilla uses of iron.  The last time I played Thaumcraft (a month or two ago), I found myself spending 20 or more minutes underground mining for iron for every minute I spent above ground doing anything with Thaumcraft.  I burned through iron so fast, and I frequently found my Thaumcraft progress stalled by lack of iron.  For a significant period of the early game I did not even have iron armor, because all of my iron was going to Thaumcraft stuff.  Eventually I did start accumulating significant amounts of iron, but right around that time, I started hitting the same problem with gold.  It was less severe, but I was still spending more than half my time mining for gold for a while.  Thaumcraft does have an ore doubling mechanic, but early in the game it is too expensive (and environmentally harmful) to use a lot.  The most frustrating thing, however, was (again) Ender pearls.  There is one type of resource in the mod that has no source except Ender pearls, and Ender pearls only provide a very small amount of it.  Thaumcraft is not as bad as Botania here, using this resource only in limited amounts for some pretty late game stuff.  Unfortunately, that late game stuff is pretty trivial to unlock somewhere around mid-game.  Worse, the mod does nothing to clarify that this is late game stuff, so when you unlock it, it feels like you should be able to (and maybe need to) use it right away, when in reality, the only way to use it effectively is to farm Endermen in the End.

For a majority of its resources, Thaumcraft is actually very well balanced.  Nearly all of its resources are produced from vanilla content.  Half of those that are not displace vanilla resources, but they displace resources that are plentiful and farmable (trees, mostly).  Thaumcraft does add some resources, but a vast majority are derived from vanilla resources, and for most of those, it does an exceptional job of balancing.  In part, this is because nearly everything in vanilla can be turned into Thaumcraft resources.  Thaumcraft does not rely as heavily on scarcity for resource control as it does on side effects.  You can obtain enormous amounts of most of the resource types quickly and easily, but the consequence is that your land becomes diseased and extremely inhospitable.  The motivation to only produce what you really need is pretty strong.  The places where Thaumcraft fails are where it relies on the scarcity of certain vanilla materials to control the availability of some very valuable resources, because there it ends up competing with vanilla features.

Immersive Engineering also does a lot of things right but a few things wrong.  It adds some resources to the game, and where it does this, it does extremely well.  It adds uranium for power generation, and its rarity is perfectly balanced with its value.  It has been a while since I played this mod, so forgive me if I get the occasional detail wrong, but I do recall some struggles.  I seem to recall this mod having a serious problem with iron.  Most of the machinery uses a ton of iron.  The Crusher does double the ore obtained from ore blocks, but this only mitigates the issue, and the Crusher is not available till mid game anyhow (not to mention it has an absolutely _massive_ iron cost, which won't be doubled, because you don't have the Crusher yet).  Ore doubling is actually a common method for reducing resource competition between vanilla and mods, but there is only so much it can do, especially when it is a mid game tech.  Immersive Engineering also adds a power resource, Redstone Flux (RF), which is reasonably well balanced but perhaps not ideal.  There are three different kinds of wire, which can carry different amounts of current at different voltages.  They are made from different materials as well, and this adds some challenges of its own.  Some of the larger equipment in the mod has extremely high power consumption, and the result is that it costs a lot of resources (including a lot of vanilla ones) to power it.  This is not as bad as it sounds though, because most of this is really late game stuff that is not easily accessible earlier.  It largely just ends up being very time consuming.  It does compete somewhat heavily with some vanilla resources, but the pacing of the mod makes it so that you are pretty likely to already have most of what you need by the time you have progressed up to that point.  The Immersive Engineering Excavator is especially useful in keeping up on resources, but it is pretty late game, which means a lot of its benefit comes too late to help much.  It also costs enough iron to be a serious project to build even late game, when you think you already have a ton of iron.  The last time I played this mod, most of the struggle was early on, and I spent a lot less time trying to keep up with vanilla and Immersive Engineering than with other mods.  Part of this is that Immersive Engineering heavily supplements vanilla resources with its own, limiting competition.  It also adds more automation earlier than other mods, reducing the burden in the early and mid-game.  I found this mod less frustrating than some others, though it is still not perfect.

BuildCraft has some resource competition issues in the early game as well.  It does not compete as heavily as some other mods, because it largely limits competition to resources that are less rare, but there are some points where excessive time has to be spent gathering resources.  BuildCraft does manage to keep these pretty minimal though.  It does this by tiering a lot of its content based on vanilla resources.  For example, pipes can be made from a lot of materials, where higher tier materials give them better properties.  This makes a lot of BuildCraft content available quite early on.  There is a period between high iron competition and highly automated mining where things tend to drag a bit, but it is actually pretty fast to get through compared to other mods.  The biggest difficulty with BuildCraft is probably the time spent on the liquid fuels required to automate at high speeds.  These are made from BuildCraft oil, which either must be carried (in buckets or a similar container) or piped to where it is to be refined.  The refinery equipment is expensive enough that it is often more economical to just move it from one oil spring to another, but this is pretty tedious.  Piping is another solution, but for any real distance, it gets expensive very fast (and then you run into chunk loading issues...).  Unfortunately, this is not a completely automatable task either, because oil springs eventually run out.  Once you have a good system, it is worth the effort, but early on it is very tedious.  Overall, BuildCraft does a fairly decent job with game economy.  It has a few points where things are difficult for a bit, but it pays that back fairly quickly once you get automated mining going.  Once you reach this point, BuildCraft can even produce enough mineable resources to support a couple of additional mods.  BuildCraft's massively advanced automation control stuff also really helps reduce drudgery in the late game, but progressing to it involves a lot of drudgery.  I have only gotten a little way into the circuitry and such, mostly because the more valuable chipsets are so expensive (using iron, gold, diamonds, and Ender pearls).  But these are not critical to solid automation in the mod.  The more expensive stuff is more about convenience than anything, so doing without it is not a huge burden.  Overall, BuildCraft's economy is pretty solid.  There are a few hiccups, and there are some failures in the very late game stuff, but none of it is terribly critical.  My biggest frustrations with BuildCraft come from inventory management when there are such huge quantities of resources coming in, but this is better than not enough, and there are good support mods specifically for dealing with this problem.

Mystcraft only produces one serious economic challenge: leather.  And it brings up another issue some of the above mods occasionally have.  The problem is that sometimes there will not be any cows near spawn.  When I say "near", I mean that on two occasions in the recent past (out of the 5 or 6 worlds I have created recently, so about 1/3 of the time), I have landed half a day's boat ride (MC time) or more from the nearest cows.  In at least one of those worlds, I never did find cows.  Luckily, that time I was using a mod that made it pretty easy to make leather from rotten flesh, without first needing leather to start the mod off.  (The other time, I was playing Thaumcraft, and I am not even sure how far I ended up going from spawn to find a cow just so I could craft the mod's documentation/progress book.  I think I ended up holing up for at least one night during the journey.)  Mystcraft requires a lot of leather if you are doing it right, and when you cannot find any cows, the mod is straight up impossible to do anything with.  Now, I cannot blame the mod entirely for this; however, mod makers should be aware of cases where a critical material for the mod may be rare enough to seriously hold up progression.  For leather, a simple solution would be to add a recipe for crafting rotten flesh into leather.  It does not have to be one for one, and it can even be more expensive than just the rotten flesh, though if leather is a major resource for the mod (as in Mystcraft), the cost should be quite low.  Two good options I have seen are crafting four rotten flesh in a square to get one leather, and cooking a piece of rotten flesh in a furnace to turn it into leather.  One costs four for one, and the other costs fuel on top of the main ingredient.  Neither is terribly overpowered.  The point is, mods should account for how the scarcity of the resources they use can vary.  In the current game I am playing with Botania, I have not found a biome with emeralds yet.  If I needed emeralds for some critical, early part of the mod, this could be a serious problem.  (As it is, I have found a village, so for the mid to late game, I do have access to emeralds.)  Overall, Mystcraft is a very well balanced mod when it comes to economy, but it does occasionally have issues, because it fails to account for a scenario that is not terribly uncommon.

The last mod I want to discuss is Twilight Forest.  I have far less experience with this mod than the rest.  (In fact, I am currently playing it for the first time.)  From what I have seen so far, it is very well economically balanced, mostly because it primarily focuses on non-resource content.  The only resource (vanilla or otherwise) I have spent on it is a single diamond, to open the portal to the forest dimension.  While diamonds are legitimately very rare, a one time cost of a single diamond for the mod only barely competes with vanilla.  One diamond ever is not a significant burden.  In the worst case, it could be a cost of 30 minutes to an hour of mining, in exchange for a ton of additional content.  By focusing on content that has no resource economy, Twilight Forest has managed to almost entirely avoid the economic issues common in other mods.

It should be pretty easy to see, at this point, how some mods harm the economy of the game by adding resource competition without sufficiently improving access to those resources.  When mods are combined, resource issues can be further amplified.  Thaumcraft and Botania both expect a certain level of access to Ender pearls, and neither provides a decent means of increasing production.  This means the mods are now competing with each other in addition to the game.  Similarly, Thaumcraft and Immersive Engineering both do ore doubling, but when you play them together, you need roughly three times the ore (for vanilla plus two mods), while neither can do better than doubling it.  Mod authors should also consider how competition with other mods is going to affect their mod economies.  When you have multiple mods and the vanilla game all competing for the same resource, players are not going to enjoy the game as much.  Generally this will result in players picking one or two mods to focus on, and in big mod packs, odds are your mod won't be one of them.  This can be mitigated by adding support for other mods to your own, or by using standard resources that can be shared between mods without too much competition.  For example, a significant number of mods use the same Redstone Flux energy that Immersive Engineering uses, allowing players to power multiple mods from the same generator instead of having to construct a different generator for every mod.  It is important for mod makers to at least consider how their mod will play with others, and that includes their impact on the resource economy.

The last thing to keep in mind about resource economies in mods is that most people playing with mods have a fair amount of experience under their belts.  To be clear, they have already experimented with automation, they have already played through the early game many times, and they are not playing your mod because they want to spend hours upon hours more mindlessly mining.  They probably won't mind spending the same amount of time in the early game that they would in vanilla, but they don't want to spend a ton of extra time there because your mod is competing for resources.  If your mod is going to compete for resources in the early game, make sure there is a fairly immediate benefit that makes up for it.  Things like more effective mining tools are more appropriate early on than later, once the player has already spent far too much time mining.  Providing the player with quality of life improvements early on is far more valuable than providing them later.  Instead of making the enhanced tools purely late game items (like both Botania and Thaumcraft do), offer mildly enhanced tools early and push some of the more impressive content back a little later.  A pickaxe that can mine a 3x3 area, available between the iron and diamond tiers at a cost slightly higher than iron, is not game breaking, and players will appreciate it far more than many of the things mods offer at that point that are often more valuable later on.  Experienced players neither need nor want the long slow early game, and they certainly don't want it dragged out longer by your mods.  So instead of purely adding end-game content, also add some early game content that will make it a little easier and faster for seasoned players to get to your awesome late-game content.

Balancing game economies is tough by itself, but it is really easy to break a well designed game economy by throwing in new content without fully considering how it will affect the existing economy.  And this includes time as well as resources.  If you are making or considering making a mod, please keep in mind that every unit of a resource your mod uses is a unit that cannot be used for anything else.  If your mod is going to compete with vanilla or another mod for resources, make sure it provides something to mitigate that, otherwise your players will spend a lot more time doing things that are boring and grindy instead of having fun with your mod.  Likewise, keep in mind that mod players are generally experienced.  They don't want to spend a ton of time in the early game because of your mod.  Throw them a bone.  It's not terribly difficult to speed up the early game a little without going too far.  Again, the sooner the player is able to really use your mod content, the more they will enjoy your mod.  The last thing you want is for players to spend so much time and energy on other mods that they do not have time to enjoy yours.  Instead, make yours the mod that gives them the time to play all of them.  Maintaining balance is tough when making games and when modding them, but if you pay attention to where resources come from and how much time is required to obtain them, it is not terribly difficult to avoid seriously upsetting the balance.  We have a lot of very good mods out there, but with a little tweaking of their economic balance, they could be absolutely epic.

Friday, August 10, 2018

Web Next

Whatever the next version of "the web" is (it's hard to say what we are on right now, as not everyone agrees, and the version numbers are more hype than anything anyhow), there are some specific changes that really should be made.  Up to now, each new version of HTML, CSS, JavaScript, and any other web technology has primarily added new features while keeping most or all of the old ones.  We have not dropped much, but there are some things that really should be dropped.  In fact, some of these things have already had support widely dropped or at least restricted by modern browsers.  Most of these things could easily be handled differently, either on the development end or by adding new features to replace their core functionality, without carrying over their problems.  Some of these things may have had a place originally, but none of them do anymore.

There are two specific features I have recently been thinking about that just plain need to be dropped from the standards, and browsers have already taken initiative on these.  These two are popups and alerts.  We should probably also tack on other modal dialogs here too, including browser level authentication for certain types of sites.  Another feature I want to discuss is the HTML blink element.

Popups are already blocked by most browsers, unless the user explicitly allows them.  Essentially, in modern browsers, popups are opt-in.  Probably 99% of popups on the internet today have malicious intent.  The two biggest uses of popups are covert ads (the popup is instantly minimized to keep the user from noticing, in the hope that the user will later see the window, think it was opened intentionally, and get sucked into the ad before realizing what it is) and virus distribution (using security flaws to download a virus still takes time, and what better way to keep the user from closing the window too soon than putting it in a popup and hiding it immediately?).  Because of the inherently dishonest nature of popup ads that instantly move themselves to the background, a vast majority of these are for less than reputable products and services (more often than not, pornography).  The occasional website has innocuous popups that are part of the site navigation, but these are relatively rare.  The fact is, with modern technology, popups are no longer necessary.  In fact, it is questionable whether they were ever necessary.  Long before popups were even possible, it was possible to provide links that users could click to open a new window.  Only in rare cases is this not sufficient, and in those cases, modern technology has already provided an excellent alternative.  The fact that modern browsers can get away with summarily blocking all popups, until the user acts to specify otherwise, should tell us that popups are no longer important or even useful to the web.  Nowadays, they are almost exclusively used for dishonest advertising practices and for delivering harmful programs.  We can do without them, and the next versions of the various web standards should eliminate them entirely.

Alerts in early JavaScript were primarily used for debugging, but there has never been a time when they were not primarily an annoyance to end users.  For some reason, every JS course started out teaching how to create an alert, and as a result, they were quite common for a long time.  JavaScript alerts have never been a good idea.  From the very beginning, trolls have used them to make web pages designed purely to infuriate users, by constantly creating alerts, which prevent the user from doing anything but clicking "OK" or "Cancel" in vain attempts to end their misery.  (Because they are modal dialogs, even closing the browser is generally impossible while an alert is present.)  On rare occasions, alerts were used to convey useful information, but even then, the result was often frequent interruptions monopolizing the browser.  This is another "feature" modern browsers have taken the initiative on mitigating.  Most modern browsers offer an opt-out option for alerts after the same page has produced more than two or three.  Most web pages no longer use alerts, and it is generally considered a best practice to avoid them.  As with popups, most modern sites using alerts are using them for nefarious purposes, including tricking people into consenting to have malicious software installed, denial of service attacks through cross site scripting or injection, or just causing general grief for users.  Sites that might have used alerts in the past now largely use iframe or div based dialogs (or notification bars) to achieve the same effect without monopolizing the browser or interfering with the user's work flow.  Alerts no longer have an appropriate place in the web, and the next version of web standards should do away with them as well.

As an extension of JavaScript alerts, there may also be some value in eliminating all client side modal dialogs.  Modal dialogs prevent the user from interacting with the browser until the dialog is closed, and any modal dialog that can be created without explicit user interaction can be used for malicious purposes.  One of the earliest modal dialogs in browsers is the authentication dialog.  This was intended for sites and protocols that have authentication built directly in, in ways that make it hard to manage authentication through other means.  The earliest web did not have interactive sites or even CGI, so page based authentication was impossible.  Things have changed very dramatically since then, but these authentication dialogs still exist.  There are plenty of ways they can be abused, but most of the time they are just annoyances used by web developers who are too lazy to write a login page.  Because the dialog monopolizes the browser, it is very difficult to look up authentication information stored somewhere else on the internet, and having to cancel the dialog and reload is inconvenient.  In general, any time the user is interrupted by something that is artificially given priority over everything else, the result is a bad user experience.  The need for these dialogs on the modern internet is incredibly rare.  There are still a few protocols that do not allow for any other means of authentication, but all of the commonly used ones do.  Sadly, HTTP actually has a header specifically intended to bring up an authentication dialog (the WWW-Authenticate challenge header), and in researching for this article, I discovered that this has actually been a significant problem for some web developers.  No HTTP web site actually needs this means of authentication though, because it is trivial to manage authentication through a web page based login.  The only reason to use the modal dialog instead is pure, unadulterated laziness.  At least for protocols that can use other authentication methods, the authentication modal dialog should be purged from web standards.  There should be no possible way a web developer or script injection hacker can force a dialog or window to open on a client computer without the express permission of the user.

All of these can easily be removed, without any serious negative impact on the web as a whole.  Yes, some developers will have to actually work for their pay, to make the occasional legitimate site currently using these work without them, but that is nothing compared to the elimination of some pretty serious security and quality of life issues built right into current web standards.  If convenience is really such a big issue, these things can easily be replaced with far better alternatives.  Popups, alerts, and modal authentication dialogs could all be replaced with simple built-in JavaScript functions that create centered div-like elements on the top layer of the current page, with custom content.

For popups, this could work like an iframe.  iframes are essentially web pages embedded in other web pages, with their own namespace.  Most legitimate popups could be implemented similarly to iframes, maybe with a small title bar that allows dragging and possibly even minimizing.  This functionality could all be built into the element, as part of the browser implementation.  The minimize function could default to turning the popup into a small icon in the lower left corner of the web page.  This keeps popups entirely contained within the page that created them, without interfering with the user's ability to continue using that page.  A page that creates excessive popups and automatically minimizes them will now end up covering itself with icons, and when the tab is closed, all of the popups are discarded.  This would make stealth popups a detrimental advertising strategy, as they would obscure the creating page, and they would not persist beyond the life of the creating page (to sneak up on you later).

Alerts and modal dialogs of any type (authentication or otherwise) could be implemented similarly, where the developer can specify text and inputs, just like with JS alerts, and then when they are activated, the browser generates a div-like element over the existing page.  These could even act as standard modal dialogs within the context of the page, disabling any interaction outside of the dialog until it is closed, but they would not interfere with the use of the browser itself, like existing modal dialogs do.  Pages that use modal dialogs excessively would thus only be harming themselves, and the user could easily close the tab without having to deal with a cascade of dialogs first.

Note that all of this is already completely possible.  iframes can already be used to mimic popup behavior within the context of the containing page, and some slick JavaScript can even simulate the minimization behavior described above.  Creating in-page modal dialogs is equally easy, and blocking the page behind the dialog can be accomplished with a transparent element between the page and the dialog element.  If these features are so valuable that they are worth keeping, perhaps the above suggestions should replace them in web standards, to eliminate their potential for abuse while maintaining any useful applications they may have.
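
To illustrate that last point, here is a minimal sketch of such an in-page dialog in TypeScript.  The function name and styling are my own invention for illustration, not part of any standard or proposal:

```typescript
// A page-scoped "modal": a transparent blocker covers the page, and a
// centered div plays the role of the dialog.  Closing the tab discards both,
// so a page that abuses this only ever harms itself.
function showPageDialog(message: string): void {
  const blocker = document.createElement("div");
  blocker.style.cssText =
    "position:fixed;inset:0;background:rgba(0,0,0,0.3);z-index:9998;";

  const dialog = document.createElement("div");
  dialog.style.cssText =
    "position:fixed;top:50%;left:50%;transform:translate(-50%,-50%);" +
    "background:#fff;border:1px solid #888;padding:1em;z-index:9999;";
  dialog.textContent = message;

  const ok = document.createElement("button");
  ok.textContent = "OK";
  // Dismissing the dialog frees the page; the browser itself is never blocked.
  ok.addEventListener("click", () => {
    blocker.remove();
    dialog.remove();
  });
  dialog.appendChild(ok);

  document.body.append(blocker, dialog);
}
```

A built-in browser version could be far more polished, but even today a page can provide this behavior with a few lines like these, which is exactly why the browser-monopolizing versions are no longer needed.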

Now, let us discuss the HTML <blink> element.  Long ago, HTML had a blink tag that could contain text, and it would apply a blinking effect to that text.  From that sprang some of the most hideous, annoying, and possibly seizure inducing web sites in the history of the world.  It was not long before browsers started ignoring this tag or at least disabling it by default.  The tag was based on blinking text on console systems, and as far as I can tell, the only legitimate use for blinking text on consoles was the blinking cursor, both to make it easier for the user to spot and to identify it as not being part of the text already typed.  Outside of that one use, blinking text has never been a good idea, even on the console, but somehow it made its way into early HTML.  As with popups and alerts, browsers identified the issue and took the initiative.  The difference is that blink was later dropped from web standards entirely.  Many modern browsers have never supported the blink tag, and those that did dropped support many years ago.  In short, there is a precedent for this.  Browser makers are involved in the creation of web standards.  They are not the only ones though, and sometimes browsers can be outvoted by other entities.  When it comes down to it though, browsers implement the web standards, and if they choose as a group not to implement some standard, that standard becomes meaningless.  Of course, this rarely happens.  Typically when a browser chooses not to implement a web standard, it is either to make a statement or out of greed (MS has done both, and a majority of the time, it has suffered as a result).  (Some smaller browsers also choose not to implement standards that are especially difficult to implement, for more practical reasons.)  But when practically everyone is eliminating or quarantining some particular element of the web standards, it is probably time to seriously consider whether that standard deserves to exist at all.  Blink was killed off when it became clear that it had no legitimate use and plenty of obnoxious ones.  It is still possible to make text blink, and some web developers who lack an appropriate level of design education even do it, but it is more difficult than a simple tag, because it requires some mildly complicated CSS or JavaScript.  Blinking text is all but non-existent on the web now though, which is as it should be.

The next iteration of web standards should perhaps focus more on eliminating problems than on adding new features.  There is already a precedent for keeping web sites as self-contained as possible.  Standards regarding cross site scripting have already been created to deal with this on one side of the fence.  It's time to deal with it on the other side.  If web sites should be self-contained, they should not be able to open dialogs, windows, or tabs outside of their own space without explicit action and intent from the user.  If the user really needs something open in another window, the user can click a link that will accomplish that.  Allowing the web page to do it without explicit interaction from the user is just plain irresponsible, and the fact that popups are commonly used in virus distribution suggests that this is a serious security issue as well.  Browsers should not have to block popups.  Popups should not even be possible.  The same applies to alerts, and to modal dialogs where there is a better solution that can be limited to the scope of the web page.  The next web needs to put users back in control and stop allowing web sites to act outside of their own scope without explicit user interaction.  Technology has come a long way.  We don't need the most annoying features of the web anymore.  It's time to get rid of them.

Monday, July 23, 2018

Minecraft Multiplayer

I used to play Minecraft with some friends.  Eventually, around 1.9, they quit.  Their reason was that Minecraft provides a poor multiplayer experience.  There is a fundamental difference in how and why I play games compared to how and why they play games.  I can play single player for long periods of time and still be enjoying myself.  I play because I enjoy games.  I stick mostly to single player even in games designed specifically around multiplayer, like Heroes of the Storm and Hearthstone.  I actually gave up League of Legends specifically because when I first tried it, it offered a poor single player experience.  My friends, however, play games largely for the interpersonal interaction.  Minecraft seemed like a good choice, because it offered such a good single player experience.  For a while it even seemed like it was good for multiplayer, until we had been playing it together for a while, and my friends started to notice something.  We also played Terraria together, and the experience was totally different.  In Terraria, we would frequently get together for things.  We would build a single central town.  We would frequently chat.  In Minecraft, even when three or more of us were on, we would rarely work together.  We would choose separate locations to build our bases.  We would rarely chat.  For some reason, even though the games have many similarities, Minecraft did not offer the same interpersonal interaction as Terraria did, and eventually my friends decided that this was such a problem that they did not want to spend their time playing Minecraft at all.  Since that time, I have discussed the possible reasons for this with one of those friends.  This article is about what we have found and how Minecraft's multiplayer experience could be improved.

The first issue was identified by my friend.  He generally prefers to play games with their default music, because it is part of the experience designed by the creators of the game.  I generally play for a while with the default music and then turn it off and play my own music instead.  Minecraft's music is deliberately designed to be isolating.  So he was actually feeling more isolated than I was.  This may have even contributed to the decision to choose a more isolated building location.  That said though, Minecraft offers no reasons not to claim separate land and build separate bases.  When he turned the game music off and played some of his own music, the feeling of isolation diminished significantly.  Unfortunately, it still did not remove the actual isolation.

We decided to compare Minecraft to Terraria to find other ways where Minecraft discourages interpersonal interaction while Terraria encourages it, and this produced a lot of discoveries.  There are differences in world composition, transportation, character progression, game progression, and content that all contribute to the level of interpersonal interactions in these games.  Perhaps the best part is that many of these things could be improved in Minecraft in ways that improve multiplayer without destroying the deliberate feeling of isolation in the single player experience.


Shared Base

The biggest thing we found was NPCs.  Terraria has unique NPCs that need houses.  The uniqueness of these NPCs is a critical factor here.  If they were not unique, everyone could build separate bases and get their own copy of each NPC.  Because they are unique though, if everyone built separate bases, the NPCs would be split up between them, and this would make using them horribly inconvenient for everyone.  The lack of this single mechanic makes an enormous difference in the isolation of Minecraft.

Minecraft does not necessarily need unique NPCs to fix this issue, and besides, unique NPCs would interfere with the intentional feeling of isolation in the single player game.  What it could do, however, is add a server option that generates a village at spawn and places sufficient light sources, or even a wall, around it, to keep it from getting wiped out by mobs before the players are strong enough to protect it.  This would give the players a shared base to start from.

A village at world spawn would provide several advantages that encourage players to build around the village instead of establishing separate bases.  One is that it would give players a place to stay the night.  If several of the houses had beds, it would be even better.  Yes, it would reduce the difficulty of the early game, but given that part of that difficulty is the tedium of waiting out the first few nights and the fact that the early game is generally fairly short, this is not a huge sacrifice.  This would, of course, provide players with a motivation to frequent the spawn area, since the villagers generally offer some valuable trades.  This makes the spawn a good place to establish a base, for everyone.  If the village protection is good but not perfect, that would also offer motivation to stick around to defend the village.  These advantages would encourage players to establish a shared base near the spawn village, instead of claiming separate plots of lands and building their bases away from those of other players.

These things together would probably make a bigger difference than anything else.  If the players were motivated to establish a common base, every time they returned to base there would be opportunities for interaction.  For some, however, this may be too much.  If the early game is just too valuable for you to skip, perhaps there could be another option that starts each player with a villager spawn egg already in inventory.  This does not interfere with the early game, but it does motivate players to work together to build a village to spawn their villagers in.  This even more directly encourages players to build a common base, which still ultimately has all of the benefits of starting with a village at spawn.


Transportation

The second issue was transportation.  When a player chats, "I found this awesome thing!" the best you can do is take his or her word for it.  In Terraria, if it is close to base, you can use a magic mirror or a recall potion to get there quickly.  If it is not, perhaps you have a wormhole potion.  The size of even large Terraria worlds is small enough that it is often possible to just walk there within a minute or two.  In Minecraft, if you are lucky, the player will post a screenshot some time later.  Most of the time though, we don't even bother mentioning the cool things we see or find in chat, because we already know there is no way to convey the full experience.

Minecraft's lack of solid transportation options and its enormous world size mean that when we are in areas far away from each other, we are isolated by the time it takes to get back together.  Even a shared base won't mitigate this, as the only way to get back to spawn fast is death, and when you die, you drop all of the cool loot and equipment you have.  Around a year ago, I attempted to make a "magic mirror" mod that would teleport the player back to spawn when used.  Due to a number of technical difficulties, including the programming language and the poor documentation of the Forge and Minecraft APIs, I was unable to get it to work correctly, and eventually I could not justify all of the time I was spending on it.  Someone else has actually made this mod successfully, and it appears to work well, but we should not have to mod Minecraft to get an acceptable multiplayer experience.  For a single player, the enormous world size is not a huge issue.  In multiplayer though, Minecraft really needs the magic mirror and other teleportation mechanics more than Terraria does.

There are two critical transportation things Minecraft needs for a good multiplayer experience.  The first is a simple way to teleport to spawn, without losing all of your items and experience.  The second is a way to teleport to other players.  These don't need to be cheap, as long as they are reasonably accessible.  The important part is that it is not terribly difficult for players to gain the ability to group up easily.

Ideally, the teleport to spawn item should not require any resources you cannot get in the overworld.  The mod I found allows a magic mirror to be constructed from gold, diamond, and lapis, if I recall correctly.  This is expensive, but it is accessible by mid-game.  Being able to construct a magic mirror device before going into the nether would make it easier to avoid getting stuck there.  Personally, I think this is desirable, but if it is not, the magic mirror might not work outside of the overworld.  Note, however, that this would seriously hinder its usefulness in multiplayer, because the nether and end would still be very isolating and prevent quality multiplayer interaction.  Because the main boss is in The End, the ability for players to group up there is critical for a good multiplayer experience.  Keeping players trapped there while others are working to get the necessary gear to be successful is probably a very bad idea in multiplayer as well.

Teleportation to other players is not as easy as in Terraria, because you cannot just locate them on a world map.  Instead, it might be better for players to be able to make and distribute special items that allow other players to teleport to them.  When I was attempting the magic mirror mod, I planned to do this with wall mirrors.  A friend might construct a wall mirror and then give it to me.  The mirror would retain the identity of its creator, so when I used the mirror, I would teleport to him.  I could then get a wall mirror from each of my friends, put them all in my house near my spawn, and if I wanted to go to one of my friends, I could magic mirror home and then use the appropriate wall mirror.  Alternatively, potions could be used this way, but it would be impossible to stack potions from one player with those from another, and this would become a serious inventory burden, especially with groups of 5 or 6 players or more.  To be feasible, an inventory-item-based solution would probably need a new UI to select who to teleport to.
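
To make the mechanic concrete, here is a rough sketch of the wall mirror idea (in Python for brevity, though an actual mod would be written in Java; every name here is hypothetical):

    class WallMirror:
        def __init__(self, creator):
            self.creator = creator  # permanently bound at crafting time

        def use(self, player, world):
            # Using the mirror teleports the user to wherever the
            # mirror's creator currently is.
            player.position = world.position_of(self.creator)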


There are other general transportation improvements that could also be made.  For example, it is frustrating when you end up spending as much time transporting goods to the surface as you do mining them.  The minecart system is still not good enough for general purpose goods transport.  It can work, but it is often more work to set up the system than it ultimately saves.  Some old rogue-like games had a "Rune of Return" item, scroll, or spell.  When used in a dungeon, it would teleport you home.  It would keep track of where you were though, and when used at home, it would teleport you back to where you were in the dungeon.  This could be a valuable addition to the game.  Again, it does not need to be cheap, but it should probably be obtainable in the overworld, if it is the only "magic mirror" style item available.  It could be an item rarely obtained when enchanting books, or it might be obtainable only from dungeon chests.  This would fill the role of a magic mirror item, as well as making cave exploration and mining more accessible.

Now, while it would not bother me, some players might complain about items like this being free to use.  If resource management with these items is desirable, there is a simple solution: Make each use cost experience.  It should not cost a lot, but maybe 10 experience would be a decent cost for a magic mirror, 15 to 20 for a rune of return, and 30 to 40 for a wall mirror.  In addition to reducing some of the tedium of travel and making the game friendlier for multiplayer, adding another use for experience would give players more motivation to go fight monsters.  As it is, it is trivial to collect 30 to 40 levels if you are good at avoiding death, and there is not really much value in spending all of it on enchanting.  Most importantly though, transportation mechanics like this would dramatically improve the multiplayer experience, with or without a usage cost mechanic.


Group Activities

The third issue is lack of motivation to work together and communicate.  There is not really anything in Minecraft that requires or even encourages teamwork.  Two people mining is no better than one, because with two people, you need twice as many resources.  In fact, mining separately improves the odds of finding rare ores.  There are far more reasons not to work together than to work together.

Terraria's strongest motivation to work together is bosses.  Minecraft's lack of many bosses is a major weakness when it comes to multiplayer.  Terraria also has events where multiplayer shines, but again, Minecraft does not really have these.  "Zombie sieges" were added at one point, then for several releases they never triggered due to a bug, and then they were added back in 1.8, but they are not that impressive, as far as events go.  The stakes are pretty high, since they can wipe out a village, but there is no indication that they are happening, and often the villagers are wiped out entirely, sometimes before the player even knows there is a village nearby.  In addition, there is no reward (or condition) for "beating" the event, or even for killing the zombies, so it is generally easier to just build a wall or put blocks in front of the doors and wait out the night.  This particular event could be improved by making event zombies drop more loot and be more likely to drop rare loot.  If event zombies had twice as many drop chances and double the odds of rare loot drops, that would be some pretty good motivation to engage them, especially in the early game, when their iron gear, iron ingot, and plantable crop drops are incredibly valuable.  Increased rotten flesh drops would also be nice in villages with priests that will buy it.  This is only good for the early game, though.  Later on (perhaps when the local difficulty is higher), skeletons might be added to the events.  The increased bone and arrow drops would be a pretty good motivator, while the ranged attacks would increase the challenge.  Adding small numbers of spiders and/or creepers even later could also be valuable, as their drops are of significant value in later game potion making.

The place where Terraria really excels in motivating teamwork, however, is bosses.  Minecraft (we are talking about the Java version here) only has two bosses.  The Wither is sort of the nether boss, and the Ender Dragon is the End boss.  There is no overworld boss though, and there is plenty of room for bosses.  The overworld's strongholds could have bosses near or in the End portal chamber.  Nether fortresses could have bosses.  (Though one might argue that this is exactly what the Wither is.  That said, a mini-boss with a 100% chance of dropping a wither skeleton skull would be a valuable addition.)  There is already a model and some data for a giant zombie.  This could be a good overworld early-game boss.  Perhaps it would spawn after a certain number of zombies were killed, or maybe it could be summoned with a zombie head on top of some particular arrangement of stacked blocks, similar to golems and the Wither.  If it moved a little faster than regular zombies, did fairly high attack damage, and had a lot of health, it could easily be a boss worthy of facing multiple players.  If it were a summon-only boss, it might also be able to break blocks by kicking or ramming them (break, but not destroy, allowing players to pick them up during or after the fight).  This could make for a pretty intense boss fight that a team of players might work together to prepare for.  It should be beatable by 3 to 4 players in leather or iron armor, given the early-game nature of the fight.  If the boss was summoned using a zombie head, zombies might have a chance of dropping zombie heads when killed by players during siege events, as heads are currently very difficult to obtain early game.  Adding more bosses, designed with multiplayer in mind, would really improve the multiplayer game in Minecraft, by providing motivation for teamwork.


Item Diversity

In addition to all of this, there is one other thing that really encourages Terraria players to work together: class items.  Terraria does not have formal classes, and Minecraft does not need them either.  Terraria does have informal classes though, established through item bonuses.  This would be very easy to add to Minecraft.  Right now, there are three major weapon types in Minecraft: swords, bows, and potions.  There are a few other things you can use as weapons, but they are not anything that could reasonably be turned into informal classes.  Imagine an armor enchantment that adds damage to melee attacks.  This would not be difficult to implement: if the player hits a mob with a sword, add some damage, depending on the level of the enchantment (each level might add half a heart more damage).  If you can get a full set of armor with melee bonuses, you can do rather a lot more damage with a sword.  Likewise, enchantments could be added for increasing ranged damage (which may or may not apply to a thrown trident as well) for players specializing in bows, and enchantments could be added increasing the effectiveness of splash potions (for a twist, these could also boost the effectiveness of splash healing potions, allowing for a "healer" potion subclass).  Adding class items to Minecraft could be as easy as adding some enchantments that boost damage from specific weapon types.  Why do class items encourage working together in Terraria though?  Because if a player focusing on ranged damage gets a melee class item, that player is likely to offer it to another player who specializes in melee.  This also applies to weapons and other things.
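
As a rough sketch of how simple the melee bonus could be (again in Python for brevity, though the game itself is Java; the names and the half-heart-per-level value are just the assumptions from above):

    HALF_HEART = 1.0  # one heart is 2 damage points in Minecraft

    def melee_damage(base_damage, armor_pieces):
        # Sum the hypothetical "melee bonus" enchantment levels across
        # all worn armor and add half a heart of damage per level.
        levels = sum(piece.get("melee_bonus", 0) for piece in armor_pieces)
        return base_damage + levels * HALF_HEART

    # A full set of level 2 armor adds 4 hearts to every sword hit:
    armor = [{"melee_bonus": 2}] * 4
    print(melee_damage(6.0, armor))  # 6 base damage becomes 14.0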

Item diversity in general is one of Terraria's strong points when it comes to multiplayer.  Even within classes, different items behave differently.  One player might prefer a very hard hitting sword that is slow to use, while another might prefer a lighter sword with autoswing that is much faster.  As long as they are reasonably comparable, damage per second is not a major factor.  The same applies to other classes as well.  A magic class weapon that makes damaging stars fall on enemies might be perfect for one player, while another prefers a spell that fires lots of tiny crystal shards a short distance.  This encourages players to help each other by giving away items they are not interested in but that someone else might like.  Item diversity is a huge multiplayer advantage in Terraria.

One place Minecraft could really expand, aside, perhaps, from more interesting items in general, is "accessory" items that provide passive boosts.  The Baubles mod shows exactly how this can be done, by adding necklace, ring, and belt slots.  Combine this with the class items discussed above (as Terraria does), and now you can have a belt of strength that increases melee damage, a ring of dexterity that increases ranged damage, and maybe a necklace of...intelligence?...that boosts splash potions.  Of course, accessories could also provide defensive boosts, some of the underwater breathing and swimming enchantments, and so on.  Accessories, unlike weapons, could be indestructible (aside from things like lava and cacti), providing enchantments that might normally be found on other gear, at a higher price.  For example, the Infinity bow enchantment allows a bow to be used without consuming arrows.  When enchanting an accessory, there might be a chance for this enchantment to be given, but only on high end accessories.  Perhaps a ring crafted from iron would have no chance of getting the Infinity enchantment, but a diamond ring crafted from gold ingots and a diamond would be eligible, at the 30 level enchanting tier.  When wearing the ring, the enchantment would apply to any bow the character uses, but the ring would take up a valuable accessory slot that could instead be used for something else.  The Baubles mod provides one necklace slot, two ring slots, and one belt slot.  Enchantments for accessories could include some of the existing weapon and armor enchantments, always applied to whatever appropriate item is in use, as well as increased walking speed, higher jumping, better fishing results, faster mining, shorter mob aggro distances, and more.  Accessory items could be crafted and then enchanted, using the existing enchantment mechanics.  There might be occasional enchantments that only show up on items in dungeon chests but never when enchanting with the enchanting table.  Having a large variety of accessory enchantments would again motivate players to give or trade accessories that are not as useful to their play style.


These are not the only places Minecraft could improve its multiplayer experience.  Some have suggested that the cave/dungeon generation system needs an overhaul, to make underground exploration more interesting.  This would improve both the single player and multiplayer experience.  Some have suggested more mob diversity, which would also improve the game in general.  More decorative items, like furniture, would certainly improve the artistic building aspect of the game.  The items I have discussed above, however, are where Minecraft could improve the multiplayer game the most.  Something to encourage players to work together on a shared base would make a huge difference.  Transportation mechanics designed specifically with multiplayer in mind might make the biggest difference of all.  Group activities would give players more things to do together.  Greater item diversity would encourage cooperative play progression.  Most of these are also not terribly difficult to implement.  Adding a server option to create a village at the spawn location would be simple.  Items for teleportation would be fairly simple to add as well.  Adding more group activities would be more work, but improving the one event that already exists, to make it more motivating to engage in, would be fairly easy, and it would make a good group activity.  Increasing item diversity would be a lot of work, but the low hanging fruit of adding class specific enchantments to the existing enchantments would not be terribly difficult.  Minecraft certainly has a ton of room for improvement, but right now, the biggest place it could improve is the multiplayer game.  A Minecraft with a better multiplayer game would be far more popular than the current Minecraft, and given how popular the game already is, that is really saying something.

Sunday, July 1, 2018

Cast Off the Training Wheels

Modern programming languages have too many training wheels.  In this context, training wheels are language "features" that arbitrarily restrict what the programmer can do with that language.  In Java, this includes things like forbidding multiple inheritance and requiring functions to either handle exceptions or declare that they can throw them.  In Rust, this includes pointer ownership.  Most high level languages deliberately forgo any mechanic for direct memory allocation and manipulation.  All of these are "training wheel" features, designed to prevent the programmer from getting into trouble, but they also take decision making power away from the programmer, and most deny the programmer some very valuable tools.  Humans have been programming for well over a century, and we have been programming physically existing machines for over half a century.  Isn't it time to take off the training wheels?

Like early bicycles, early programming languages did not have training wheels.  Restrictions were typically the result of practical considerations.  For example, dynamic typing requires a far more complex compiler, and it requires the program to store types and do frequent type checking.  Hard coding types makes code easier and faster to compile, and it produces much faster programs.  On systems or in applications where performance is important, static typing is the practical choice.  As computing systems grew in resources, development time grew in priority, and languages with dynamic typing were invented.  Then people started arguing about the meta, and the concept of type safety was born.  Something similar happened with multiple inheritance.  Object oriented programming was invented to provide a thought model that is easier for the typical human to understand.  Inheritance was invented to allow this model to support more abstraction.  Multiple inheritance was added to allow for even more abstraction.  High levels of abstraction are associated with faster programming (though this neutralizes the benefit of making programming easier for regular people to understand).  Then someone pointed out that multiple inheritance can result in ambiguous code, and Java decided it needed to solve this by eliminating multiple inheritance and replacing it with a similar but far more limited mechanic, one that eliminates the ambiguity possible in multiple inheritance at the cost of a significant chunk of its benefits.  This was not a decision born from practicality.  It was deliberate training wheels, designed to protect programmers from themselves, at the cost of some of their freedom.  Rust's ownership system for dynamically allocated memory is another example, where someone noticed some common mistakes programmers make and decided to provide training wheels that make those mistakes impossible, again at the cost of eliminating some legitimate and valuable options.  It was an ingenious solution, but it makes the language less powerful and the highly skilled programmer less productive.  Training wheels are great when they are needed, but to the skilled professional, they often prevent optimum performance.  On a bike, training wheels limit the maximum safe speed at which a cyclist can make a sharp turn.  In programming, training wheels lock up some very valuable tools.  Now that we have training wheels, we need to learn when it is time to cast them away.

Instead of training wheels, many modern vehicles have warning systems.  If you are following too close or are about to hit something while backing up, an alarm sounds.  The car will not prevent you from continuing your action, but it will let you know that what you are doing is not safe.  Vehicles that have attempted to second-guess the driver have been known to cause accidents or make accidents worse.  Sometimes, briefly following too close is necessary to avoid an accident.  Sometimes backing into a parked car is better than getting hit by someone driving recklessly.  And the driver should always be able to correct any mistake the car might make.  In addition, imagine a car that stops at every crosswalk and train crossing, even when you can clearly see that no one is coming.  Some of these "features" are more like a car that simply won't ever drive across a crosswalk or train crossing, whether it is safe or not.  Safety systems are not a bad idea, but machines are still not even close to being able to judge complex situations as well as humans can.  Warning systems that allow for human judgment are currently far superior to systems that just refuse to do certain things in the name of safety.

The same applies to programming languages.  The language should not try to second-guess the programmer.  That variable that is assigned a value and then never used might exist to add a critical delay during the initialization of a microcontroller.  Warning the programmer that the variable is not being used is useful, but refusing to compile until the variable is used is counterproductive.  Warning systems do not have to be built into languages either.  In fact, they are generally better as separate programs, where possible.  The C compiler does not need to warn the user about style errors.  It is far more convenient when concerns are separated: linters warn about style errors, while compilers stick to syntax errors that prevent compilation entirely.

The fact is, compilers should not care about best practices, because in software engineering, best practices are heavily dependent on application.  Linters are where we already do most of the verification of best practices, and they are a good place to verify even more.  Linters cannot prevent compilation.  The best they can do is issue warnings.  But warnings are enough when it comes to best practices in software engineering.  Some warnings can and should be ignored.  Sometimes the best practice for a particular problem is not the general best practice.  Instead of enforcing best practices in language definitions, we should be using external tools to verify best practices and giving programmers the full set of available tools in our programming languages.  Best practice in engineering does not mean taking away tools that are easy to misuse in harmful ways (otherwise, engineers wouldn't have any tools).  Best practice is to provide all available tools, along with safety protocols that prevent harm when they are properly followed.  In programming, that means making languages that allow anything, plus protocols and linters that warn users when they are doing things that are potentially harmful.

It's time to take off the training wheels, so programmers can do their jobs better and more efficiently.  It's time to take best practices out of our programming languages and put them into linters, so that we finally have access to the full, rich array of programming tools, in addition to the ability to use them safely.

Soft Safety in Flexible Programming Languages

For almost half a century now, there has been an ongoing war between type safety and flexibility in programming languages.  It is time for that war to end, because Python 3.5 has demonstrated the mechanics for a solution.

The argument goes like this: One side says that static typing is critical to writing correct programs, because without compile time type checking, it is impossible to guarantee type consistency.  In practice, this means that serious bugs can easily be missed, coming up rarely enough not to get noticed and fixed but often enough to cause serious problems.  In addition, type mismatches are frequently a symptom of the programmer not fully understanding the problem.  The most strictly and statically typed languages often produce almost or even completely bug free programs, once the program compiles.  On the other hand, dynamic typing provides some extremely powerful advantages.  The ability to add attributes and methods to objects dynamically in Python is an example of dynamic typing that has great value.  Likewise, the ability to give a collection of unrelated objects methods with the same names and use them as if they were similar, without using some inheritance mechanic to invoke polymorphism, is also a function of Python's dynamic typing.  Dynamic typing is essential to meta-programming, which can allow for writing very complex programs in very little code.  Static typing advocates assert that static typing is necessary to avoid obvious errors, while dynamic typing advocates point to research showing that skilled programmers do not generally make those errors even without static typing.  It is all a trade off, but the consequences are very real.  Static typing is significantly less powerful, often making programs take much longer to write than in dynamically typed languages, but the stronger guarantee of correctness has the potential to cut significant time out of debugging.  The fact is, there is no clear consensus on whether one or the other is objectively better, but there are definitely cases where the cost of one is significantly higher than the other.  This is especially true in cases where meta-programming would be beneficial, because static typing does not allow for meta-programming.

The ideal language would probably be one that can handle both dynamic and static typing.  Unfortunately, no such language exists.  Yes, it is technically possible to use something like dynamic typing in C and C++, with liberal use of type casting, unions, structs, and void pointers, but it is ugly and time consuming to do this.  In languages like Python, one might enforce a sort of static typing by constantly checking types and throwing exceptions when the wrong type is used, but this misses the benefits of compile time checking, and it comes at a very high performance cost.  Like dynamic typing in C/C++, it also makes for ugly, time consuming coding.

The important thing about static typing is that there is a compile time type consistency check.  Haskell uses implicit typing.  In other words, it looks at how things are used and infers their types from that.  Sometimes it is necessary to explicitly specify types, but most of the time the inference is correct, and type errors are typically a sign of deeper bugs.  C and C++ require the programmer to always specify the type when a variable or function is declared.  The commonality is that all of these languages verify type consistency at compile time.  A dynamically typed language like Python instead keeps track of types during runtime, only verifying types when there are interactions.  Strong runtime type consistency checks are called strict or strong typing (not static typing).  Loose or weak typing is when a language uses implicit type casting to avoid type mismatches.  There is a general consensus that strict or strong typing is better than loose or weak typing, because loose typing frequently leads to unexpected and undesired behaviors (PHP is a widely criticized language, in part due to the negative side effects of loose typing).  Aside from JavaScript (which has also faced significant criticism over this), pretty much all common modern languages have fairly strict typing, limiting implicit casting almost exclusively to casts to floats in math where floats and ints are mixed.  (Python also allows some list-like objects to be multiplied by integers, to repeat the list that many times, but this involves no actual type casting.)
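
For instance, Python will happily promote an int to a float in mixed arithmetic, but it refuses to silently coerce between unrelated types the way a loosely typed language would:

    # Python is dynamically but strictly typed: types are tracked at
    # runtime, and mismatches raise errors instead of being coerced.
    print(1 + 2.5)     # 3.5 -- the int is implicitly promoted to a float
    print([0, 1] * 3)  # [0, 1, 0, 1, 0, 1] -- repetition, not casting

    try:
        print(1 + "1")  # a loosely typed language might give "11" or 2
    except TypeError as error:
        print(error)    # unsupported operand type(s) for +: 'int' and 'str'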

Python 3.5 added type annotations to the language.  Prior to this, some Python programmers would place comments after initial variable declarations, specifying the type, to help them keep track of types.  The creator of Python decided that this kind of informal annotation was valuable enough to some people that it deserved to be formalized, and type annotations were made a formal part of the language in Python 3.5.  Note, however, that type annotations are nothing more than a new, formal comment syntax for noting types.  They don't look like comments, but they function like comments.  According to PEP 484, the document introducing type annotations, the language does not and never will enforce them.  They are also known as "type hints".  They are not intended for runtime type checking (you still have to do this manually if you want or need it), and aside from making sure the syntax is correct, the Python interpreter totally ignores them.
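
Here is what the syntax looks like in practice (function annotations have been legal since Python 3.0; the standalone variable annotation below was added in Python 3.6):

    # Type annotations are checked for syntax but otherwise ignored.
    def double(x: int) -> int:
        return x * 2

    count: int = 21        # variable annotation (Python 3.6+ syntax)
    print(double(count))   # 42

    # The hints are not enforced at runtime.  This call violates the
    # annotation, but Python happily executes it anyway:
    print(double("ab"))    # abab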

What is the point of formal type annotations that are ignored?  How are they better than comments?  The reasoning mentioned in the PEP document introducing the feature is that comments annotating types make the code less readable and can interfere where other comments are needed.  Python has been carefully designed to be as self documenting as possible, and adding a formal type annotation syntax helps with this.  That said, there is perhaps a more important use for type annotations.  The Python interpreter does not and will likely never enforce static typing.  All static typing is, however, is a compile time check to ensure type consistency.  In standard Python, creating such a check would require some type inference mechanic, which is complex and difficult to write.  With type annotations, however, one could easily write a linter that uses the annotations to determine types and then verifies that those types remain static throughout the program.  There are two important things to note though.  The first is that compile time checks do not actually have to happen at compile time.  Compile time is merely the latest they can happen.  The second is that we do not actually need annotations to be part of the language.  Making them part of the language merely makes the code more readable.  If the Python community had come up with a consistent comment annotation syntax for this, it would have been just as useful as formal annotations for pre-compile static type checking.  It turns out that dynamic typing has never been a problem.  Anyone with the skill could have developed a comment syntax and written a linter to verify that typing remains static, for any language with dynamic typing.  If no one cared enough to do that, then clearly static typing in any particular language was never a priority for them.
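
In fact, such a checker already exists: mypy is a third-party static type checker that reads the annotations and verifies type consistency without ever running the program.  It even accepts a comment-based annotation syntax (defined in PEP 484 for code that must also run on older Pythons), which demonstrates the point that the annotations never needed to be part of the language at all:

    # mypy understands both the formal annotations above and plain
    # type comments like these:
    def double(x):
        # type: (int) -> int
        return x * 2

    count = 21          # type: int
    bad = double("ab")  # runs fine, but mypy flags this line as an
                        # incompatible argument type, before runtime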

There is a massive advantage to this soft type safety though, and that is that it is possible to mix static and dynamic typing.  Most programs written in dynamically typed languages only rarely take advantage of the dynamic typing.  Most variables keep their initial type for their entire lifetime.  Enforcing static typing on most variables is a valuable strategy for catching and eliminating bugs.  But when you do need dynamic typing, the cost of working around its absence in a statically typed language can be enormous.  When this happens, the additional debugging time that might be required in a dynamically typed language pales in comparison to the extra time spent working within static typing instead of using meta-programming.  So, what if we could make some variables static and some dynamic?  This is precisely what a soft type checker in a dynamically typed language can allow.  With Python's type annotations, we can use annotations on variables where we know the type should never change, and we can omit annotations where we need dynamic typing to use our time more economically.  This provides the advantages of both!
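
Using mypy's conventions, the opt-out can even be explicit: the special type Any marks a name as deliberately dynamic, while annotated names stay statically checked.  A minimal sketch:

    from typing import Any

    total: int = 0         # static: a checker rejects non-int assignments

    payload: Any = 42      # dynamic: explicitly opted out of checking
    payload = "forty-two"  # fine, by design
    payload = [4, 2]       # also fine

    total += len(payload)  # still checked: len() returns an int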

Type safety is not the only place where this might have value.  One of the things that makes C such a powerful language is its freedom to allocate and use memory dynamically and freely.  Unfortunately, this freedom also makes it prone to memory leaks and segmentation faults.  Rust introduces an ownership mechanic that allows for most of the power available in C, with the minimal restrictions required to ensure memory safety.  This mechanic also ensures thread safety.  Except, there are a few places where this safety is not necessary or where it would be better for the programmer to deal with memory safety instead of the language.  Like the places where dynamic typing is needed, these places are rare, but they do exist, and when they come up, it can make a huge difference.  Could we perhaps implement a kind of soft memory safety for C, just like type annotations allow for soft type safety in Python?  I believe so.

C already has a built-in annotation mechanic.  Pragmas are used by the C preprocessor as a sort of annotation that tells the compiler to alter its behavior in a particular way.  They can be used to silence certain warnings; for example, warnings that a variable is never used may be irrelevant in embedded systems programming.  In GCC, compiler plugins can also use pragmas.  For example, pragmas are used to tell OpenMP where and how to use threads in the program.  Pragmas that are not used by anything are generally just ignored.  It should be fairly simple to define a pragma syntax that indicates where and how ownership mechanics should be used, and then a linter could be made to interpret those pragmas and verify that the ownership mechanic is being used coherently.  Of course, the program will compile whether ownership is being violated or not, but if the linter is used consistently and all errors are fixed, a C program using this mechanic should be just as memory safe as a Rust program using language enforced ownership.  The best part is that if you have some memory where you know memory safety is not an issue, or where you need to ensure memory safety manually for whatever reason, you can just indicate that ownership mechanics should not be applied to it.  With this, C itself does not have to guarantee memory safety for your program to have a memory safety guarantee.  With a little extra syntax, memory safety can be guaranteed before the compiler even touches the code.

In my opinion, soft safety is actually better than hard safety, because it does not eliminate the benefits provided by more flexible and powerful programming languages.  It allows us to take the best aspects of very strict languages and apply them to more dynamic and flexible languages, without losing any of the flexibility.  What we need is not type, memory, and thread safety built into languages.  What we need is syntax and software that can analyze our source code and guarantee exactly the amount of safety we want and need, without eliminating the features that make many languages so powerful.  This kind of soft safety could be another step in the evolution of quality for software development.

Monday, April 16, 2018

Video Game Design: Narrative

Narrative in video game design is much more important than in tabletop game design.  In tabletop games, narrative is generally limited to providing a motive for the player to play the game.  It explains why the player wants the most points, to control the most territory, or to complete some goal before everyone else.  In video games, narrative is not a static answer to the questions, "Who am I?" and "Why am I here?" in the context of the game.  In video games, the narrative is the story the player is either playing through or creating.

Some games do not need narratives.  An excellent example of this is Tetris.  There is neither room nor need for narrative in Tetris.  Unlike tabletop games though, most video games are designed to be played for more than just an hour or two.  Many games have transitions between levels, regions, or even activities, and each transition often needs some kind of narrative.  Because the narratives within a particular game need to be associated with each other, games need stories, where each narrative is a progression of the previous one.  While not all video games need narratives, most do, and many need narratives that fit together into a whole, coherent story.

Video games are a form of art that combines many different artistic elements into complex compositions.  The obvious artistic elements in video games are visual art, music, and sound effects.  Stories are art too though, and many of the best games integrate narrative art forms into themselves.  Unfortunately, narrative is frequently added to games as an afterthought.  This can result in games with awesome graphics, audio, game mechanics, and so on, but with terrible, unmotivating stories.  Narrative is often an essential element in the composition of video games, but it also tends to be overlooked, resulting in mediocre games.

Narrative in a game should answer a few basic questions.  It should explain who the character is and why the character is there.  This, in turn, should provide at least initial answers to the question of what the character's immediate goals are, and ideally it should motivate the player to want to achieve those goals.  In video games, the narrative should form a story that explores the goals of the character, revealing new goals when necessary and helping the player understand how his or her actions are helping the character achieve those goals.

Especially good narratives also explore the development of the character, and they can even ask hard questions designed to make the player think.  For example, in a serious game, the narrative might prompt the player to consider whether the things the character is doing are wrong or right.  It might ask questions that encourage the player to reconsider his or her point of view on something.

As with books, television shows, or movies, narratives can add spice by throwing in plot twists.  Engaging the player with narrative is often just as important or even more important than engaging the player with pretty graphics, good music and sound effects, and well designed game mechanics.

Narrative also provides an additional setting for exploration, which can be a very strong motivator.  Players like to explore, because they can find interesting things.  This may be new scenery, interesting locations in the game, new items, new friends, or new enemies.  A good narrative also provides room for exploration.  The Myst series is an excellent example of this.  There are many places where you can find books or other text that reveal backstory, information about the relationships of the characters, how things work in the game worlds, and more.  There are also occasional opportunities to explore narrative by talking to people in the game.  The vast majority of the narrative, however, is revealed through settings found during physical exploration.  In the original Myst, you can learn about the personalities of the brothers by exploring their dwellings in the various Ages that are available to explore.  This provides more motivation for physical exploration.  In Myst 3: Exile, a man who got caught up in the schemes of the brothers reveals backstory and plot information at various points during the early and late game.  This narrative exploration is at least as interesting as the physical exploration of the game.  Wanting to know more of the story can be a very strong driving force, when the narrative is well designed.

For many genres of video game, narrative is critical.  Not all games need narrative, but when a game needs narrative, it is important to do it well.  Just like artists do not put epic paintings in crummy frames, because the frame will detract from the painting, game designers should not combine poor narratives with epic graphics, sound, and game mechanics, because the narrative will devalue all of the higher quality art work.  The popularity of retro games has also shown that narrative quality is often more important to players than the quality of other art in a game.  Good narrative might be hard to do, but it is very important to many kinds of video games.

Thursday, April 12, 2018

Video Game Development: Cameras

If you have not read Video Game Development: Relativity, you might want to do so, before reading this article.  If you don't have a solid grasp of basic relativity, this article may be confusing and hard to follow.

While the concept of cameras is generally only a major topic in 3D graphics, this discussion is going to be limited to cameras looking at 2D game worlds.  The reason for this is that cameras in 3D space are a whole, fairly complex topic of their own, and a full explanation would take many fairly long articles.  The purpose of this article is to explain how cameras work in 2D and why we need them.

One of the most important places for understanding relativity in video game development is in cameras.  The most obvious reference frame for games is the screen or window, where the upper left corner is (0, 0).  This works fine, until your game area is larger than the screen.  What happens when your game is on an 800 by 600 screen, but the game map is several thousand pixels wide and tall?  In most video games, a majority of the game map is off of the screen at any given time.  To view the entire map, we have to be able to change where on the map the screen is looking.  In my experience teaching video game design, the most obvious solution to students is to move the map around, keeping the screen as the reference frame.  This is like the pre-Copernican geocentric model of the solar system though.  Every time the player wants to see a different area of the map, every single entity on the map has to be moved.  This even includes entities that are not supposed to be able to move, like trees or buildings.  As with the geocentric model of the solar system, this results in a lot of extra math, and that can be very expensive when it comes to performance.  The fact is, the screen makes a pretty poor reference frame when the map is larger than the screen.

If there is a main character, it might be tempting to make the main character the reference frame.  Unfortunately, this is no better than making the screen the reference frame.  In fact, the two are generally identical, as the screen is usually centered on the character, so if the character is the reference frame, the position of the screen never moves.  So again, we are back to the geocentric analog, where we have to move everything, including stationary objects, whenever the character moves.  As before, the character is a pretty poor reference frame as well.

What if we make the reference frame the thing that is supposed to be stationary?  If everything is supposed to appear to be moving relative to the map, then maybe the map should be the reference frame.  This allows stationary things to remain stationary, instead of wasting processing time moving with reference to the character or screen.  But now we have a problem: How do we move everything relative to the screen, so we can see different parts of the map?  This is the wrong question.  This is like ancient astronomers asking, "How does everything move relative to the Earth?", and it is what resulted in the geocentric model in the first place.  The answer that led to the truth was that we are moving, relative to something else that makes a better reference frame.

What if we allow the player to move?  (It is important to distinguish here between character and player.  The character is an entity in the game that is controlled by the player.  The player is the person sitting at the desk in front of the computer.)  The player's eyes, in the game world, are essentially the screen.  So what I am suggesting is, what if the screen moves relative to the world?  It turns out this is the right solution.

In a video game, the screen represents the player looking into the game world.  It essentially represents a camera in the game world that is transmitting what it sees to the player's screen.  In this context, if we want the player to see a different part of the world, we just move the camera.  In 2D games, the camera can be represented by a rectangle.  Typically the rectangle will be the size of the screen (if it is not, that generally represents scaling), and the position of the rectangle in the game world determines what part of the world the player sees.

Implementation of a camera is actually fairly simple.  The camera data structure contains four elements: an x and y coordinate that together represent its position in the game world, and a width and height that represent the area of the game world it can see.  If that area is larger than the screen, the image is scaled down to fit onto the screen (essentially zooming out), and if that area is smaller than the screen, the image is scaled up to fill the screen area (zooming in).  Typically, however, the camera is either the same size as the screen or the size of the portion of the screen where the game world will be displayed.

Once you have a rectangular representation of your camera, all that is necessary is a little bit of additional math when rendering.  Consider: if an entity in the game is at (0, 0), and the camera is at (10, 10), then the entity is 10 pixels above and 10 pixels to the left of the camera's view, at screen position (-10, -10).  This suggests that all that is necessary to apply the camera is to subtract the camera's position from the position of each entity as it is being rendered, and it turns out this gives the correct results.  All you need to implement a camera is a representation of the camera as a rectangle and two subtractions for each entity being rendered.
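
Here is a minimal sketch of such a camera in Python (the names are mine; any 2D rendering library would plug into world_to_screen the same way):

    class Camera:
        def __init__(self, x, y, width, height):
            self.x = x            # top left corner of the camera,
            self.y = y            # in world coordinates
            self.width = width    # the area of the world the
            self.height = height  # camera can see

        def world_to_screen(self, world_x, world_y):
            # Applying the camera is just two subtractions.
            return (world_x - self.x, world_y - self.y)

    camera = Camera(10, 10, 800, 600)
    print(camera.world_to_screen(0, 0))  # (-10, -10), as described above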

Scaling requires a little bit more work.  Typically, the camera is the same size as the area of the screen where the image will be rendered, but sometimes we want to make the camera smaller or larger, to scale the image.  This provides the ability to zoom in or out.  To scale, we divide the screen dimensions by the camera dimensions, and then scale the image being rendered by those values.  For example, if our window is 100 by 100, and the camera is 200 by 200, then we need to scale the image by 0.5 in each dimension (that is, (100/200, 100/200)) for it to fit.  Some rendering libraries will handle this automatically, if you tell them what size you want the source and destination to be.  Others provide scaling functions that may be used by the developer, if desired.  In 2D games, zooming is not a terribly common feature, but with a well implemented camera, it is not hard to accomplish.
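
The computation itself is just a pair of divisions (this sketch assumes the Camera class from above):

    def scale_factors(camera, screen_width, screen_height):
        # Values below 1.0 shrink the image (zoom out); values above
        # 1.0 enlarge it (zoom in).
        return (screen_width / camera.width, screen_height / camera.height)

    camera = Camera(0, 0, 200, 200)
    print(scale_factors(camera, 100, 100))  # (0.5, 0.5), as in the example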

Cameras come with some additional advantages.  For example, if you perform collision detection of each game entity with the camera, you can avoid rendering things that are not on the screen, by only rendering entities that collide with the camera.  This can result in significant performance improvements, as rendering is generally far more expensive than collision detection.
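
That test is just a rectangle overlap check against the camera (again assuming the Camera sketch above):

    def is_visible(camera, x, y, width, height):
        # True if the entity's rectangle overlaps the camera's rectangle.
        return (x < camera.x + camera.width and x + width > camera.x and
                y < camera.y + camera.height and y + height > camera.y)

    # Only render what the camera can actually see:
    # for entity in entities:
    #     if is_visible(camera, entity.x, entity.y, entity.width, entity.height):
    #         render(entity, camera.world_to_screen(entity.x, entity.y))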

Another advantage of cameras is that the view is not bound to the map or any other game entity.  For a game where the player controls a single character, keeping the character in the middle of the screen is as easy as centering the camera on the character each time the character moves.  If you want the player to see some other part of the map for some reason though, it is easy to move the camera to that location for as long as necessary.  In games where the player should be able to view any part of the map as desired (real-time strategy, for example), the player can have complete control over the camera position.  Adding this flexibility to a game that uses the screen or character as the reference frame is significantly more work than just moving a camera around.  In addition, handling this with a camera is significantly more efficient.
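
Centering is one more pair of subtractions (again using the Camera sketch above):

    def center_on(camera, target_x, target_y):
        # Position the camera so the target sits in the middle of the view.
        camera.x = target_x - camera.width / 2
        camera.y = target_y - camera.height / 2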

Cameras are the simplest and most efficient way of providing a means for the player to view different parts of a game world.  They are not initially obvious to everyone though.  With a basic understanding of relativity, cameras are quite easy to implement, and they come with many benefits.