Monday, April 7, 2014

Time Again

A Shaggy Mouse Nudibranch found in
tide pools at Discovery Park in Seattle.
We recently picked up two more pinhole cameras that had been sitting around Seattle all winter long.  One spent about 9 months on the roof of an apartment building in Wallingford, and the other spent about 3 months behind a sealed-up military housing unit at Discovery Park.  Both pieces of photo paper came out remarkably wet and sticky, but with intact photos.  Interestingly, these cameras, which used a much smaller aperture than the previous ones, came out burned red by the exposure rather than the violet or black of the earlier cameras, and it's unclear whether this is due to the lower level of exposure to the sun or to the extreme dampness of the photo paper from the elements.  Curiously, these negatives also required much less color-curve correction than previous exposures.  One lesson that has proved useful in selecting sites for the pinhole cameras is that the subject should be as strongly backlit as possible.  In the case of the abandoned military housing, we selected a site where the pinhole camera would frequently fall directly in the shadow of the subject, which made for a beautiful shot.

Pinhole exposure lasting roughly 9 months on the roof of an apartment building in Wallingford.  Exposed July 2013-April 2014.
Pinhole exposure from Jan 25, 2014 until April 5, 2014.  Boarded-up military housing at Discovery Park.




Saturday, April 5, 2014

Transmutation

What's left of the old Hanford high school.
Ever since I moved to Seattle, I had wanted to tour the Hanford Nuclear Reservation.  Even as the military used the Hanford site to transmute Uranium into Plutonium, they transmuted a gorgeous desert landscape into a terribly contaminated mess.  It's hard to imagine anyone having such disrespect for the landscape today, but back then contaminated water was piped directly into the Columbia River and sometimes outright dumped into the desert.  Waste that would be toxic for 10,000 years was pumped into tanks designed to last 50.  American and Washington state taxpayers are paying the price for this cleanup today, but the tour guides will tell you that parts of the reservation will likely have to be controlled and monitored for the foreseeable future.  That said, the B reactor is a marvel of engineering that is very much worth your time to come out and see, and perhaps the best historical site tour I've been on since the Titan Missile Museum.  You can make reservations online at their website, hanford.gov.

These cylinders were loaded with Uranium slugs while the reactor was in operation.  Every month, the reactor was shut down and the slugs were removed and sent for processing at a nearby facility where Plutonium could be extracted.  The B reactor did not produce any electricity, only plutonium.  The waste hot water was piped to cooling ponds and ultimately back to the river.

The B reactor staff was well prepared for the event of a radioactive zombie attack.

Systems in the reactor were either remarkably high-tech or remarkably low-tech.  In the case that power to the horizontal control rods was lost, giant buckets of stones would be dropped onto a cylinder of hydraulic fluid, pushing the control rods into the safe shut-down position.
A final backup plan was stored in the control room office space.  No, the backup plan was not keys to a rocket car or the phone number to the President.  It was the one thing they were sure to need in case of a meltdown.
"SCRAM" is a term originating from the early days of nuclear reactors, when the safety back-up plan was the "Safety Control Rod Axe Man."  In these early reactors, a single control rod was suspended above the reactor by a rope.  In the event that the reactor went out of control, the axe man would cut the rope, dropping the control rod into the reactor and shutting it down.  In the B reactor, vertical control rods suspended above the reactor could be dropped into place should the horizontal control rods be rendered inoperable, and for historical reasons this was known as SCRAMming the reactor.  So, yeah: Simiao and Theo should just go ahead and mash those buttons.

Thursday, February 20, 2014

What is the Best Hearthstone Deck Composition?

A friend recently introduced me to Hearthstone, an online collectible card game made by Blizzard.  The rules are described in detail at the link above, but the general idea is that you spend some resources ('mana crystals') to use cards in your hand with the goal of defeating your opponent by doing 30 damage to them.

Like many collectible card games, there are two major elements to Hearthstone's strategy.  First, you must design a deck of 30 cards which you believe will give you the best opportunities to thwart your opponent.  Second, you must play the hand you actually draw wisely in order to best take advantage of the opportunities presented to you during the game.

This post focuses on the former phase.  In particular, I want to know what the distribution of casting costs (in mana crystals) of the cards in my deck should be in order to make maximum use of the resources available to me.  If you are not interested in my methodology, you should skip to the results section below.

The Model of Hearthstone

Hearthstone is a complicated game.  There are numerous synergies between cards that would make you want to play them together, and there are equally many situations where you might not want to play them at all based on the cards your opponent has played.  I start with a simple assumption:  the player who uses the most mana crystals will tend to win.  This isn't strictly true, but it's easy to see that the player who has used more of his resources will probably be better off more often than not.

It's immediately clear that there's a flaw in my model:  the cards in a player's hand have a non-zero value just for being there!  There are cards with zero casting cost, and they are undeniably useful in real play.  So, how much is a card in your hand worth?

Figure 1:  Arcane Intellect card.  Card and artwork is copyrighted by Blizzard, and is used here for non-profit educational purposes only.

Figure 1 shows a card called Arcane Intellect.  It costs 3 mana crystals to use, and it allows you to draw two cards.  However, when you use Arcane Intellect the card itself is used up.  I am now left with a simple equation:

3 mana crystals + value of 1 card = value of 2 cards

Solving for the value of 1 card, I see that a card in my hand is worth 3 mana crystals.

Finally, I assume that the game is balanced.  That is, each card's usefulness is roughly equal to its casting cost in mana crystals plus the value of 1 card in my hand.

Action               Usefulness in Mana Crystals
Character Ability    2
Cost 0 Card          3
Cost 1 Card          4
Cost 2 Card          5
Cost X Card          3 + X

The Greedy AI

The AI that I use to model plays with my deck is greedy.  Given the turn T that I wish to win at, it will try to maximize the total usefulness of cards and abilities played at that turn.  Thus, I can design an early-game win deck that attempts to win at turn 8, or a late-game deck that attempts to win later, around turn 17.

Note that my AI plays with complete disregard for the opponent or the opponent's plays, and it does not know or care about what card it is playing beyond its casting cost.  It is not possible for my AI to win or lose--the opponent in the simulation never does anything.  My AI greedily seeks to maximize the usefulness of the cards played at every turn.

My AI always goes first.  At the beginning of the game when all or part of the hand can be re-drawn, it tries to minimize the casting cost of cards in its hand.  The AI thus discards all cards whose casting cost is greater than the expectation value of the cost of cards remaining in the deck.
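The re-draw rule above can be sketched in a few lines of Python.  The function name `mulligan` is a hypothetical helper, not the actual simulation code:

```python
from statistics import mean

def mulligan(hand, deck_costs):
    """Keep only the cards whose casting cost does not exceed the
    expected cost of the cards remaining in the deck."""
    expected = mean(deck_costs)
    return [cost for cost in hand if cost <= expected]

# With the remaining deck averaging 3.5 mana, the 4 and the 7 go back
print(mulligan([1, 4, 7], [2, 3, 4, 5]))  # → [1]
```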

Each turn, the AI draws exactly one card.  I assume that it never draws any additional cards due to spells like Arcane Intellect in Figure 1.  If it does, then the value of the card drawn is represented by that card's usefulness in mana crystals (in the case of Arcane Intellect, the usefulness is 6 as shown in the table above).  Then, the AI will play the combination of cards and abilities that maximizes the total usefulness of cards played this turn.  If multiple combinations have the same usefulness, the combination that leaves the AI with the largest hand size is chosen.  This is identical to the knapsack problem where the items are the set of cards in the player's hand plus the character's innate ability, the weights are the cost in mana crystals, the backpack is the player's current pool of mana crystals, and the values are the usefulness from the table above.

For example, if the AI had 3 mana crystals and it had a cost 1, 2 and 3 card in its hand, it would play the cost 1 and cost 2 cards for a usefulness of 9.  A player in an actual game might find that it is actually more useful to play the cost 3 card, or to use the character's innate, cost-2 ability.
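Here is a minimal sketch of that knapsack step, assuming the usefulness values from the table above (cost + 3 for a card, cost for the hero ability).  With at most ten cards in hand, brute-forcing every subset is cheap, so no dynamic programming is needed:

```python
from itertools import combinations

CARD_BONUS = 3  # a card in hand is worth 3 mana crystals

def best_play(hand, mana, ability_cost=2):
    """Return the maximum total usefulness playable this turn.
    Ties are broken in favor of playing fewer cards (larger hand kept)."""
    # items: (cost, usefulness); the hero ability is worth exactly its cost
    items = [(c, c + CARD_BONUS) for c in hand] + [(ability_cost, ability_cost)]
    best = (0, 0)  # (usefulness, -cards_played)
    for r in range(len(items) + 1):
        for combo in combinations(range(len(items)), r):
            cost = sum(items[i][0] for i in combo)
            if cost > mana:
                continue
            value = sum(items[i][1] for i in combo)
            cards_played = sum(1 for i in combo if i < len(hand))
            best = max(best, (value, -cards_played))
    return best[0]

# The example above: 3 mana and a hand of cost-1, cost-2 and cost-3 cards
print(best_play([1, 2, 3], 3))  # → 9 (play the cost-1 and cost-2 cards)
```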

At turn T, the AI reports the total utility achieved in this simulated game.  This varies from simulation to simulation and the average is reported from a large number of simulations.

The Deck Designer

To design the deck with the highest chance of winning at turn T, the deck designer starts with 30 random cards with casting costs from 0 to 8 mana crystals.  The designer does not care specifically what cards these are, only their cost in mana crystals.  The AI is run on this deck composition a large number of times, and the average usefulness at turn T is reported.

Next, the gradient descent algorithm is run by taking a card out of the deck and replacing it with a card with a different casting cost for each possible combination of removed card and inserted card costs in mana crystals.  If any of the modified decks has a better usefulness than the original, then gradient descent is applied again on the modified deck.  If none of the modified decks has a better usefulness, then we have reached a local maximum of utility and this composition is saved as a candidate best deck.

The deck designer is run a large number of times, and the mana cost distribution with the best average utility is reported as the best overall composition.  Both the deck designer and the results of the simulated games are non-deterministic, so it's possible that a better deck exists than the result reported by the deck designer.
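The search loop itself is plain hill climbing over one-card swaps (which the description above calls gradient descent).  A sketch, with the expensive game simulation abstracted behind an `evaluate` callback -- a hypothetical interface, not the author's actual code:

```python
def hill_climb(deck, evaluate, max_cost=8):
    """Repeatedly swap one card's casting cost for another; keep any swap
    that improves the score, and stop at a local maximum."""
    best_score = evaluate(deck)
    improved = True
    while improved:
        improved = False
        for i in range(len(deck)):
            for new_cost in range(max_cost + 1):
                if new_cost == deck[i]:
                    continue
                trial = deck[:i] + [new_cost] + deck[i + 1:]
                score = evaluate(trial)
                if score > best_score:
                    deck, best_score = trial, score
                    improved = True
    return deck, best_score

# Toy objective that just rewards cost-3 cards; the real evaluate would
# average the greedy AI's usefulness at turn T over many simulated games.
deck, score = hill_climb([0] * 30, lambda d: -sum(abs(c - 3) for c in d))
print(deck == [3] * 30)  # → True
```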

Results

I had the deck designer optimize for several different compositions:  an 8-turn quick kill composition, an 11-turn mid-game kill composition, a 14-turn late-game composition, and a 17-turn ultra-late game composition.  These were the best deck compositions according to my model, but when using these as a guideline in designing your deck, please keep in mind that the heuristic model should certainly come second to the actual synergies between your cards.  If you don't have exactly the same number of cards of each casting cost as the ideal composition, don't sweat it.  The error bars on the predicted strength are large enough that a difference of a few cards here or there is not going to make a noticeable difference.

Figure 2:  Simulated optimal deck compositions in mana crystals for decks designed to win at particular turns. 

Figure 2 shows the best simulated deck compositions.  Again, these compositions should be used as a flexible guideline as the synergies between the actual cards in your deck are going to make a much more significant difference than a slight deviation from the optimal casting cost distribution.  It seems that for all decks, the cost-3 and cost-4 cards are the most important workhorses.  A deck designed for an 8-turn kill has over half of the cards in the deck of 30 with casting cost 3 and a sprinkling of cards from cost 0 to 5.  On the other hand, a deck designed for a 17-turn kill has only one cost-2 and cost-3 card, with roughly equal parts cost-4 to cost-7+ cards.

Figure 3:  Total usefulness versus turn for the four optimal deck compositions shown in Figure 2.  Note that decks that take an earlier lead pay a price for that agility in the late game.

Obviously, a deck loaded with cost-3 cards like the 8-turn kill deck is going to pay a price in the late game.  Figure 3 shows this tradeoff between early-game strength and late-game strength.  The 8-turn kill deck gains a sharp advantage early on, then fizzles.  After turn 10, it's at a huge strength disadvantage compared to the other decks.  On the other hand, the 11 and 14-turn kill decks strike a better balance between early-game strength and late-game staying power.  Given that the 17-turn kill deck has only a slight advantage over the 14-turn kill deck at turn 20, I don't believe it would be worth designing a deck that aims to win this late in the game.

Conclusions

Since running these simulations, I've changed my deck compositions to strive for an 8-turn kill.  This has been enormously successful for me in practice.  By choosing cards that harmonize well and selecting a deck composition with a high chance of getting these cards into play before my opponent can react, I've realized a substantial improvement in win rate.

I would recommend that the reader starts out with the 8-turn or 11-turn kill compositions depending on the desired play style.  The 14-turn composition may work best for a deck whose concept is to simply survive in the early game and to outgun the opponent in the late game.

Sunday, February 9, 2014

What Is the Most Underrated DotA 2 Team?

My previous posts about betting at the DotA 2 Lounge have been enormously popular.  There have been over 1000 games since my last post, and I decided to revisit the vastly expanded data set.  I have previously made the scripts available to anyone wishing to perform these analyses themselves.

First, a moment to talk about parimutuel betting.  The DotA 2 Lounge runs a parimutuel betting scheme.  Suppose that there are two teams playing and a bettor bets on the winning team.  That player's payout, assuming no house take, is simply the value of the total pool multiplied by the player's share of the money bet on the winning team.


It's noteworthy that a successful gambler is not so much betting for the winning team as betting for the team that maximizes their expected winnings.  Let's take a look at an example to demonstrate this.

                         Team A    Team B
% of Bets                  10%       90%
True Chance of Winning     25%       75%

In this example, I have two teams playing, Team A and Team B.  90% of the bets are in favor of Team B.  This means that if Team A wins, then the payout will be 9-to-1.  The important piece of information here is the true chance of winning.  In the real world, this is certainly not known exactly so I have made it up for this example.

The expected return on investment for a bet on Team A is then 0.25 / 0.1 = 2.5.  The expected payout for a bet on Team B per unit bet is 0.75 / 0.9 = 0.83.  Therefore, a smart bettor will bet on the underdog in this case.  A bet on Team B usually wins, but it loses money on average.
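The arithmetic above generalizes to a one-liner (assuming, as before, no house take):

```python
def parimutuel_return(bet_fraction, win_probability):
    """Expected return per unit bet in a no-house-take parimutuel pool.
    A winning bet pays (total pool) / (fraction bet on the winner),
    so the expectation is win_probability / bet_fraction."""
    return win_probability / bet_fraction

print(parimutuel_return(0.10, 0.25))  # Team A: 2.5, a strongly positive bet
print(parimutuel_return(0.90, 0.75))  # Team B: about 0.83, a losing bet
```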

Two pieces of information matter here.  The first is the proportion of bets on each team, which the DotA 2 Lounge shows you; you should bet as late as possible so that you have the best possible information on how the other gamblers are betting.  The second is your own estimate of each team's chance of winning.  Bet on the team that you believe has a higher chance of winning than its proportion of the bets.  If you believe that each team's chance of winning is identical to its proportion of the bets, then which team you bet on is irrelevant.

Parimutuel betting with no house take is a zero-sum game.  If the gamblers are good at appraising each team's chances of winning, then it is irrelevant which team you bet on.  This is the second figure I have shown in each of my previous studies.  Previously, I found that the crowd was very good at this: random betting produced neither a significant profit nor a significant loss.  This still appears to be the case.


Figure 1:  Random betting strategy at the DotA 2 lounge in the first 1637 (1414 of them were bettable) games.  Average 48 rares lost.   Standard deviation 39.6.   Zero (the break even point) is within 2 standard deviations of the mean, so I conclude that the random betting strategy is neither a winning nor a losing one, and that the crowd is still rational.

Now I'd like to know what the most overrated and underrated teams are.  In order to determine this, I will use the following method:  For each team that played at least 10 games at the DotA 2 Lounge, I will propose that I bet one rare on them per game and record the payout.  Then, I will simulate 10,000 trials of random betting only on the set of games that this team played.  I will use these simulated random bets to determine the mean and standard deviation of the payouts in games played by each team.  The standard score of each team will be equal to:

standard score = (actual payout - mean random payout) / (standard deviation of random payouts)
Standard scores over 2 or below -2 will be considered significant enough to call a team "underrated" or "overrated", respectively.  All other standard scores will be considered statistically insignificant, and will not be reported.
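The scoring procedure can be sketched as follows.  The `(payout_a, payout_b, winner)` encoding of each game is hypothetical -- the real scripts parse the Lounge's pages -- but the statistics are the same:

```python
import random
from statistics import mean, stdev

def standard_score(actual_payout, games, trials=10_000, seed=42):
    """Compare the payout from betting one rare on a fixed team every game
    against the distribution of payouts from purely random betting."""
    rng = random.Random(seed)
    random_payouts = []
    for _ in range(trials):
        total = 0.0
        for payout_a, payout_b, winner in games:
            pick = rng.choice(('a', 'b'))
            if pick == winner:
                total += payout_a if winner == 'a' else payout_b
            total -= 1.0  # one rare staked per game
        random_payouts.append(total)
    mu, sigma = mean(random_payouts), stdev(random_payouts)
    return (actual_payout - mu) / sigma

# A team that always wins at 2-for-1 odds over 20 games nets +20 rares,
# which random betting (mean near 0) rarely matches: the score is large.
print(standard_score(20.0, [(2.0, 2.0, 'a')] * 20) > 2)  # → True
```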

Most Underrated Teams
Team         Score
Alliance     15.0561
Na'Vi         8.79236
LGD.cn        8.74577
DK            8.02724
VG            5.32555
TongFu        5.19414
Kaipi         4.9593
Speed         4.7966
iG            4.08545
Fnatic.eu     3.77795
Liquid        3.57337
Orange        3.37028
Sigma.int     3.25397
Mski          3.15417
MiTH          2.91098
Empire        2.55101
mouz          2.24567
FD            2.20341

Most Overrated Teams
Team     Score
WPCA     -3.60975
RS       -3.36541
aL       -2.24014
MUFC     -2.12442

Remarkably, many popular teams scored so well over time that I was actually unable to simulate a scenario where random betting did better than just betting in their favor every time in 10,000 trials.  Alliance, Na'Vi, LGD.cn, and DK in particular seem to be very underrated.

The only significantly overrated teams at the DotA 2 Lounge were WPCA, RS, aL and MUFC.  Remarkably, WPCA never won a match, so random betting always did at least as well as betting on them!  These teams are all quite overrated, so next time you see them playing, you might consider betting for the opposition.

Note that all of this is based on historical data and does not guarantee future results.  If based on this data everyone starts betting for Alliance then Alliance may soon be one of the most overrated teams!  Or, if aL picks up some awesome new players, it may not be at all fair or accurate to base their future performance on their past failure to deliver.

Remember, like I said above: parimutuel betting is all about deciding whether the crowd has predicted the correct probabilities of winning, and betting against the crowd favorite is a good idea in any situation where you think that the crowd favorite's actual chance of winning is less than the proportion of bets in its favor.

Good luck with your bets!

Saturday, January 18, 2014

Stories from Hawaii

A gold dust day gecko we rescued
from the window in our bathroom.
Simiao and I spent Christmas in Hawaii, which was a welcome reprieve from the frigid weather in the North.  When we were planning our vacation, there was a brief debate over whether we should stay in a hotel or use airbnb for lodging, and we ultimately decided on the latter.  It turns out that this was the right decision.

On the first day of our trip when we visited Ko Olina, we discussed the song Cheeseburger in Paradise.  I think that staying in a hotel is akin to going to a different world just to try more of the same.  So many tourists claim that they want authenticity, but end up travelling to far off lands only to seek out the very same options available to them at home.  Our host family made it possible for us to experience Hawaii in a way that we never could have at a hotel.  Think, Ono Hawaiian Food instead of Alan Wong's, a real family holiday party instead of a carefully choreographed Luau for tourists, and poke from Tanioka's on the beach instead of a burger on Waikiki.

How to catch a gecko that has found its way into your bathroom. The process was somewhat complicated by the gecko's persistent refusal to cooperate and Simiao shrieking, "I divorce myself from this! I am not responsible for anything that happens!"

Simiao walking on the rocks near Ko Olina.

Turbulent water near Koko Head Park.
Myself with our hosts' dog, Peanut.
Hawaiian Monk Seals littered the beaches.

Panorama of Hanauma Bay.  The bay had remarkably beautiful and clear snorkeling in the morning but was milky with silt from people playing in the sand by noon.

Snorkeling with the sea turtles.  Video taken with Simiao's iPhone in a waterproof case.

Sunday, December 1, 2013

Truly Tricky Graduate Physics Problems

For a long time, my collection of Physics problems has been one of the largest draws on my website.  Now, many of my favorite graduate Physics problems are available in an e-book from Bitingduck Press, Truly Tricky Graduate Physics Problems.  

The solutions that have previously been available here are still available for free, and always will be.  I find them helpful as a reference and wanted them to be available to others as well.  The solutions in the book include my own favorite problems as well as new problems from the other authors, Jay Nadeau and Leila Cohen, and are written in a format that can easily be cut-and-pasted into your favorite computer algebra system.

If you've found my Physics problem collection helpful in preparing for your qualifying exams, you may want to check out the e-book as well for more challenging Physics problems to help you prepare.

Sunday, November 17, 2013

Pinhole Picture from Arizona

Last January, I hid 5 pinhole cameras in the desert near Phoenix, AZ with the goal of taking some stunning pinhole photographs, taking advantage of the bright desert sun and the reflective desert sand.  In order to ensure that they wouldn't be found, we pulled over at a random place along N. Bush Highway, walked about 50 feet in the desert, and plastic tied the cameras to stumps or desert flora.

I still think it was a pretty solid plan, but there were a few flaws.  First, the desert in Arizona gets so hot that even the photo paper doesn't want to deal with it.  It looks like after only a month or two of hot days, the photo paper curled up in the oven-like can and completely stopped exposing.  Second, my estimation of how much light reflects off the sand was pretty far off.  The parts of the image that looked at sand were just dark and indistinct.  The simple fact is that the sun is so bright that everything else may as well be black.  Third, it turns out that the desert there must see more pedestrian traffic than I expected.  One of my cameras was totally destroyed, and one had been torn open and the photo paper taken out.

The one shot that came out all right was less than ideal.  I had placed the can horizontally and facing East on the theory that I would get reflections of the sunrise off of the desert sand, and that the sunrises over Four Peaks would give a striking horizon.  I guess it sort of worked, except that the desert came out indistinct and Four Peaks was partially obscured by some desert plants.

Sunrise over Four Peaks in Arizona.  Thanks to the Christoph family for helping me to place and retrieve this photo!  Placed Jan. 5, 2013, retrieved Nov. 9, 2013.  Film curled up in the can, so it was not exposing for much of that time.