Thursday, January 29, 2015

Bayesian Network Reconstruction Using Systems Genetics Data: Comparison of MCMC Methods

Several former colleagues and I collaborated on a paper that was finally published in Genetics on January 28.  You can read it for free online here.

I won't regurgitate the entire work here, but I did want to take a moment to say why this research is important for many fields, not just genetics.  In fact, the Bayesian network reconstruction problem arises in numerous places including finance, machine learning, and crystallography.  The premise of it is this:  suppose you had a black box that, based on some inputs that are supplied to it, would perform a possibly random calculation and then give some results back to you.  Now it is your job to figure out how the machine works.  How are the outputs related to the inputs, and more importantly what is the underlying calculation that happens to produce these outputs?  Keep in mind that this machine could be doing nearly anything under the hood:  performing entirely different calculations based on one input, rolling dice and giving a random result, or throwing your inputs out entirely.

To computational biologists right now, the cell is such a black box.  We are able to measure a lot of things about it and do a lot of things to it, but we are unable to actually watch everything that happens inside a cell in real time.  Not only do these processes happen on very fast timescales, but we also have no microscope capable of watching all of the relevant interactions at once: DNA unwinding, RNA synthesis, ribosomes making proteins, protein folding, protein-protein and extracellular interactions.  As a result, we are left measuring things such as single-nucleotide polymorphisms and expression data and then trying to infer, after the fact, how the genes in the cell interacted.

How we reconstruct a cell's regulatory network is remarkably naive:  we construct a completely random network, tweak it to make it more plausible (e.g., by simulated annealing or greedy hill climbing), and then repeat this many, many times.  Finally, we propose that the links that appeared most often across the resulting networks are likely to be the true connections in the real network.
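
For the curious, here is a rough Python sketch of that restart-and-count recipe.  It is not the method from the paper (which compares MCMC samplers); it only illustrates the general idea, and it assumes you supply your own network-scoring function (the placeholder argument called score below):

```python
import random
from collections import Counter

def random_dag(nodes, p=0.3, rng=random):
    """Start from a random DAG: draw a random ordering and add forward edges with probability p."""
    order = list(nodes)
    rng.shuffle(order)
    edges = set()
    for i, a in enumerate(order):
        for b in order[i + 1:]:
            if rng.random() < p:
                edges.add((a, b))
    return edges

def tweak(edges, nodes, score, steps=500, rng=random):
    """Greedy local search: toggle single edges and keep changes that improve the score."""
    edges, best = set(edges), score(edges)
    for _ in range(steps):
        a, b = rng.sample(nodes, 2)
        candidate = set(edges)
        candidate.symmetric_difference_update({(a, b)})  # add the edge if absent, remove it if present
        s = score(candidate)                             # a real scorer should also reject cycles
        if s > best:
            edges, best = candidate, s
    return edges

def consensus(nodes, score, restarts=200):
    """Repeat from many random starts and count how often each edge survives."""
    counts = Counter()
    for _ in range(restarts):
        counts.update(tweak(random_dag(nodes), nodes, score))
    return counts  # edges that appear most often are proposed as the 'true' links
```

All of the interesting biology hides inside score, which in practice would measure how well a candidate network explains the measured SNP and expression data.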

This isn't a perfect approach, but it has yielded new insights in the past.  Perhaps someday someone will come up with a greatly improved approach to reconstructing Bayesian networks, and it will be a boon to technology and mankind worthy of a Nobel prize.

Saturday, January 10, 2015

Nash Equilibrium in the Sealed-Bid Senior Auction

At work we've been thinking a lot about auction theory, and I was challenged to solve for the Nash equilibrium in the sealed-bid senior auction.  I found the solution and I think the result is pretty interesting, but before I get to that point I want to give some background about what an auction is and this scenario in particular.

A sealed-bid auction is, in its purest sense, a game played between a number of bidders who all would like an item and would like to pay as little as possible for it in order to maximize their profit.  Unlike an auction like eBay, where the current hammer price is public knowledge, in a sealed-bid auction every bidder writes down how much they want to bid for the item.  The auctioneer then collects all of these secret bids and selects the highest bid as the winner¹.  Nobody has a chance to change their bid after passing the auctioneer their secret bid.

It's crucial to understand what a Nash equilibrium is, and why it's important.  Simply put, a Nash equilibrium strategy is a strategy such that, if every other bidder is known to be following it, no bidder can do better by deviating from it.  A special kind of Nash equilibrium is a dominant strategy: a strategy that every bidder should follow regardless of what the other players are doing.

Let's start with a familiar example of a dominant strategy:  the second-price auction.  In a second-price auction, the highest bidder wins but pays only as much as the second-highest bidder bid.  An eBay auction is approximately this.  In a second-price auction, the dominant strategy is to bid your valuation v of the item at auction.  If you lose the auction, you pay nothing and gain nothing.  If you win, you get the item (with value v) and you pay the second-highest bid (which we will call b for now), so your profit is v - b.  No other bid does better: bidding anywhere between b and v changes nothing, because you still win and still pay b; bidding below b means losing an auction you could have won at a profit of v - b; and bidding above v risks paying more than the item is worth to you if the second-highest bid turns out to be above v.  Thus, even if you knew that everyone else was going to bid zero, you would still bid your valuation of the item in the second-price auction.
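
If you don't find that argument convincing, a quick simulation makes the same point numerically.  Here is a throwaway Python check of my own (with arbitrarily chosen numbers): against rivals who bid their uniform valuations, truthful bidding in a second-price auction does at least as well as shading or overbidding.

```python
import random

def avg_profit(my_bid_of, v, n_rivals=9, trials=100_000, rng=random):
    """Average profit in a second-price auction; rivals bid their uniform [0,1] valuations."""
    total = 0.0
    for _ in range(trials):
        rival_high = max(rng.random() for _ in range(n_rivals))
        if my_bid_of(v) > rival_high:        # win and pay the second-highest bid
            total += v - rival_high
    return total / trials

v = 0.7
for name, strategy in [("truthful   ", lambda x: x),
                       ("shade 20%  ", lambda x: 0.8 * x),
                       ("overbid 20%", lambda x: 1.2 * x)]:
    print(name, round(avg_profit(strategy, v), 4))
```

Truthful bidding comes out on top (or tied) for any valuation you plug in.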

The first-price sealed-bid auction, on the other hand, does not have a dominant strategy.  In a first-price sealed-bid auction, the highest bid wins the item and pays their bid exactly.  In this case, bidding my valuation v is actually not a good idea:  I will never make a profit!  If I lose, I get nothing, and if I win, my profit is my valuation v minus my bid v, which is zero!

In fact, in the first-price sealed-bid auction, the best strategy depends on the distribution of valuations of the item in question.  That is, if I thought everyone valued the item uniformly between $0 and $1, the optimal strategy would be different than if valuations were normally distributed around $0.50.  It becomes even more complicated if I am uncertain of my own appraisal of the item, in which case I must also consider the winner's curse!

I will sidestep all of this complexity by asserting that, in every case I study below, every bidder's valuation of the item is drawn from a uniform distribution between 0 and 1, all bidders know this, and every bidder knows how many other bidders there are.  The solutions below apply only under these assumptions.

In the first-price sealed-bid auction, the ideal bid, in hindsight, would be only slightly higher than the second-place bidder's bid.  Naively, I will make the zero-order guess that all bidders bid their valuation except for the one who actually does the math.  Now I'd like to find the bid b that maximizes this privileged bidder's expected profit, given his valuation v and the number of bidders N.  So,

E[\text{profit}] = (v - b)\, b^{N-1}

Equation 1:  Expected profit in the uniform-distribution first-price sealed-bid auction, assuming that all bidders except one privileged bidder bid their value.

The first factor, v - b, is the profit if the privileged bidder wins with bid b, and the second factor, b^{N-1}, is the probability that all of the other bidders had valuations (and therefore bids) less than b.  The expected profit is maximized where the first derivative is zero²:

\frac{d}{db}\left[(v - b)\, b^{N-1}\right] = (N - 1)\, b^{N-2}(v - b) - b^{N-1} = 0

Equation 2:  Setting the first derivative to zero to find the maximum expected profit in the uniform-distribution first-price sealed-bid auction, if all bidders but one bid their valuation.

It is now straightforward to solve for the optimal bidding strategy.

b = \frac{N - 1}{N}\, v

Equation 3:  I will show that this is the Nash equilibrium strategy in the uniform-distribution first-price sealed-bid auction.

I made a crucial assumption above:  that every other player would bid their valuation.  A strategy is only a Nash equilibrium if it is optimal given that everyone else is using it.  So, if everyone were using the strategy in Equation 3, would it be worthwhile to deviate?  To find out, I can substitute this strategy into Equation 1 and solve again.  The first factor in Equation 1 remains the same:  the profit is still the privileged bidder's value minus his bid.  However, the probability of winning with bid b has changed:  a rival with valuation v now bids (N - 1) v / N, so that rival's bid falls below b whenever their valuation is below N b / (N - 1).

E[\text{profit}] = (v - b)\left(\frac{N b}{N - 1}\right)^{N-1}

Equation 4:  Expected profit in the uniform-distribution first-price sealed-bid auction, if all other players bid the Nash equilibrium strategy of Equation 3.

Just like in Equation 2, I can solve for the new optimal strategy by setting the first derivative to zero:

\frac{d}{db}\left[(v - b)\left(\frac{N b}{N - 1}\right)^{N-1}\right] = 0

The solution is exactly the same as Equation 3.  Therefore, Equation 3 is the Nash equilibrium.
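
For the skeptical, this is easy to check with a computer algebra system.  A quick sympy sketch of my own (not part of the original derivation), with a concrete number of bidders:

```python
import sympy as sp

v, b = sp.symbols('v b', positive=True)
N = 10  # check a concrete number of bidders

# Equation 4: expected profit when the N - 1 rivals follow the candidate strategy of Equation 3.
profit = (v - b) * (N * b / (N - 1))**(N - 1)

# Setting d(profit)/db = 0 should recover b = (N - 1) v / N, i.e. 9*v/10 here
# (possibly alongside the uninteresting root b = 0).
print(sp.solve(sp.Eq(sp.diff(profit, b), 0), b))
```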


I can now calculate a bidder's expected profit in equilibrium by substituting Equation 3 into Equation 4, yielding

E[\text{profit}] = \frac{v^N}{N}

Equation 5:  The expected profit of a bidder with valuation v, given that all players bid rationally in the uniform-distribution first-price sealed-bid auction.

It turns out that equation 5 is something more universal than might be expected.  The Revenue Equivalence Theorem tells us that as long as we set the rules of the auction so that the highest bidder wins the item and a bidder with value 0 has an expected profit of 0, all sealed-bid uniform-distribution auctions will have the same expected profit as in equation 5!

For example, let's consider an all-pay auction.  In an all-pay auction, every bidder pays their bid but only the highest bidder goes home with the item.  Obviously, a bidder should bid less than in the first-price auction above because their risk is higher: a bidder will always pay his bid but may not actually get the item to show for it.  As long as a bidder's valuation is more than zero, then he should bid more than zero because there is a small chance that he could get a spectacular deal on the item!  How much, exactly, should a bidder bid?

I'll solve this with the revenue equivalence theorem.  The revenue equivalence theorem tells me that the expected profit must be the same as in Equation 5, because the highest bidder still wins the auction and a bidder with the lowest valuation, 0, will simply walk away from the auction neither profiting nor paying.  Therefore, the equation to solve is:

\frac{v^N}{N} = v \cdot v^{N-1} - b

The left-hand side of the equation comes from the revenue equivalence theorem.  The right-hand side is justified as follows:  a bidder with valuation v recovers value v only when he has the highest valuation (and therefore the highest bid), which happens with probability v^{N-1}; in the all-pay auction he always pays his bid, b, so that is simply subtracted.  This is trivially solved:

b = \frac{N - 1}{N}\, v^N

Equation 6:  The Nash equilibrium bid in the uniform-distribution all-pay sealed-bid auction.

Now I am ready to tackle the original challenge:  the Senior Auction.  In the Senior Auction, the first-place and second-place bidders both pay their bids, but nobody else pays anything.  Like all other auctions discussed here, the highest bidder still takes home the item.  A rational bidder should bid at least as much as in the all-pay auction, because the risk here is lower, and should bid less than in the first-price auction, because the risk here is higher.  Let's find out exactly how much using the Revenue Equivalence Theorem.

This should be fairly straightforward to solve.  The bidder gains v - b when he has the highest valuation, which happens with probability v^{N-1}, and loses his bid b when he has the second-highest valuation, which happens with probability (N - 1) v^{N-2}(1 - v) (exactly one of the N - 1 rivals is above him and the rest are below).  Thus, I have:

\frac{v^N}{N} = (v - b)\, v^{N-1} - b\,(N - 1)\, v^{N-2}(1 - v)

Which is just

\frac{v^N}{N} = v^N - b\, v^{N-2}\left[(N - 1) - (N - 2)\, v\right]

This is straightforward to solve for b in terms of v:

b = \frac{(N - 1)\, v^2}{N\left[(N - 1) - (N - 2)\, v\right]}

Equation 7:  The Nash equilibrium bid in the uniform-distribution sealed-bid senior auction.

For example, let's take a look at the Nash equilibrium bid in the first-price, all-pay and senior auction for 10 bidders.

Figure 1:  Nash equilibrium bid as a function of valuation in three different types of auction, given 10 bidders with uniform valuations on [0,1]. 

Figure 1 shows two things that give us confidence in the result of Equation 7.  A rational bidder always bids at least as much in the senior auction as in the all-pay auction, and never bids more than in the first-price auction.  This makes sense because the bidder's risk in the senior auction is higher than in the first-price auction, but lower than in the all-pay auction.  Another comforting observation is that for N = 2, Equation 7 reduces to exactly the all-pay result, as it should:  with only two bidders, the top two bidders are all of the bidders, so the senior auction is an all-pay auction.
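
These orderings are easy to spot-check numerically.  A throwaway Python script of my own (with N = 10 chosen arbitrarily):

```python
N = 10
for v in [0.2, 0.5, 0.8, 1.0]:
    first_price = (N - 1) * v / N                                 # Equation 3
    all_pay     = (N - 1) * v**N / N                              # Equation 6
    senior      = (N - 1) * v**2 / (N * ((N - 1) - (N - 2) * v))  # Equation 7
    assert all_pay <= senior <= first_price
    print(v, round(all_pay, 4), round(senior, 4), round(first_price, 4))

# For N = 2 the senior and all-pay bids coincide (both reduce to v**2 / 2).
```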

I'd like to verify that this is actually a Nash equilibrium by inverting Equation 7, substituting back into the expected-profit function and taking the first derivative, but that is algebraically cumbersome.  Instead, I will employ a computer algebra system, wxMaxima, and verify that the solution is a Nash equilibrium for some chosen values.  My Maxima notebook (download here) defines bid(v, B), the putative equilibrium bid given valuation v and number of bidders B, and value(b, B), which inverts bid(v, B) with respect to valuation.  The function ExpectedWinnings(b, v, N, B) gives the expected winnings in a top-N-pay auction given bid b, valuation v, the number N of top bidders who pay their bid, and the number of bidders B.  The expression solve(diff(ExpectedWinningsAllPay(b, v, B), b), b) determines the optimal bid b by solving

\frac{d}{db}\, \mathrm{ExpectedWinnings}(b, v, N, B) = 0

given that all other bidders engage in the strategy prescribed by bid and value.  If the result is the same as bid, then bid is a Nash equilibrium, because there is nothing to be gained from deviating from this strategy given that everyone else is using it.

If you execute my wxMaxima notebook yourself, you will see that Equation 7 is indeed a Nash equilibrium for the chosen valuations and numbers of bidders, which supports the claim that Equation 7 is the Nash equilibrium for the uniform-distribution sealed-bid senior auction.
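
If you don't have Maxima handy, a numerical check in Python tells the same story.  This is my own rough stand-in for the notebook (not the notebook's code):  with all rivals following Equation 7, a grid search over possible deviations finds that the best response is Equation 7 itself.

```python
import numpy as np

def bid(v, N):
    """Candidate equilibrium bid from Equation 7."""
    return (N - 1) * v**2 / (N * ((N - 1) - (N - 2) * v))

def value(b, N):
    """Invert bid(v, N): the valuation whose equilibrium bid is b (positive root of a quadratic)."""
    a, B_, c = (N - 1), b * N * (N - 2), -b * N * (N - 1)
    return (-B_ + np.sqrt(B_**2 - 4 * a * c)) / (2 * a)

def expected_profit(b, v, N):
    """Expected profit of bidding b against N - 1 rivals who follow bid(); the top two bids pay."""
    w = value(b, N)                               # rivals bidding below b have valuations below w
    p_first = w**(N - 1)                          # all rival valuations below w
    p_second = (N - 1) * (1 - w) * w**(N - 2)     # exactly one rival valuation above w
    return v * p_first - b * (p_first + p_second)

N, v = 10, 0.7
grid = np.linspace(1e-6, bid(1.0, N), 200_001)    # only scan deviations within the equilibrium bid range
best_b = grid[np.argmax(expected_profit(grid, v, N))]
print(best_b, bid(v, N))                          # the two should agree to several decimal places
```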

¹ Not all kinds of sealed-bid auctions give the item to the highest bidder, and not all kinds of sealed-bid auctions require that each bidder places only one bid.  For all examples here, the highest bidder wins.  Interested parties might read about unique bid auctions.


² Actually, it may be maximized or minimized. A more thorough person would check the second derivative to determine whether he found a local maximum or a local minimum. This is an exercise left to the reader.

Tuesday, November 11, 2014

Wedding Photos

Josh, our wedding photographer, recently sent us our wedding photos and we're really excited about them.  I looked at roughly a hundred photographers when we were planning our wedding, as I like to think I have high standards for photography, and I separated them into four groups:  "People who picked up a camera and decided that they, too, can make money at weddings," "People with a decent understanding of photography but who don't know how to shoot weddings," "People who understand photography and who know what shots people want taken at weddings," and "Artists."  Of about 100 photographers, there were three that I considered artists--people who understand photography, understand weddings, and who have an eye for a shot that looks natural and beautiful.  One of them was booked for our weekend a year in advance, and of the other two Simiao preferred Josh.  If you are looking for a wedding photographer, I couldn't recommend him more highly.  He was even willing to work with us to create a package that was exactly what we needed, and I think the quality of his work speaks for itself.  I've posted some favorite shots here, and if you want to see the rest of them you can look at the watermarked photos in my Google drive.  Please feel free to share or repost the watermarked photos.  If you want an un-watermarked photo suitable for printing, let me know and I will grant you access to the printable photos.

This was actually the first moment I saw Simiao in her wedding dress.

Our extremely silly wedding party.

The big moment--we were both doing our best to keep it together.

Cocktail hour in the Founder's room.

The festivities went on until midnight.

Please feel free to download, share, and enjoy the entire collection.  If you'd like a printable, un-watermarked original of one, let me know and I will give you access to those.

Monday, October 13, 2014

I Am Now Married

I apologize for the radio silence over the last several months!  I have been working on a few projects that have been more time consuming than originally expected, but when they're ready I promise that it'll be worth the wait.

First, in May I moved to Chicago to be with Simiao, who matched to Northwestern for Emergency Medicine.  As I prepared to move, I realized that most of the stuff I owned was worn out, so I threw out nearly everything that I couldn't mail or fit in my car and left Seattle on the very path that I drove in on two and a half years earlier.  I also changed teams at Google since, naturally, I had to change offices as well.

Moving into a new city and a new apartment with no stuff presents its own challenges.  It took us nearly a month to assemble the trappings of a normal life.  A bed, a couch, a dinner table, a television.  These aren't things you think a lot about until you realize that you don't own one, so when we could finally come home, relax, and actually feel settled, it felt like an achievement.  It was an achievement that we didn't have much time to savor, because after the move we were far behind schedule on planning our wedding.  Thinking about it now, it's actually kind of amazing (and a credit to Simiao's awesome family, who helped us along the way) that it all came together as well as it did.

Our wedding on Saturday was wonderful.  It was small, personal, and the Morton Arboretum is a beautiful venue.  It meant a lot to us that we could have a place that felt private and natural, and that didn't give any acknowledgement to a religion that neither of us subscribes to.  It was everything that I ever wanted in a wedding, and I can't wait to share the photos with you when they're ready.

For now, I have two treats for you.  First, our vows.  Simiao's were so sweet that I can still barely read them without tearing up.  In fact, it seemed like everyone present could barely hold it together.

Simiao:
I vow to pursue patience, compromise, and reciprocity every day.
I vow to take pride in who we are, individually and together.
I vow to support you in times of trouble.
I vow to honor and respect you for all that you are and will become.
Above all, I vow to give you my love freely and unconditionally, for all the days of our lives.

Myself:
I vow to treat you with honor, love and dignity.
I vow to use life's challenges to edify our relationship.
I vow to protect you with all of my strength.
I vow to inspire your trust.
Likewise, I vow to give you my love freely and unconditionally, for all the days of our lives.

Second, we got our engagement photos!  I've placed a few favorites here, and the rest of the watermarked ones can be found publicly in my Google drive here (feel free to share these on social media if you like).  If you'd like access to the printable ones to print a photo for your personal use (you may not share them on social media), please request access here.








Monday, April 7, 2014

Time Again

A Shaggy Mouse Nudibranch found in tide pools at Discovery Park in Seattle.
We recently picked up two more pinhole cameras that had been sitting around Seattle all winter long.  One spent about 9 months on the roof of an apartment building in Wallingford, and another spent about 3 months behind a sealed-up military housing unit at Discovery Park.  Both pieces of photo paper came out remarkably wet and sticky, but with intact photos.  Interestingly, these cameras, which used a much smaller aperture than the previous ones, came out burned red instead of the violet or black of the earlier exposures; it's unclear whether this is due to the lower level of exposure to the sun or to the extreme dampness of the photo paper from the elements.  Curiously, these negatives also required much less correction using color curves than previous exposures.  One lesson that has proved useful in selecting sites for the pinhole cameras is that the subject should be as strongly backlit as possible.  In the case of the abandoned military housing, we selected a site where the pinhole camera would frequently fall directly in the shadow of the subject, which made for a beautiful shot.

Pinhole exposure lasting roughly 9 months on the roof of an apartment building in Wallingford.  Exposed July 2013-April 2014.
Pinhole exposure from Jan 25, 2014 until April 5, 2014.  Boarded-up military housing at Discovery Park.




Saturday, April 5, 2014

Transmutation

What's left of the old Hanford high school.
Ever since I moved to Seattle, I had always wanted to tour the Hanford Nuclear Reservation.  Even as the military used the Hanford site to transmute uranium into plutonium, they transmuted a gorgeous desert landscape into a terribly contaminated mess.  It's hard to imagine anyone having such disrespect for the landscape today, but back then contaminated water was piped directly into the Columbia River and sometimes outright dumped into the desert.  Waste that would be toxic for 10,000 years was pumped into tanks designed to last 50.  American and Washington State taxpayers are paying the price for this cleanup today, and the tour guides will tell you that parts of the reservation will likely have to be controlled and monitored for the foreseeable future.  That said, the B reactor is a marvel of engineering, very much worth your time to come out and see, and perhaps the best historical site tour I've been on since the Titan Missile Museum.  You can make reservations online at their website, hanford.gov.

These cylinders were loaded with uranium slugs while the reactor was in operation.  Every month, the reactor was shut down and the slugs were removed and sent for processing at a nearby facility, where plutonium could be extracted.  The B reactor did not produce any electricity, only plutonium.  The hot waste water was piped to cooling ponds and ultimately back to the river.

The B reactor staff was well prepared for the event of a radioactive zombie attack.

Systems in the reactor were either remarkably high-tech or remarkably low-tech.  In the case that power to the horizontal control rods was lost, giant buckets of stones would be dropped onto a cylinder of hydraulic fluid, pushing the control rods into the safe shut-down position.
A final backup plan was stored in the control room office space.  No, the backup plan was not keys to a rocket car or the phone number to the President.  It was the one thing they were sure to need in case of a meltdown.
"SCRAM" is a term originating from the early days of nuclear reactors, when the safety back-up plan was "Single Control Rod Axe Man."  In these early reactors, a single control rod was positioned above the reactor suspended by a rope.  In the event that the reactor went out of control, Single Control Rod Axe Man would cut the rope and drop the control rod into the reactor and stop the reactor.  In the B reactor, there were vertical control rods suspended above the reactor that could be dropped into place in case the horizontal control rods be rendered inoperable and for historical reasons this was known as SCRAMming the reactor.  So, yeah: Simiao and Theo should just go ahead and mash those buttons.

Thursday, February 20, 2014

What is the Best Hearthstone Deck Composition?

A friend recently introduced me to Hearthstone, an online collectible card game made by Blizzard.  The rules are described in detail at the link above, but the general idea is that you spend some resources ('mana crystals') to use cards in your hand with the goal of defeating your opponent by doing 30 damage to them.

Like many collectible card games, there are two major elements to Hearthstone's strategy.  First, you must design a deck of 30 cards which you believe will give you the best opportunities to thwart your opponent.  Second, you must play the hand you actually draw wisely in order to best take advantage of the opportunities presented to you during the game.

This post focuses on the former phase.  In particular, I want to know what the distribution of casting costs (in mana crystals) of the cards in my deck should be in order to make maximum use of the resources available to me.  If you are not interested in my methodology, you should skip to the results section below.

The Model of Hearthstone

Hearthstone is a complicated game.  There are numerous synergies between cards that would make you want to play them together, and there are equally many situations where you might not want to play them at all based on the cards your opponent has played.  I start with a simple assumption:  the player who uses the most mana crystals will tend to win.  This isn't strictly true, but it's easy to see that the player who has used more of his resources will probably be better off more often than not.

It's immediately clear that there's a flaw in my model:  the cards in a player's hand have a non-zero value just for being there!  There are cards with zero casting cost, and they are undeniably useful in real play.  So, how much is a card in your hand worth?

Figure 1:  The Arcane Intellect card.  The card and artwork are copyrighted by Blizzard and are used here for non-profit educational purposes only.

Figure 1 shows a card called Arcane Intellect.  It costs 3 mana crystals to use, and it allows you to draw two cards.  However, when you use Arcane Intellect the card itself is used up.  I am now left with a simple equation:

3 mana crystals + value of 1 card = value of 2 cards

Solving for the value of 1 card, I see that a card in my hand is worth 3 mana crystals.

Finally, I assume that the game is balanced.  That is, each card's usefulness is roughly equal to its casting cost in mana crystals plus the value of 1 card in my hand.

Action               Usefulness in Mana Crystals
Character Ability    2
Cost 0 Card          3
Cost 1 Card          4
Cost 2 Card          5
Cost X Card          3 + X

The Greedy AI

The AI that I use to simulate playing a deck is greedy.  Given the turn T at which I wish to win, it tries to maximize the total usefulness of the cards and abilities played by that turn.  Thus, I can design an early-game win deck that attempts to win at turn 8, or a late-game deck that attempts to win later, around turn 17.

Note that my AI plays with complete disregard for the opponent or the opponent's plays, and it does not know or care about what card it is playing beyond its casting cost.  It is not possible for my AI to win or lose--the opponent in the simulation never does anything.  My AI greedily seeks to maximize the usefulness of the cards played at every turn.

My AI always goes first.  At the beginning of the game when all or part of the hand can be re-drawn, it tries to minimize the casting cost of cards in its hand.  The AI thus discards all cards whose casting cost is greater than the expectation value of the cost of cards remaining in the deck.

Each turn, the AI draws exactly one card.  It never draws additional cards due to spells like Arcane Intellect in Figure 1; instead, the benefit of such a card is captured by its usefulness in mana crystals (in the case of Arcane Intellect, a usefulness of 6, as shown in the table above).  The AI then plays the combination of cards and abilities that maximizes the total usefulness played this turn.  If multiple combinations have the same usefulness, the combination that leaves the AI with the largest hand is chosen.  This is exactly the knapsack problem, where the items are the cards in the player's hand plus the character's innate ability, the weights are the costs in mana crystals, the backpack is the player's current pool of mana crystals, and the values are the usefulness figures from the table above.

For example, if the AI had 3 mana crystals and it had a cost 1, 2 and 3 card in its hand, it would play the cost 1 and cost 2 cards for a usefulness of 9.  A player in an actual game might find that it is actually more useful to play the cost 3 card, or to use the character's innate, cost-2 ability.
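
To make the turn player concrete, here is a small Python sketch of the greedy knapsack step (my own toy reimplementation of the idea described above, not the original simulation code).  It brute-forces every subset of the hand plus the hero ability, which is plenty fast for hands of ten or fewer cards:

```python
from itertools import combinations

HERO_ABILITY = ("ability", 2, 2)       # (kind, mana cost, usefulness)

def usefulness(cost):
    """Usefulness of a card per the table above: casting cost plus 3."""
    return cost + 3

def best_play(hand, mana):
    """Return (usefulness gained, card costs played) for the greedy turn.

    hand is a list of casting costs.  Ties in usefulness are broken in favor of
    playing fewer cards, i.e. keeping the largest hand.
    """
    options = [("card", c, usefulness(c)) for c in hand] + [HERO_ABILITY]
    best_value, best_cards = 0, []
    for r in range(len(options) + 1):
        for combo in combinations(options, r):
            if sum(cost for _, cost, _ in combo) > mana:
                continue
            value = sum(u for _, _, u in combo)
            cards = [cost for kind, cost, _ in combo if kind == "card"]
            if value > best_value or (value == best_value and len(cards) < len(best_cards)):
                best_value, best_cards = value, cards
    return best_value, best_cards

print(best_play([1, 2, 3], mana=3))    # -> (9, [1, 2]), matching the example above
```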

At turn T, the AI reports the total utility achieved in this simulated game.  This varies from simulation to simulation and the average is reported from a large number of simulations.

The Deck Designer

To design the deck with the highest chance of winning at turn T, the deck designer starts with 30 random cards with casting costs from 0 to 8 mana crystals.  The designer does not care specifically what cards these are, only their cost in mana crystals.  The AI is run on this deck composition a large number of times, and the average usefulness at turn T is reported.

Next, a greedy hill-climbing search is run:  for each possible pair of removed-card and inserted-card casting costs, a card is taken out of the deck and replaced with a card of a different casting cost.  If any of the modified decks has a better average usefulness than the original, hill climbing is applied again to that modified deck.  If none of the modified decks is better, then we have reached a local maximum of utility, and this composition is saved as a candidate best deck.

The deck designer is run a large number of times, and the mana cost distribution with the best average utility is reported as the best overall composition.  Both the deck designer and the results of the simulated games are non-deterministic, so it's possible that a better deck exists than the result reported by the deck designer.
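
Here is a compressed Python sketch of the whole designer loop (again my own rough reimplementation, not the original code).  To keep it short, the inner simulation spends mana greedily on the most expensive playable card first instead of solving the knapsack exactly, so its numbers will not match the figures below:

```python
import random

COSTS = range(9)        # casting costs 0 through 8
DECK_SIZE = 30

def simulate(deck, target_turn, trials=200, rng=random):
    """Average usefulness accumulated by target_turn (crude stand-in for the greedy AI, no mulligan)."""
    total = 0.0
    for _ in range(trials):
        library = list(deck)
        rng.shuffle(library)
        hand = [library.pop() for _ in range(3)]        # opening hand when going first
        for turn in range(1, target_turn + 1):
            if library:
                hand.append(library.pop())              # draw exactly one card per turn
            mana = min(turn, 10)
            for cost in sorted(hand, reverse=True):     # greedy: most expensive playable card first
                if cost <= mana:
                    mana -= cost
                    hand.remove(cost)
                    total += cost + 3                   # usefulness = cost + 3
            if mana >= 2:
                total += 2                              # hero ability
    return total / trials

def design_deck(target_turn, max_sweeps=20, rng=random):
    """Random start plus single-card hill climbing on the casting-cost distribution."""
    deck = [rng.choice(COSTS) for _ in range(DECK_SIZE)]
    best = simulate(deck, target_turn)
    for _ in range(max_sweeps):
        improved = False
        for i in range(DECK_SIZE):
            for new_cost in COSTS:
                candidate = list(deck)
                candidate[i] = new_cost
                score = simulate(candidate, target_turn)
                if score > best:
                    deck, best, improved = candidate, score, True
        if not improved:
            break
    return sorted(deck), best

print(design_deck(target_turn=8))
```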

Results

I had the deck designer optimize for several different compositions:  an 8-turn quick kill composition, an 11-turn mid-game kill composition, a 14-turn late-game composition, and a 17-turn ultra-late game composition.  These were the best deck compositions according to my model, but when using these as a guideline in designing your deck, please keep in mind that the heuristic model should certainly come second to the actual synergies between your cards.  If you don't have exactly the same number of cards of each casting cost as the ideal composition, don't sweat it.  The error bars on the predicted strength are large enough that a difference of a few cards here or there is not going to make a noticeable difference.

Figure 2:  Simulated optimal deck compositions in mana crystals for decks designed to win at particular turns. 

Figure 2 shows the best simulated deck compositions.  Again, these compositions should be used as a flexible guideline as the synergies between the actual cards in your deck are going to make a much more significant difference than a slight deviation from the optimal casting cost distribution.  It seems that for all decks, the cost-3 and cost-4 cards are the most important workhorses.  A deck designed for an 8-turn kill has over half of the cards in the deck of 30 with casting cost 3 and a sprinkling of cards from cost 0 to 5.  On the other hand, a deck designed for a 17-turn kill has only one cost-2 and cost-3 card, with roughly equal parts cost-4 to cost-7+ cards.

Figure 3:  Total usefulness versus turn for the four optimal deck compositions shown in Figure 2.  Note that decks that take an earlier lead pay a price for that agility in the late game.

Obviously, a deck loaded with cost-3 cards like the 8-turn kill deck is going to pay a price in the late game.  Figure 3 shows this tradeoff between early-game strength and late-game strength.  The 8-turn kill deck gains a sharp advantage early on, then fizzles; after turn 10, it's at a huge strength disadvantage compared to the other decks.  On the other hand, the 11- and 14-turn kill decks strike a better balance between early-game strength and late-game staying power.  Given that the 17-turn kill deck has only a slight advantage over the 14-turn kill deck at turn 20, I don't believe it is worth designing a deck intended to win this late in the game.

Conclusions

Since running these simulations, I've changed my deck compositions to strive for an 8-turn kill.  This has been enormously successful for me in practice.  By choosing cards that harmonize well and selecting a deck composition with a high chance of getting these cards into play before my opponent can react, I've realized a substantial improvement in win rate.

I would recommend that the reader start out with the 8-turn or 11-turn kill composition, depending on the desired play style.  The 14-turn composition may work best for a deck whose concept is simply to survive the early game and outgun the opponent in the late game.